How to understand the complexity of web algorithms

This article introduces the basics of algorithm complexity. In practice, many people run into difficulties with this topic, so let's work through these situations together. I hope you read carefully and get something out of it!
Algorithmics is the science of designing and studying algorithms. Its history is much longer than that of computer science, yet today algorithmics is practiced almost entirely by computer scientists.
Algorithmics is a very broad field that calls for a lot of mathematical knowledge. Of course, not all computer scientists need to be gifted mathematicians: from an algorithmic point of view, the problems most programmers face are actually quite simple.
But sometimes we need to build something more complex, and that is where a basic knowledge of algorithms becomes very useful. Nobody is asking you to invent a revolutionary new algorithm and formally prove its complexity, but in order to use the algorithms found on the web or in software libraries correctly, some "basic training" is necessary.
Knowing algorithms will make you more efficient: you will better understand the problems you are trying to solve, and you will avoid writing substandard code. Some code works but is unreasonable from an algorithmic point of view. An inexperienced programmer may use such substandard algorithms directly ("the code works, so there should be no problem"), but because you know algorithms, you can quickly spot the problem and rewrite a much better version.
After hearing this, you may be a little eager to try. Here are two simple studies of algorithm complexity that will give you a more accurate sense of the role it plays.
Looking for the largest and smallest elements
Question 1: given a list of positive integers, we want to find the largest integer in the list.
The classical solution to this problem is as follows: walk through this list and keep the largest element found so far, calling it the "current maximum".
We can describe this algorithm as:
At first, the current maximum is equal to 0. We compare each element of the list with the current maximum; if the element currently being traversed is larger than the current maximum, we set the current maximum to that element. After traversing the entire list, the current maximum truly is the maximum.
Here is an implementation of this algorithm in "the best programming language in the world", PHP (of course, you can also implement it in any other programming language):
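A minimal sketch (the function name find_max and the example values are illustrative, not from any particular library):

<?php
// Walk through the list, remembering the largest element seen so far.
function find_max(array $list): int
{
    $current_max = 0;                 // the "current maximum" starts at 0
    foreach ($list as $element) {
        if ($element > $current_max) {
            $current_max = $element;  // a larger element becomes the new maximum
        }
    }
    return $current_max;
}

echo find_max([3, 7, 2, 9, 4]);       // prints 9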
We can quickly verify that this algorithm is correct: we just need to make sure that when this algorithm is executed, the "current maximum" is always equal to the largest value in the list elements traversed so far.
We also notice that the algorithm terminates and does not fall into an infinite loop: it traverses the entire list and then stops. This may seem like an unimportant detail, but some programming languages can represent lists with infinitely many elements; in that case, our algorithm would be incorrect.
Now let's study the complexity of this algorithm. Which operations should we count? Obviously, most of the work consists of comparing the current element with the "current maximum" (after all, initializing current_max to 0 takes very little running time), so we count the number of comparison operations performed by this algorithm.
What parameters does the execution time of the algorithm depend on? As you can imagine, the execution time does not depend on the value of each element in the list (here, we assume that the comparison time of two integers is constant, regardless of their values). Therefore, we quantify the input with the length N of the element list.
For a list of N elements, we perform N comparisons: each element is compared once with the current maximum. Therefore, the time complexity of the algorithm is O(N): its execution time is linear, proportional to the number of elements N in the list.
So, what is the space complexity of this algorithm? The algorithm works on a list of elements that occupies a certain amount of memory. However, this list already exists before we look for its largest element, and the memory it occupies was not allocated by our algorithm. So the number of elements N in the list is not counted in the algorithm's space complexity: we only count the memory directly requested by our algorithm.
The memory directly requested by our algorithm is almost negligible: at most one temporary variable (current_max) to store the "current maximum". The memory our algorithm consumes therefore does not depend on the length of the list: we write its space complexity as O(1), meaning it does not depend on N.
For our algorithm, there is one small detail to note: if the list is empty, the value returned is 0. It is obviously wrong to say that the maximum of an empty list is 0; in some situations it would be better to return an error when the list is empty.
So we can improve our algorithm: instead of giving the current maximum an initial value of 0, we use the first element of the list as its initial value (and return an error if the list is empty). Then we start traversing from the second element.
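Here is a minimal sketch of this improved version (again with illustrative names); it reports an error for an empty list:

<?php
// Use the first element as the initial maximum, then scan from the second.
function find_max(array $list): int
{
    if (count($list) === 0) {
        throw new InvalidArgumentException("an empty list has no maximum");
    }
    $current_max = $list[0];          // the first element is the initial maximum
    $n = count($list);
    for ($i = 1; $i < $n; $i++) {     // start from the second element
        if ($list[$i] > $current_max) {
            $current_max = $list[$i];
        }
    }
    return $current_max;
}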
The improved algorithm performs N − 1 comparisons (because we no longer compare the first element with itself). However, this does not change the time complexity: the difference between N and N − 1 is constant and does not depend on N, so we can ignore it. Both algorithms have the same linear time complexity, O(N).
Finally, we notice that the second algorithm also applies to negative numbers (if all elements of the list are negative, the first algorithm returns 0, which is obviously incorrect). Therefore, the improved second algorithm is more general and better.
Of course, the algorithm for finding the minimum value in the list is similar to finding the maximum value, so we won't dwell on it.
Looking for non-repeating elements
Now let's look at the second question.
Question 2: we have a list 1 that contains duplicates (elements that occur multiple times), and we want to build a list 2 that contains the same elements as list 1, but in which each element appears only once.
For example, list 1 has the following elements:
AABCDBCA
Then list 2 will contain the following elements:
ABCD
Have you come up with an algorithm to solve this problem? Please think for yourself before you read my solution.
My solution
My algorithm is as follows:
For a given list L that contains repeated elements, we build a new list U (U for "unique"). The list U is empty at first, and we fill it with elements. We traverse list L, and for each of its elements we check whether it is already in list U (we can use an approach similar to the max-finding algorithm above: comparing elements one by one). If the element from list L is not yet in list U, we add it; if it is already in list U, we skip it. After traversing all of list L, list U contains the same elements as list L, but without repetitions.
Exercise: use your favorite programming language to implement the above algorithm for extracting non-repeating elements from the list.
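If you want to compare notes afterwards, here is one possible PHP sketch (the names unique_elements and $U are illustrative; the inner loop deliberately compares element by element, as described above, rather than using a library helper):

<?php
// Build U by scanning L and adding each element not already present in U.
function unique_elements(array $L): array
{
    $U = [];                          // list U starts out empty
    foreach ($L as $element) {
        $already_in_U = false;
        foreach ($U as $u) {          // compare with every element already in U
            if ($u === $element) {
                $already_in_U = true;
                break;
            }
        }
        if (!$already_in_U) {
            $U[] = $element;          // keep only the first occurrence
        }
    }
    return $U;
}

print_r(unique_elements(["A", "A", "B", "C", "D", "B", "C", "A"])); // keeps A, B, C, D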
Complexity
What is the complexity of this algorithm? If you fully understand the complexity of the previous algorithm for finding the maximum value of the list, this should be easy for you.
For each element of the given list L, we traverse the list U, so the number of operations performed depends on the number of elements currently contained in U.
The problem is that the size of list U changes as we traverse the given list L, because we keep adding elements to U. When we examine the first element of L, U is still empty (so we perform no comparisons); when we examine the second element of L, U has one element, so we perform one comparison.
But when we reach the third element of L, things become less certain: if the first two elements of L are different, both were added to U, in which case we perform two comparison operations (the third element of L is compared with each of the two elements of U). If the first two elements are the same, the second element of L was not added to U, and only one comparison is performed.
As we said earlier in the course, complexity is computed for the worst-case scenario: the case in which the maximum number of operations is performed. We will therefore assume that all elements of the given list L are distinct.
In the worst case, we add every element of the given list L to list U, one by one. If L has N elements, then when we reach the N-th element of L we have already added N − 1 elements to U, so at that point we must perform N − 1 comparison operations.
So the total number of comparisons is 0 + 1 + 2 + … + (N − 1). There are few operations at the beginning, but more and more as we go on (a bit like life: you have few responsibilities at birth, then slowly more and more things to deal with, which also shows that you are growing up; after all, "the capable get more work").
Adding up this series, the total number of operations is N × (N − 1) / 2 (this is not difficult; it is just the summation formula for an arithmetic progression). Because complexity analysis considers the case where N is very large, this result is approximately N × N / 2, that is, N²/2 operations.
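To make this concrete: for N = 1,000 elements, the worst case performs 0 + 1 + … + 999 = 1000 × 999 / 2 = 499,500 comparisons, already very close to the approximation 1000² / 2 = 500,000.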
Therefore, our algorithm has time complexity O(N²) (we drop the constant factor 1/2). O(N²) is also called "quadratic" complexity (just as O(N) is called "linear" complexity).
Compared with the earlier algorithm for finding the largest element, this algorithm is slower (it has higher time complexity). It also has higher space complexity: we build a list U that did not exist beforehand, so we allocate memory for it.
In the worst case, list U ends up with as many elements as the given list L: space is allocated for N elements, which makes the space complexity O(N). The algorithm for finding the largest element had constant space complexity, O(1), while this algorithm's space complexity is linear, O(N).
The algorithm only needs to compare elements, so the elements being manipulated don't have to be integers: we can use the same algorithm to eliminate duplicate words in the word list, duplicate floating-point numbers, and so on. Therefore, many algorithms are independent of the specific type of element used.
Looking for non-repeating elements: another approach
In fact, there is another algorithm for finding the non-repeating elements (you may have thought of it yourself): we can first sort the given list L so that all duplicate elements become adjacent, which makes it easy to exclude them.
For example, a given list L initially looks like this:
AABCDBCA
We can sort list L before building list U so that it looks like this:
AAABBCCD
In this way, our algorithm for building list U is simple.
The algorithm is as follows: just iterate through the sorted list L and remember the element you traversed last time. If the current element is the same as the previous element, then this element is duplicated and do not include it in the list U of non-repeating elements.
If the repeated elements are not adjacent to each other, the above algorithm is no longer valid. So we have to sort the list first.
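As a sketch under those assumptions (illustrative names again; note that PHP's built-in sort reorders the list, so the original order of the elements is not preserved, and the null sentinel assumes the list itself contains no nulls):

<?php
// Sort first so duplicates become adjacent, then keep one copy of each run.
function unique_sorted(array $L): array
{
    sort($L);                         // duplicates are now next to each other
    $U = [];
    $previous = null;                 // the element seen in the previous step
    foreach ($L as $element) {
        if ($element !== $previous) {
            $U[] = $element;          // first occurrence of this value
        }
        $previous = $element;
    }
    return $U;
}

print_r(unique_sorted(["A", "A", "B", "C", "D", "B", "C", "A"])); // keeps A, B, C, D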
What is the time complexity of this new algorithm? The deduplication itself is done in a single traversal of the list, so it is linear, O(N). But because we have to sort the list first, the sorting step must also be counted in the total complexity of the new algorithm.
Of course, it is a little early to talk about sorting lists, since we won't cover sorting algorithms until later in the course. Still, even before studying them and their complexity, I'd like to say a word about the complexity of this new algorithm.
It turns out that the complexity of this algorithm is dominated by the sort: a basic sorting algorithm performs on the order of N² operations, far more than the N operations needed to build list U afterwards, so the overall complexity is O(N²).
However, there are more sophisticated sorting algorithms that still perform more than N operations, but far fewer than N² (on the order of N log N).
We will learn about sorting algorithms later in the course. For now, all you need to know is that, with such an efficient sort, the new algorithm with its extra sorting step is more efficient and more "advanced" than the old one.
Searching a list for a specified element is very similar to finding its largest or smallest element: both are linear in time (time complexity O(N)) and have space complexity O(1).
The algorithm for eliminating duplicate elements from a list is more complex, because the simplest algorithm has quadratic time complexity (O(N²)) and linear space complexity (O(N)).
I hope these concrete studies have convinced you that algorithms and algorithmic complexity are useful. By now you should also be comfortable with the basic concepts of "algorithm", "time complexity", and "space complexity".
This is the end of "how to understand the complexity of web algorithms". Thank you for reading!