Example Analysis of algorithm in Python

2025-01-17 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

In this article, the editor shares an example-based analysis of algorithms in Python. Most people do not know much about this topic, so the article is shared for your reference; I hope you will learn a lot from reading it. Let's get started!

1. Design requirements of algorithms

The main goal of algorithm analysis is to compare algorithms in terms of running time and memory space consumption.

1.1 criteria for algorithm evaluation

A good algorithm should, first of all, be "correct": it terminates on every input instance and gives the correct result, correctly solving the given computational problem. In addition, the following aspects need to be considered:

Efficiency: the time it takes to execute the algorithm

Low storage consumption: the storage space consumed while executing the algorithm, mainly the auxiliary storage space

Readability: the algorithm should be easy to understand, easy to code, easy to debug, etc.

1.2 principles for algorithm selection

It is difficult for an algorithm to satisfy small storage space, short running time, and other performance requirements all at the same time; in many cases we have to trade one off against another. When choosing an algorithm, we usually follow these principles:

If the program is used infrequently, choose an algorithm that is simple and easy to understand.

For programs that are used repeatedly, choose the fastest algorithm possible.

If the problem involves a large amount of data and the machine has little storage space, the algorithm should focus on saving space.

2. Algorithm efficiency analysis

The efficiency of an algorithm is analyzed and compared according to the time its execution requires, also called the algorithm's execution time or running time. One way to measure execution time is benchmark analysis, an after-the-fact statistical method that records, in absolute units of time, how long the program actually takes to compute its result. In Python, you can use the time function of the time module to record the start time and end time of the program, then take the difference to obtain the execution time in seconds.

Taking the calculation of the nth term of the Fibonacci sequence as an example (from the third term on, each term equals the sum of the previous two), the time function is called before and after computing the nth term to measure the execution time:

```python
import time

def fibo(n):
    start = time.time()
    a, b = 1, 1
    if n > 2:
        for i in range(n - 2):
            a, b = b, a + b
    end = time.time()
    running = end - start
    return b, running

for i in range(5):
    results = fibo(100000)
    print('It takes {:.8f} seconds to calculate the 100000th item of Fibonacci sequence'.format(results[1]))
```

The result of code execution is as follows:

It takes 0.08275080 seconds to calculate the 100000th item of Fibonacci sequence

It takes 0.08277822 seconds to calculate the 100000th item of Fibonacci sequence

It takes 0.08176851 seconds to calculate the 100000th item of Fibonacci sequence

It takes 0.08178067 seconds to calculate the 100000th item of Fibonacci sequence

It takes 0.08081150 seconds to calculate the 100000th item of Fibonacci sequence

However, this method measures the actual time taken to execute the algorithm, and it has two obvious drawbacks: 1) a program implementing the algorithm must be run first; 2) the result depends on the specific computer, compiler, programming language, and other aspects of the hardware and software environment, which can easily mask the merits of the algorithm itself. We therefore want a metric that is independent of any particular program or computer and that can be used to compare algorithms across different implementations.

2.1 Big-O notation

To get rid of factors related to computer hardware and software, we need a method of analysis and estimation done in advance. The "amount of work" a particular algorithm performs depends on the size of the problem, i.e. it is a function of the problem size, so we need to quantify the algorithm's operations or steps. An algorithm is composed of control structures and basic operations, so its execution time can be described as the number of basic operations that must be repeated to solve the problem. Note that choosing the appropriate basic operation depends on the algorithm: when computing the nth term of the Fibonacci sequence, the assignment statement is the basic operation, while for matrix multiplication, multiplication is the basic operation.

In the fibo function of the previous section, the execution time of the whole algorithm is proportional to the number of times the basic operation (assignment) is repeated: 1 initial assignment plus n − 2 assignments in the loop, which can be expressed as the function T(n) = n − 1, where n is an integer greater than 2. n is commonly used to represent the size of the problem, and we can use a function f(n) of the problem size n to represent the number of times the algorithm's basic operations are repeated. The time measure of the algorithm can then be expressed as:

T (n) = O (f (n))

As the problem size n increases, one part of the function T(n) grows faster than the rest. This dominant part is decisive when comparing algorithms, and the fastest-growing part of T(n) is called the order-of-magnitude function. When the growth rate of the algorithm's execution time is the same as that of f(n), f(n) is called the asymptotic time complexity of the algorithm. This order of magnitude is commonly written in big-O notation.

Through the above analysis, we can describe the asymptotic complexity rule of the algorithm as follows:

If the running time is a sum of polynomial terms, keep only the fastest-growing term and remove the others.

If the remaining term is a product, remove all constant factors.

Suppose the number of basic steps of an algorithm is T(n) = 3n² + 50n + 2000. When n is very small, the 2000 has the greatest influence on the function, but as n grows, n² gradually dominates, so that the other two terms and the coefficient 3 of n² can be ignored; thus the order of magnitude of T(n) is n², i.e. O(n²).
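As a quick sanity check (using the hypothetical step count T(n) from the paragraph above), the ratio T(n)/n² approaches the leading coefficient 3 as n grows, which is why the lower-order terms and the constant factor can be discarded:

```python
def T(n):
    # total basic steps assumed in the text: 3n^2 + 50n + 2000
    return 3 * n ** 2 + 50 * n + 2000

# the ratio T(n) / n^2 tends to the leading coefficient 3
for n in (10, 100, 10000):
    print(n, T(n) / n ** 2)
```

For n = 10 the ratio is still 28.0 (the constant dominates), but by n = 10000 it has already fallen to about 3.005.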

The performance of an algorithm sometimes depends not only on the size of the problem but also on the input values. The input that makes the algorithm run slowest is called the worst case, and the input that makes it run fastest is called the best case; with random input, performance falls between these two extremes and is called the average case.
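To make this concrete, here is a small sketch (a standard linear-search example, not from the original text) in which the number of comparisons depends on where, or whether, the target appears:

```python
def linear_search(seq, target):
    """Return (index, comparisons); index is -1 if target is absent."""
    comparisons = 0
    for i, value in enumerate(seq):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

data = [7, 3, 9, 1, 5]
print(linear_search(data, 7))   # best case: target is the first element -> 1 comparison
print(linear_search(data, 5))   # target is the last element -> n comparisons
print(linear_search(data, 8))   # worst case: target absent -> n comparisons
```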

2.2 Common algorithm complexity

The following table lists some common orders of growth in big-O notation, each of which is discussed below:

| Big-O | Name | Example in this article |
| --- | --- | --- |
| O(1) | constant | finding the length of a list |
| O(log n) | logarithmic | int_to_str |
| O(n) | linear | add_constant |
| O(n log n) | linear-logarithmic | converting every integer in a list to a string |
| O(n^k) | polynomial (e.g. O(n²), quadratic) | intersect |
| O(2^n) | exponential | calculate |

2.2.1 constant complexity

Constant complexity means that the asymptotic complexity of the algorithm is independent of the input size; for example, finding the length of a list is constant-time. Constant complexity is unrelated to whether the code contains loops: printing "Hello world" 100 times does not depend on the size of any input, so it too is constant complexity.
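A minimal sketch of both points: len() on a Python list does not traverse the list, and a loop with a fixed iteration count does the same work regardless of any input:

```python
def greet_100():
    # 100 iterations no matter what: constant complexity
    lines = []
    for _ in range(100):
        lines.append("Hello world")
    return lines

short_list = [1, 2, 3]
long_list = list(range(1_000_000))
# len() reads a stored size field, so it is O(1) for both lists
print(len(short_list), len(long_list))
print(len(greet_100()))
```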

2.2.2 logarithmic complexity

Logarithmic complexity indicates that the number of basic steps grows as the logarithm of the input size. When we talk about logarithmic complexity, we do not care about the base of the logarithm, because the change-of-base formula turns a logarithm in one base into a constant multiple of a logarithm in any other base:

log_b(n) = log_a(n) / log_a(b)

where a and b are constants. For example, the following code converts a positive integer to a string:

```python
def int_to_str(num):
    digits = "0123456789"
    result = ''
    if num == 0:
        result = '0'
    else:
        while num > 0:
            result = digits[num % 10] + result
            num = num // 10
    return result
```

The above code contains only one loop and calls no other functions, so we just need to find the number of iterations of the loop: the number of integer divisions required before num reaches 0, which is about log₁₀(num). The complexity of int_to_str is therefore O(log n).
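As a quick check, an instrumented variant (a hypothetical helper, int_to_str_counted, not in the original) confirms that the loop runs once per decimal digit, i.e. about log₁₀(num) times:

```python
import math

def int_to_str_counted(num):
    """Same logic as int_to_str, but also reports the loop iteration count."""
    digits = "0123456789"
    result = ''
    iterations = 0
    if num == 0:
        result = '0'
    else:
        while num > 0:
            result = digits[num % 10] + result
            num = num // 10
            iterations += 1
    return result, iterations

for n in (9, 1234, 10 ** 6):
    s, it = int_to_str_counted(n)
    # iteration count equals the number of decimal digits: floor(log10(n)) + 1
    print(n, s, it, math.floor(math.log10(n)) + 1)
```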

2.2.3 Linear complexity

Linear complexity is common with sequence data types such as lists, because algorithms usually need to traverse every element of the sequence being processed. For example, adding the constant 10 to each element of a list:

```python
def add_constant(list_o):
    for i in range(len(list_o)):
        list_o[i] += 10
```

The complexity of this function is linearly related to the length of the list, which is O (n).
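A short, self-contained usage sketch of this linear pattern (the function is restated here so the example runs on its own; it mutates the list in place):

```python
def add_constant(list_o):
    # one pass over the list: O(n), where n == len(list_o)
    for i in range(len(list_o)):
        list_o[i] += 10

values = [1, 2, 3]
add_constant(values)
print(values)  # [11, 12, 13]
```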

2.2.4 Linear logarithmic complexity

Linear-logarithmic complexity is the product of two terms, each of which depends on the size of the input; an example is converting every positive integer in a list to a string. Many practical algorithms have linear-logarithmic complexity.
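A minimal sketch of the example just mentioned: converting one integer m takes on the order of log₁₀(m) digit operations (as shown for int_to_str above), so doing it for all n list elements costs about n · log(m) in total:

```python
def list_to_strs(nums):
    # n conversions, each costing about log10(m) digit operations,
    # for a total on the order of n * log(m)
    return [str(num) for num in nums]

print(list_to_strs([7, 42, 1000]))  # ['7', '42', '1000']
```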

2.2.5 polynomial complexity

Polynomial complexity grows as the k-th power of the input size; the most common case is quadratic complexity, for example finding the intersection of lists list_a and list_b:

```python
def intersect(list_a, list_b):
    # Part I
    temp = []
    for i in list_a:
        for j in list_b:
            if i == j:
                temp.append(i)
                break
    # Part II
    result = []
    for i in temp:
        if i not in result:
            result.append(i)
    return result
```

The complexity of the first part of the intersect function is clearly O(len(list_a)) × O(len(list_b)). The second part removes duplicate elements from the list produced by the first part; although it contains only one loop statement, the test condition if i not in result must check every element of result, so the complexity of the second part is O(len(temp)) × O(len(result)). The lengths of temp and result depend on the smaller of list_a and list_b, so by the asymptotic complexity rules this part can be ignored. Overall, the complexity of intersect is O(n²).
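A short usage sketch (the function is restated so the example runs on its own); note that Part II removes the duplicate produced by the repeated 2 in list_a:

```python
def intersect(list_a, list_b):
    # Part I: nested scan, O(len(list_a) * len(list_b))
    temp = []
    for a in list_a:
        for b in list_b:
            if a == b:
                temp.append(a)
                break
    # Part II: deduplicate the collected matches
    result = []
    for x in temp:
        if x not in result:
            result.append(x)
    return result

print(intersect([1, 2, 2, 3], [2, 3, 4]))  # [2, 3]
```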

2.2.6 exponential complexity

The running time of an exponential-complexity algorithm grows exponentially with the input size. In the following example, shifting 1 left by num bits makes end equal to 2^num, so the loop performs 2^num additions, giving a time complexity of O(2^n):

```python
def calculate(num):
    result = 0
    end = 1 << num          # shifting 1 left by num bits: end == 2 ** num
    for _ in range(end):    # the loop runs 2 ** num times
        result += 1
    return result
```
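Another classic exponential-time example (a standard illustration, not from the original text) is the naive recursive Fibonacci, which spawns two recursive calls per level and so performs on the order of 2^n calls:

```python
def fib_naive(n):
    # each call branches into two more calls: roughly 2**n calls in total,
    # so the time complexity is O(2**n)
    if n <= 2:
        return 1
    return fib_naive(n - 1) + fib_naive(n - 2)

print(fib_naive(10))  # 55
```

This is exactly why the iterative fibo function earlier in the article, which is linear in n, is preferred for large n.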
