2025-03-26 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 11/24 Report --
It is not easy to give a precise definition of the word "algorithm". There are near-synonyms, other nouns that name almost the same thing: "rule", "technique", "procedure", "method", and so on. One can also give examples, such as long multiplication, the column method schoolchildren learn for multiplying two positive integers. But although informal explanations and well-chosen examples give a good sense of what an algorithm is, the idea behind the word went through a long evolution. A satisfactory formal definition was not obtained until the 20th century, and the concept of algorithm is still evolving.
Abacists and algorists
Returning to the example of multiplication, one thing is obvious: how you multiply two numbers depends greatly on how those numbers are represented. To convince yourself, try to multiply the two Roman numerals CXLVII and XXIX without first translating them into the equivalent decimal numbers 147 and 29. It is hard even to see how to begin, and the calculation takes a long time, which helps explain why the surviving material on multiplication in the Roman Empire is so fragmentary. A number system can be "additive", like Roman notation:
C stands for 100, X for 10, and L for 50; but an X placed to the left of L means subtracting X from L, giving 40. V means 5 and I means 1, and two I's placed to the right of V mean adding them to V, giving 7. Putting all of this together yields the Roman representation of 147.
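The add-or-subtract reading described above is mechanical enough to state as code. Here is a minimal sketch in Python (the function name and structure are mine, not the article's): a symbol is subtracted when a larger symbol follows it, and added otherwise.

```python
def roman_to_int(numeral):
    """Evaluate a Roman numeral by the add/subtract rule described above."""
    values = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
    total = 0
    for i, ch in enumerate(numeral):
        v = values[ch]
        # a symbol placed to the left of a larger one is subtracted
        if i + 1 < len(numeral) and values[numeral[i + 1]] > v:
            total -= v
        else:
            total += v
    return total

print(roman_to_int("CXLVII"), roman_to_int("XXIX"))  # the article's two examples
```

Reading a numeral this way is easy; trying to do long multiplication directly on such numerals is exactly the difficulty the text describes.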
A number system can also be positional, like the one we use today. A positional system can use one base, or even several.
For a long time, calculations were done with computing tools called abacuses. These tools represent numbers positionally in some base: with base 10, for example, a marker can stand for 1 unit, or 10, or 100, and so on, depending on the row or column in which it is placed. The four arithmetic operations are performed by moving these markers according to precise rules. The Chinese suanpan is one kind of abacus.
By the 12th century, after Arabic mathematical works had been translated into Latin, the decimal system became popular in Europe. This positional system is particularly well suited to arithmetic and led to many new calculation methods. These methods became commonly known as algorism (algoritmus), as distinct from calculating with markers on the abacus.
Although the digit symbols came from Indian practice and only later became known to the Arabs, these numerals are now called Arabic numerals. The word "algorithm" itself is of Arabic origin: it is a variant of the name of the Arab mathematician al-Khwarizmi, author of one of the oldest known books on algebra, al-Kitab al-mukhtasar fi hisab al-jabr wa'l-muqabala, from whose title the word al-jabr later became "algebra".
Finiteness
We have seen that in the Middle Ages the word "algorithm" referred to computational procedures based on the decimal representation of integers. But in the 18th century, in the encyclopedia edited by d'Alembert, the word was given a broader meaning, covering not only arithmetic but also algebraic methods and other computational procedures, such as the "integral algorithm", the "algorithm of sines", and so on.
The word algorithm gradually came to refer to any computational procedure with precise rules. Finally, with the growing role of computers, the importance of finiteness was fully recognized: the essential requirement is that the process stop after a finite time and deliver a result. So we arrive at the following simple definition:
An algorithm is a finite set of rules that operates on a finite amount of data and produces a result after a finite number of steps.
Note the constant emphasis on finiteness: finiteness in writing down the algorithm, and finiteness in its execution.
The above statement is not a mathematical definition in the classical sense. We will see that it is important to formalize it further, but for now let us be content with this definition and look at some classic examples of algorithms in mathematics.
Three historical examples
The algorithms discussed so far share a feature we have not yet mentioned: iteration, the repeated execution of a simple procedure. To see the importance of iteration, consider once more long multiplication, which works for positive integers of any size. The larger the numbers, the longer the procedure takes, but crucially the method is "the same": if you can multiply two 3-digit numbers, you can multiply two 137-digit numbers without learning any new principle. The reason is that long multiplication is built out of a large number of carefully arranged, much smaller tasks, such as multiplying two single digits using the multiplication table. We will see that iteration plays an important role in the algorithms discussed below.
The Euclidean algorithm: iteration
The Euclidean algorithm is the best and most commonly used example for illustrating the nature of an algorithm. It can be traced back to the 3rd century BC, when Euclid used it to compute the greatest common divisor (gcd) of two positive integers.
When one first meets the greatest common divisor of two positive integers a and b, it is defined as the largest positive integer that is a factor of both a and b. For many purposes, however, it is better to define it as the unique integer d with the following two properties:
First, d is a factor of both a and b;
Second, if c is any other common factor of a and b, then c divides d.
The first two propositions of Book VII of Euclid's Elements give a method for finding d. The first proposition reads: "When two unequal numbers are set out, and the less is continually subtracted in turn from the greater, if the number which is left never measures the one before it until a unit is left, the original numbers will be prime to one another." In other words, if repeated subtraction eventually yields the number 1, then the gcd is 1; in that case the original two numbers are said to be coprime (or relatively prime).
Let us now describe the Euclidean algorithm in general. It rests on the following two observations:
(1) If a = b, then the gcd of a and b is b (or a).
(2) For a > b, d is a common divisor of a and b if and only if it is also a common divisor of a − b and b.
Now suppose we want the gcd of a and b, with a ≥ b. If a = b, observation (1) tells us the answer is b. If not, observation (2) tells us that computing the gcd of a − b and b gives the same answer. Let a_1 be the larger of a − b and b, and b_1 the smaller, and find the gcd of these two numbers. The larger of the new pair, a_1, is smaller than the larger of the original pair, a. We can now repeat the procedure: if a_1 = b_1, then the gcd of a_1 and b_1, and hence of a and b, is b_1; if not, replace a_1 with a_1 − b_1, put the larger of a_1 − b_1 and b_1 first, and continue. This is the method of successive subtraction.
For this procedure to terminate, one more observation is needed, a basic fact about positive integers sometimes called the well-ordering principle:
A strictly decreasing sequence of positive integers a_0 > a_1 > a_2 > ... must be finite.
Because the iteration above produces exactly such a strictly decreasing sequence, it must eventually stop; that is, at some point a_k = b_k, and this common value is the gcd of a and b.
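The successive-subtraction procedure just described can be sketched in a few lines of Python (the code and names are mine, not the article's):

```python
def gcd_by_subtraction(a, b):
    """Euclid's Elements VII.1-2: subtract the smaller number from the
    larger until the two are equal; that common value is the gcd."""
    while a != b:
        if a < b:
            a, b = b, a  # keep the larger number first
        a -= b
    return a

print(gcd_by_subtraction(147, 29))  # 147 and 29 are coprime, so this ends at 1
print(gcd_by_subtraction(48, 36))
```

The loop terminates by the well-ordering principle: the larger member of the pair strictly decreases at every step.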
[Figure: flow chart of the Euclidean algorithm]
Euclidean division
The Euclidean algorithm is usually stated in a slightly different way, using a more elaborate operation called Euclidean division (division with remainder), which greatly reduces the number of steps; this version is also known as the method of successive division. The basic fact behind it is that if a and b are two positive integers, then there exist unique integers q and r such that

a = qb + r, with 0 ≤ r < b.
The number q is called the quotient and r the remainder. Observations (1) and (2) above are now replaced by:
If r = 0, then the gcd of a and b is b.
If r ≠ 0, then the gcd of a and b is the same as the gcd of b and r.
This time, in the first step, replace (a, b) with (b, r). If r ≠ 0, then in the second step replace (b, r) with (r, r_1), where r_1 is the remainder of b divided by r. Since b > r > r_1 > r_2 > ... ≥ 0, the well-ordering principle again shows that the procedure must stop after finitely many steps, and the last nonzero remainder is the gcd of a and b.
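In code, replacing subtraction by division with remainder gives the familiar form of the algorithm (a Python sketch, not from the article; the modulo operator % supplies the remainder r):

```python
def gcd_by_division(a, b):
    """Euclidean algorithm with remainders: replace (a, b) by (b, r),
    where r = a % b, until the remainder is zero."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd_by_division(103438, 37))
print(gcd_by_division(48, 36))
```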
It is not hard to see that the two methods are equivalent as far as the gcd is concerned, but as algorithms they differ greatly. Suppose, for example, that a = 103438 and b = 37. With successive subtraction you subtract 37 from 103438 again and again until the difference left is less than 37. That difference equals the remainder of 103438 divided by 37, which the second method produces in a single step. This is the reason for preferring the second method: finding the remainder by repeated subtraction is very inefficient, and efficiency matters greatly in practice. The second method yields a polynomial-time algorithm, while the first requires exponential time.
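The efficiency gap on the example a = 103438, b = 37 can be made concrete by counting steps in each variant (a sketch of my own; each loop iteration, one subtraction or one division, counts as a step):

```python
def subtraction_steps(a, b):
    """Number of subtractions performed by the successive-subtraction gcd."""
    steps = 0
    while a != b:
        if a < b:
            a, b = b, a
        a -= b
        steps += 1
    return steps

def division_steps(a, b):
    """Number of divisions performed by the remainder-based gcd."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

print(subtraction_steps(103438, 37))  # thousands of subtractions
print(division_steps(103438, 37))     # a handful of divisions
```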
Generalizations
The Euclidean algorithm extends to many other settings where addition, subtraction, and multiplication make sense. For example, there is a variant for the ring of Gaussian integers, the complex numbers of the form a + bi where a and b are integers. It also works in the ring of polynomials with real coefficients (or, more generally, with coefficients in any field). What is required is an analogue of division with remainder; given that, the algorithm runs essentially as it does for positive integers. For polynomials the relevant statement is: let A and B be two polynomials, with B not the zero polynomial; then there exist unique polynomials Q and R such that A = QB + R, where either R is the zero polynomial or the degree of R is less than the degree of B.
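To see how the same algorithm runs once division with remainder is available, here is a sketch of a polynomial gcd in Python, with polynomials represented as coefficient lists, highest degree first (the representation, names, and numeric tolerances are mine; a robust implementation would handle rounding more carefully):

```python
def poly_divmod(a, b):
    """Divide polynomial a by b (coefficient lists, highest degree first),
    returning quotient and remainder with deg R < deg B (or R zero)."""
    a = [float(c) for c in a]
    q = [0.0] * max(len(a) - len(b) + 1, 0)
    for i in range(len(q)):
        coef = a[i] / b[0]
        q[i] = coef
        for j, bc in enumerate(b):
            a[i + j] -= coef * bc
    r = a[len(q):]
    while len(r) > 1 and abs(r[0]) < 1e-9:
        r = r[1:]  # strip leading zero coefficients
    return q, r

def poly_gcd(a, b):
    """Euclidean algorithm on polynomials; the result is made monic."""
    while len(b) > 1 or abs(b[0]) > 1e-9:
        _, r = poly_divmod(a, b)
        a, b = b, r
    return [c / a[0] for c in a]

# gcd(x^3 - 1, x^2 - 1) = x - 1
print(poly_gcd([1, 0, 0, -1], [1, 0, -1]))
```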
As Euclid already noted in the Elements, the procedure can also be carried out for a pair of magnitudes a and b that are not necessarily integers. It is easy to verify that the procedure stops if and only if the ratio a/b is rational. This point of view leads to the notion of continued fractions, which were not studied systematically before the 17th century, though their roots can be traced back to Archimedes.
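The connection with continued fractions can be sketched in Python (code and names mine): the successive quotients produced by the Euclidean algorithm are exactly the terms of the continued-fraction expansion of a/b, and evaluating those terms recovers the original ratio.

```python
from fractions import Fraction

def continued_fraction(a, b):
    """Quotients of the Euclidean algorithm = continued-fraction terms of a/b."""
    terms = []
    while b != 0:
        terms.append(a // b)
        a, b = b, a % b
    return terms

def evaluate(terms):
    """Rebuild the fraction from its continued-fraction terms."""
    value = Fraction(terms[-1])
    for t in reversed(terms[:-1]):
        value = t + 1 / value
    return value

cf = continued_fraction(103438, 37)
print(cf, evaluate(cf))  # evaluating the terms recovers 103438/37
```

For an irrational ratio the list of terms would never end, which is the non-terminating case mentioned above.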
Archimedes's computation of π
The ratio of the circumference of a circle to its diameter is a constant, denoted π since the 18th century. Let us see how Archimedes obtained the classic approximation 22/7 for this ratio in the 3rd century BC. If you construct a regular polygon inscribed in the circle (all its vertices on the circle) and one circumscribed about it (all its sides tangent to the circle), and compute the perimeters of these polygons, you obtain lower and upper bounds for π, because the circumference of the circle is larger than the perimeter of any inscribed polygon and smaller than that of any circumscribed polygon. Archimedes started from a regular hexagon and then repeatedly doubled the number of sides of the polygons to get sharper and sharper bounds. By the time he reached 96 sides, he had obtained

3 + 10/71 < π < 3 + 1/7.
Iteration is clearly involved in this approximation of π. But is it right to call it an algorithm? Strictly speaking, no: however many sides the polygon has, we only obtain approximations of π, so the process never terminates. What we do get is an algorithm that computes π approximately, to any desired accuracy: if we want π correct to, say, a few dozen digits, the algorithm delivers that approximation after finitely many steps. The important point is that the process converges, that is, the values produced by the iteration come arbitrarily close to π. The geometric origin of the method can be used to prove this convergence. In 1609, using polygons with 2^62 sides (essentially by Archimedes's method), the German-born mathematician Ludolph van Ceulen obtained π correct to 35 decimal places.
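Archimedes's doubling step has a well-known numerical form, which the article does not spell out: if a_n and b_n are the semi-perimeters of the circumscribed and inscribed regular n-gons of a unit circle, then a_2n is the harmonic mean of a_n and b_n, and b_2n is the geometric mean of a_2n and b_n. A Python sketch of the run from the hexagon to the 96-gon, under those standard recurrences:

```python
import math

n = 6
a = 2 * math.sqrt(3)  # semi-perimeter of the circumscribed regular hexagon
b = 3.0               # semi-perimeter of the inscribed regular hexagon
while n < 96:
    a = 2 * a * b / (a + b)  # circumscribed 2n-gon: harmonic mean
    b = math.sqrt(a * b)     # inscribed 2n-gon: geometric mean
    n *= 2

print(b, "< pi <", a)  # at n = 96 these bounds lie inside 223/71 and 22/7
```

Each doubling roughly squares nothing; it simply tightens the bracket, which is why van Ceulen needed astronomically many sides for 35 decimals.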
There is, however, an obvious difference between this algorithm for approximating π and Euclid's algorithm for computing the gcd of two positive integers. Algorithms like Euclid's are often called discrete algorithms, in contrast with the numerical algorithms used to compute non-integer values.
The Newton-Raphson method: a recursive formula
Around 1670, Newton proposed a method for finding roots of equations, and he explained it using the equation x^3 − 2x − 5 = 0. He begins with the observation that the root x is approximately 2. So he writes x = 2 + p, substitutes 2 + p for x in the original equation, and obtains an equation for p. The new equation works out to be

p^3 + 6p^2 + 10p − 1 = 0.
Since x is close to 2, p is small, so Newton drops the terms p^3 and 6p^2 in order to estimate p. This leaves the equation 10p − 1 = 0, that is, p = 1/10. It is not an exact solution, of course, but it gives Newton a new and better approximation to the root: x ≈ 2.1. Newton then repeats the process: substituting x = 2.1 + q into the original equation gives an equation for q, which he again solves approximately to refine the approximation. The estimate for q is −0.0054, so the next approximation of x is 2.0946.
Still, how can we be sure that this process converges to x? Let us look at the method more closely.
Tangents and convergence
Newton's method can be explained geometrically using the graph of the function f, although Newton himself did not do so. Every root x of f(x) = 0 corresponds to an intersection of the curve y = f(x) with the x-axis. If we start with an approximation a of the root x and, as above, let p be x − a, then substituting a + p for x gives a new function g(p); in effect the origin is moved to the point (a, 0). Dropping all higher powers of p and keeping only the constant and linear terms yields the best linear approximation to g. Geometrically, this is the tangent to g at the point (0, g(0)).
Thus the approximate value of p is the abscissa of the intersection of this tangent with the x-axis. Adding a to this abscissa, that is, moving the origin back to (0, 0), makes a + p a new approximation to the root of f. This is why Newton's method is also called the tangent method.
[Figure: Newton's method]
One can see from the figure above that if the intersection of the curve y = f(x) with the x-axis lies between a and the point where the tangent at (a, f(a)) crosses the x-axis (the point with abscissa a + p, the new approximation of the root), then the next approximation (that is, a + p + q) must be better than the previous one (here a + p, with the starting value a counted as the zeroth approximation).
Going back to Newton's example, we can see that his initial choice a = 2 does not satisfy this condition, but from the next approximation 2.1 onward it holds for all subsequent approximations. Geometrically, the situation is favorable if the point (a, f(a)) lies above the x-axis and the curve y = f(x) crosses the x-axis in a convex part, or if (a, f(a)) lies below the x-axis and the curve crosses the x-axis in a concave part.
The choice of the initial approximation (the zeroth approximation) is clearly very important and raises subtle, unexpected questions. This becomes even clearer if we consider complex roots of complex polynomials, a broader setting to which Newton's method adapts easily. Let z be a complex root of a complex polynomial and z_0 an initial approximation; Newton's method then produces a sequence z_0, z_1, z_2, ..., which may or may not converge to z. We define the basin of attraction of the root z as the set of initial approximations z_0 for which the resulting sequence does converge to z, and denote it A(z). How can A(z) be determined?
The first to ask this question was Cayley, in 1879. He noticed that the problem is easy for quadratic polynomials but hard for degree 3 and above. For example, the basins of attraction of the roots ±1 of the polynomial z^2 − 1 are the two half-planes bounded by the imaginary axis of the complex plane, but the corresponding basins of the three roots of z^3 − 1 are extremely complicated sets. These sets were described by Julia in 1918 and are now known to be fractal.
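One can sample the basins A(z) numerically. The sketch below (my own construction, not from the article) iterates Newton's map for z^3 − 1 from a grid of starting points and records which cube root of unity each point reaches; all three basins turn out to attract a sizeable share of the plane, interleaved in the complicated way Julia described.

```python
import cmath

roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]  # cube roots of 1

def basin(z, iters=50):
    """Index of the root of z^3 - 1 that Newton's map drives z to, else None."""
    for _ in range(iters):
        if abs(z) < 1e-9:
            return None  # derivative 3z^2 is (nearly) zero; give up
        z = z - (z**3 - 1) / (3 * z**2)
    for k, r in enumerate(roots):
        if abs(z - r) < 1e-6:
            return k
    return None  # undecided after 50 steps: near the fractal boundary

counts = [0, 0, 0]
for i in range(-40, 41):
    for j in range(-40, 41):
        k = basin(complex(i * 0.05, j * 0.05))
        if k is not None:
            counts[k] += 1
print(counts)  # every basin captures many grid points
```

Coloring each grid point by its basin index is exactly how pictures of the Newton fractal for z^3 − 1 are produced.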
A recursive formula
In Newton's method a new equation is generated at each stage. But Raphson pointed out that this is not actually necessary: for a particular example he gave a single formula that can be used at every step. His observation applies in general, yielding a general formula usable in every case, and the formula is easily obtained from the tangent interpretation. Indeed, the tangent to the curve y = f(x) at the point with abscissa a has equation

y − f(a) = f′(a)(x − a).
The abscissa of its intersection with the x-axis is a − f(a)/f′(a). The Newton-Raphson method we speak of today refers to this formula: starting from an initial approximation x_0 = a, the recursive formula gives

x_{n+1} = x_n − f(x_n)/f′(x_n).
In this way we obtain a sequence of approximations; in the complex case it is the sequence z_0, z_1, z_2, ... mentioned above.
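The recursive formula turns Newton's worked example into a few lines of Python (a sketch; the function name is mine). Starting from x_0 = 2 for f(x) = x^3 − 2x − 5, it reproduces the approximations 2.1 and 2.0946 computed above:

```python
def newton_raphson(f, fprime, x0, steps):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n), returning all approximations."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x - f(x) / fprime(x))
    return xs

f = lambda x: x**3 - 2*x - 5
fprime = lambda x: 3*x**2 - 2
seq = newton_raphson(f, fprime, 2.0, 3)
print(seq)  # [2.0, 2.1, 2.0945..., ...]
```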
As an example, consider the function f(x) = x^2 − c. Newton's method then produces a sequence of approximations to the square root √c of c, and the recursive formula becomes

x_{n+1} = (x_n + c/x_n)/2,
obtained by substituting x^2 − c for f in the general formula above. This method of approximating square roots was already known to Heron of Alexandria in the first century AD.
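Heron's rule is worth seeing in code. A minimal Python sketch (names and stopping tolerance mine): each step averages x with c/x, which is exactly x − (x^2 − c)/(2x).

```python
def heron_sqrt(c, x0=1.0, tol=1e-12):
    """Approximate sqrt(c) by the Newton/Heron iteration x -> (x + c/x)/2."""
    x = x0
    while abs(x * x - c) > tol:
        x = (x + c / x) / 2
    return x

print(heron_sqrt(2.0))  # converges rapidly toward 1.41421356...
```

Convergence is quadratic: the number of correct digits roughly doubles at each step, so a handful of iterations already exhausts double precision.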
This article comes from the WeChat public account Lao Hu Shuo Science (ID: LaohuSci). Author: Lao Hu.