Case Analysis of Python Deep Learning algorithm


This article walks through several small Python examples that illustrate the building blocks behind deep learning algorithms: least squares, gradient descent, linear regression, and the perceptron. The methods introduced here are simple, fast, and practical, so let's get started.

Least squares method

All deep learning algorithms start with the following mathematical formula (which I have converted to Python code):

Python

# y = mx + b
# m is slope, b is y-intercept
def compute_error_for_line_given_points(b, m, coordinates):
    totalError = 0
    for i in range(0, len(coordinates)):
        x = coordinates[i][0]
        y = coordinates[i][1]
        totalError += (y - (m * x + b)) ** 2
    return totalError / float(len(coordinates))

# example
compute_error_for_line_given_points(1, 2, [[3, 6], [6, 9], [12, 18]])
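For the three example points with b = 1 and m = 2, the squared errors are 1, 16 and 49, so the call above should return their mean, 22.0. Here is a quick sanity check (my own addition, not part of the original article):

Python

# quick sanity check of the function above (not in the original article)
error = compute_error_for_line_given_points(1, 2, [[3, 6], [6, 9], [12, 18]])
print(error)  # expected output: 22.0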

The least squares method was first published by Adrien-Marie Legendre in 1805 (Legendre, 1805). The Parisian mathematician was also known for his work on measurement. He was obsessed with predicting the future positions of comets and kept searching for an algorithm that could compute a comet's trajectory from its historical positional data.

He tried many approaches through repeated trial and error and finally arrived at one that matched the observed results. Legendre's procedure was to first predict the comet's future position, then compute the squared error of that prediction, and finally adjust the prediction to reduce the sum of squared errors. This is the basic idea behind linear regression.

Readers can run the code above in a Jupyter notebook to get a feel for the algorithm. Here m is the slope coefficient, b is the constant term (intercept) of the prediction, and coordinates are the observed positions of the comet. The goal is to find the combination of m and b that makes the error as small as possible.

This is also the core idea of deep learning: given input values and expected output values, find the correlation between the two.

Gradient descent

Legendre's approach of reducing the error by manual trial and error is very time-consuming. A century later, the Dutch Nobel laureate Peter Debye formalized a way to simplify this process (Debye, 1909).

Suppose Legendre's algorithm has just one parameter to worry about; call it X. The Y axis represents the error value for each X, and Legendre is looking for the X that minimizes that error. Plotting error against X, we would see that the error Y is minimized when X = 1.1.

Peter Debye noticed that the slope to the left of the lowest point is negative, while the slope to the right is positive. Therefore, if you know the slope at any given X, you can move toward the minimum of Y.

This is the basic idea of the gradient descent algorithm, which is used in almost every deep learning model.

To implement this algorithm, assume the error function is Error = x^5 - 2x^3 - 2. To get the slope at any given X, we take its derivative, which is 5x^4 - 6x^2.
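As a quick check (a small sketch of my own that assumes sympy is available; it is not part of the original article), the derivative can be verified symbolically:

Python

# optional symbolic check of the derivative (assumes sympy is installed)
import sympy as sp

x = sp.symbols("x")
error = x**5 - 2 * x**3 - 2
print(sp.diff(error, x))  # prints 5*x**4 - 6*x**2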

If you need to review derivatives, the Khan Academy videos on the topic are a good refresher.

Let's implement Debye's method in Python:

Python

current_x = 0.5  # the algorithm starts at x = 0.5
learning_rate = 0.01  # step size multiplier
num_iterations = 60  # the number of times to train the function

# the derivative of the error function (x**4 means x to the power of 4, i.e. x^4)
def slope_at_given_x_value(x):
    return 5 * x**4 - 6 * x**2

# move x to the right or left depending on the slope of the error function
for i in range(num_iterations):
    previous_x = current_x
    current_x += -learning_rate * slope_at_given_x_value(previous_x)
    print(previous_x)

print("The local minimum occurs at %f" % current_x)

The trick here is learning_rate. We approach the lowest point by stepping in the direction opposite to the slope. Moreover, the closer we get to the lowest point, the smaller the slope becomes, so as the slope approaches 0, each step gets smaller and smaller.

num_iterations is the number of iterations you expect to need before reaching the minimum. Tweaking this parameter (and the learning rate) is a good way to build intuition for how gradient descent behaves; one way to experiment is sketched below.
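For example (a hypothetical experiment of my own, not from the original article), you can wrap the update loop in a function that reuses slope_at_given_x_value and compare a few learning rates over the same number of iterations:

Python

# hypothetical experiment: compare several learning rates over 60 iterations each
def find_minimum(start_x, learning_rate, num_iterations):
    current_x = start_x
    for i in range(num_iterations):
        current_x += -learning_rate * slope_at_given_x_value(current_x)
    return current_x

for lr in [0.001, 0.01, 0.05]:
    print(lr, find_minimum(0.5, lr, 60))  # larger rates approach the minimum near x = 1.1 faster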

Linear regression

The least squares method combined with gradient descent gives a complete linear regression procedure. In the 1950s and 1960s, a group of experimental economists implemented these ideas on early computers. The programs were encoded on physical punch cards, truly hand-made software. Preparing the punch cards took several days, and a single regression run on the computer could take up to 24 hours.

Here is a linear regression example written in Python (no punch cards required):

Python

# price of wheat per kg and the average price of bread
wheat_and_bread = [[0.5, 5], [0.6, 5.5], [0.8, 6], [1.1, 6.8], [1.4, 7]]

def step_gradient(b_current, m_current, points, learningRate):
    b_gradient = 0
    m_gradient = 0
    N = float(len(points))
    for i in range(0, len(points)):
        x = points[i][0]
        y = points[i][1]
        b_gradient += -(2 / N) * (y - ((m_current * x) + b_current))
        m_gradient += -(2 / N) * x * (y - ((m_current * x) + b_current))
    new_b = b_current - (learningRate * b_gradient)
    new_m = m_current - (learningRate * m_gradient)
    return [new_b, new_m]

def gradient_descent_runner(points, starting_b, starting_m, learning_rate, num_iterations):
    b = starting_b
    m = starting_m
    for i in range(num_iterations):
        b, m = step_gradient(b, m, points, learning_rate)
    return [b, m]

gradient_descent_runner(wheat_and_bread, 1, 1, 0.01, 100)

Linear regression itself does not introduce anything new; the interesting part is how the gradient descent algorithm is applied to the error function. Run the code, and experiment with a linear regression simulator if you have one handy, to deepen your understanding.
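One concrete check (a small sketch of my own, reusing compute_error_for_line_given_points from the least squares section) is to compare the error before and after running gradient descent:

Python

# sketch: error at the starting guess versus after 100 iterations of gradient descent
b, m = gradient_descent_runner(wheat_and_bread, 1, 1, 0.01, 100)
print(compute_error_for_line_given_points(1, 1, wheat_and_bread))  # error before training
print(compute_error_for_line_given_points(b, m, wheat_and_bread))  # lower error after training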

Perceptron

Next, let's meet Frank Rosenblatt, a man who dissected rat brains during the day and searched for signs of extraterrestrial life at night. In 1958 he built a machine that mimicked a neuron (Rosenblatt, 1958) and made headlines in the New York Times: "New Navy Device Learns By Doing."

If you showed Rosenblatt's machine 50 pairs of images, one marked on the left and one marked on the right, it could learn to tell the two kinds apart (by the location of the mark) without being explicitly programmed. The public was stunned by a machine that might truly be able to learn.

Each training cycle starts with the input data. Every input is given an initial random weight, the weighted inputs are summed, and if the sum is negative the output is 0; otherwise the output is 1.

If the prediction is correct, the weights are left unchanged for that cycle. If the prediction is wrong, the weights are adjusted by multiplying the error by the learning rate.

A classic exercise is to run the perceptron on OR logic, as sketched below.
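Here is a minimal sketch of my own (not the original article's code) that follows the update rule described above: random initial weights, a 0/1 threshold on the weighted sum, and weight updates equal to the error times the learning rate:

Python

from random import random

# minimal perceptron trained on the OR truth table (an illustration, not the original article's code)
training_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights = [random(), random(), random()]  # one weight per input plus a bias weight
learning_rate = 0.2

def predict(inputs):
    # weighted sum of the inputs plus the bias; output 1 unless the sum is negative
    total = weights[0] * inputs[0] + weights[1] * inputs[1] + weights[2]
    return 1 if total >= 0 else 0

for epoch in range(30):
    for inputs, expected in training_data:
        error = expected - predict(inputs)  # 0 when correct, otherwise +1 or -1
        weights[0] += learning_rate * error * inputs[0]
        weights[1] += learning_rate * error * inputs[1]
        weights[2] += learning_rate * error  # the bias input is always 1

for inputs, expected in training_data:
    print(inputs, predict(inputs), expected)  # predictions should match the OR outputs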
