This article introduces the basic principles behind neural networks in Python deep learning, focusing on gradient descent. It is intended as a reference for interested readers; I hope you find it useful.
Neural network
Gradient descent method
Before examining the gradient descent algorithm in detail, let's review some related concepts.
1. Step size (learning rate): the step size determines how far we move along the negative gradient direction at each iteration of gradient descent. In the common analogy of walking downhill, it is the length of the stride we take in the steepest downhill direction from our current position.
2. Feature: the input part of a sample. For example, given two single-feature samples $(x^{(0)}, y^{(0)})$ and $(x^{(1)}, y^{(1)})$, the feature of the first sample is $x^{(0)}$ and its output is $y^{(0)}$.
3. Hypothesis function: in supervised learning, the function used to fit the input samples is written $h_\theta(x)$. For example, for $m$ single-feature samples $(x^{(i)}, y^{(i)})$ with $i = 1, 2, \dots, m$, the fitting function can take the form $h_\theta(x) = \theta_0 + \theta_1 x$.
4. Loss function: to evaluate how well the model fits, a loss function is used to measure the degree of fit. Minimizing the loss function means the fit is as good as possible, and the corresponding model parameters are the optimal parameters. In linear regression, the loss function is usually the squared difference between the sample output and the hypothesis function. For example, for $m$ samples $(x_i, y_i)$ with $i = 1, 2, \dots, m$, using linear regression, the loss function is:
$$J(\theta_0, \theta_1) = \sum_{i=1}^{m} \left( h_\theta(x_i) - y_i \right)^2$$
where $x_i$ denotes the feature of the $i$-th sample, $y_i$ its corresponding output, and $h_\theta(x_i)$ is the hypothesis function. These four concepts are put together in the sketch below.
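The following is a minimal gradient-descent sketch for single-feature linear regression, assuming NumPy is available. The data is synthetic and generated only for illustration, and the names `hypothesis` and `loss`, the learning rate, and the iteration count are arbitrary choices made for this example, not something taken from the original article.

```python
import numpy as np

# Synthetic single-feature data: m samples (x_i, y_i).
# The underlying relationship y = 4 + 3x plus noise is assumed only for illustration.
rng = np.random.default_rng(0)
m = 100
x = rng.uniform(0.0, 2.0, size=m)
y = 4.0 + 3.0 * x + rng.normal(0.0, 0.5, size=m)

def hypothesis(theta0, theta1, x):
    """Hypothesis function h_theta(x) = theta0 + theta1 * x (point 3)."""
    return theta0 + theta1 * x

def loss(theta0, theta1, x, y):
    """Loss J(theta0, theta1) = sum_i (h_theta(x_i) - y_i)^2 (point 4)."""
    return np.sum((hypothesis(theta0, theta1, x) - y) ** 2)

# Gradient descent: repeatedly step in the negative gradient direction,
# scaled by the learning rate (the "step size" from point 1).
theta0, theta1 = 0.0, 0.0
learning_rate = 0.001
for _ in range(1000):
    error = hypothesis(theta0, theta1, x) - y
    grad_theta0 = 2.0 * np.sum(error)       # dJ/dtheta0
    grad_theta1 = 2.0 * np.sum(error * x)   # dJ/dtheta1
    theta0 -= learning_rate * grad_theta0
    theta1 -= learning_rate * grad_theta1

print(f"theta0 ≈ {theta0:.3f}, theta1 ≈ {theta1:.3f}, "
      f"loss ≈ {loss(theta0, theta1, x, y):.3f}")
```

If the learning rate is too large, the updates overshoot and the loss diverges; if it is too small, convergence is slow. That is exactly the trade-off described in point 1.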
Thank you for reading. I hope this overview of the basic principles of Python deep learning neural networks has been helpful.