
How to use Python for Deep Learning

2025-03-26 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

In this issue, the editor brings you an introduction to how to use Python for deep learning. The article is rich in content and analyzed from a professional point of view. I hope you can get something out of it after reading.

The core idea behind deep learning is that artificial intelligence should draw inspiration from the human brain. Let's use a small example to introduce deep learning from the ground up.

Human brain simulation

The core idea behind deep learning is that artificial intelligence should draw inspiration from the human brain. This view gives rise to the term "neural network". The human brain contains billions of neurons with tens of thousands of connections between them. Deep learning algorithms resemble the human brain in many respects, because both the human brain and deep learning models consist of a huge number of computational units (neurons) that are not very smart on their own, but become smart when they interact with each other.

I think people need to understand that deep learning is making a lot of things better behind the scenes. Deep learning is already used in Google Search and Image Search; you can search for a word like "hug" and get the corresponding images. - Geoffrey Hinton

Neuron

The basic building block of a neural network is the artificial neuron, which imitates a neuron of the human brain. These neurons are simple but powerful computing units that take weighted input signals and use an activation function to generate an output signal. The neurons are arranged in the several layers of a neural network.


[Figure: an artificial neuron with inputs, weights, an activation function, and an output.]

How does an artificial neural network work?

Deep learning consists of artificial neural networks, which simulate similar networks in the human brain. As the data passes through the artificial network, each layer processes one aspect of the data, filtering out outliers, identifying familiar entities, and producing final output.

Input layer: this layer consists of neurons that only receive the input and pass it on to the other layers. The number of neurons in the input layer should equal the number of attributes (features) in the dataset.

Output layer: this layer produces the predictions; its form depends mainly on the type of model you build.

Hidden layers: these sit between the input layer and the output layer, and their number depends on the model type. A hidden layer contains a large number of neurons, which transform the input before passing it on. As the network is trained, the weights are updated so that the network becomes more predictive.

The weight of neurons

Weight refers to the strength or magnitude of the connection between two neurons. If you are familiar with linear regression, you can compare the input weights to the coefficients in a regression equation. Weights are usually initialized to small random values, such as values in the range 0 to 1.
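This initialization can be sketched in a couple of lines of NumPy (the layer shape here is hypothetical, chosen just for illustration):

```python
import numpy as np

np.random.seed(0)  # make the random draw reproducible

# Hypothetical layer: 3 inputs feeding 2 neurons, so a 3x2 weight matrix.
# np.random.rand draws uniformly from [0, 1), i.e. small random values.
weights = np.random.rand(3, 2)
print(weights.shape)  # (3, 2)
```

Modern frameworks default to smarter schemes (for example, Glorot initialization), but small random values are enough to illustrate the idea.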

Feedforward deep networks

Feedforward supervised neural networks were among the first and most successful learning algorithms. Such a network can also be called a deep network, a multilayer perceptron (MLP), or simply a neural network; the original architecture has a single hidden layer. Each neuron is connected to other neurons by a certain weight.

The network processes the input information forward, activating neurons layer by layer, and finally produces the output value. This is called the forward pass.

[Figure: a feedforward network with an input layer, a hidden layer, and an output layer.]

Activation function

The activation function maps the weighted sum of a neuron's inputs to its output. It is called an activation (or transfer) function because it governs whether the neuron activates and how strong the output signal is.

Expressed mathematically:

output = f( Σ w_i * x_i + b )

where x_i are the inputs, w_i the weights, b the bias, and f the activation function.

There are many activation functions; the rectified linear unit (ReLU), hyperbolic tangent, and softplus functions are among the most commonly used.

A quick reference for the common activation functions is as follows:
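The three functions named above can be sketched with NumPy (a minimal illustration, not a framework implementation):

```python
import numpy as np

def relu(x):
    # Rectified linear unit: max(0, x), element-wise
    return np.maximum(0, x)

def softplus(x):
    # Smooth approximation of ReLU: ln(1 + e^x)
    return np.log1p(np.exp(x))

x = np.array([-2.0, 0.0, 3.0])
print(relu(x))      # [0. 0. 3.]
print(np.tanh(x))   # hyperbolic tangent, squashes values into (-1, 1)
print(softplus(x))  # always positive, approaches relu(x) for large x
```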

Back propagation

In the network, we compare the predicted value with the expected output and compute the error with a function. The error is then propagated back through the network, one layer at a time, and the weights are updated according to their contribution to the error. This clever piece of mathematics is the backpropagation algorithm. The step is repeated for all samples in the training data; one full update pass over the entire training dataset is called an epoch. A network can be trained for tens, hundreds, or thousands of epochs.

[Figure: the prediction error being propagated back through the network.]

Cost function and gradient descent

The cost function measures how good the neural network is for given training inputs and expected outputs. It may depend on attributes such as the weights and biases.

The cost function is a single value, not a vector, because it rates the performance of the neural network as a whole. When using the gradient descent optimization algorithm, the weights are updated incrementally after each epoch.

A common cost function

Expressed mathematically as the sum of squared differences:

C = Σ (target - output)^2


The size and direction of the weight update are computed by taking a step in the direction opposite to the cost gradient:

Δw = -η * ∂C/∂w

where η is the learning rate and Δw is a vector containing the update for each weight coefficient w; each component is computed from the partial derivative of the cost with respect to that weight.

The chart below considers the cost function for a single weight coefficient.

[Figure: the cost curve, with gradient steps leading from the initial weight down to the global cost minimum.]

Gradient descent is repeated until the derivative reaches the minimum error, and the size of each step depends on the steepness of the slope (the gradient).
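A minimal sketch of gradient descent on a single weight, assuming a squared-error cost and a made-up data point (both are illustrative assumptions, not from the article):

```python
# Cost for one data point: C(w) = (target - w * x)^2
# Gradient:               dC/dw = -2 * x * (target - w * x)
x, target = 3.0, 9.0   # hypothetical data; the ideal weight is 3
w, eta = 0.0, 0.01     # initial weight and learning rate (eta)

for _ in range(200):
    grad = -2 * x * (target - w * x)
    w -= eta * grad    # step in the direction opposite to the gradient

print(round(w, 3))  # 3.0 -- converges to the global cost minimum
```

Each iteration moves the weight against the gradient; as the slope flattens near the minimum, the steps shrink automatically.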

Multilayer perceptron (forward propagation)

This kind of network consists of multiple layers of neurons, usually interconnected in a feedforward way. Each neuron in one layer can be directly connected to the neurons in the subsequent layer. In many applications, the units of these networks use a sigmoid or rectified linear (ReLU) function as the activation function.

Now consider the problem of finding the number of transactions, given the number of accounts and the number of family members as input.

To solve this problem, first we need to create a forward propagation neural network. Our input layer takes the number of family members and the number of accounts, there is one hidden layer, and the output layer gives the number of transactions.

Take the given weights from the input layer through to the output layer, with the inputs: number of family members = 2 and number of accounts = 3.

You will now use forward propagation to calculate the values of the hidden layer (i and j) and the output layer (k) with the following steps.

Steps:

1. Multiply-add method.

2. Dot product (inputs * weights).

3. Forward propagation of one data point at a time.

4. The output is the prediction for that data point.

The value of i is calculated from the input values and the weights of the neurons connected to it.

i = (2 * 1) + (3 * 1) → i = 5

Similarly, j = (2 * -1) + (3 * 1) → j = 1

k = (5 * 2) + (1 * -1) → k = 9

Solving the multilayer perceptron problem in Python
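A minimal NumPy sketch of the forward pass above (the weight values are the ones assumed in the worked example):

```python
import numpy as np

inputs = np.array([2, 3])          # family members = 2, accounts = 3

# Columns are the weights into hidden neurons i and j.
w_hidden = np.array([[1, -1],
                     [1,  1]])
w_output = np.array([2, -1])       # weights from (i, j) to output k

hidden = inputs @ w_hidden         # dot product: [i, j]
output = hidden @ w_output         # forward pass to k
print(hidden, output)              # [5 1] 9
```

The `@` operator computes the same multiply-add (dot product) described in the steps above, one layer at a time.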

Using an activation function

In order for the neural network to reach its maximum predictive power, we need to apply an activation function in the hidden layer to capture nonlinearity. We apply the activation function by substituting the hidden-layer values into its equation.

Here we use rectified linear activation (ReLU):
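Applied to the same toy network, a sketch with ReLU in the hidden layer (here both hidden values are already positive, so the prediction happens to be unchanged):

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)        # rectified linear activation

inputs = np.array([2, 3])
w_hidden = np.array([[1, -1],
                     [1,  1]])
w_output = np.array([2, -1])

hidden = relu(inputs @ w_hidden)   # ReLU applied in the hidden layer
output = hidden @ w_output
print(output)                      # 9
```

With a negative hidden value, ReLU would clamp it to zero and the prediction would differ; that clamping is what introduces nonlinearity.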

Developing the first Neural Network with Keras

About Keras:

Keras is a high-level neural network API, written in Python, that can run on top of TensorFlow, CNTK, or Theano.

Install Keras on your device with pip by running: pip install keras

Steps to execute a deep learning program in Keras:

1. Load the data

2. Define the model

3. Compile the model

4. Fit the model

5. Evaluate the model

Develop Keras model

A fully connected layer is represented by Dense. We specify the number of neurons in the layer as the first argument, the initialization method with the init argument, and the activation function with the activation argument. Once the model is defined, we can compile it. Compiling uses the efficient numerical libraries of the backend, which can be Theano or TensorFlow. At this point the model is defined and compiled, ready for efficient computation. Now you can run the model on the PIMA data: call the fit() function on the model to train (fit) it on the data.

Let's start with the program in Keras.
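A minimal sketch of the five steps in Keras. The original uses the PIMA diabetes CSV; since that file is not bundled here, this version substitutes a synthetic stand-in with the same shape (8 features, binary label), which is an assumption for illustration only. It also assumes the TensorFlow backend.

```python
import numpy as np
from tensorflow import keras  # assumes the TensorFlow backend is installed

# 1. Load data -- synthetic stand-in for the PIMA dataset (8 features, 0/1 label)
rng = np.random.default_rng(0)
X = rng.random((100, 8))
y = (X.sum(axis=1) > 4.0).astype(int)

# 2. Define the model: two hidden Dense layers, sigmoid output for a binary label
model = keras.Sequential([
    keras.layers.Input(shape=(8,)),
    keras.layers.Dense(12, activation='relu'),
    keras.layers.Dense(8, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid'),
])

# 3. Compile with a loss, an optimizer, and a metric
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# 4. Fit for 150 epochs, as in the article
model.fit(X, y, epochs=150, batch_size=10, verbose=0)

# 5. Evaluate (here on the training data, for brevity)
loss, acc = model.evaluate(X, y, verbose=0)
print(f"accuracy: {acc:.2f}")
```

With the real PIMA CSV, step 1 would instead load the file (for example with numpy.loadtxt) and split it into the 8 feature columns and the label column.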

The neural network is trained for 150 epochs, and finally the accuracy is returned.

The above is what the editor has shared on how to use Python for deep learning. If you happen to have similar doubts, you may refer to the analysis above. If you want to know more, you are welcome to follow the industry information channel.
