
How to Understand CNNs in Python Deep Learning


This article explains convolutional neural networks (CNNs) as used in Python deep learning. I think it is quite practical, so I am sharing it here, and I hope you get something out of it after reading.

1. CNN Overview

The core idea of a CNN is to downsample the image: each unit looks at only part of the image, so the network extracts fewer but more effective features, which are finally passed through a fully connected neural network to produce the output.

The overall structure is as follows (a minimal code sketch is shown after the outline):

Input an image

→ Convolution: obtain feature maps (activation maps)

→ ReLU: remove negative values

→ Pooling: reduce the amount of data while retaining the most effective features

(the steps above can be repeated multiple times)

→ Feed into a fully connected neural network
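To make the pipeline concrete, here is a minimal sketch of such a network, assuming PyTorch as the framework (the article does not name a specific library); the layer sizes and the class name SimpleCNN are illustrative only.

```python
# A minimal sketch of the conv -> ReLU -> pool -> fully-connected structure described above.
# Assumes PyTorch; all sizes are illustrative, not from the article.
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3),   # convolution: produce feature maps
            nn.ReLU(),                         # remove negative values
            nn.MaxPool2d(2),                   # pooling: keep the strongest responses
            nn.Conv2d(16, 32, kernel_size=3),  # the conv/ReLU/pool block can repeat
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 6 * 6, num_classes)  # fully connected layer

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)    # flatten the feature maps before the dense layer
        return self.classifier(x)

# Example: a batch of two 3-channel 32x32 images
out = SimpleCNN()(torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 10])
```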

2. Convolution Layer

(Figure: CNN-Convolution)

The convolution kernel (also called a filter, and acting like a neuron) is what the network learns; the numbers inside the kernel are its weights (parameters).

To apply the kernel, take an inner product: multiply each weight of the kernel by the value at the corresponding position in the image (element-wise multiplication, not matrix multiplication), then sum the products. This is equivalent to a neuron that assigns weights to its input data, where the weights are the kernel's values; the sum is that neuron's output. Sliding the same kernel over the whole image produces the first activation map, or feature map. By adding more kernels we obtain a stack of feature maps, which better preserves the spatial structure of the data.
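As an illustration of this sliding inner product, here is a small NumPy sketch; the conv2d helper, the 6x6 input, and the 3x3 kernel are made up for the example, and strictly speaking it computes a cross-correlation, which is what deep learning libraries usually call convolution.

```python
# A NumPy illustration of the sliding inner product described above.
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # element-wise multiply the patch with the kernel, then sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(36, dtype=float).reshape(6, 6)   # a made-up 6x6 "image"
kernel = np.array([[1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0]])              # a made-up 3x3 kernel
feature_map = conv2d(image, kernel)
print(feature_map.shape)  # (4, 4): one feature map from one kernel
```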

When the kernel is slid over the image, if the values in the region currently under the kernel are distributed similarly to the kernel itself, the sum will be very large (the kernel is said to be activated), while elsewhere it will be small. A large response therefore indicates that the image contains a pattern similar to the kernel in that region.
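A tiny NumPy demonstration of this "activation" idea, using a hypothetical vertical-edge kernel and two made-up patches: the response is large on a patch containing that pattern and zero on a flat patch.

```python
# The same style of 3x3 vertical-edge kernel responds strongly where the pattern appears.
import numpy as np

kernel = np.array([[1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0]])
edge_patch = np.array([[9.0, 5.0, 1.0],
                       [9.0, 5.0, 1.0],
                       [9.0, 5.0, 1.0]])   # bright-to-dark vertical edge
flat_patch = np.full((3, 3), 5.0)          # uniform region, no pattern

print(np.sum(edge_patch * kernel))  # 24.0 -> large response, the kernel is "activated"
print(np.sum(flat_patch * kernel))  # 0.0  -> small response
```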

A single kernel can recognize only one kind of feature, so we need to add multiple kernels. The more kernels there are, the deeper the stack of activation maps and the more information about the input image is captured.

For color images there is no need to process the color channels separately: the depth of the kernel matches the depth of the image. For example, if the image has three layers (red, green, and blue), then each kernel is also three layers deep.
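The following sketch shows how kernel depth follows input depth, again assuming PyTorch; the channel and kernel counts are illustrative.

```python
# Kernel depth matches input depth: 8 kernels, each 3 channels deep for an RGB image.
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3)
color_image = torch.randn(1, 3, 32, 32)    # one RGB image: 3 channels deep
feature_maps = conv(color_image)

print(conv.weight.shape)   # torch.Size([8, 3, 3, 3]): each kernel is as deep as the input
print(feature_maps.shape)  # torch.Size([1, 8, 30, 30]): one feature map per kernel
```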

The convolution layer is therefore equivalent to a sparsely connected, downsampling neural network: as shown below, a neuron that would be connected to all 36 inputs in a fully connected network is actually connected to only 9.

3. Pooling Layer

(Figure: CNN-MaxPooling)

Before max pooling, that is, the pooling layer, a ReLU is applied: every value less than 0 is converted to 0, and everything else stays the same.

The main purpose of the pooling layer is to reduce the amount of data. After choosing a window size, each window is replaced by the maximum value inside it. This shrinks the data and therefore the amount of computation.

As shown in the figure below, the input data is originally 6×6, becomes 4×4 after the convolution layer, and 2×2 after the pooling layer. For real images the dimensions can be very large, so the convolution and pooling layers can be applied multiple times.
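Here is a NumPy sketch of ReLU followed by 2×2 max pooling; the 4×4 array stands in for a feature map that a 3×3 convolution has already produced from a 6×6 input, and the numbers are made up.

```python
# ReLU then 2x2 max pooling: a 4x4 feature map shrinks to 2x2.
import numpy as np

def relu(x):
    return np.maximum(x, 0)              # values below 0 become 0

def max_pool2x2(x):
    h, w = x.shape
    # maximum over each non-overlapping 2x2 block
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

feature_map = np.array([[ 1.0, -2.0,  3.0,  0.5],
                        [-1.0,  4.0, -0.5,  2.0],
                        [ 0.0, -3.0,  1.5, -1.0],
                        [ 2.5,  1.0, -2.0,  0.0]])
pooled = max_pool2x2(relu(feature_map))
print(pooled)        # [[4.  3. ] [2.5 1.5]]
print(pooled.shape)  # (2, 2)
```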

4. Fully Connected Layer

The final high-level features are fed into a fully connected neural network, the fully connected layer. Its number of input parameters equals the number of values output by the final pooling layer.
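A short sketch of feeding the pooled feature maps into a fully connected layer, assuming PyTorch; the figure of 32 maps of size 2×2 is an illustrative assumption.

```python
# Flatten the pooled feature maps and feed them to a fully connected layer.
import torch
import torch.nn as nn

pooled = torch.randn(1, 32, 2, 2)   # batch of 1: 32 feature maps, each 2x2 (illustrative)
flat = torch.flatten(pooled, 1)     # 32 * 2 * 2 = 128 values per sample
fc = nn.Linear(128, 10)             # input size = number of pooled outputs
scores = fc(flat)
print(flat.shape, scores.shape)     # torch.Size([1, 128]) torch.Size([1, 10])
```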

As usual, after forward propagation the loss function is computed, then backpropagation is carried out to obtain the gradient of each parameter, and the parameters are updated until the best parameters are found.
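A minimal sketch of one training step with this forward/loss/backward/update cycle, assuming PyTorch; the tiny model, the SGD optimizer, and the dummy batch are all illustrative choices.

```python
# One forward/loss/backward/update step on a tiny stand-in CNN with dummy data.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(8 * 15 * 15, 10),
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

images = torch.randn(8, 3, 32, 32)   # dummy batch of 8 RGB 32x32 images
labels = torch.randint(0, 10, (8,))  # dummy class labels

outputs = model(images)              # forward propagation
loss = criterion(outputs, labels)    # compute the loss
optimizer.zero_grad()
loss.backward()                      # backpropagation: gradient of each parameter
optimizer.step()                     # update the parameters
```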

Therefore, all the layers before the fully connected layer, no matter how many rounds of convolution and pooling there are, exist to obtain better features while reducing the amount of data, so that the model can be trained more effectively.

That is how to understand CNNs in Python deep learning. Some of these knowledge points may come up in your daily work, and I hope you can learn more from this article.
