Shulou (shulou.com) | SLTechnology News & Howtos > Development | Updated 2025-03-28
In this article, the editor shares an example analysis of convolutional neural networks (CNNs) in TensorFlow. Most readers may not know much about the topic, so this article is offered for reference; I hope you learn a lot from it. Let's get to know it!
I. Overview of convolutional neural networks
The convolutional neural network (Convolutional Neural Network, CNN) was originally designed to solve image-recognition problems, but its applications are not limited to images and video: it also works on time-series signals such as audio and text data. As a deep learning architecture, CNN was originally motivated by the desire to reduce the preprocessing requirements of image data and to avoid complex feature engineering. In a convolutional neural network, the first convolutional layer directly accepts the pixel-level input of the image, and each convolutional layer's filters extract the most effective features from the data. This approach extracts the most basic features of the image and then combines and abstracts them into higher-order features, so in theory a CNN is invariant to image scaling, translation, and rotation.
The key points of a convolutional neural network are local connections (Local Connection), weight sharing (Weight Sharing), and downsampling (Down-Sampling) in the pooling layer (Pooling). Local connections and weight sharing reduce the number of parameters, which greatly lowers training complexity and reduces overfitting. Weight sharing also gives the network tolerance to translation, while downsampling in the pooling layer further reduces the number of output parameters and gives the model tolerance to mild deformation, improving its generalization ability. The convolution operation can be understood as extracting similar features at multiple positions of the image with a small number of parameters.
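The parameter savings from local connections and weight sharing can be made concrete with a little arithmetic. The sizes below (a 32x32x3 input, 1000 hidden units, 64 filters of size 5x5) are illustrative choices, not values from the article:

```python
# Illustrative sizes: a 32x32 RGB image.
H, W, C = 32, 32, 3
image_pixels = H * W * C  # 3072 input values

# Fully connected layer: every hidden unit connects to every pixel.
hidden_units = 1000
fc_params = image_pixels * hidden_units + hidden_units  # weights + biases

# Convolutional layer: 64 filters of size 5x5x3, shared across all positions.
num_filters, k = 64, 5
conv_params = num_filters * (k * k * C) + num_filters  # weights + biases

print(fc_params)    # 3073000
print(conv_params)  # 4864
```

The convolutional layer needs roughly three orders of magnitude fewer parameters here, because the same 5x5x3 filter weights are reused at every spatial position instead of each output unit having its own weights.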
Spatial arrangement of the convolutional layer: the connection between each neuron in the convolutional layer and the input volume was explained above, but the number of neurons in the output volume and their arrangement have not yet been discussed. Three hyperparameters control the size of the output volume: depth, stride, and zero-padding. First, the depth of the output volume is a hyperparameter: it equals the number of filters used, and each filter looks for something different in the input. Second, the stride with which the filter slides must be specified. Sometimes it is convenient to pad the edges of the input volume with zeros; the amount of this zero-padding is a hyperparameter. Zero-padding has a useful property: it lets you control the spatial size of the output volume (most commonly it is used to preserve the spatial size of the input, so that input and output width and height are equal).
The spatial size of the output volume can be computed as a function of the input volume size (W), the receptive field size of the convolutional-layer neurons (F), the stride (S), and the amount of zero-padding (P). (Assume the input array is spatially square, i.e. height and width are equal.) The spatial size of the output volume is (W - F + 2P) / S + 1. Both the height and width of the output volume are computed with this formula, and the depth depends on the number of filters.
Constraints on stride: note that these spatial hyperparameters constrain one another. For example, if the input size is W = 10, no zero-padding is used (P = 0), and the filter size is F = 3, then a stride of S = 2 does not work: (10 - 3 + 0) / 2 + 1 = 4.5 is not an integer, which means the neurons cannot slide across the input neatly and symmetrically.
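The output-size formula and the stride constraint above can be sketched as a small helper function (the function name is my own, not from the article):

```python
def conv_output_size(W, F, S, P):
    """Spatial output size of a conv layer: (W - F + 2P) / S + 1.

    W: input size, F: filter size, S: stride, P: zero-padding.
    Raises ValueError when the stride does not divide evenly.
    """
    size = (W - F + 2 * P) / S + 1
    if not size.is_integer():
        raise ValueError(f"stride {S} does not fit: output size would be {size}")
    return int(size)

print(conv_output_size(10, 3, 1, 0))  # 8
print(conv_output_size(10, 3, 1, 1))  # 10: P=1 preserves the input size for F=3, S=1
# conv_output_size(10, 3, 2, 0) raises ValueError: (10 - 3)/2 + 1 = 4.5
```

The second call illustrates the "preserve the spatial size" case mentioned above: with F = 3 and S = 1, choosing P = 1 keeps the output the same size as the input.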
The pooling layer uses the MAX operation to operate independently on each depth slice of the input volume and reduce its spatial size. The most common form uses a 2x2 filter with a stride of 2 to downsample each depth slice, discarding 75% of the activations. Each MAX operation takes the maximum of four numbers (a 2x2 region in one depth slice). The depth remains unchanged.
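A minimal NumPy sketch of this 2x2 max pooling on a single depth slice (the reshape trick assumes the height and width are even):

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 on one (H, W) depth slice."""
    H, W = x.shape
    # Group pixels into non-overlapping 2x2 blocks, then take each block's max.
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

depth_slice = np.array([[ 1,  2,  5,  6],
                        [ 3,  4,  7,  8],
                        [ 9, 10, 13, 14],
                        [11, 12, 15, 16]])
print(max_pool_2x2(depth_slice))
# [[ 4  8]
#  [12 16]]
```

Each of the four output values is the maximum of one 2x2 block, so 16 activations become 4, i.e. 75% are discarded, exactly as described above.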
II. The structure of a convolutional neural network
A convolutional neural network is usually composed of three kinds of layers: convolutional layers, pooling layers (unless otherwise specified, max pooling), and fully connected layers (fully-connected, FC for short). The ReLU activation function can also be regarded as a layer; it applies the activation element by element.
The most common pattern is to stack a few convolutional layers and ReLU layers, follow them with a pooling layer, and repeat until the image has been spatially reduced to a small enough size; at some point it is common to transition to fully connected layers. The final fully connected layer produces the output, such as classification scores.
The most common convolutional neural network structure is as follows:
INPUT -> [[CONV -> RELU] * N -> POOL?] * M -> [FC -> RELU] * K -> FC
where * indicates repetition and POOL? indicates an optional pooling layer. Here N >= 0 (usually N <= 3), M >= 0, and K >= 0 (usually K < 3).
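This pattern can be sketched in TensorFlow's Keras API with, say, N = 2, M = 2, K = 1; the input shape, filter counts, and layer widths below are illustrative choices, not values from the article:

```python
import tensorflow as tf

# INPUT -> [[CONV -> RELU] * 2 -> POOL] * 2 -> [FC -> RELU] * 1 -> FC
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    # Block 1: [CONV -> RELU] * 2 -> POOL
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
    # Block 2: [CONV -> RELU] * 2 -> POOL
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
    # [FC -> RELU] * 1 -> FC
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),  # e.g. 10-class classification scores
])

scores = model(tf.zeros([1, 32, 32, 3]))
print(scores.shape)  # (1, 10)
```

Note how each pooling layer halves the spatial size (32 -> 16 -> 8) while the depth grows with the number of filters, matching the "reduce the image until it is small enough, then go fully connected" recipe above.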