Example Analysis of the MNIST Dataset in TensorFlow

This article works through a complete example on the MNIST dataset in TensorFlow: loading and preprocessing the data, building the LeNet-5 model, training it, and saving the result. The code is detailed and should serve as a useful reference.
Introduction to MNIST dataset
MNIST contains handwritten digits 0-9: 60,000 training images and 10,000 test images. Each sample is a single-channel 28×28 grayscale image.
Introduction to the LeNet model
The LeNet network was first proposed by Yann LeCun et al. in 1998 and is also known as LeNet-5. It is one of the earliest convolutional neural networks and is often called the "Hello World" of CNNs. Its basic building blocks are:
Convolution
Pooling (downsampling)
Activation function (ReLU)
LeNet layer-by-layer analysis (the output-shape arithmetic is sketched after this list)

1. The first convolution layer
2. The first pooling layer
3. The second convolution layer
4. The second pooling layer
5. Fully connected convolution layer
6. Fully connected layer
7. Fully connected layer (output layer)
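To make the per-layer shapes concrete, here is a small sketch of the output-size arithmetic. The conv_out helper below is our illustration, not part of the article's code; it applies the standard formula out = (in - kernel + 2*pad) / stride + 1 to each stage:

def conv_out(size, kernel, stride=1, pad=0):
    # Standard convolution/pooling output-size formula
    return (size - kernel + 2 * pad) // stride + 1

s = 32                 # padded input: 32x32x1
s = conv_out(s, 5)     # conv1, 5x5 valid      -> 28x28x6
s = conv_out(s, 2, 2)  # pool1, 2x2, stride 2  -> 14x14x6
s = conv_out(s, 5)     # conv2, 5x5 valid      -> 10x10x16
s = conv_out(s, 2, 2)  # pool2, 2x2, stride 2  -> 5x5x16
print(s * s * 16)      # flatten: 400 features feeding the 120-84-10 dense layers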
Code implementation

Guide package:

from tensorflow.keras.datasets import mnist
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf

Read & view data:

# ---------- 1. Read & view data ----------
# Read data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# Dataset view
print(X_train.shape)  # (60000, 28, 28)
print(y_train.shape)  # (60000,)
print(X_test.shape)   # (10000, 28, 28)
print(y_test.shape)   # (10000,)
print(type(X_train))  # <class 'numpy.ndarray'>
# Image display
plt.imshow(X_train[0], cmap="Greys")  # view the first picture
plt.show()

Data preprocessing:

# ---------- 2. Data preprocessing ----------
# Format conversion (pad the images from 28x28 to 32x32, the input size LeNet-5 expects)
X_train = np.pad(X_train, ((0, 0), (2, 2), (2, 2)), "constant", constant_values=0)
X_test = np.pad(X_test, ((0, 0), (2, 2), (2, 2)), "constant", constant_values=0)
print(X_train.shape)  # (60000, 32, 32)
print(X_test.shape)   # (10000, 32, 32)
# Dataset type conversion
X_train = X_train.astype(np.float32)
X_test = X_test.astype(np.float32)
# Data normalization to [0, 1]
X_train /= 255
X_test /= 255
# Data dimension transformation (add the channel axis)
X_train = np.expand_dims(X_train, axis=-1)
X_test = np.expand_dims(X_test, axis=-1)
print(X_train.shape)  # (60000, 32, 32, 1)
print(X_test.shape)   # (10000, 32, 32, 1)

Model building:

# ---------- 3. Model building ----------
# First convolution layer
conv_layer_1 = tf.keras.layers.Conv2D(filters=6, kernel_size=(5, 5), padding="valid", activation=tf.nn.relu)
# First pooling layer
pool_layer_1 = tf.keras.layers.MaxPool2D(pool_size=(2, 2), padding="same")
# Second convolution layer
conv_layer_2 = tf.keras.layers.Conv2D(filters=16, kernel_size=(5, 5), padding="valid", activation=tf.nn.relu)
# Second pooling layer
pool_layer_2 = tf.keras.layers.MaxPool2D(padding="same")
# Flatten
flatten = tf.keras.layers.Flatten()
# First fully connected layer
fc_layer_1 = tf.keras.layers.Dense(units=120, activation=tf.nn.relu)
# Second fully connected layer
fc_layer_2 = tf.keras.layers.Dense(units=84, activation=tf.nn.relu)
# Output layer
output_layer = tf.keras.layers.Dense(units=10, activation=tf.nn.softmax)
Usage of convolution Conv2D:

filters: number of convolution kernels
kernel_size: convolution kernel size
strides=(1, 1): stride
padding="valid": "valid" discards edge pixels that don't fit, "same" pads the input
activation=tf.nn.relu: activation function
data_format=None: defaults to channels_last
Usage of pooling AveragePooling2D:

pool_size: size of the pooling window
strides=None: stride (defaults to pool_size)
padding="valid": "valid" discards edge pixels that don't fit, "same" pads the input
data_format=None: defaults to channels_last

Note that pooling layers have no activation argument.
Usage of fully connected Dense:

units: dimension of the output
activation: activation function
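As a minimal illustration of the three layer constructors described above (the argument values here are arbitrary examples, not tied to LeNet):

conv = tf.keras.layers.Conv2D(filters=6, kernel_size=(5, 5), strides=(1, 1),
                              padding="valid", activation=tf.nn.relu, data_format=None)
pool = tf.keras.layers.AveragePooling2D(pool_size=(2, 2), strides=(2, 2), padding="same")
fc = tf.keras.layers.Dense(units=120, activation=tf.nn.relu)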
# ---------- Model instantiation ----------
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(filters=6, kernel_size=(5, 5), padding='valid',
                           activation=tf.nn.relu, input_shape=(32, 32, 1)),
    tf.keras.layers.AveragePooling2D(pool_size=(2, 2), strides=(2, 2), padding='same'),
    tf.keras.layers.Conv2D(filters=16, kernel_size=(5, 5), padding='valid', activation=tf.nn.relu),
    tf.keras.layers.AveragePooling2D(pool_size=(2, 2), strides=(2, 2), padding='same'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(units=120, activation=tf.nn.relu),
    tf.keras.layers.Dense(units=84, activation=tf.nn.relu),
    tf.keras.layers.Dense(units=10, activation=tf.nn.softmax)
])
# Model display
model.summary()
Output result:
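The summary output itself was not preserved in this copy. For this architecture, model.summary() should report output shapes and parameter counts along these lines (exact formatting varies with the TensorFlow version):

Conv2D            (None, 28, 28, 6)     156
AveragePooling2D  (None, 14, 14, 6)     0
Conv2D            (None, 10, 10, 16)    2,416
AveragePooling2D  (None, 5, 5, 16)      0
Flatten           (None, 400)           0
Dense             (None, 120)           48,120
Dense             (None, 84)            10,164
Dense             (None, 10)            850
Total params: 61,706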
Training the model:

# ---------- 4. Training the model ----------
# Set hyperparameters
num_epochs = 10        # training epochs
batch_size = 1000      # batch size
learning_rate = 0.001  # learning rate
# Define the optimizer
adam_optimizer = tf.keras.optimizers.Adam(learning_rate)
model.compile(optimizer=adam_optimizer,
              loss=tf.keras.losses.sparse_categorical_crossentropy,
              metrics=['accuracy'])
The usage of compile:

optimizer: the optimizer
loss: the loss function
metrics: the evaluation metrics
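Keras also accepts string identifiers for these arguments; the call below is an equivalent shorthand ('adam' constructs an Adam optimizer with the default learning rate of 0.001, matching the setting above):

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])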
# TensorFlow 1.x style session; in TensorFlow 2.x the session and
# initializer are unnecessary and model.fit/evaluate are called directly
with tf.Session() as sess:
    # Initialize all the variables
    init = tf.global_variables_initializer()
    sess.run(init)
    model.fit(x=X_train, y=y_train, batch_size=batch_size, epochs=num_epochs)
    print(model.evaluate(X_test, y_test))  # loss value & metrics values
Output result:
The usage of fit:

x: training data
y: training labels
batch_size: batch size
epochs: number of training epochs
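fit can also monitor held-out data during training; the sketch below is a common variant (not in the original article) that reports test-set metrics after every epoch and keeps the training history:

history = model.fit(x=X_train, y=y_train,
                    batch_size=batch_size, epochs=num_epochs,
                    validation_data=(X_test, y_test))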
Save the model:

# ---------- 5. Save the model ----------
model.save('lenet_model.h5')  # Keras HDF5 format
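To verify the saved file, the model can be loaded back; a minimal check, assuming the lenet_model.h5 file written above:

restored_model = tf.keras.models.load_model('lenet_model.h5')
restored_model.summary()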
Process summary

Complete code

from tensorflow.keras.datasets import mnist
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf

# ---------- 1. Read & view data ----------
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# Dataset view
print(X_train.shape)  # (60000, 28, 28)
print(y_train.shape)  # (60000,)
print(X_test.shape)   # (10000, 28, 28)
print(y_test.shape)   # (10000,)
print(type(X_train))  # <class 'numpy.ndarray'>
# Image display
plt.imshow(X_train[0], cmap="Greys")  # view the first picture
plt.show()

# ---------- 2. Data preprocessing ----------
# Format conversion (pad the images from 28x28 to 32x32)
X_train = np.pad(X_train, ((0, 0), (2, 2), (2, 2)), "constant", constant_values=0)
X_test = np.pad(X_test, ((0, 0), (2, 2), (2, 2)), "constant", constant_values=0)
print(X_train.shape)  # (60000, 32, 32)
print(X_test.shape)   # (10000, 32, 32)
# Dataset type conversion
X_train = X_train.astype(np.float32)
X_test = X_test.astype(np.float32)
# Data normalization to [0, 1]
X_train /= 255
X_test /= 255
# Data dimension transformation (add the channel axis)
X_train = np.expand_dims(X_train, axis=-1)
X_test = np.expand_dims(X_test, axis=-1)
print(X_train.shape)  # (60000, 32, 32, 1)
print(X_test.shape)   # (10000, 32, 32, 1)

# ---------- 3. Model building ----------
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(filters=6, kernel_size=(5, 5), padding='valid',
                           activation=tf.nn.relu, input_shape=(32, 32, 1)),
    tf.keras.layers.AveragePooling2D(pool_size=(2, 2), strides=(2, 2), padding='same'),
    tf.keras.layers.Conv2D(filters=16, kernel_size=(5, 5), padding='valid', activation=tf.nn.relu),
    tf.keras.layers.AveragePooling2D(pool_size=(2, 2), strides=(2, 2), padding='same'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(units=120, activation=tf.nn.relu),
    tf.keras.layers.Dense(units=84, activation=tf.nn.relu),
    tf.keras.layers.Dense(units=10, activation=tf.nn.softmax)
])
# Model display
model.summary()

# ---------- 4. Training the model ----------
num_epochs = 10        # training epochs
batch_size = 1000      # batch size
learning_rate = 0.001  # learning rate
adam_optimizer = tf.keras.optimizers.Adam(learning_rate)
model.compile(optimizer=adam_optimizer,
              loss=tf.keras.losses.sparse_categorical_crossentropy,
              metrics=['accuracy'])
# TensorFlow 1.x style session; in TensorFlow 2.x, call model.fit/evaluate directly
with tf.Session() as sess:
    init = tf.global_variables_initializer()  # initialize all variables
    sess.run(init)
    model.fit(x=X_train, y=y_train, batch_size=batch_size, epochs=num_epochs)
    print(model.evaluate(X_test, y_test))  # loss value & metrics values

# ---------- 5. Save the model ----------
model.save('lenet_model.h5')

That is all of "Example Analysis of the MNIST Dataset in TensorFlow". Thank you for reading, and we hope it helps.