

How to Implement a Convolutional Neural Network with TensorFlow


This article introduces how to implement a convolutional neural network (CNN) with TensorFlow. Many people are unsure how to go about this in practice, so I have collected the material into a simple, easy-to-follow walkthrough. I hope it helps clear up your doubts; follow along with the code below.

import tensorflow as tf
import numpy as np
import input_data

mnist = input_data.read_data_sets('data/', one_hot=True)
print("MNIST ready")

n_input = 784   # 28x28 grayscale images -> 784 pixels
n_output = 10   # a 10-class classification problem

# Weights
weights = {
    # conv1: [3, 3, 1, 32] is filter h, filter w, input depth, and the number
    # of filters, i.e. the number of feature maps produced
    'wc1': tf.Variable(tf.random_normal([3, 3, 1, 32], stddev=0.1)),
    # conv2: 3x3 as above; input depth 32 (the previous feature maps), 64 output feature maps
    'wc2': tf.Variable(tf.random_normal([3, 3, 32, 64], stddev=0.1)),
    # fc1: flattens the feature maps into a vector, 7*7*64 -> 1024
    'wd1': tf.Variable(tf.random_normal([7 * 7 * 64, 1024], stddev=0.1)),
    # fc2: the 10-way classification, 1024 in, 10 classes out
    'wd2': tf.Variable(tf.random_normal([1024, n_output], stddev=0.1)),
}
"""
Feature-map size: out = (w - f + 2*pad) / s + 1 = (28 - 3 + 2*1) / 1 + 1 = 28,
so the convolution layers leave the spatial size unchanged (same for h).
Each 2x2 max pool halves it: 28x28 -> 14x14 after the first pool,
14x14 -> 7x7 after the second, which is where the 7*7*64 above comes from.
"""
# Biases
biases = {
    'bc1': tf.Variable(tf.random_normal([32], stddev=0.1)),        # conv1: 32 feature maps
    'bc2': tf.Variable(tf.random_normal([64], stddev=0.1)),        # conv2: 64 feature maps
    'bd1': tf.Variable(tf.random_normal([1024], stddev=0.1)),      # fc1: 1024 units
    'bd2': tf.Variable(tf.random_normal([n_output], stddev=0.1)),  # fc2: 10 outputs
}

def conv_basic(_input, _w, _b, _keep_prob):
    # INPUT: reshape the image into the format TF expects, [n, h, w, c];
    # -1 lets TF infer the batch dimension once the other three are fixed.
    _input_r = tf.reshape(_input, shape=[-1, 28, 28, 1])
    # CONV LAYER 1: strides=[1, 1, 1, 1] is the stride over batch, h, w, c.
    # padding has two choices: 'SAME' (pad so the sliding window keeps the
    # output the same size as the input) or 'VALID' (no padding).
    _conv1 = tf.nn.conv2d(_input_r, _w['wc1'], strides=[1, 1, 1, 1], padding='SAME')
    _conv1 = tf.nn.relu(tf.nn.bias_add(_conv1, _b['bc1']))  # convolution followed by the activation
    # Max pooling: in ksize=[1, 2, 2, 1] the 1s are batch and channel, the 2s the 2x2 window.
    _pool1 = tf.nn.max_pool(_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    # Dropout randomly kills neurons; _keep_prob is the fraction kept, e.g. 0.6.
    _pool_dr1 = tf.nn.dropout(_pool1, _keep_prob)
    # CONV LAYER 2
    _conv2 = tf.nn.conv2d(_pool_dr1, _w['wc2'], strides=[1, 1, 1, 1], padding='SAME')
    _conv2 = tf.nn.relu(tf.nn.bias_add(_conv2, _b['bc2']))
    _pool2 = tf.nn.max_pool(_conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    _pool_dr2 = tf.nn.dropout(_pool2, _keep_prob)  # dropout
    # VECTORIZE: reshape the output of pool2 into a vector for the fully connected layer
    _dense1 = tf.reshape(_pool_dr2, [-1, _w['wd1'].get_shape().as_list()[0]])
    # FULLY CONNECTED LAYER 1
    _fc1 = tf.nn.relu(tf.add(tf.matmul(_dense1, _w['wd1']), _b['bd1']))  # w*x+b, then relu
    _fc_dr1 = tf.nn.dropout(_fc1, _keep_prob)  # dropout
    # FULLY CONNECTED LAYER 2
    _out = tf.add(tf.matmul(_fc_dr1, _w['wd2']), _b['bd2'])  # w*x+b gives the result
    # RETURN
    out = {'input_r': _input_r, 'conv1': _conv1, 'pool1': _pool1, 'pool_dr1': _pool_dr1,
           'conv2': _conv2, 'pool2': _pool2, 'pool_dr2': _pool_dr2, 'dense1': _dense1,
           'fc1': _fc1, 'fc_dr1': _fc_dr1, 'out': _out}
    return out

print("CNN READY")

# Placeholders reserve space in the graph; the batch size is left as None
# because the number of samples is not known in advance.
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_output])
keep_prob = tf.placeholder(tf.float32)

_pred = conv_basic(x, weights, biases, keep_prob)['out']  # forward-pass prediction
# Cross-entropy loss
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=_pred, labels=y))
optm = tf.train.AdamOptimizer(0.001).minimize(cost)  # Adam optimizer
# Compare the predicted index against the true label index: equal -> True, else False.
_corr = tf.equal(tf.argmax(_pred, 1), tf.argmax(y, 1))
# Cast True/False to 1/0 and average over all judgments to get the accuracy.
accr = tf.reduce_mean(tf.cast(_corr, tf.float32))
init = tf.global_variables_initializer()
print("FUNCTIONS READY")

# With the network structure defined above, set the hyperparameters.
training_epochs = 1000  # iterations over all samples
batch_size = 1000       # 1000 samples per iteration
display_step = 1

# LAUNCH THE GRAPH
sess = tf.Session()  # define a Session
sess.run(init)       # run the initialization op

# OPTIMIZE
for epoch in range(training_epochs):
    avg_cost = 0.
    total_batch = int(mnist.train.num_examples / batch_size)
    for i in range(total_batch):
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)  # fetch a batch
        sess.run(optm, feed_dict={x: batch_xs, y: batch_ys, keep_prob: 0.5})
        avg_cost += sess.run(cost, feed_dict={x: batch_xs, y: batch_ys, keep_prob: 1.0}) / total_batch
    if epoch % display_step == 0:
        train_accuracy = sess.run(accr, feed_dict={x: batch_xs, y: batch_ys, keep_prob: 1.0})
        test_accuracy = sess.run(accr, feed_dict={x: mnist.test.images, y: mnist.test.labels, keep_prob: 1.0})
        print("Epoch: %03d/%03d cost: %.9f TRAIN ACCURACY: %.3f TEST ACCURACY: %.3f"
              % (epoch, training_epochs, avg_cost, train_accuracy, test_accuracy))
print("DONE")
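As a quick sanity check on the 7*7*64 figure fed into 'wd1', the size formula from the comment block above can be replayed in a few lines of plain Python (a minimal sketch; only the layer sizes come from the code above, the helper name is my own):

def conv_out(w, f, pad, s):
    # out = (w - f + 2*pad) / s + 1, the formula quoted in the comment block
    return (w - f + 2 * pad) // s + 1

w = conv_out(28, f=3, pad=1, s=1)  # conv1 with 'SAME' padding: 28 -> 28
w = w // 2                         # first 2x2 max pool: 28 -> 14
w = conv_out(w, f=3, pad=1, s=1)   # conv2: 14 -> 14
w = w // 2                         # second pool: 14 -> 7
print(w * w * 64)                  # 3136 = 7*7*64, the fc1 input size in 'wd1'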

The graphics card I use is a GTX 960. When I first ran this convolutional neural network with the two filter counts set to 64 and 128, it threw an out-of-memory error, so I changed them to 32 and 64 to make the feature maps a little smaller. Apparently it's time to trade up to a 1080.
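If you hit the same out-of-memory warnings on a small card, one common workaround (a sketch I'm adding here, assuming the same TensorFlow 1.x API as the code above; it is not from the original run) is to let the session claim GPU memory on demand instead of grabbing it all up front:

# Assumed workaround, not part of the original article.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory only as needed
# Alternatively, cap the fraction of GPU memory TensorFlow may claim:
# config.gpu_options.per_process_gpu_memory_fraction = 0.8
sess = tf.Session(config=config)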

I c:\tf_jenkins\home\workspace\release-win\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 960
major: 5 minor: 2 memoryClockRate (GHz) 1.304
pciBusID 0000:01:00.0
Total memory: 4.00GiB
Free memory: 3.33GiB
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc] DMA: 0
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:916] 0: Y
I c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\gpu\gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 960, pci bus id: 0000:01:00.0)
W c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\bfc_allocator.cc:217] Ran out of memory trying to allocate 2.59GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
W c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\bfc_allocator.cc:217] Ran out of memory trying to allocate 1.34GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
W c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\bfc_allocator.cc:217] Ran out of memory trying to allocate 2.10GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
W c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\common_runtime\bfc_allocator.cc:217] Ran out of memory trying to allocate 3.90GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory is available.
Epoch: 000/1000 cost: 0.517761162 TRAIN ACCURACY: 0.970 TEST ACCURACY: 0.967
Epoch: 001/1000 cost: 0.093012387 TRAIN ACCURACY: 0.960 TEST ACCURACY: 0.979
...

The rest of the output is omitted. That wraps up this study of how to implement a convolutional neural network with TensorFlow; I hope it has cleared up your doubts. Pairing theory with practice is the best way to learn, so go and try it! If you want to keep learning, keep following the site for more practical articles.


