How Tensorflow loads variable names and values

This article shows how to load variable names and values from TensorFlow checkpoint files. The content is straightforward; work through the examples below to see how it is done.
The code for loading variable names and variable values from a checkpoint file is as follows:
import tensorflow as tf
from tensorflow.python import pywrap_tensorflow

model_dir = './ckpt-182802'
reader = pywrap_tensorflow.NewCheckpointReader(model_dir)
var_to_shape_map = reader.get_variable_to_shape_map()
for key in var_to_shape_map:
    print("tensor_name:", key)
    print(reader.get_tensor(key))  # remove this line if you only want to print variable names
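If you prefer not to reach into pywrap_tensorflow, recent TensorFlow 1.x versions expose the same information through the public tf.train API. A minimal sketch, assuming the same './ckpt-182802' checkpoint path:

import tensorflow as tf

model_dir = './ckpt-182802'
# list_variables returns (name, shape) pairs for every variable in the checkpoint
for name, shape in tf.train.list_variables(model_dir):
    print("tensor_name:", name, "shape:", shape)
    # load_variable returns the variable's value as a numpy array
    print(tf.train.load_variable(model_dir, name))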
An example of MNIST handwritten digit recognition based on a convolutional neural network, which produces such a checkpoint, is given below:
# -*- coding: utf-8 -*-
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
from tensorflow.python.framework import graph_util

log_dir = './tensorboard'
mnist = input_data.read_data_sets(train_dir="./mnist_data", one_hot=True)
if tf.gfile.Exists(log_dir):
    tf.gfile.DeleteRecursively(log_dir)
tf.gfile.MakeDirs(log_dir)

# Define the input data; an MNIST image is 28*28*1 = 784, None stands for batch_size
x = tf.placeholder(dtype=tf.float32, shape=[None, 28 * 28], name="input")
# Define the label data; MNIST has 10 classes
y_ = tf.placeholder(dtype=tf.float32, shape=[None, 10], name="y_")
# Reshape the flat input into 2-D image data
image = tf.reshape(x, shape=[-1, 28, 28, 1])

# Layer 1, convolution kernel = [5, 5, 1, 32]
w1 = tf.Variable(initial_value=tf.random_normal(shape=[5, 5, 1, 32], stddev=0.1, dtype=tf.float32), name="w1")
b1 = tf.Variable(initial_value=tf.zeros(shape=[32]))
conv1 = tf.nn.conv2d(input=image, filter=w1, strides=[1, 1, 1, 1], padding="SAME", name="conv1")
relu1 = tf.nn.relu(tf.nn.bias_add(conv1, b1), name="relu1")
pool1 = tf.nn.max_pool(value=relu1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")  # shape = [None, 14, 14, 32]

# Layer 2, convolution kernel = [5, 5, 32, 64]
w2 = tf.Variable(initial_value=tf.random_normal(shape=[5, 5, 32, 64], stddev=0.1, dtype=tf.float32), name="w2")
b2 = tf.Variable(initial_value=tf.zeros(shape=[64]))
conv2 = tf.nn.conv2d(input=pool1, filter=w2, strides=[1, 1, 1, 1], padding="SAME")
relu2 = tf.nn.relu(tf.nn.bias_add(conv2, b2), name="relu2")
pool2 = tf.nn.max_pool(value=relu2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME", name="pool2")  # shape = [None, 7, 7, 64]

# FC1
w3 = tf.Variable(initial_value=tf.random_normal(shape=[7 * 7 * 64, 1024], stddev=0.1, dtype=tf.float32), name="w3")
b3 = tf.Variable(initial_value=tf.zeros(shape=[1024]))
# Key step: reshape the pooled feature map into a flat vector
input3 = tf.reshape(pool2, shape=[-1, 7 * 7 * 64], name="input3")
fc1 = tf.nn.relu(tf.nn.bias_add(value=tf.matmul(input3, w3), bias=b3), name="fc1")  # shape = [None, 1024]

# FC2
w4 = tf.Variable(initial_value=tf.random_normal(shape=[1024, 10], stddev=0.1, dtype=tf.float32), name="w4")
b4 = tf.Variable(initial_value=tf.zeros(shape=[10]))
fc2 = tf.nn.bias_add(value=tf.matmul(fc1, w4), bias=b4, name="logit")  # shape = [None, 10]

# Use softmax to express the network's output as probabilities
y = tf.nn.softmax(fc2, name="out")
# Define the cross-entropy loss function
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=fc2, labels=y_)
loss = tf.reduce_mean(cross_entropy)
tf.summary.scalar('Cross_Entropy', loss)

# Define the solver
train = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(loss=loss)
for var in tf.trainable_variables():
    print(var)

# Define the accuracy: check whether the two argmax indices are equal
correct_predict = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_predict, dtype=tf.float32), name="accuracy")
tf.summary.scalar('Training_ACC', accuracy)

# Define the initialization op
merged = tf.summary.merge_all()
init = tf.global_variables_initializer()
saver = tf.train.Saver()

# Train the network
with tf.Session() as session:
    session.run(fetches=init)
    writer = tf.summary.FileWriter(log_dir, session.graph)  # where the logs are recorded
    for i in range(500):
        xs, ys = mnist.train.next_batch(100)
        session.run(fetches=train, feed_dict={x: xs, y_: ys})
        if i % 10 == 0:
            train_accuracy, summary = session.run(fetches=[accuracy, merged], feed_dict={x: xs, y_: ys})
            writer.add_summary(summary, i)
            print(i, "accuracy=", train_accuracy)
    '''
    # After training completes, convert the weights in the network into constants
    # to form a constant graph. Note: x and the label are needed.
    constant_graph = graph_util.convert_variables_to_constants(
        sess=session,
        input_graph_def=session.graph_def,
        output_node_names=['out', 'y_', 'input'])
    # Serialize the graph with weights and write it to a pb file
    with tf.gfile.FastGFile("lenet.pb", mode='wb') as f:
        f.write(constant_graph.SerializeToString())
    '''
    saver.save(session, './ckpt')
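Once training finishes, the './ckpt' checkpoint written by saver.save can be inspected with the NewCheckpointReader approach shown earlier, or restored into a fresh session. The sketch below is one possible way to do the latter, assuming the './ckpt.meta' file the Saver writes alongside the checkpoint and the tensor names ("input", "y_", "accuracy") defined in the graph above:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets(train_dir="./mnist_data", one_hot=True)

with tf.Session() as session:
    # Rebuild the graph from the meta file, then restore the trained weights
    saver = tf.train.import_meta_graph('./ckpt.meta')
    saver.restore(session, './ckpt')
    graph = tf.get_default_graph()
    x = graph.get_tensor_by_name("input:0")
    y_ = graph.get_tensor_by_name("y_:0")
    accuracy = graph.get_tensor_by_name("accuracy:0")
    # Evaluate the restored model on the MNIST test set
    print("test accuracy =",
          session.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))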
Addendum: how to view the contents of checkpoint files generated by TensorFlow
When saving a weight model, TensorFlow typically uses the tf.train.Saver().save function. The saved ckpt files cannot be opened directly, but TensorFlow provides the related function tf.train.NewCheckpointReader to view the weights stored in a ckpt file.
import os
from tensorflow.python import pywrap_tensorflow

checkpoint_path = os.path.join('modelckpt', "fc_nn_model")
# Read data from the checkpoint file
reader = pywrap_tensorflow.NewCheckpointReader(checkpoint_path)
var_to_shape_map = reader.get_variable_to_shape_map()
# Print tensor names and values
for key in var_to_shape_map:
    print("tensor_name:", key)
    print(reader.get_tensor(key))
Here 'modelckpt' is the folder where the ckpt files are stored, and "fc_nn_model" is the checkpoint file name (the prefix shared by the .index, .meta, and .data files).
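TensorFlow 1.x also ships a small helper, inspect_checkpoint, that prints the same information without writing a loop. A minimal sketch, assuming the same 'modelckpt/fc_nn_model' path:

from tensorflow.python.tools import inspect_checkpoint as chkp

# Print every tensor name and value stored in the checkpoint
chkp.print_tensors_in_checkpoint_file('modelckpt/fc_nn_model',
                                      tensor_name='',
                                      all_tensors=True)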
var_to_shape_map is a dictionary whose keys are the variable names and whose values are the shapes of the corresponding variables, for example:

{'LSTM_input/bias_LSTM/Adam_1': [...]}
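As that key suggests, a checkpoint saved during training also contains optimizer slot variables (here Adam's moment estimates, suffixed Adam and Adam_1) alongside the model weights. A hedged sketch for listing only the model weights, assuming Adam was the optimizer used:

# Skip Adam's slot variables and its beta power accumulators to list model weights only
for key in var_to_shape_map:
    if 'Adam' in key or 'beta1_power' in key or 'beta2_power' in key:
        continue
    print("tensor_name:", key, "shape:", var_to_shape_map[key])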
When you want to see the value of a variable, call the get_tensor function, that is, enter the following code:
reader.get_tensor('LSTM_input/bias_LSTM/Adam_1')

The above is all the content of "How Tensorflow loads variable names and values". Thank you for reading! I hope it has helped resolve your doubts.