
What are the ways in which TensorFlow trains the network?

2025-01-21 Update From: SLTechnology News&Howtos

Shulou(Shulou.com)06/02 Report--

This article presents the ways in which TensorFlow can train a network. The material is quite practical, so it is shared here as a reference; read on for the details.

There are two ways to train a network in TensorFlow: one goes through an iterator, the other is based on tensors (arrays) passed directly to model.fit().

The difference between the two is:

In the first, the data is split into batches up front to form an iterator (a tf.data.Dataset), which is then traversed to train on one batch at a time.

In the second, all the data is loaded as tensors, model.fit() is called, and the batch_size parameter makes it batch the data internally.
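The contrast between the two ways can be sketched minimally. This is a toy sketch, not the article's code: the 8x8 input shape, the random data, and the throwaway Dense model are all hypothetical stand-ins for MNIST and LeNet5.

```python
import numpy as np
import tensorflow as tf

# Toy data standing in for MNIST: 64 samples of 8x8 single-channel "images".
x = np.random.rand(64, 8, 8, 1).astype("float32")
y = np.random.randint(0, 10, size=(64,))

def make_model():
    # Throwaway classifier; the article uses a custom LeNet5 instead.
    m = tf.keras.Sequential([tf.keras.layers.Flatten(),
                             tf.keras.layers.Dense(10)])
    m.build((None, 8, 8, 1))
    return m

# Way A: tensors + batch_size — model.fit() slices the arrays itself.
m1 = make_model()
m1.compile(optimizer="adam",
           loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
m1.fit(x, y, batch_size=16, epochs=1, verbose=0)

# Way B: iterator — batching is done up front by tf.data.Dataset,
# then the loop trains on one batch at a time.
loader = tf.data.Dataset.from_tensor_slices((x, y)).batch(16)
m2 = make_model()
m2.compile(optimizer="adam",
           loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
for xb, yb in loader:          # each xb has shape (16, 8, 8, 1)
    m2.train_on_batch(xb, yb)
```

Both reach the same place; the iterator way simply makes the batching explicit and gives you a hook for per-batch logic, which is what the first method below exploits.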

Method 1: train through an iterator

import tensorflow as tf

IMAGE_SIZE = 100

# step1: load the dataset
(train_images, train_labels), (val_images, val_labels) = tf.keras.datasets.mnist.load_data()
# step2: normalize the images
train_images, val_images = train_images / 255.0, val_images / 255.0
# step3: limit the size of the training and validation sets
train_images = train_images[:IMAGE_SIZE]
val_images = val_images[:IMAGE_SIZE]
train_labels = train_labels[:IMAGE_SIZE]
val_labels = val_labels[:IMAGE_SIZE]
# step4: add a channel dimension, giving shape (IMAGE_SIZE, 28, 28, 1)
train_images = tf.expand_dims(train_images, axis=3)
val_images = tf.expand_dims(val_images, axis=3)
# step5: resize the images to (32, 32)
train_images = tf.image.resize(train_images, [32, 32])
val_images = tf.image.resize(val_images, [32, 32])
# step6: turn the data into iterators
train_loader = tf.data.Dataset.from_tensor_slices((train_images, train_labels)).batch(32)
val_loader = tf.data.Dataset.from_tensor_slices((val_images, val_labels)).batch(IMAGE_SIZE)
# step7: import the model (LeNet5 is the user-defined model class from the accompanying code)
model = LeNet5()
# let the model know the shape of the input data
model.build(input_shape=(1, 32, 32, 1))
# otherwise Output Shape shows as "multiple" in summary()
model.call(tf.keras.Input(shape=(32, 32, 1)))
# step8: compile the model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# weight save path
checkpoint_path = "./weight/cp.ckpt"
# callback used to save the weights
save_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
                                                   save_best_only=True,
                                                   save_weights_only=True,
                                                   monitor='val_loss',
                                                   verbose=0)

EPOCHS = 11
for epoch in range(1, EPOCHS):
    # training-set loss for this epoch
    train_epoch_loss_avg = tf.keras.metrics.Mean()
    # training-set accuracy for this epoch
    train_epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
    # validation-set loss for this epoch
    val_epoch_loss_avg = tf.keras.metrics.Mean()
    # validation-set accuracy for this epoch
    val_epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
    for x, y in train_loader:
        history = model.fit(x, y,
                            validation_data=val_loader,
                            callbacks=[save_callback],
                            verbose=0)
        # update the loss, keeping the latest value
        train_epoch_loss_avg.update_state(history.history['loss'][0])
        # update the accuracy, keeping the latest value
        train_epoch_accuracy.update_state(y, model(x, training=True))
        val_epoch_loss_avg.update_state(history.history['val_loss'][0])
        val_x, val_y = next(iter(val_loader))
        val_epoch_accuracy.update_state(val_y, model(val_x, training=True))
    # use .result() to read out the loss and accuracy of each epoch
    print("Epoch {:d}: trainLoss: {:.3f}, trainAccuracy: {:.3%}, valLoss: {:.3f}, valAccuracy: {:.3%}".format(
        epoch,
        train_epoch_loss_avg.result(), train_epoch_accuracy.result(),
        val_epoch_loss_avg.result(), val_epoch_accuracy.result()))

Method 2: apply model.fit() for batch training

import tensorflow as tf
import model_sequential

# step1: load the dataset
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
# step2: normalize the images
train_images, test_images = train_images / 255.0, test_images / 255.0
# step3: add a channel dimension, giving shape (60000, 28, 28, 1)
train_images = tf.expand_dims(train_images, axis=3)
test_images = tf.expand_dims(test_images, axis=3)
# step4: resize the images to (60000, 32, 32, 1)
train_images = tf.image.resize(train_images, [32, 32])
test_images = tf.image.resize(test_images, [32, 32])
# step5: import the model
# model = LeNet5()
model = model_sequential.LeNet()
# let the model know the shape of the input data
model.build(input_shape=(1, 32, 32, 1))
# model(tf.zeros([1, 32, 32, 1]))
# otherwise Output Shape shows as "multiple" in summary()
model.call(tf.keras.Input(shape=(32, 32, 1)))
model.summary()
# step6: compile the model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
# weight save path
checkpoint_path = "./weight/cp.ckpt"
# callback used to save the weights
save_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
                                                   save_best_only=True,
                                                   save_weights_only=True,
                                                   monitor='val_loss',
                                                   verbose=1)
# step7: train the model
history = model.fit(train_images, train_labels,
                    epochs=10, batch_size=32,
                    validation_data=(test_images, test_labels),
                    callbacks=[save_callback])

Thank you for reading! That is the end of this article on "What are the ways in which TensorFlow trains the network?". I hope the content above has been of some help; if you found the article useful, please share it so more people can see it.
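One practical postscript: the ModelCheckpoint callback in both methods saves only the weights, so using a checkpoint later means rebuilding the same architecture first and then calling load_weights. A minimal self-contained sketch of that round trip; the throwaway Dense model and the .weights.h5 filename are assumptions for illustration, standing in for the article's LeNet5 and its ./weight/cp.ckpt path.

```python
import os
import numpy as np
import tensorflow as tf

os.makedirs("./weight", exist_ok=True)

def build_model():
    # Hypothetical stand-in for the article's LeNet5 architecture.
    m = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    m.build((None, 32))
    return m

trained = build_model()
# Stands in for what ModelCheckpoint(save_weights_only=True) writes during fit().
trained.save_weights("./weight/cp.weights.h5")

# Later, e.g. in another process: rebuild the architecture, then restore.
restored = build_model()
restored.load_weights("./weight/cp.weights.h5")

x = np.random.rand(4, 32).astype("float32")
# The restored model reproduces the original model's outputs.
out_trained = trained(x).numpy()
out_restored = restored(x).numpy()
```

Because only weights are on disk, load_weights fails if the rebuilt model's layer shapes differ from the saved ones, which is why both methods call model.build(...) with the same input shape before training and before restoring.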
