How to Train a Model with a TPU

2025-02-24 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

Today I will talk to you about how to train a model with a TPU, something many people may not know much about. To help you understand it better, I have summarized the following content; I hope you get something out of this article.

If you want to try training a model on a TPU in Google Colab, it is very convenient: you only need to add six lines of code.

In a Colab notebook: Edit -> Notebook settings -> select TPU under Hardware accelerator.

Note: the following code can only be executed correctly on Colab.

You can test the effect in the "tf_TPU" notebook at the following Colab link:

https://colab.research.google.com/drive/1XCIhATyE1R7lq6uwFlYlRsUr5d9_-r1s
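Before running any TPU-specific code, it helps to confirm that Colab has actually attached a TPU to the runtime. On a classic Colab TPU runtime the TPU's gRPC address is exposed through the COLAB_TPU_ADDR environment variable; a minimal check (the helper name tpu_address is ours, not part of any API) might look like:

```python
import os

def tpu_address():
    """Return the gRPC address of the attached Colab TPU, or None if absent."""
    addr = os.environ.get("COLAB_TPU_ADDR")
    return "grpc://" + addr if addr else None

print(tpu_address() or "No TPU found - select TPU under Notebook settings first")
```

If this prints an address like grpc://10.0.0.2:8470, the TPU cluster resolver below will be able to connect to it.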

%tensorflow_version 2.x
import tensorflow as tf
print(tf.__version__)

from tensorflow.keras import *

1. Prepare the data

MAX_LEN = 300
BATCH_SIZE = 32
(x_train, y_train), (x_test, y_test) = datasets.reuters.load_data()
x_train = preprocessing.sequence.pad_sequences(x_train, maxlen=MAX_LEN)
x_test = preprocessing.sequence.pad_sequences(x_test, maxlen=MAX_LEN)

MAX_WORDS = x_train.max() + 1
CAT_NUM = y_train.max() + 1

ds_train = tf.data.Dataset.from_tensor_slices((x_train, y_train)) \
    .shuffle(buffer_size=1000).batch(BATCH_SIZE) \
    .prefetch(tf.data.experimental.AUTOTUNE).cache()

ds_test = tf.data.Dataset.from_tensor_slices((x_test, y_test)) \
    .shuffle(buffer_size=1000).batch(BATCH_SIZE) \
    .prefetch(tf.data.experimental.AUTOTUNE).cache()
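The pad_sequences call is what normalizes every news article to exactly MAX_LEN tokens. A rough pure-Python sketch of its default behaviour for a single sequence (Keras defaults: padding='pre', truncating='pre', pad value 0 — this is an illustration, not the TensorFlow implementation):

```python
def pad_sequence(seq, maxlen, value=0):
    """Left-pad or left-truncate one sequence to exactly maxlen items,
    mimicking Keras pad_sequences defaults (padding='pre', truncating='pre')."""
    if len(seq) >= maxlen:
        return seq[-maxlen:]  # too long: keep only the last maxlen tokens
    return [value] * (maxlen - len(seq)) + seq  # too short: pad zeros in front

print(pad_sequence([1, 2, 3], 5))            # [0, 0, 1, 2, 3]
print(pad_sequence([1, 2, 3, 4, 5, 6], 5))   # [2, 3, 4, 5, 6]
```

After this step every row of x_train has the same length, which is what lets from_tensor_slices build a rectangular tensor.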

2. Define the model

tf.keras.backend.clear_session()

def create_model():
    model = models.Sequential()
    model.add(layers.Embedding(MAX_WORDS, 7, input_length=MAX_LEN))
    model.add(layers.Conv1D(filters=64, kernel_size=5, activation="relu"))
    model.add(layers.MaxPool1D(2))
    model.add(layers.Conv1D(filters=32, kernel_size=3, activation="relu"))
    model.add(layers.MaxPool1D(2))
    model.add(layers.Flatten())
    model.add(layers.Dense(CAT_NUM, activation="softmax"))
    return model

def compile_model(model):
    model.compile(optimizer=optimizers.Nadam(),
                  # the output layer already applies softmax, so the loss
                  # receives probabilities rather than raw logits
                  loss=losses.SparseCategoricalCrossentropy(from_logits=False),
                  metrics=[metrics.SparseCategoricalAccuracy(),
                           metrics.SparseTopKCategoricalAccuracy(5)])
    return model
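SparseCategoricalCrossentropy compares an integer class label directly with the predicted probability distribution: the loss for one sample is just the negative log of the probability assigned to the true class. Note that from_logits controls whether the loss applies softmax internally; a model whose last layer already uses softmax should hand the loss probabilities (from_logits=False). A pure-Python sketch of the math (an illustration, not TensorFlow's implementation):

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities (numerically stabilized)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sparse_categorical_crossentropy(y_true, probs):
    """Loss for one sample: negative log-probability of the true class index."""
    return -math.log(probs[y_true])

probs = softmax([2.0, 1.0, 0.1])
loss = sparse_categorical_crossentropy(0, probs)
print(round(loss, 3))  # roughly 0.417
```

Passing softmax outputs to a loss configured with from_logits=True would apply softmax twice and silently distort the gradients, which is why the flag must match the output layer.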

3. Train the model

# Add the following six lines of code
import os
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
with strategy.scope():
    model = create_model()
    model.summary()
    model = compile_model(model)

history = model.fit(ds_train, validation_data=ds_test, epochs=10)
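model.fit consumes the whole ds_train pipeline once per epoch, so the number of training steps per epoch is simply the ceiling of the sample count over the batch size. The Reuters training split loaded above is commonly reported as 8,982 samples (an assumption about the default test_split; verify with len(x_train)), so:

```python
import math

TRAIN_SAMPLES = 8982  # commonly reported size of the Reuters training split
BATCH_SIZE = 32

# batches per epoch; the final, smaller batch still counts as one step
steps_per_epoch = math.ceil(TRAIN_SAMPLES / BATCH_SIZE)
print(steps_per_epoch)  # 281
```

This is the step count you should see ticking by in the Keras progress bar for each of the 10 epochs.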

After reading the above, do you have a better understanding of how to train a model with a TPU? If you want to learn more, please follow the industry information channel. Thank you for your support.

Welcome to subscribe to "Shulou Technology Information" to get the latest news, interesting stories and hot topics in the IT industry, and to keep up with the hottest Internet news, technology news and IT industry trends.
