How to use Eager Execution in TensorFlow

2025-01-18 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

This article introduces how to use Eager Execution in TensorFlow. Many people have questions about it in daily work, so the editor has collected material and organized it into simple, easy-to-follow steps. I hope it helps answer your doubts; please follow along and study!

I. The easiest way to start learning TensorFlow is with Eager Execution. The official tutorial is a Colab notebook; if you cannot open it (access may require a VPN), refer to other material instead, such as "the Eager execution Foundation of tensorflow".

From "the Eager execution Foundation of tensorflow", I learned the following:

What is Eager Execution?

Eager Execution is an imperative, define-by-run interface: operations are executed immediately as they are called from Python.

This makes it easier to get started with TensorFlow and makes research and development more intuitive.

What are the advantages of Eager Execution?

1. Fast debugging of immediate run-time errors, with integration into Python tools

2. Support for dynamic models using easy-to-use Python control flow

3. Strong support for custom and higher-order gradients

4. Applicability to almost all available TensorFlow operations
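Advantage 2 is the one that changes day-to-day modelling most: because operations run immediately, ordinary Python control flow can decide the model's structure at run time. Below is a minimal NumPy-only sketch of the idea (NumPy stands in for TensorFlow ops here so the snippet runs anywhere; under eager execution the same pattern works with tf operations directly):

```python
import numpy as np

def dynamic_chain(x, depth):
    # A plain Python `for` loop controls how many "layers" are applied --
    # the structure of the computation is decided at run time.
    for _ in range(depth):
        x = np.maximum(x, 0.0) * 0.5  # toy layer: ReLU followed by scaling
    return x

out = dynamic_chain(np.array([2.0, -1.0]), depth=3)
print(out)  # [0.25 0.  ]
```

In graph mode, such data-dependent control flow would require special constructs; in eager mode it is just Python.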

What is a tensor?

A tensor is a multidimensional array. Like NumPy ndarray objects, Tensor objects have a data type and a shape.

In addition, Tensors can reside in accelerator memory, such as a GPU's.

TensorFlow provides a rich library of operations (tf.add, tf.matmul, tf.linalg.inv, etc.) that consume and produce Tensors. These operations automatically convert native Python types.

Basic creation and use of tensors

# -*- coding: utf-8 -*-
"""
@ File: 191206_test_Eager_execution.py
@ Time: 11:11 on 2019-12-6
@ Author: Dontla
@ Email: sxana@qq.com
@ Software: PyCharm
"""
# Import tensorflow
import tensorflow as tf

tf.enable_eager_execution()

# Create and use tensors
print(tf.add(1, 2))              # tf.Tensor(3, shape=(), dtype=int32)
print(tf.add([1, 2], [3, 4]))    # tf.Tensor([4 6], shape=(2,), dtype=int32)
print(tf.square(5))              # tf.Tensor(25, shape=(), dtype=int32)
print(tf.reduce_sum([1, 2, 3]))  # tf.Tensor(6, shape=(), dtype=int32)
print(tf.encode_base64("hello world"))  # tf.Tensor(b'aGVsbG8gd29ybGQ', shape=(), dtype=string)
print(tf.square(2) + tf.square(3))      # tf.Tensor(13, shape=(), dtype=int32)

x = tf.matmul([[1]], [[2, 3]])
print(x)        # tf.Tensor([[2 3]], shape=(1, 2), dtype=int32)
print(x.shape)  # (1, 2)
print(x.dtype)  # <dtype: 'int32'>
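The base64 result above can be cross-checked with Python's standard library. As I understand tf.encode_base64, it emits web-safe base64 (using - and _ instead of + and /) and by default (pad=False) omits the trailing = padding, so stripping the padding from the standard-library output should match:

```python
import base64

# Web-safe base64 of "hello world", with the `=` padding stripped to
# mirror tf.encode_base64's default (pad=False) behaviour
encoded = base64.urlsafe_b64encode(b"hello world").decode().rstrip("=")
print(encoded)  # aGVsbG8gd29ybGQ
```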

Properties of tensors

Each Tensor has a shape and a data type:

x = tf.matmul([[1]], [[2, 3]])
print(x.shape)  # (1, 2)
print(x.dtype)  # <dtype: 'int32'>

The most obvious differences between NumPy arrays and TensorFlow Tensors:

1. Tensors can be backed by accelerator memory such as GPU or TPU.

2. Tensors are immutable.

Conversion between TensorFlow Tensors and NumPy ndarrays

The TensorFlow operation automatically converts NumPy ndarrays to Tensors.

The NumPy operation automatically converts Tensors to NumPy ndarrays.

You can explicitly convert a Tensor to a NumPy ndarray by calling its .numpy() method. These conversions are usually cheap, because the array and the Tensor share the underlying memory representation where possible. However, sharing is not always possible: the Tensor may be hosted in GPU memory, while NumPy arrays are always backed by host memory, in which case the conversion involves a copy from GPU to host memory.
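The share-versus-copy distinction can be illustrated with NumPy alone (this is an analogy for the CPU-tensor case, not the TensorFlow API itself): np.asarray on an existing array shares its memory, much like a zero-copy Tensor-to-ndarray conversion, while .copy() behaves like the forced GPU-to-host transfer:

```python
import numpy as np

a = np.ones((3, 3))

b = np.asarray(a)   # shares memory, like a CPU Tensor <-> ndarray conversion
b[0, 0] = 7.0
print(a[0, 0])      # 7.0 -- the change is visible through `a`

c = a.copy()        # independent buffer, like a GPU Tensor copied to host memory
c[1, 1] = 9.0
print(a[1, 1])      # 1.0 -- `a` is untouched
```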

import tensorflow as tf
import numpy as np

tf.enable_eager_execution()

ndarray = np.ones([3, 3])
print(ndarray)
# [[1. 1. 1.]
#  [1. 1. 1.]
#  [1. 1. 1.]]

print("TensorFlow operations convert numpy arrays to Tensors automatically")
tensor = tf.multiply(ndarray, 42)
print(tensor)
# tf.Tensor(
# [[42. 42. 42.]
#  [42. 42. 42.]
#  [42. 42. 42.]], shape=(3, 3), dtype=float64)

print("And NumPy operations convert Tensors to numpy arrays automatically")
print(np.add(tensor, 1))
# [[43. 43. 43.]
#  [43. 43. 43.]
#  [43. 43. 43.]]

print("The .numpy() method explicitly converts a Tensor to a numpy array")
print(tensor.numpy())
# [[42. 42. 42.]
#  [42. 42. 42.]
#  [42. 42. 42.]]

II. GPU acceleration

Many TensorFlow operations can be accelerated by running them on a GPU. Without any annotations, TensorFlow automatically decides whether to execute an operation on the GPU or the CPU (copying the tensor between CPU and GPU memory if necessary). A tensor produced by an operation is typically backed by the memory of the device that executed the operation. For example:

# -*- coding: utf-8 -*-
"""
@ File: 191208_test_Eager_execution_once_cls.py
@ Time: 12:25 on 2019-12-8
@ Author: Dontla
@ Email: sxana@qq.com
@ Software: PyCharm
"""
import tensorflow as tf

tf.enable_eager_execution()

x = tf.random_uniform([3, 3])

print("Is there a GPU available:")
print(tf.test.is_gpu_available())  # True

print("Is the Tensor on GPU #0:")
print(x.device)                    # /job:localhost/replica:0/task:0/device:GPU:0
print(x.device.endswith('GPU:0'))  # True

(1) Device names

The Tensor.device property provides the fully qualified string name of the device hosting the Tensor's contents. The name encodes a set of details, such as the network address of the host executing this program and the device within that host. This is required for distributed execution of TensorFlow programs, though we will not use it here. If the tensor is placed on the N-th GPU of the host, the string ends with GPU:N.
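As a concrete illustration, the device type and trailing index can be pulled out of such a device string with plain string handling (the string below is the example value printed by the code above):

```python
device = "/job:localhost/replica:0/task:0/device:GPU:0"

# The component after "device:" names the device type and its index on the host
device_part = device.rsplit("device:", 1)[1]    # "GPU:0"
device_type, index = device_part.split(":")
print(device_type, int(index))  # GPU 0
```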

(2) Explicit device placement

The term "placement" in TensorFlow refers to how individual operations are assigned (placed) to devices for execution. As mentioned above, when no explicit guidance is given, TensorFlow automatically decides which device executes an operation and copies Tensors to that device as needed. However, the tf.device context manager lets you place TensorFlow operations on a specific device explicitly.
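A minimal sketch of tf.device, written so it should run under both TF 1.x (as used in this article) and TF 2.x; here the computation is pinned to the CPU explicitly rather than left to automatic placement:

```python
import tensorflow as tf

if hasattr(tf, "enable_eager_execution"):  # TF 1.x needs eager switched on
    tf.enable_eager_execution()

# Explicitly place these operations on the CPU via the context manager
with tf.device("CPU:0"):
    x = tf.ones([3, 3])
    y = tf.reduce_sum(x)

print(float(y.numpy()))  # 9.0
```

The same pattern with "GPU:0" forces an operation onto the first GPU, assuming one is available.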

This concludes the study of how to use Eager Execution in TensorFlow. I hope it has resolved your doubts. Pairing theory with practice is the best way to learn, so go and try it! If you want to keep learning, please continue to follow this site; the editor will keep bringing you more practical articles.
