TensorFlow at a Glance: An Overview


This article walks through an overview of TensorFlow in detail; the editor shares it here as a reference, and I hope you come away with a solid understanding of the material after reading it.

Preface

Due to changes in the company's products, I need to write some TensorFlow code to test the deep learning module of our product, and I have been learning TensorFlow recently. So I am pausing the convolutional neural network series and starting to write about TensorFlow instead. After so much plain-language theory, I can finally share some code with you. From here on I will abbreviate TensorFlow as tf.

Environment building

I didn't study this in detail: just find a TensorFlow image on Docker Hub, download it, and start it. Alternatively, you can launch a notebook with TensorFlow pre-installed.
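For example, the official tensorflow/tensorflow image on Docker Hub ships a Jupyter-enabled tag (a sketch of my own, not from the original post; pick whichever tag matches your version):

# Start a Jupyter notebook server with TensorFlow pre-installed;
# the notebook becomes reachable at http://localhost:8888.
docker run -it --rm -p 8888:8888 tensorflow/tensorflow:latest-jupyter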

A tensorflow neural network with a single hidden layer for two-class classification

Code first.

import tensorflow as tf

# One training sample with two features, and its label.
x = tf.constant([[0.7, 0.9]])
y_ = tf.constant([[1.0]])

# Hidden layer: 2 inputs -> 3 neurons; output layer: 3 -> 1.
w1 = tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))
w2 = tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))
b1 = tf.Variable(tf.zeros([3]))
b2 = tf.Variable(tf.zeros([1]))

# Forward propagation: relu in the hidden layer, linear logits out.
a = tf.nn.relu(tf.matmul(x, w1) + b1)
y = tf.matmul(a, w2) + b2

# Sigmoid cross-entropy loss for binary classification.
cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=y, labels=y_, name=None)

# Standard gradient descent with learning rate 0.01.
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(cost)

with tf.Session() as sess:
    init = tf.initialize_all_variables()
    sess.run(init)
    for i in range(100):
        sess.run(train_op)
    print(sess.run(w1))
    print(sess.run(w2))

This code simulates the simplest possible neural network. There is only one sample with two feature dimensions (demo purposes only), and the hidden layer has three neurons using relu as the activation function. The whole network performs binary classification, so the output layer's activation is sigmoid and the loss (cost) function is the sigmoid cross-entropy loss. Forward propagation is implemented with matrix multiplication; backpropagation uses the standard gradient descent optimizer provided by tf. The learning rate is 0.01 and the number of training rounds is 100, with no regularization. After training, printing w1 and w2 shows the learned weights of the hidden layer and the output layer.

Why use tensorflow?

Anyone who has used a library with highly encapsulated APIs like Keras will find tf code verbose and cumbersome by comparison. The above is only the simplest possible network: no normalization, no mini-batches, a single sample, and you even have to write forward propagation yourself. In real applications the code gets far more complex. With Keras the same model is a few lines of code and much easier to learn (see the sketch below). So why use tf at all? The advantages I know of are that tf is flexible and supports distributed execution; tf on K8s, for example, is very popular now, and tf is the natural choice for building a deep learning platform. As far as I know, the deep learning platforms of the major companies are all built on tf. For me, of course, there is only one reason to learn tf: testing our own deep learning platform.
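For comparison, here is a rough Keras equivalent of the demo network above (my own sketch, not from the original post; it assumes the standalone keras package and mirrors the 2 -> 3 -> 1 architecture and sgd optimizer of the tf code):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Same toy data as the tf demo: one sample, two features, one label.
x = np.array([[0.7, 0.9]])
y = np.array([[1.0]])

# 2 inputs -> 3 relu neurons -> 1 sigmoid output.
model = Sequential([
    Dense(3, activation="relu", input_shape=(2,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="sgd", loss="binary_crossentropy")
model.fit(x, y, epochs=100, verbose=0)

Forward propagation, the loss, and the training loop are all handled for us.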

Basic concepts of tensorflow

Tensorflow = tensor + flow. Tensor is tf's data structure, and flow refers to tf's computing framework: the computation graph.

Tensor means, as the name says, tensor. In tf, all data is expressed in the form of tensors. Functionally, a tensor is a multi-dimensional array; in our demo above, the x we use to simulate sample data and the corresponding label y_ are both tensors. However, tf's implementation of tensors does not directly use arrays or any other familiar data structure. If anything, the mechanism resembles the lazy evaluation of RDDs in Spark: in Spark, each transform is not actually executed when the code runs; instead the computation steps are recorded, and nothing happens until an action is reached. Tensors in tf behave similarly. When we execute the code below, we do not get the actual result, only a reference to the result.
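Here is a minimal reconstruction of that snippet (a sketch: the constant values are my assumption, and the node name add_35 reflects that many add operations had already been created in the author's notebook; a fresh run prints add:0):

import tensorflow as tf

a = tf.constant([[1.0, 2.0]], name="a")
b = tf.constant([[3.0, 4.0]], name="b")
c = a + b

# Prints a reference, not the sum:
# Tensor("add_35:0", shape=(1, 2), dtype=float32)
print(c)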

We declare two constants in tf, a and b; they are tensors. When we perform the addition, what gets printed is not the computed value but a tensor. The first field printed, add_35:0, names the node add_35, and the 0 means that the tensor c is the first output of that node. The second field is shape, the dimensions, indicating what size of matrix the tensor is; in our example, shape=(1, 2) shows this is a 1x2 matrix. The third field is the tensor's type, which here is float32.

Drawn as a diagram, it looks roughly like this: the constant nodes a and b each feed an edge into an add node, whose output is c.

Each node in this graph is an operation, and each edge represents a dependency between computations: the input of one operation depends on the output of another. This is tf's computation graph, which is what flow stands for in tensorflow. Until tf reaches the piece of code that actually needs to execute, no computation is triggered; tf just maintains the computation graph internally, and when execution is required, it computes along the graph in order.
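You can peek at this internal bookkeeping directly (a sketch of mine using the TF 1.x default-graph API, not from the original post):

import tensorflow as tf

a = tf.constant([[1.0, 2.0]], name="a")
b = tf.constant([[3.0, 4.0]], name="b")
c = a + b

# Nothing has executed yet; tf has only recorded nodes in its graph.
for op in tf.get_default_graph().get_operations():
    print(op.name, op.type)  # something like: a Const, b Const, add Add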

The execution engine of the computation graph: the session

We just saw how tf organizes data and the computation graph: nothing really executes; tf waits for a certain moment to arrive and then computes along the graph in order. That moment is the session. It works as follows:
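A minimal reconstruction of that step, continuing the a, b, c example from above (the constant values remain my assumption):

import tensorflow as tf

a = tf.constant([[1.0, 2.0]], name="a")
b = tf.constant([[3.0, 4.0]], name="b")
c = a + b

# The with statement manages the session's life cycle for us.
with tf.Session() as session:
    print(session.run(c))  # the graph actually executes here: [[4. 6.]]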

In the snippet above we use Python's with statement to manage the life cycle of the tf session. We never have to run a and b ourselves; a single line, print(session.run(c)), prints the result. As I said before, tf maintains a computation graph, and when we actually compute c, it evaluates along that graph from the start.

That concludes this overview of TensorFlow. I hope the content above has been of some help and taught you something new. If you found the article worthwhile, please share it so more people can see it.
