
How to Get Started with TensorFlow Deep Autoencoders in Practice


Shulou(Shulou.com)06/01 Report--

This article explains how to get started with TensorFlow deep autoencoders and how to put them into practice. It offers a detailed analysis and a working solution, in the hope of helping readers who want to tackle this problem find a simpler, more practical approach.

It contains the complete code for building the autoencoder model from scratch.

Let's explore an unsupervised-learning neural network: the autoencoder.

An autoencoder is a deep neural network that reproduces its input data at the output layer, so the number of neurons in the output layer is exactly the same as the number of neurons in the input layer.

As shown in the following figure:

This figure shows the structure of a typical deep autoencoder. The goal of the autoencoder network is to create a representation of the input at the output layer such that the two are as close (similar) as possible. In practice, however, autoencoders are used to obtain a compressed version of the input data with the least possible loss of information. Their role in machine learning projects is similar to that of Principal Component Analysis (PCA): PCA is used to find the best and most relevant attributes when training models on datasets with a large number of attributes.

Autoencoders work in a similar way. The encoder part compresses the input data, ensuring that the important information is not lost while the overall size of the data is significantly reduced. This concept is called dimensionality reduction.

The disadvantage of dimensionality reduction is that the compressed data is a black box: we cannot determine what the structure inside the compressed data actually means. For example, suppose we have a dataset with five parameters and we train an autoencoder on it. The encoder does not simply omit some parameters to get a better representation; it fuses the parameters together (each compressed variable is a combination of the original variables) to create a compressed version with fewer parameters (for example, from 5 down to 3).
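As a toy illustration of this fusing (a minimal sketch with made-up numbers, not part of the original tutorial), a single linear encoder layer that maps 5 parameters down to 3 compressed variables is just a matrix multiplication, so each of the 3 outputs mixes all 5 inputs:

import numpy as np

x = np.random.rand(1, 5)   # one sample with 5 parameters
W = np.random.rand(5, 3)   # hypothetical encoder weights (learned during training)
compressed = x @ W         # 3 compressed variables, each a mix of all 5 inputs
print(compressed.shape)    # (1, 3)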

An autoencoder has two parts: an encoder and a decoder.

The encoder compresses the input data, and the decoder restores the data from the compressed representation, creating a reconstruction of the input that is as accurate as possible.

We will use TensorFlow's layers API to create the autoencoder neural network and test it on the MNIST dataset.

First, we import the relevant Python libraries and read in the MNIST dataset. If the dataset already exists on the local machine, it will be read from disk; otherwise, running the following code will download it automatically.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
from tensorflow.contrib.layers import fully_connected

mnist = input_data.read_data_sets("/MNIST_data/", one_hot=True)

Next, we create some constants for convenience and declare our activation function in advance. The images in the MNIST dataset are 28 × 28 pixels, i.e. 784 pixels, and we compress them to 196 pixels. You could reduce this further, but compressing too much may cause the autoencoder to lose information.

num_inputs = 784   # 28x28 pixels
num_hid1 = 392
num_hid2 = 196
num_hid3 = num_hid1
num_output = num_inputs
lr = 0.01
actf = tf.nn.relu

Now, let's create variables for the weights and biases of each layer. Then we create the layers using the activation function we declared earlier.

X = tf.placeholder(tf.float32, shape=[None, num_inputs])
initializer = tf.variance_scaling_initializer()

w1 = tf.Variable(initializer([num_inputs, num_hid1]), dtype=tf.float32)
w2 = tf.Variable(initializer([num_hid1, num_hid2]), dtype=tf.float32)
w3 = tf.Variable(initializer([num_hid2, num_hid3]), dtype=tf.float32)
w4 = tf.Variable(initializer([num_hid3, num_output]), dtype=tf.float32)

b1 = tf.Variable(tf.zeros(num_hid1))
b2 = tf.Variable(tf.zeros(num_hid2))
b3 = tf.Variable(tf.zeros(num_hid3))
b4 = tf.Variable(tf.zeros(num_output))

hid_layer1 = actf(tf.matmul(X, w1) + b1)
hid_layer2 = actf(tf.matmul(hid_layer1, w2) + b2)
hid_layer3 = actf(tf.matmul(hid_layer2, w3) + b3)
output_layer = actf(tf.matmul(hid_layer3, w4) + b4)

TensorFlow projects do not usually use tf.variance_scaling_initializer(). We use it here because we are dealing with inputs of varying size: the placeholder tensor (used for feeding input batches) adapts to the shape of the incoming data, which prevents dimensional errors. Each subsequent hidden layer is created simply by feeding the previous hidden layer, together with the corresponding weights and biases, into the activation function (ReLU).

We will use a mean squared error (MSE) reconstruction loss for this neural network and pass it to the Adam optimizer. You can also swap these out to experiment with other results.

loss = tf.reduce_mean(tf.square(output_layer - X))
optimizer = tf.train.AdamOptimizer(lr)
train = optimizer.minimize(loss)
init = tf.global_variables_initializer()
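For instance, one possible swap (a minimal sketch, not part of the original walkthrough) is an L1 reconstruction loss with RMSProp instead of MSE with Adam; both are available in the same TensorFlow 1.x API used above:

# Hypothetical alternative: L1 reconstruction loss + RMSProp optimizer
loss_l1 = tf.reduce_mean(tf.abs(output_layer - X))
optimizer_rms = tf.train.RMSPropOptimizer(lr)
train_l1 = optimizer_rms.minimize(loss_l1)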

Now, let's define the number of epochs and the batch size, and run the session. We use mnist.train.next_batch() to fetch each new batch. We also print the training loss after each epoch to monitor training.

num_epoch = 5
batch_size = 150
num_test_images = 10

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(num_epoch):
        num_batches = mnist.train.num_examples // batch_size
        for iteration in range(num_batches):
            X_batch, y_batch = mnist.train.next_batch(batch_size)
            sess.run(train, feed_dict={X: X_batch})
        train_loss = loss.eval(feed_dict={X: X_batch})
        print("epoch {} loss {}".format(epoch, train_loss))

Finally, we write a small plotting routine to draw the original images next to their reconstructions and see how the trained model performs.

    # Still inside the Session: compare original images with their reconstructions
    results = output_layer.eval(feed_dict={X: mnist.test.images[:num_test_images]})
    f, a = plt.subplots(2, 10, figsize=(20, 4))
    for i in range(num_test_images):
        a[0][i].imshow(np.reshape(mnist.test.images[i], (28, 28)))
        a[1][i].imshow(np.reshape(results[i], (28, 28)))
    plt.show()

Here we can see that the reconstructions are not perfect but are very close to the original images. Notice in the image above that the reconstruction of the 2 looks like a 3; this is due to information lost during compression.

We can improve the autoencoder by tuning its hyperparameters, and we can also speed up training by running it on a GPU.
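As a final note (a minimal sketch based on the graph defined above, not part of the original text), the compressed 196-dimensional representation discussed earlier can be read straight off the bottleneck layer hid_layer2 inside the same session; this is what you would keep if you only wanted dimensionality reduction:

    # Still inside the Session: the bottleneck layer yields the compressed codes
    codes = hid_layer2.eval(feed_dict={X: mnist.test.images[:num_test_images]})
    print(codes.shape)  # expected: (10, 196)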

That is our introduction to, and practice with, TensorFlow deep autoencoders. I hope the content above is helpful to you. If you still have questions, you can follow the industry information channel to learn more.
