
How to train an RNN to predict time series signals

2025-03-17 Update From: SLTechnology News&Howtos

This article gives a detailed walkthrough of how to train an RNN to predict time series signals, in the hope of helping readers who want to solve this problem find a simple, workable approach.

Last time, we used an RNN to build a simple handwriting classifier.

Today we will learn how to train an RNN to predict time series signals, such as stock prices, temperatures, and brain waves.

Each training sample is a window of 20 consecutive values randomly selected from the time series, and its target is the same window shifted one step forward in time; in other words, all but the last value of the target are identical to the last 19 values of the input, and only the final value is new.
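The article does not show how these windows are produced, so here is a minimal sketch of one possible batch generator; the synthetic signal in time_series and the helper next_batch are assumptions for illustration only, not part of the original code:

import numpy as np

t_min, t_max = 0, 30
resolution = 0.1

def time_series(t):
    # a made-up signal: a slow sine wave plus a faster one
    return np.sin(t) / 3 + 2 * np.sin(t * 5)

def next_batch(batch_size, n_steps):
    # pick random window starts, then read n_steps + 1 consecutive values
    t0 = np.random.rand(batch_size, 1) * (t_max - t_min - n_steps * resolution)
    ts = t0 + np.arange(0.0, n_steps + 1) * resolution
    ys = time_series(ts)
    # inputs are values 0..n_steps-1, targets are values 1..n_steps (shifted one step)
    return ys[:, :-1].reshape(-1, n_steps, 1), ys[:, 1:].reshape(-1, n_steps, 1)

X_batch, y_batch = next_batch(batch_size=50, n_steps=20)
print(X_batch.shape, y_batch.shape)   # (50, 20, 1) (50, 20, 1)

Each returned array has the shape [batch_size, n_steps, 1] expected by the placeholders defined below.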

First of all, we create an RNN with 100 recurrent neurons. Because each training sample is 20 values long, we unroll the network over 20 time steps, and each input at a given step contains a single feature (the value at that time). The targets are likewise sequences of 20 values. The code, similar to last time, is as follows:

import tensorflow as tf

n_steps = 20     # length of each training sequence (unrolled time steps)
n_inputs = 1     # one feature per time step
n_neurons = 100  # recurrent neurons in the cell
n_outputs = 1    # one predicted value per time step

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])

cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu)
outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)

At every time step we now get an output vector of size 100, but what we actually need is a single output value per step. The simplest solution is to wrap the cell in an OutputProjectionWrapper. The wrapper behaves like an ordinary recurrent cell but adds extra functionality on top: it applies a fully connected layer of linear neurons (with no activation) to the cell's output at each time step, without affecting the cell's state. All of these output neurons share the same weights and biases across time steps.

Wrapping the cell is fairly simple: you only need to tweak the previous code so that the BasicRNNCell is wrapped in an OutputProjectionWrapper, as follows:

cell = tf.contrib.rnn.OutputProjectionWrapper(
    tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu),
    output_size=n_outputs)
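The dynamic_rnn call itself is unchanged, but it has to be run on the wrapped cell so that the outputs tensor used in the loss below comes from the projection layer; a minimal restatement of that line:

outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
# outputs now has shape [batch_size, n_steps, n_outputs]: one value per time step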

Now we can define the loss function just as we did before, using the mean squared error (MSE). Next, create an optimizer; here we choose the Adam optimizer. For how to choose an optimizer, see the previous article:

Deep learning algorithm (issue 5)-Optimizer selection in deep learning

learning_rate = 0.001

loss = tf.reduce_mean(tf.square(outputs - y))  # MSE over all time steps
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)

init = tf.global_variables_initializer()

Then comes the execution phase:

n_iterations = 10000
batch_size = 50

with tf.Session() as sess:
    init.run()
    for iteration in range(n_iterations):
        X_batch, y_batch = [...]  # fetch the next training batch
        sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        if iteration % 100 == 0:
            mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
            print(iteration, "\tMSE:", mse)

The output is as follows:

0       MSE: 379.586
100     MSE: 14.58426
200     MSE: 7.14066
300     MSE: 3.98528
400     MSE: 2.00254
[...]

Once the model has been trained, it can be used to predict:

X_new = [...]  # new sequences
y_pred = sess.run(outputs, feed_dict={X: X_new})
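As an illustration only, a single 20-step test window could be built from the hypothetical time_series helper and resolution sketched earlier (this must run inside the same Session, since it reuses the trained variables):

ts = 12.0 + np.arange(0.0, n_steps) * resolution   # 20 consecutive time points
X_new = time_series(ts).reshape(1, n_steps, n_inputs)
y_pred = sess.run(outputs, feed_dict={X: X_new})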

The figure below shows the model's predictions after the above code has been trained for 1,000 iterations.

Although OutputProjectionWrapper is the simplest way to reduce the RNN's output sequence to one value per time step, it is not the most efficient. There is a trick that works better: first reshape the RNN outputs from [batch_size, n_steps, n_neurons] to [batch_size * n_steps, n_neurons], then apply a single fully connected layer of the appropriate size, producing a tensor of shape [batch_size * n_steps, n_outputs], and finally reshape that tensor back to [batch_size, n_steps, n_outputs].

This solution is not hard to implement. It needs no OutputProjectionWrapper, only a BasicRNNCell, as follows:

cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu)
rnn_outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)

Then we stack the outputs, apply the fully connected layer, and reshape the result back, as follows:

stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
stacked_outputs = tf.contrib.layers.fully_connected(stacked_rnn_outputs, n_outputs,
                                                    activation_fn=None)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])

The code that follows is the same as before. Because this version uses a single fully connected layer applied to all time steps at once, rather than one projection per time step, it runs much faster.
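For completeness, here is a minimal sketch of that remaining code; it simply repeats the earlier loss, optimizer, and initializer definitions, now operating on the reshaped outputs tensor defined above:

learning_rate = 0.001

loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()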

This concludes the answer to the question of how to train an RNN to predict time series signals. I hope the above content is helpful; if you still have questions, you can follow the industry information channel to learn more.
