In this article, the editor explains in detail how to implement Dropout in an RNN. The steps are laid out clearly and the details are handled carefully; hopefully this article helps resolve any doubts you have about the topic.
We can simply add a Dropout layer before or after the RNN, but if we want to apply Dropout between the RNN layers, we have to use DropoutWrapper. The following code applies Dropout to the inputs of each RNN layer, dropping each input with a probability of 50%.
import tensorflow as tf

keep_prob = 0.5
cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
# Wrap the cell so that its inputs are dropped with probability 1 - keep_prob
cell_drop = tf.contrib.rnn.DropoutWrapper(cell, input_keep_prob=keep_prob)
multi_layer_cell = tf.contrib.rnn.MultiRNNCell([cell_drop] * n_layers)
rnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
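As an aside, depending on the TensorFlow 1.x release, reusing the same wrapped cell object for every layer (the [cell_drop] * n_layers pattern above) may raise an error about reusing an RNNCell, so a common variant is to build one independent cell per layer. A minimal sketch, assuming the same n_neurons, n_layers and keep_prob as above:

cells = [tf.contrib.rnn.DropoutWrapper(
             tf.contrib.rnn.BasicRNNCell(num_units=n_neurons),
             input_keep_prob=keep_prob)
         for _ in range(n_layers)]  # one independent cell per layer
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(cells)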
Of course, we can also apply Dropout to the outputs by setting output_keep_prob.
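For example, a minimal sketch that drops both the inputs and the outputs of each layer with 50% probability, assuming the same cell and keep_prob as above:

# Drop both the inputs and the outputs of the wrapped cell
cell_drop = tf.contrib.rnn.DropoutWrapper(cell,
                                          input_keep_prob=keep_prob,
                                          output_keep_prob=keep_prob)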
Careful readers may have noticed that the code above has a problem: when we applied Dropout in a CNN earlier, there was an is_training placeholder to distinguish training from testing, but the code above has none. Indeed, its biggest flaw is that Dropout is also applied at test time, which is of course not what we want. Unfortunately, DropoutWrapper does not support an is_training placeholder, so we either have to write our own DropoutWrapper class, or build two computation graphs, one for training and one for testing. Let's look at how the two-graph approach is implemented:
import sys
import tensorflow as tf

is_training = (sys.argv[-1] == "train")  # the last command-line argument selects the mode

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])

cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
if is_training:
    # Only wrap the cell with Dropout when building the training graph
    cell = tf.contrib.rnn.DropoutWrapper(cell, input_keep_prob=keep_prob)
multi_layer_cell = tf.contrib.rnn.MultiRNNCell([cell] * n_layers)
rnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)

[...] # build the rest of the graph

init = tf.global_variables_initializer()
saver = tf.train.Saver()

with tf.Session() as sess:
    if is_training:
        init.run()
        for iteration in range(n_iterations):
            [...] # train the model
        save_path = saver.save(sess, "/tmp/my_model.ckpt")
    else:
        saver.restore(sess, "/tmp/my_model.ckpt")
        [...] # use the model
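To use this, run the same script twice: once with a trailing "train" argument (for instance python rnn_dropout.py train, where rnn_dropout.py is just a placeholder name for the file above) to build the graph with Dropout, train, and save the model to /tmp/my_model.ckpt; and once with any other argument, which rebuilds the graph without Dropout and restores the checkpoint for testing.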
That concludes this introduction to implementing Dropout in an RNN. To really master the points covered here, you still need to practise and apply them yourself. If you would like to read more related articles, you are welcome to follow the industry information channel.