
How to Summarize RNN and Apply It to sin-to-cos Fitting

2025-01-17 Update From: SLTechnology News&Howtos


Many novices are unclear about how to summarize RNN and apply it to fitting sin to cos. To help you solve this problem, the editor below explains it in detail; anyone who needs this can come and learn, and I hope you will get something out of it.

I. RNN summary

A simple RNN model consists of an input layer, a hidden layer and an output layer.

When we unroll this abstract structure into a concrete diagram over time, we can clearly see how the hidden layer of the previous moment affects the hidden layer of the current moment.

Based on RNN, the model can also be extended to bidirectional recurrent neural networks and deep recurrent neural networks. The RNN formulas are:

h_t = f(U·x_t + W·h_{t-1} + b)
o_t = g(V·h_t + c)

where x_t is the input at time t, h_t is the hidden state, o_t is the output, U, W, and V are weight matrices, and f and g are activation functions.
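To make the recurrence concrete, here is a minimal NumPy sketch (the dimensions and random weights are made up for illustration, not taken from the article):

import numpy as np

# Minimal sketch of the recurrence h_t = tanh(U·x_t + W·h_{t-1} + b).
rng = np.random.default_rng(0)
input_size, hidden_size, seq_len = 1, 4, 5
U = rng.normal(size=(hidden_size, input_size))
W = rng.normal(size=(hidden_size, hidden_size))
b = np.zeros(hidden_size)
h = np.zeros(hidden_size)               # h_0
for x_t in rng.normal(size=(seq_len, input_size)):
    h = np.tanh(U @ x_t + W @ h + b)    # h_{t-1} feeds into h_t
print(h)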

Define the RNN class as follows:

from torch import nn

class RNN(nn.Module):
    def __init__(self):
        super(RNN, self).__init__()
        self.rnn = nn.RNN(
            input_size=INPUT_SIZE,  # the number of expected features in the input `x`
            hidden_size=32,         # the number of features in the hidden state `h`
            num_layers=1,           # number of recurrent layers
            batch_first=True,       # if True, tensors are (batch, seq, feature)
        )
        self.out = nn.Linear(32, 1)  # linear transformation

    def forward(self, x, h_state):
        out, h_state = self.rnn(x, h_state)
        return out, h_state
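A quick shape check (hypothetical usage, assuming the RNN class above is defined and INPUT_SIZE = 1, matching the fitting example in the next section):

import torch

INPUT_SIZE = 1  # assumed value, as in the fitting example below

rnn = RNN()
x = torch.randn(1, 10, INPUT_SIZE)  # (batch, seq, feature) because batch_first=True
out, h_state = rnn(x, None)         # h_state=None means a zero initial hidden state
print(out.shape)      # torch.Size([1, 10, 32]) -- per-step hidden states from nn.RNN
print(h_state.shape)  # torch.Size([1, 1, 32])  -- (num_layers, batch, hidden_size)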

Tips: 1. RNN's training algorithm is BPTT (backpropagation through time). Its basic principle is the same as the BP algorithm, involving the same three steps: 1) forward-compute the output value of each neuron; 2) backward-compute the error term of each neuron, i.e., the partial derivative of the error function E with respect to the weighted input net_j of neuron j; 3) compute the gradient of each weight.
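As a toy illustration of BPTT (a minimal sketch with made-up scalar values, not the article's code), autograd unrolls the forward steps and accumulates the weight gradient across all time steps:

import torch

# Scalar RNN unrolled for three steps; the shared weight w receives
# gradient contributions from every time step when backward() runs.
w = torch.tensor(0.5, requires_grad=True)
h = torch.tensor(0.0)
for x in [1.0, 2.0, 3.0]:
    h = torch.tanh(w * x + w * h)  # step 1: forward-compute each output
loss = (h - 1.0) ** 2              # error E at the final step
loss.backward()                    # steps 2-3: error terms and weight gradient
print(w.grad)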

2. Vanishing and exploding gradients in RNN. The gradient with respect to early time steps contains an exponential factor of the form β^t: when β is greater than 1 this factor blows up (exploding gradient), and when β is less than 1 it shrinks toward zero (vanishing gradient).
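A quick numeric sketch (the β values are made up for illustration) of how this exponential factor behaves over 100 time steps:

# beta**t over 100 time steps: below 1 it vanishes, above 1 it explodes.
for beta in (0.9, 1.1):
    print(beta, beta ** 100)  # 0.9**100 ≈ 2.7e-05, 1.1**100 ≈ 1.4e+04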

How to avoid these problems: 1) Exploding gradients: set a gradient threshold and clip the gradient directly when it exceeds that threshold (gradient clipping, PyTorch's nn.utils.clip_grad_norm_; a sketch follows below); use good parameter initialization methods, such as He initialization; use non-saturating activation functions such as ReLU; use batch normalization; use LSTM. 2) Vanishing gradients: improve the network with LSTM, which adds a forget gate.
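A minimal gradient-clipping sketch (the model and data here are hypothetical placeholders, not from the original article):

import torch
from torch import nn

model = nn.Linear(4, 1)  # hypothetical stand-in for an RNN
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 4), torch.randn(8, 1)

loss = nn.MSELoss()(model(x), y)
optimizer.zero_grad()
loss.backward()
# Rescale gradients in place so their total norm is at most max_norm.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()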

II. Application: fitting sin to cos

We fit the function sin to cos; the model acts as a black box analogous to the identity sin(π/2 + α) = cos α.

import torch
from torch import nn
import numpy as np
import matplotlib.pyplot as plt

# hyperparameters
TIME_STEP = 10
INPUT_SIZE = 1
learning_rate = 0.001

class RNN(nn.Module):
    def __init__(self):
        super(RNN, self).__init__()
        self.rnn = nn.RNN(input_size=INPUT_SIZE, hidden_size=32,
                          num_layers=1, batch_first=True)
        self.out = nn.Linear(32, 1)

    def forward(self, x, h_state):
        r_out, h_state = self.rnn(x, h_state)  # r_out.shape: (batch, seq, hidden_size)
        out = self.out(r_out).squeeze()        # map each hidden state to a scalar
        return out, h_state

rnn = RNN()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
h_state = None

plt.figure(1, figsize=(12, 5))
plt.ion()  # enable dynamic interaction

for step in range(100):  # iteration count assumed; the original omitted it
    start, end = step * np.pi, (step + 1) * np.pi
    steps = np.linspace(start, end, TIME_STEP, dtype=np.float32, endpoint=False)
    x_np = np.sin(steps)  # x_np.shape: (10,)
    y_np = np.cos(steps)  # y_np.shape: (10,)
    x = torch.from_numpy(x_np[np.newaxis, :, np.newaxis])  # x.shape: (1, 10, 1)
    y = torch.from_numpy(y_np)                             # y.shape: (10,)

    prediction, h_state = rnn(x, h_state)
    h_state = h_state.data  # detach the hidden state from the previous graph

    loss = criterion(prediction, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    plt.plot(steps, y_np.flatten(), 'r-')
    plt.plot(steps, prediction.data.numpy().flatten(), 'b-')
    plt.draw()
    plt.pause(0.05)

plt.ioff()
plt.show()

Was it helpful for you to read the above content? If you want to know more about the relevant knowledge or read more related articles, please follow the industry information channel. Thank you for your support.
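Two notes on the loop above: h_state = h_state.data detaches the hidden state so gradients do not flow back across windows, while the carried-over value lets consecutive windows form one continuous sequence. As a hypothetical follow-up (not from the original article; it assumes rnn and TIME_STEP from the script above), the trained model can be evaluated on a fresh window:

import numpy as np
import torch

with torch.no_grad():
    steps = np.linspace(100 * np.pi, 101 * np.pi, TIME_STEP,
                        dtype=np.float32, endpoint=False)
    x = torch.from_numpy(np.sin(steps)[np.newaxis, :, np.newaxis])
    pred, _ = rnn(x, None)  # fresh zero hidden state
    print(np.abs(pred.numpy() - np.cos(steps)).mean())  # mean absolute error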
