2025-01-19 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/01 Report
How should we analyze PyTorch's one-dimensional convolution, nn.Conv1d? This article walks through the layer's parameters and a worked example in detail, in the hope of helping readers facing this question find a simple and practical approach.
torch.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)

in_channels (int) - number of channels in the input signal. In text classification, this is the dimension of the word vectors.
out_channels (int) - number of channels produced by the convolution, i.e. the number of one-dimensional convolution filters.
kernel_size (int or tuple) - size of the convolution kernel. The kernel itself has size (k,); its other dimension is determined by in_channels, so each filter actually covers kernel_size * in_channels values.
stride (int or tuple, optional) - stride of the convolution. Default: 1.
padding (int or tuple, optional) - number of zeros added to each side of the input. Default: 0.
dilation (int or tuple, optional) - spacing between kernel elements. Default: 1.
groups (int, optional) - number of blocked connections from input channels to output channels. Default: 1.
bias (bool, optional) - if True, adds a learnable bias to the output. Default: True.
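The claim above that each filter covers kernel_size * in_channels values can be checked on the layer's weight tensor; the parameter values below are illustrative, not from the article:

```python
import torch.nn as nn

# Illustrative sizes: 4 input channels, 8 filters, kernel width 3.
conv = nn.Conv1d(in_channels=4, out_channels=8, kernel_size=3, bias=True)

# Each of the 8 filters spans all 4 input channels and 3 positions,
# so the weight tensor has shape (out_channels, in_channels, kernel_size).
print(conv.weight.shape)  # torch.Size([8, 4, 3])
print(conv.bias.shape)    # torch.Size([8])
```

With groups > 1 the middle dimension becomes in_channels // groups, since each filter then sees only its own group of input channels.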
In one-dimensional convolution, the input typically has shape [batch_size, seq_len, input_size]. Before applying the convolution, seq_len and input_size must be swapped, giving [batch_size, input_size, seq_len], because nn.Conv1d treats the second dimension as the channels: input_size must equal the layer's in_channels, while the convolution slides along the last dimension, seq_len. The output length is given by the following formula:

seq_len_out = floor((seq_len + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1)
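This output-length formula can be verified directly against an actual layer; a minimal sketch, with layer sizes chosen only for illustration:

```python
import math
import torch
import torch.nn as nn

def conv1d_out_len(seq_len, kernel_size, stride=1, padding=0, dilation=1):
    # Output length formula from the text.
    return math.floor((seq_len + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1)

# Illustrative layer: 16 input channels, 33 filters, kernel 3, stride 2.
m = nn.Conv1d(in_channels=16, out_channels=33, kernel_size=3, stride=2)
x = torch.randn(20, 16, 50)  # (batch, in_channels, seq_len)
out = m(x)

print(conv1d_out_len(50, 3, stride=2))  # 24
print(out.shape)                        # torch.Size([20, 33, 24])
```

The formula's result matches the last dimension of the layer's output, as expected.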
For example:

conv1 = nn.Conv1d(in_channels=256, out_channels=100, kernel_size=2)
input = torch.randn(32, 35, 256)  # batch_size x text_len x embedding_size
input = input.permute(0, 2, 1)    # -> batch_size x embedding_size x text_len
out = conv1(input)
print(out.size())

Here 32 is the batch_size, 35 is the maximum sentence length, and 256 is the word-vector dimension.

Before entering the one-dimensional convolution, the 32 x 35 x 256 input must be transformed to 32 x 256 x 35, because the convolution scans along the last dimension. The final output size is 32 x 100 x (35 - 2 + 1) = 32 x 100 x 34.
This concludes the analysis of PyTorch's one-dimensional convolution nn.Conv1d. I hope the above content is of some help to you. If you still have questions, you can follow the industry information channel to learn more.
© 2024 shulou.com SLNews company. All rights reserved.