The editor shares with you what the difference is between F.avg_pool1d() and F.avg_pool2d() in PyTorch. Most people are not very familiar with it, so this article is shared for your reference; I hope you gain a lot from reading it. Let's take a look together.
F.avg_pool1d(): the input is 3-dimensional
Input dimensions: (batch_size, channels, width); the channel dimension can be thought of as the height.
Kernel dimension: one-dimensional, giving the span along the width; the kernel's channel matches the input channel and can likewise be thought of as the height of the matrix.
Assume kernel_size=2: every two columns are averaged. stride defaults to kernel_size, and a window that would cross the boundary is discarded. (Below, columns 1-2 are averaged and columns 3-4 are averaged; the 5th column is dropped.)
import torch
import torch.nn.functional as F

input = torch.tensor([[1,1,1,1,1],[1,1,1,1,1],[0,0,0,1,1],[1,1,1,1,1],[1,1,1,1,1]]).unsqueeze(0).float()
print(input)
m = F.avg_pool1d(input, kernel_size=2)
m

tensor([[[1., 1., 1., 1., 1.],
         [1., 1., 1., 1., 1.],
         [0., 0., 0., 1., 1.],
         [1., 1., 1., 1., 1.],
         [1., 1., 1., 1., 1.]]])
tensor([[[1.0000, 1.0000],
         [1.0000, 1.0000],
         [0.0000, 0.5000],
         [1.0000, 1.0000],
         [1.0000, 1.0000]]])
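As a side note that is not in the original post, the windowed averages above can be reproduced by hand with Tensor.unfold, which makes the kernel_size=2, stride=2 windows explicit. A minimal sketch, reusing the row [0, 0, 0, 1, 1] from the example above:

import torch
import torch.nn.functional as F

x = torch.tensor([[0., 0., 0., 1., 1.]]).unsqueeze(0)  # shape (1, 1, 5): (batch, channels, width)
pooled = F.avg_pool1d(x, kernel_size=2)                # windows (0, 0) and (0, 1); the 5th column is dropped
windows = x.unfold(dimension=2, size=2, step=2)        # shape (1, 1, 2, 2): the same two windows, materialized
print(pooled)                # tensor([[[0.0000, 0.5000]]])
print(windows.mean(dim=-1))  # tensor([[[0.0000, 0.5000]]])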
Assume kernel_size=3: the first three columns are averaged, and the remaining columns, which cannot fill a full window, are discarded.
input = torch.tensor([[1,1,1,1,1],[1,1,1,1,1],[0,0,0,1,1],[1,1,1,1,1],[1,1,1,1,1]]).unsqueeze(0).float()
print(input)   # prints the same 5x5 tensor as above
m = F.avg_pool1d(input, kernel_size=3)
m

tensor([[[1.],
         [1.],
         [0.],
         [1.],
         [1.]]])

With kernel_size=4, the first four columns are averaged and the last column is discarded:

input = torch.tensor([[1,1,1,1,1],[1,1,1,1,1],[0,0,0,1,1],[1,1,1,1,1],[1,1,1,1,1]]).unsqueeze(0).float()
print(input)   # prints the same 5x5 tensor as above
m = F.avg_pool1d(input, kernel_size=4)
m

tensor([[[1.0000],
         [1.0000],
         [0.2500],
         [1.0000],
         [1.0000]]])
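One detail worth adding, taken from the avg_pool1d signature rather than the original article: the "discard when crossing the boundary" behaviour corresponds to the default ceil_mode=False. A hedged sketch of the difference, looking only at the output shapes:

import torch
import torch.nn.functional as F

x = torch.ones(1, 1, 5)
print(F.avg_pool1d(x, kernel_size=2).shape)                  # torch.Size([1, 1, 2]) - the 5th column is dropped
print(F.avg_pool1d(x, kernel_size=2, ceil_mode=True).shape)  # torch.Size([1, 1, 3]) - the partial window is kept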
If stride=1, the window moves one column at a time:
input = torch.tensor([[1,1,1,1,1],[1,1,1,1,1],[0,0,0,1,1],[1,1,1,1,1],[1,1,1,1,1]]).unsqueeze(0).float()
print(input)   # prints the same 5x5 tensor as above
m = F.avg_pool1d(input, kernel_size=2, stride=1)
m

tensor([[[1.0000, 1.0000, 1.0000, 1.0000],
         [1.0000, 1.0000, 1.0000, 1.0000],
         [0.0000, 0.0000, 0.5000, 1.0000],
         [1.0000, 1.0000, 1.0000, 1.0000],
         [1.0000, 1.0000, 1.0000, 1.0000]]])

input = torch.tensor([[1,1,1,1,1],[1,1,1,1,1],[0,0,0,1,1],[1,1,1,1,1],[1,1,1,1,1]]).unsqueeze(0).float()
print(input)   # prints the same 5x5 tensor as above
m = F.avg_pool1d(input, kernel_size=4, stride=1)
m

tensor([[[1.0000, 1.0000],
         [1.0000, 1.0000],
         [0.2500, 0.5000],
         [1.0000, 1.0000],
         [1.0000, 1.0000]]])

F.avg_pool2d(): the input is 4-dimensional
Input dimensions: (batch_size, channels, height, width)
Kernel dimension: two-dimensional, giving the span over height and width; the channel matches the input channel, and if the data is 3-dimensional the channel is 1. (If you pass only a single number n, the kernel is (n, n).)
stride defaults to the kernel size; it is also two-dimensional, so it matches the kernel in both height and width, and windows that cross the boundary are again discarded.
This is consistent with how convolution works in a CNN.
input = torch.tensor([[1,1,1,1,1],[1,1,1,1,1],[0,0,0,1,1],[1,1,1,1,1],[1,1,1,1,1]]).unsqueeze(0).float()
print(input.size())
print(input)
m = F.avg_pool2d(input, kernel_size=(4, 4))
m

torch.Size([1, 5, 5])
tensor([[[1., 1., 1., 1., 1.],
         [1., 1., 1., 1., 1.],
         [0., 0., 0., 1., 1.],
         [1., 1., 1., 1., 1.],
         [1., 1., 1., 1., 1.]]])
tensor([[[0.8125]]])

input = torch.tensor([[1,1,1,1,1],[1,1,1,1,1],[0,0,0,1,1],[1,1,1,1,1],[1,1,1,1,1]]).unsqueeze(0).float()
print(input.size())
print(input)   # prints the same 5x5 tensor as above
m = F.avg_pool2d(input, kernel_size=(4, 4), stride=1)
m

torch.Size([1, 5, 5])
tensor([[[0.8125, 0.8750],
         [0.8125, 0.8750]]])
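To make the "consistent with convolution" remark concrete, here is a minimal sketch that is not from the original article: the 0.8125 above also falls out of F.conv2d with a uniform 4x4 kernel whose weights are all 1/16.

import torch
import torch.nn.functional as F

x = torch.tensor([[1, 1, 1, 1, 1],
                  [1, 1, 1, 1, 1],
                  [0, 0, 0, 1, 1],
                  [1, 1, 1, 1, 1],
                  [1, 1, 1, 1, 1]]).float().reshape(1, 1, 5, 5)  # (batch, channels, height, width)

pooled = F.avg_pool2d(x, kernel_size=4)        # default stride = kernel_size = 4
weight = torch.full((1, 1, 4, 4), 1.0 / 16)    # one output channel, uniform averaging weights
conved = F.conv2d(x, weight, stride=4)

print(pooled)   # tensor([[[[0.8125]]]])
print(conved)   # tensor([[[[0.8125]]]])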
If you want the average over the columns (one value per row), use kernel_size=(1, 5); stride then defaults to (1, 5).
input = torch.tensor([[1,1,1,1,1],[1,1,1,1,1],[0,0,0,1,1],[1,1,1,1,1],[1,1,1,1,1]]).unsqueeze(0).float()
print(input.size())
print(input)   # prints the same 5x5 tensor as above
m = F.avg_pool2d(input, kernel_size=(1, 5))
m

torch.Size([1, 5, 5])
tensor([[[1.0000],
         [1.0000],
         [0.4000],
         [1.0000],
         [1.0000]]])
If you want the average over the rows (one value per column), use kernel_size=(5, 1); stride then defaults to (5, 1). Think of it with the concept of convolution.
input = torch.tensor([[1,1,1,1,1],[1,1,1,1,1],[0,0,0,1,1],[1,1,1,1,1],[1,1,1,1,1]]).unsqueeze(0).float()
print(input.size())
print(input)   # prints the same 5x5 tensor as above
m = F.avg_pool2d(input, kernel_size=(5, 1))
m

torch.Size([1, 5, 5])
tensor([[[0.8000, 0.8000, 0.8000, 1.0000, 1.0000]]])
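As a cross-check that is not in the original article, the (1, 5) and (5, 1) kernels above are just mean() along one dimension. A small hedged sketch:

import torch
import torch.nn.functional as F

x = torch.tensor([[1, 1, 1, 1, 1],
                  [1, 1, 1, 1, 1],
                  [0, 0, 0, 1, 1],
                  [1, 1, 1, 1, 1],
                  [1, 1, 1, 1, 1]]).float().unsqueeze(0)    # (1, 5, 5)

per_row = F.avg_pool2d(x, kernel_size=(1, 5))   # (1, 5, 1): one average per row
per_col = F.avg_pool2d(x, kernel_size=(5, 1))   # (1, 1, 5): one average per column

print(torch.allclose(per_row.squeeze(), x[0].mean(dim=1)))  # True  (0.4 for the third row)
print(torch.allclose(per_col.squeeze(), x[0].mean(dim=0)))  # True  (0.8, 0.8, 0.8, 1.0, 1.0)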
For four-dimensional data, the pooling runs on each channel separately, so the output keeps the same number of channels as the input.
input = torch.randn(10, 3, 4, 4)   # the original only shows the result shape; (10, 3, 4, 4) is the natural input here
m = F.avg_pool2d(input, (4, 4))
print(m.size())

torch.Size([10, 3, 1, 1])
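Since the (4, 4) kernel covers the whole spatial extent here, this is just a per-channel global mean. A quick sketch, not from the original, to confirm the channel count is untouched:

import torch
import torch.nn.functional as F

x = torch.randn(10, 3, 4, 4)
pooled = F.avg_pool2d(x, (4, 4))             # (10, 3, 1, 1): one average per sample and channel
by_hand = x.mean(dim=(2, 3), keepdim=True)   # the same thing computed directly

print(pooled.shape)                          # torch.Size([10, 3, 1, 1])
print(torch.allclose(pooled, by_hand))       # True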
Supplement: a breakdown of the AdaptiveAvgPool functions in PyTorch
Adaptive pooling (AdaptiveAvgPool1d):
For an input of any size, this provides a 1-dimensional adaptive average pooling operation: the output size can be specified, and the number of input and output features (channels) does not change.
torch.nn.AdaptiveAvgPool1d(output_size)  # output_size: the target output size
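Before the documentation example below, here is a minimal sketch of my own (the sizes are arbitrary illustrations, not from the original) showing what "adaptive" means in practice: the output length is fixed by the module whatever the input length, and when the input length is an exact multiple it reduces to plain average pooling.

import torch
import torch.nn as nn

m = nn.AdaptiveAvgPool1d(5)
print(m(torch.randn(1, 64, 8)).shape)    # torch.Size([1, 64, 5])
print(m(torch.randn(1, 64, 100)).shape)  # torch.Size([1, 64, 5])

# With input length 10 and output length 5, this matches avg_pool1d with kernel_size=2.
x = torch.randn(1, 64, 10)
print(torch.allclose(m(x), nn.AvgPool1d(kernel_size=2)(x)))  # True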
import torch
import torch.nn as nn
from torch import autograd

# target output size of 5
m = nn.AdaptiveAvgPool1d(5)
input = autograd.Variable(torch.randn(1, 64, 8))  # autograd.Variable is kept from the original; a plain tensor works the same way in current PyTorch
output = m(input)

Adaptive pooling (AdaptiveAvgPool2d):

class torch.nn.AdaptiveAvgPool2d(output_size)
For an input of any size, this provides a 2-dimensional adaptive average pooling operation: the output size can be specified as H x W, and the number of input and output features (channels) does not change.
Parameters:
output_size: the size of the output signal, given either as a tuple (H, W) for an H x W output, or as a single number H for an H x H output.
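Here is a hedged sketch of my own (the input sizes are arbitrary, not from the docs) making the same point in 2D, plus the common case of output_size=1, which gives a per-channel global average:

import torch
import torch.nn as nn

pool = nn.AdaptiveAvgPool2d((5, 7))             # the output is always 5x7, whatever H and W come in
print(pool(torch.randn(1, 64, 8, 9)).shape)     # torch.Size([1, 64, 5, 7])
print(pool(torch.randn(1, 64, 32, 50)).shape)   # torch.Size([1, 64, 5, 7])

gap = nn.AdaptiveAvgPool2d(1)                   # output_size=1: per-channel global average pooling
print(gap(torch.randn(2, 512, 7, 7)).shape)     # torch.Size([2, 512, 1, 1])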
# target output size of 5x7
m = nn.AdaptiveAvgPool2d((5, 7))
input = autograd.Variable(torch.randn(1, 64, 8, 9))
output = m(input)

# target output size of 7x7 (square)
m = nn.AdaptiveAvgPool2d(7)
input = autograd.Variable(torch.randn(1, 64, 10, 9))
output = m(input)

The above is all of "What is the difference between F.avg_pool1d() and F.avg_pool2d() in PyTorch". Thank you for reading! I hope the content shared here has helped you understand the two functions; if you want to learn more, welcome to follow the industry information channel.