How to use the numel function in PyTorch
This article introduces how to use the numel function in PyTorch. It is fairly detailed and should be a useful reference; interested readers are encouraged to read it through to the end.
PyTorch provides a numel function. With it, we can find out how many elements a tensor contains.
Get the total number of elements in a tensor:

import torch

x = torch.randn(3, 3)
print("number elements of x is", x.numel())

y = torch.randn(3, 10, 5)
print("number elements of y is", y.numel())
Output:
number elements of x is 9
number elements of y is 150
That is, x and y contain 9 and 150 elements respectively, which is the product of each tensor's dimensions.
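As a quick check of that statement, here is a minimal sketch (not part of the original article) showing that numel() equals the product of the dimensions in the tensor's shape:

import math
import torch

x = torch.randn(3, 3)
y = torch.randn(3, 10, 5)

# the element count is simply the product of the shape dimensions
assert x.numel() == math.prod(x.shape) == 9    # 3 * 3
assert y.numel() == math.prod(y.shape) == 150  # 3 * 10 * 5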
Addendum: using numel() to get the number of elements in a PyTorch tensor
Numel is the abbreviation of "number of elements".
numel() directly returns the number of elements as a plain int:

import torch

a = torch.randn(1, 2, 3, 4)
b = a.numel()
print(type(b))  # <class 'int'>
print(b)        # 24
With the numel() function, we can quickly check how many elements a tensor has.
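One common practical use of this (an illustrative sketch, not taken from the original article) is counting the total number of learnable parameters in a model by summing numel() over model.parameters():

from torch import nn

model = nn.Sequential(
    nn.Linear(10, 20),
    nn.ReLU(),
    nn.Linear(20, 1),
)

# weights and biases: 10*20 + 20 + 20*1 + 1 = 241
total = sum(p.numel() for p in model.parameters())
print("total parameters:", total)  # 241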
Supplement: PyTorch convolution structure and the numel() function
Take a look at the code:

import math
import torch
from torch import nn

class CNN(nn.Module):
    def __init__(self, num_channels=1, d=56, s=12, m=4):
        super(CNN, self).__init__()
        # note: every convolution here takes num_channels input channels, so first_part
        # is only inspected below for its weight shapes, not run with a forward pass
        self.first_part = nn.Sequential(
            nn.Conv2d(num_channels, d, kernel_size=3, padding=5 // 2),
            nn.Conv2d(num_channels, d, kernel_size=(1, 3), padding=5 // 2),
            nn.Conv2d(num_channels, d, kernel_size=(3, 1), padding=5 // 2),
            nn.PReLU(d)
        )

    def forward(self, x):
        x = self.first_part(x)
        return x

model = CNN()
for m in model.first_part:
    if isinstance(m, nn.Conv2d):
        # print('m:', m.weight.data)        # the full weight tensor
        print('m:', m.weight.data[0])       # weights of the first output channel
        print('m:', m.weight.data[0][0])    # first kernel of that channel
        print('m:', m.weight.data.numel())  # numel() counts the elements in the weight tensor
        print('m:', m.bias.data)            # bias vector, one value per output channel

Result (the weights are random, so the exact values vary; output abridged):

m: tensor([[[-0.2822,  0.0128, -0.0244],
            [-0.2329,  0.1037,  0.2262],
            [ 0.2845, -0.3094,  0.1443]]])     # weight.data[0] of the first convolution
m: tensor([[-0.2822,  0.0128, -0.0244],
           [-0.2329,  0.1037,  0.2262],
           [ 0.2845, -0.3094,  0.1443]])       # weight.data[0][0]: the same 3x3 kernel, since in_channels is 1
m: 504                                         # = 56 output channels x (1 x 3 x 3)
m: tensor([-0.0335,  0.2945,  0.2512, ...])    # 56 random bias values
m: tensor([[[ 0.5747, -0.3421,  0.2847]]])     # weight.data[0] of the second convolution (1x3 kernel)
m: tensor([[ 0.5747, -0.3421,  0.2847]])       # weight.data[0][0]
m: 168                                         # = 56 output channels x (1 x 1 x 3)
m: tensor([ 0.5328, -0.5711, -0.1945, ...])    # 56 random bias values
m: tensor([[[ 0.0184], [ 0.0981], [ 0.1894]]]) # weight.data[0] of the third convolution (3x1 kernel)
m: tensor([[ 0.0184], [ 0.0981], [ 0.1894]])   # weight.data[0][0]
m: 168                                         # = 56 output channels x (1 x 3 x 1)
m: tensor([-0.2951, -0.4475,  0.1301, ...])    # 56 random bias values

After initialization:

class CNN(nn.Module):
    def __init__(self, num_channels=1, d=56, s=12, m=4):
        super(CNN, self).__init__()
        self.first_part = nn.Sequential(
            nn.Conv2d(num_channels, d, kernel_size=3, padding=5 // 2),
            nn.Conv2d(num_channels, d, kernel_size=(1, 3), padding=5 // 2),
            nn.Conv2d(num_channels, d, kernel_size=(3, 1), padding=5 // 2),
            nn.PReLU(d)
        )
        self._initialize_weights()

    def _initialize_weights(self):
        for m in self.first_part:
            if isinstance(m, nn.Conv2d):
                # the std of the normal initialization is derived from the number of
                # elements in a single kernel, counted with numel()
                nn.init.normal_(m.weight.data, mean=0.0,
                                std=math.sqrt(2 / (m.out_channels * m.weight.data[0][0].numel())))
                nn.init.zeros_(m.bias.data)

    def forward(self, x):
        x = self.first_part(x)
        return x

model = CNN()
for m in model.first_part:
    if isinstance(m, nn.Conv2d):
        print('m:', m.weight.data[0])
        print('m:', m.weight.data[0][0])
        print('m:', m.weight.data.numel())
        print('m:', m.bias.data)

Result: the element counts are unchanged (504, 168 and 168), the weights are now drawn from the scaled normal distribution (for example, the first 3x3 kernel prints as tensor([[-0.0284, -0.0585, 0.0271], [0.0125, 0.0554, 0.0511], [-0.0106, 0.0574, -0.0053]])), and every bias vector prints as 56 zeros because of nn.init.zeros_.
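As a quick cross-check (a small sketch added for illustration, not part of the original article), each weight count printed above can be recomputed from the convolution's attributes, since the weight's numel() is just out_channels x in_channels x kernel height x kernel width:

from torch import nn

# the three convolutions from first_part, with num_channels=1 and d=56
convs = [
    nn.Conv2d(1, 56, kernel_size=3, padding=5 // 2),
    nn.Conv2d(1, 56, kernel_size=(1, 3), padding=5 // 2),
    nn.Conv2d(1, 56, kernel_size=(3, 1), padding=5 // 2),
]

for conv in convs:
    kh, kw = conv.kernel_size
    expected = conv.out_channels * conv.in_channels * kh * kw
    # prints 504 504, then 168 168, then 168 168
    print(conv.weight.data.numel(), expected)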
That is all the content of this article on how to use the numel function in PyTorch. Thank you for reading! I hope it has been helpful; for more related knowledge, you are welcome to follow the industry information channel.