2025-01-15 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/02 Report
This article introduces the multiplication operations commonly used in PyTorch. Many people are unsure which of PyTorch's several multiplication functions to use and when, so the following sorts them out with simple, easy-to-run examples. Hopefully it resolves your doubts about PyTorch multiplication; follow along to study.
Summary up front:
torch.mm: multiplies two 2D matrices (no vectors, no batching). For example, shapes (l, m) and (m, n) multiply to give (l, n).
torch.bmm: batched multiplication of two 3D tensors. For example, shapes (b, l, m) and (b, m, n) multiply to give (b, l, n).
torch.mul: element-wise multiplication of two tensors of the same shape. For example, shapes (l, m) and (l, m) multiply to give (l, m).
torch.mv: multiplication of a matrix and a vector (matrix first, vector second). For example, shape (l, m) times shape (m) gives a result of shape (l).
torch.matmul: multiplies two tensors whose last two dimensions are compatible for matrix multiplication, and also handles matrix-vector products, thanks to its broadcasting mechanism (missing dimensions are filled in automatically). Example shape combinations: (b, l, m) × (b, m, n); (l, m) × (b, m, n); (b, c, l, m) × (b, c, m, n); (l, m) × (m); and so on. [Its behavior covers torch.mm, torch.bmm and torch.mv.]
The @ operator: works like torch.matmul.
The * operator: works like torch.mul.
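The operator equivalences in the last two lines can be checked directly. A minimal sketch (shapes and seed chosen arbitrarily for illustration):

```python
import torch

torch.manual_seed(0)  # arbitrary seed, only for reproducibility
a = torch.randn(2, 3)
b = torch.randn(3, 4)
c = torch.randn(2, 3)

# @ dispatches to torch.matmul; * dispatches to torch.mul
print(torch.equal(a @ b, torch.matmul(a, b)))  # True
print(torch.equal(a * c, torch.mul(a, c)))     # True
```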
1. torch.mm

```python
import torch

a = torch.ones(1, 2)
b = torch.ones(2, 3)
output = torch.mm(a, b)
print(a)
print(b)
print(output)
print(output.size())
"""
tensor([[1., 1.]])
tensor([[1., 1., 1.],
        [1., 1., 1.]])
tensor([[2., 2., 2.]])
torch.Size([1, 3])
"""
```

2. torch.bmm

```python
a = torch.randn(2, 1, 2)
b = torch.randn(2, 2, 3)
output = torch.bmm(a, b)
print(output)         # values are random
print(output.size())  # torch.Size([2, 1, 3])
```

3. torch.mul

```python
a = torch.ones(2, 3) * 2
b = torch.randn(2, 3)
output = torch.mul(a, b)  # element-wise: each entry of b is doubled
print(output)
print(output.size())      # torch.Size([2, 3])
```

4. torch.mv

```python
mat = torch.randn(3, 4)
vec = torch.randn(4)
output = torch.mv(mat, vec)
print(output)
print(output.size())  # torch.Size([3])
# Equivalent formulation via torch.mm:
print(torch.mm(mat, vec.unsqueeze(1)).squeeze(1))  # same values as output
```

5. torch.matmul

Its behavior covers torch.mm, torch.bmm and torch.mv; the remaining cases are similar and not all shown one by one.

```python
a = torch.randn(2, 1, 2)
b = torch.randn(2, 2, 3)
output = torch.bmm(a, b)
output1 = torch.matmul(a, b)  # identical values to torch.bmm here
print(output1.size())         # torch.Size([2, 1, 3])

# Shape combinations: (b, l, m) x (b, m, n); (l, m) x (b, m, n);
# (b, c, l, m) x (b, c, m, n); (l, m) x (m)
a = torch.randn(2, 3, 4)
b = torch.randn(2, 4, 5)
print(torch.matmul(a, b).size())  # torch.Size([2, 3, 5])
a = torch.randn(3, 4)
b = torch.randn(2, 4, 5)
print(torch.matmul(a, b).size())  # torch.Size([2, 3, 5])
a = torch.randn(2, 3, 3, 4)
b = torch.randn(2, 3, 4, 5)
print(torch.matmul(a, b).size())  # torch.Size([2, 3, 3, 5])
a = torch.randn(2, 3)
b = torch.randn(3)
print(torch.matmul(a, b).size())  # torch.Size([2])
```

6. The @ operator

The @ operator works like torch.matmul.

```python
a = torch.randn(2, 3, 4)
b = torch.randn(2, 4, 5)
print(torch.matmul(a, b).size())  # torch.Size([2, 3, 5])
print((a @ b).size())             # torch.Size([2, 3, 5])
a = torch.randn(2, 3)
b = torch.randn(3)
print(torch.matmul(a, b).size())  # torch.Size([2])
print((a @ b).size())             # torch.Size([2])
```

7. The * operator

The * operator works like torch.mul.

```python
a = torch.ones(2, 3) * 2
b = torch.ones(2, 3) * 3
output = torch.mul(a, b)
output1 = a * b
print(output)          # tensor([[6., 6., 6.], [6., 6., 6.]])
print(output1)         # same result
print(output1.size())  # torch.Size([2, 3])
```

Appendix: two-dimensional matrix multiplication
Neural networks contain a large number of 2D tensor matrix multiplications. Because torch.matmul handles many cases and is correspondingly more complex, PyTorch also provides the simpler torch.mm(input, other, out=None) function for the plain 2D case. A brief comparison of the two:
torch.matmul supports broadcasting: when one of the two tensors in the product is 1D, torch.matmul promotes it to 2D, performs the multiplication, and then removes the added dimension from the result.
torch.mm does not support broadcasting; both input tensors must be 2D.
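The 1D-promotion behavior described above can be sketched as follows; note that torch.mm rejects a 1D argument outright:

```python
import torch

mat = torch.randn(3, 4)
vec = torch.randn(4)

# torch.matmul promotes vec to shape (4, 1), multiplies, then drops the added dim
out = torch.matmul(mat, vec)
print(out.size())  # torch.Size([3]) -- same result as torch.mv(mat, vec)

# torch.mm requires both arguments to be 2D, so a 1D vector raises an error
try:
    torch.mm(mat, vec)
except RuntimeError as e:
    print("torch.mm rejects 1D input:", e)
```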
```python
import torch

input = torch.tensor([[1., 2.], [3., 4.]])
other = torch.tensor([[5., 6., 7.], [8., 9., 10.]])
result = torch.mm(input, other)
print(result)
# tensor([[21., 24., 27.],
#         [47., 54., 61.]])
```

This concludes our look at the multiplication operations commonly used in PyTorch. Pairing theory with practice is the best way to learn, so go and try these examples yourself. If you want to keep learning related topics, keep following this site for more practical articles.
© 2024 shulou.com SLNews company. All rights reserved.