Today I will talk about how to use Softmax and LogSoftmax in PyTorch. Many people may not be very familiar with them, so this article summarizes the essentials; I hope you get something out of it.
1. Function interpretation
1. The most common use of the Softmax function is to specify the parameter dim:
(1) dim=0: applies softmax over every column, so that the elements of each column sum to 1.
(2) dim=1: applies softmax over every row, so that the elements of each row sum to 1 (see the sketch below).
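For a quick illustration of the difference between the two settings, here is a minimal sketch (the 2x3 tensor holds made-up example values):

import torch
import torch.nn as nn

x = torch.tensor([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])

col_softmax = nn.Softmax(dim=0)(x)   # normalizes each column
row_softmax = nn.Softmax(dim=1)(x)   # normalizes each row

print(col_softmax.sum(dim=0))        # every column sums to 1 (up to rounding)
print(row_softmax.sum(dim=1))        # every row sums to 1 (up to rounding)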
class Softmax(Module):
    r"""Applies the Softmax function to an n-dimensional input Tensor
    rescaling them so that the elements of the n-dimensional output Tensor
    lie in the range [0, 1] and sum to 1.

    Softmax is defined as:

    .. math::
        \text{Softmax}(x_{i}) = \frac{\exp(x_i)}{\sum_j \exp(x_j)}

    Shape:
        - Input: :math:`(*)` where `*` means, any number of additional
          dimensions
        - Output: :math:`(*)`, same shape as the input

    Returns:
        a Tensor of the same dimension and shape as the input with
        values in the range [0, 1]

    Arguments:
        dim (int): A dimension along which Softmax will be computed (so every slice
            along dim will sum to 1).

    .. note::
        This module doesn't work directly with NLLLoss,
        which expects the Log to be computed between the Softmax and itself.
        Use `LogSoftmax` instead (it's faster and has better numerical properties).

    Examples::

        >>> m = nn.Softmax(dim=1)
        >>> input = torch.randn(2, 3)
        >>> output = m(input)
    """
    __constants__ = ['dim']

    def __init__(self, dim=None):
        super(Softmax, self).__init__()
        self.dim = dim

    def __setstate__(self, state):
        self.__dict__.update(state)
        if not hasattr(self, 'dim'):
            self.dim = None

    def forward(self, input):
        return F.softmax(input, self.dim, _stacklevel=5)

    def extra_repr(self):
        return 'dim={dim}'.format(dim=self.dim)
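The note in the docstring is worth a quick illustration: in PyTorch, CrossEntropyLoss combines LogSoftmax and NLLLoss, so pairing LogSoftmax with NLLLoss gives the same loss as applying CrossEntropyLoss to the raw scores. A minimal sketch (the logits and targets below are made-up example values):

import torch
import torch.nn as nn

logits = torch.randn(3, 5)                # 3 samples, 5 classes
targets = torch.tensor([1, 0, 4])         # ground-truth class indices

# LogSoftmax followed by NLLLoss ...
log_probs = nn.LogSoftmax(dim=1)(logits)
loss_a = nn.NLLLoss()(log_probs, targets)

# ... matches CrossEntropyLoss applied directly to the logits
loss_b = nn.CrossEntropyLoss()(logits, targets)

print(torch.allclose(loss_a, loss_b))     # should print True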
2. LogSoftmax simply takes the logarithm of the Softmax result, i.e. log(Softmax(x)).
class LogSoftmax(Module):
    r"""Applies the :math:`\log(\text{Softmax}(x))` function to an n-dimensional
    input Tensor. The LogSoftmax formulation can be simplified as:

    .. math::
        \text{LogSoftmax}(x_{i}) = \log\left(\frac{\exp(x_i)}{\sum_j \exp(x_j)}\right)

    Shape:
        - Input: :math:`(*)` where `*` means, any number of additional
          dimensions
        - Output: :math:`(*)`, same shape as the input

    Arguments:
        dim (int): A dimension along which LogSoftmax will be computed.

    Returns:
        a Tensor of the same dimension and shape as the input with
        values in the range [-inf, 0)

    Examples::

        >>> m = nn.LogSoftmax()
        >>> input = torch.randn(2, 3)
        >>> output = m(input)
    """
    __constants__ = ['dim']

    def __init__(self, dim=None):
        super(LogSoftmax, self).__init__()
        self.dim = dim

    def __setstate__(self, state):
        self.__dict__.update(state)
        if not hasattr(self, 'dim'):
            self.dim = None

    def forward(self, input):
        return F.log_softmax(input, self.dim, _stacklevel=5)
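The Softmax docstring above also says LogSoftmax "has better numerical properties". A minimal sketch of what that means in practice (the value 1000 is chosen deliberately to trigger underflow in float32):

import torch

x = torch.tensor([[1000.0, 0.0]])

# softmax underflows to exactly 0 for the small entry, so its log becomes -inf
print(torch.log(torch.softmax(x, dim=1)))   # tensor([[0., -inf]])

# log_softmax computes the same quantity in a numerically stable way
print(torch.log_softmax(x, dim=1))          # tensor([[0., -1000.]])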
2. Code example

Enter the following code:
import torch
import torch.nn as nn
import numpy as np

batch_size = 4
class_num = 6
inputs = torch.randn(batch_size, class_num)
for i in range(batch_size):
    for j in range(class_num):
        inputs[i][j] = (i + 1) * (j + 1)
print("inputs:", inputs)
This produces a tensor with batch_size 4 and class_num 6 (it can be thought of as the output of the network's last layer):
tensor([[ 1.,  2.,  3.,  4.,  5.,  6.],
        [ 2.,  4.,  6.,  8., 10., 12.],
        [ 3.,  6.,  9., 12., 15., 18.],
        [ 4.,  8., 12., 16., 20., 24.]])
Then we apply Softmax to each row of the tensor:
Softmax = nn.Softmax(dim=1)
probs = Softmax(inputs)
print("probs:\n", probs)
We get:
tensor([[4.2698e-03, 1.1606e-02, 3.1550e-02, 8.5761e-02, 2.3312e-01, 6.3369e-01],
        [3.9256e-05, 2.9006e-04, 2.1433e-03, 1.5837e-02, 1.1702e-01, 8.6467e-01],
        [2.9067e-07, 5.8383e-06, 1.1727e-04, 2.3553e-03, 4.7308e-02, 9.5021e-01],
        [2.0234e-09, 1.1047e-07, 6.0317e-06, 3.2932e-04, 1.7980e-02, 9.8168e-01]])
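As a quick sanity check of the first row: each value is just exp(x_j) divided by the sum of exponentials over that row. A short sketch recomputing them by hand:

import math

row = [1, 2, 3, 4, 5, 6]                       # first row of inputs
denom = sum(math.exp(v) for v in row)
print([round(math.exp(v) / denom, 6) for v in row])
# [0.00427, 0.011606, 0.03155, 0.085761, 0.233122, 0.633691]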
Similarly, we apply LogSoftmax to each row of the tensor:
LogSoftmax = nn.LogSoftmax(dim=1)
log_probs = LogSoftmax(inputs)
print("log_probs:\n", log_probs)
We get:
tensor([[-5.4562e+00, -4.4562e+00, -3.4562e+00, -2.4562e+00, -1.4562e+00, -4.5619e-01],
        [-1.0145e+01, -8.1454e+00, -6.1454e+00, -4.1454e+00, -2.1454e+00, -1.4541e-01],
        [-1.5051e+01, -1.2051e+01, -9.0511e+00, -6.0511e+00, -3.0511e+00, -5.1069e-02],
        [-2.0018e+01, -1.6018e+01, -1.2018e+01, -8.0185e+00, -4.0185e+00, -1.8485e-02]])
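Equivalently, LogSoftmax(x)_i = x_i - log(sum_j exp(x_j)), so the first row above can be checked directly with torch.logsumexp (a small sketch, run in the same session or after re-importing torch):

row = torch.tensor([1., 2., 3., 4., 5., 6.])   # first row of inputs
print(row - torch.logsumexp(row, dim=0))
# tensor([-5.4562, -4.4562, -3.4562, -2.4562, -1.4562, -0.4562])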
Verify that the elements of each row sum to 1:
# probs_sum in dim=1
probs_sum = [0 for i in range(batch_size)]
for i in range(batch_size):
    for j in range(class_num):
        probs_sum[i] += probs[i][j]
    print(i, "row probs sum:", probs_sum[i])
The sum of each row is printed, and it is indeed 1:
0 row probs sum: tensor(1.)
1 row probs sum: tensor(1.0000)
2 row probs sum: tensor(1.)
3 row probs sum: tensor(1.)
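Incidentally, the explicit double loop above is only for illustration; continuing in the same session, the row sums can be obtained in a single call:

print(probs.sum(dim=1))   # each row sums to 1, up to floating-point rounding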
Verify that LogSoftmax is the log of the Softmax result:
# to numpy
np_probs = probs.data.numpy()
print("numpy probs:\n", np_probs)

# np.log()
log_np_probs = np.log(np_probs)
print("log numpy probs:\n", log_np_probs)
We get:
numpy probs:
 [[4.26977826e-03 1.16064614e-02 3.15496325e-02 8.57607946e-02 2.33122006e-01 6.33691311e-01]
 [3.92559559e-05 2.90064461e-04 2.14330270e-03 1.58369839e-02 1.17020354e-01 8.64669979e-01]
 [2.90672347e-07 5.83831024e-06 1.17265590e-04 2.35534250e-03 4.73083146e-02 9.50212955e-01]
 [2.02340233e-09 1.10474026e-07 6.03167746e-06 3.29318427e-04 1.79801770e-02 9.81684387e-01]]
log numpy probs:
 [[-5.4561934e+00 -4.4561934e+00 -3.4561934e+00 -2.4561932e+00 -1.4561933e+00 -4.5619333e-01]
 [-1.0145408e+01 -8.1454077e+00 -6.1454072e+00 -4.1454072e+00 -2.1454074e+00 -1.4540738e-01]
 [-1.5051069e+01 -1.2051069e+01 -9.0510693e+00 -6.0510693e+00 -3.0510693e+00 -5.1069155e-02]
 [-2.0018486e+01 -1.6018486e+01 -1.2018485e+01 -8.0184851e+00 -4.0184855e+00 -1.8485421e-02]]
Verification complete
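As a side note, the same verification can stay entirely in PyTorch instead of detouring through NumPy (continuing in the same session):

print(torch.allclose(log_probs, torch.log(probs)))   # should print True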
3. The whole code

import torch
import torch.nn as nn
import numpy as np

batch_size = 4
class_num = 6
inputs = torch.randn(batch_size, class_num)
for i in range(batch_size):
    for j in range(class_num):
        inputs[i][j] = (i + 1) * (j + 1)
print("inputs:", inputs)

Softmax = nn.Softmax(dim=1)
probs = Softmax(inputs)
print("probs:\n", probs)

LogSoftmax = nn.LogSoftmax(dim=1)
log_probs = LogSoftmax(inputs)
print("log_probs:\n", log_probs)

# probs_sum in dim=1
probs_sum = [0 for i in range(batch_size)]
for i in range(batch_size):
    for j in range(class_num):
        probs_sum[i] += probs[i][j]
    print(i, "row probs sum:", probs_sum[i])

# to numpy
np_probs = probs.data.numpy()
print("numpy probs:\n", np_probs)

# np.log()
log_np_probs = np.log(np_probs)
print("log numpy probs:\n", log_np_probs)

After reading the above, do you have a better understanding of how to use Softmax and LogSoftmax in PyTorch? If you want to learn more, please follow the industry information channel. Thank you for your support.