Shulou (shulou.com) — SLTechnology News & Howtos, Development. 2025-03-28 Update.

How Pytorch Implements Variable Type Conversion
This article explains how Pytorch performs variable type conversion. It should be a useful reference; I hope you learn something from reading it.

Pytorch's variable types are the various Tensor types, and a Tensor can be understood as a high-dimensional matrix, similar to an Array in Numpy. Pytorch variable type conversion is therefore really conversion between tensor types.

A Tensor in Pytorch comes in both CPU and GPU variants. A GPU tensor is generally obtained by calling the cuda() method on a CPU tensor, and you can inspect a variable's type with the type() method.

By default, tensors are created as torch.FloatTensor. For example, data = torch.Tensor(2, 3) creates a 2×3 tensor of type torch.FloatTensor; data.cuda() converts it to the GPU tensor type torch.cuda.FloatTensor.
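The defaults described above can be checked directly. A minimal sketch (the cuda() call is guarded, since it only works on a machine with an available GPU):

```python
import torch

# torch.Tensor defaults to torch.FloatTensor
data = torch.Tensor(2, 3)          # 2x3 tensor, values uninitialized
print(data.type())                 # torch.FloatTensor

# Moving to the GPU only works when CUDA is available
if torch.cuda.is_available():
    gpu_data = data.cuda()
    print(gpu_data.type())         # torch.cuda.FloatTensor
```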
Below is a brief introduction to conversion between variable types in Pytorch.

(1) Conversion between tensor data types (on CPU or GPU)

In general, appending long(), int(), double(), float(), byte(), etc. to a Tensor converts it to the corresponding type. For example, to go from torch.LongTensor to torch.FloatTensor, simply call data.float().

You can also use the type() method: if data is a Tensor, data.type() returns its type, while data.type(torch.FloatTensor) casts it to a torch.FloatTensor.

When you don't know in advance which type to convert to, but need, say, the product of two tensors, you can use a1.type_as(a2) to convert a1 to the same type as a2.
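The conversion methods above can be sketched together in one short example:

```python
import torch

a = torch.LongTensor([1, 2, 3])

# Method-style conversions return a new tensor of the target type
f = a.float()                      # torch.FloatTensor
d = a.double()                     # torch.DoubleTensor

# type() with no argument reports the type; with an argument it casts
print(a.type())                    # torch.LongTensor
g = a.type(torch.FloatTensor)

# type_as() matches the type of another tensor
b = torch.FloatTensor([0.5])
h = a.type_as(b)                   # same type as b
```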
(2) CPU tensor -> GPU tensor: use data.cuda().

(3) GPU tensor -> CPU tensor: use data.cpu().

(4) Converting a Variable into an ordinary Tensor: a Variable can be understood as a wrapper whose payload is a Tensor. If var is a Variable, use var.data to get the underlying Tensor.
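A minimal sketch of unwrapping a Variable. Note that since PyTorch 0.4 the Variable class has been merged into Tensor, but torch.autograd.Variable and the .data accessor still work:

```python
import torch
from torch.autograd import Variable

# Wrap a tensor in a Variable (a no-op wrapper in modern PyTorch)
var = Variable(torch.Tensor([1.0, 2.0]))

# .data retrieves the underlying Tensor
t = var.data
print(type(t))
```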
(5) Conversion between Tensor and Numpy Array

Tensor -> Numpy: use data.numpy(), where data is a Tensor variable.

Numpy -> Tensor: use torch.from_numpy(data), where data is a numpy array.
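Both directions in one short sketch. One detail worth knowing: torch.from_numpy shares memory with the source array rather than copying it, so modifying the array also modifies the tensor:

```python
import numpy as np
import torch

t = torch.Tensor([1.0, 2.0, 3.0])
arr = t.numpy()                     # Tensor -> ndarray
print(arr.dtype)                    # float32

a = np.array([1, 2, 3], dtype=np.int64)
t2 = torch.from_numpy(a)            # ndarray -> Tensor (shared memory)
a[0] = 99
print(t2[0].item())                 # 99, because the memory is shared
```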
Supplement: data types and forced type conversion in Numpy/Pytorch

1. Introduction to data types

Numpy
NumPy supports a wider variety of numeric types than Python. The following table shows the different scalar data types defined in NumPy.
The following list gives each data type and its description:

1. bool_ — Boolean value (True or False), stored as a byte
2. int_ — default integer, equivalent to C long, usually int32 or int64
3. intc — equivalent to C int, usually int32 or int64
4. intp — integer used for indexing, equivalent to C size_t, usually int32 or int64
5. int8 — byte (-128 to 127)
6. int16 — 16-bit integer (-32768 to 32767)
7. int32 — 32-bit integer (-2147483648 to 2147483647)
8. int64 — 64-bit integer (-9223372036854775808 to 9223372036854775807)
9. uint8 — 8-bit unsigned integer (0 to 255)
10. uint16 — 16-bit unsigned integer (0 to 65535)
11. uint32 — 32-bit unsigned integer (0 to 4294967295)
12. uint64 — 64-bit unsigned integer (0 to 18446744073709551615)
13. float_ — shorthand for float64
14. float16 — half-precision float: sign bit, 5-bit exponent, 10-bit mantissa
15. float32 — single-precision float: sign bit, 8-bit exponent, 23-bit mantissa
16. float64 — double-precision float: sign bit, 11-bit exponent, 52-bit mantissa
17. complex_ — shorthand for complex128
18. complex64 — complex number, represented by two 32-bit floats (real and imaginary parts)
19. complex128 — complex number, represented by two 64-bit floats (real and imaginary parts)
Using a bare type name directly is likely to raise an error; the correct usage is to call it through the np namespace, e.g. np.uint8.
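For example, a quick sketch of referencing Numpy scalar types through the np namespace:

```python
import numpy as np

# np.uint8 is referenced through the np namespace
a = np.zeros((2, 3), dtype=np.uint8)
print(a.dtype)                      # uint8

# Each scalar type is also callable as a converter
x = np.int32(3.7)
print(x)                            # 3 (fractional part truncated)
```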
Pytorch

Torch defines seven CPU tensor types and eight GPU tensor types. Here we only cover the CPU ones; for GPU, simply insert cuda in the middle of the name, e.g. torch.cuda.FloatTensor:
torch.FloatTensor(2, 3) constructs a 2×3 tensor of Float type.
torch.DoubleTensor(2, 3) constructs a 2×3 tensor of Double type.
torch.ByteTensor(2, 3) constructs a 2×3 tensor of Byte type.
torch.CharTensor(2, 3) constructs a 2×3 tensor of Char type.
torch.ShortTensor(2, 3) constructs a 2×3 tensor of Short type.
torch.IntTensor(2, 3) constructs a 2×3 tensor of Int type.
torch.LongTensor(2, 3) constructs a 2×3 tensor of Long type.
Again, using a bare type name is likely to raise an error; the correct usage is to call it through the torch namespace, e.g. torch.FloatTensor().
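A minimal sketch of the constructors above. Note that the contents are uninitialized; only the shape and type are fixed:

```python
import torch

f = torch.FloatTensor(2, 3)
i = torch.IntTensor(2, 3)
l = torch.LongTensor(2, 3)

print(f.type())                     # torch.FloatTensor
print(i.type())                     # torch.IntTensor
print(l.type())                     # torch.LongTensor
print(f.size())                     # torch.Size([2, 3])
```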
2. Python's type() function

type() is called with the variable passed in as an argument. It returns the type of the variable (its Python class), not its data type.

data = np.random.randint(0, 255, 300)
print(type(data))

Output:

<class 'numpy.ndarray'>
3. The dtype attribute of Numpy/Pytorch

The dtype attribute returns the data type of the variable's elements.

t_out = torch.Tensor(1, 2)
print(t_out.dtype)

Output:

torch.float32

t_out = torch.Tensor(1, 2, 3)
print(t_out.numpy().dtype)

Output:

float32
4. Type conversion in Numpy

First, let me explain why I needed this function (don't skip this): I wanted to use the torchvision.transforms.ToPILImage() function, so I needed to convert from a numpy ndarray to a PIL Image. I made the following attempt:

data = np.random.randint(0, 255, 300)
n_out = data.reshape(10, 10, 3)
print(n_out.dtype)
img = transforms.ToPILImage()(n_out)
img.show()
Unfortunately, it raised an error:

raise TypeError('Input type {} is not supported'.format(npimg.dtype))
TypeError: Input type int32 is not supported
This is because converting an ndarray to a PIL Image requires the ndarray to be of type uint8. So I gave up on random numbers for the moment and used:

n_out = np.linspace(0, 255, 300, dtype=np.uint8)
n_out = n_out.reshape(10, 10, 3)
print(n_out.dtype)
img = torchvision.transforms.ToPILImage()(n_out)
img.show()
I got the output:

uint8

And it did display a picture. However, it was frustrating: the gradient it produced was not the random-noise image I wanted. So I turned to the astype() function.

The astype() function

astype() is called on the variable, but calling it does not change the data type of the original variable; the return value is a new variable with the converted type, so assign it back:

# initialize the random number seed
np.random.seed(0)
data = np.random.randint(0, 255, 300)
print(data.dtype)
n_out = data.reshape(10, 10, 3)
# cast to uint8
n_out = n_out.astype(np.uint8)
print(n_out.dtype)
img = transforms.ToPILImage()(n_out)
img.show()

Output:

int32
uint8
5. Type conversion in Pytorch

There is no astype function in pytorch; the correct conversion methods are as follows.
Way 1: call the conversion method directly on the variable

tensor = torch.Tensor(3, 5)

tensor.long() casts the tensor to long type:
newtensor = tensor.long()

tensor.half() casts the tensor to a half-precision floating-point type:
newtensor = tensor.half()

tensor.int() casts the tensor to int type:
newtensor = tensor.int()

tensor.double() casts the tensor to double type:
newtensor = tensor.double()

tensor.float() casts the tensor to float type:
newtensor = tensor.float()

tensor.char() casts the tensor to char type:
newtensor = tensor.char()

tensor.byte() casts the tensor to byte type:
newtensor = tensor.byte()

tensor.short() casts the tensor to short type:
newtensor = tensor.short()
As with numpy's astype function, the return value is the converted result; the type of the calling variable remains unchanged.
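A quick sketch confirming that the conversion returns a new tensor and leaves the original untouched:

```python
import torch

tensor = torch.Tensor(3, 5)         # torch.FloatTensor by default
newtensor = tensor.long()

print(newtensor.type())             # torch.LongTensor
print(tensor.type())                # still torch.FloatTensor
```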
Way 2: call the type() method on the variable

type(new_type=None, async=False) returns the type if no new_type is provided; otherwise it casts the object to the specified type. If the object is already of the correct type, no copy is performed and the original object is returned. (Note: in recent PyTorch versions the async argument has been renamed non_blocking, since async became a reserved keyword in Python.)

Usage:

self = torch.LongTensor(3, 5)
# convert to another type
print(self.type(torch.FloatTensor))

Way 3: call the type_as() method on the variable
If the tensor is already of the target type, no conversion is performed. Usage:

self = torch.Tensor(3, 5)
tensor = torch.IntTensor(2, 3)
print(self.type_as(tensor))

Thank you for reading this article carefully; I hope this explanation of how Pytorch implements variable type conversion is helpful to everyone.