
How to convert numpy and torch data types

2025-02-25 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article introduces how to convert between numpy and torch data types, walking through the operations with practical examples. The methods are simple, fast, and practical, and I hope "how to convert numpy and torch data types" helps you solve this problem.

Numpy data type conversion

Numpy uses astype to convert data types. Conversion to float defaults to 64 bits; you can specify 32 bits with np.float32.

# numpy converts to float type
import numpy as np

a = np.array([1, 2, 3])
a = a.astype(np.float)  # defaults to 64-bit; note np.float is deprecated/removed in newer NumPy
print(a)
print(a.dtype)

[1. 2. 3.]

float64

Do not change the type by assigning to a.dtype: that reinterprets the underlying bytes instead of converting the values, so the data is lost.

# numpy: assigning to dtype reinterprets the bytes instead of converting
b = np.array([1, 2, 3])
b.dtype = np.float32
print(b)
print(b.dtype)

[1.e-45 3.e-45 4.e-45]

float32

Do not use the built-in float in place of np.float, or unexpected errors may occur.

You cannot convert 64-bit np.float data to np.float32 this way; an error will be reported.

Multiplying np.float64 by np.float32 gives a result of np.float64.

In actual use, you can specify np.float or a specific bit width such as np.float32, but specifying np.float directly is more convenient.
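
As a quick check of the multiplication rule above, here is a minimal sketch (not part of the original example code) showing that numpy promotes a mixed float32/float64 product to np.float64:

import numpy as np

x = np.array([1.0, 2.0, 3.0], dtype=np.float64)
y = np.array([1.0, 2.0, 3.0], dtype=np.float32)
z = x * y
print(z.dtype)  # float64 -- the narrower float32 operand is promoted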

Torch data type conversion

Torch uses .float() to convert data types; conversion to float defaults to 32 bits. There is no .float64() method in torch.

# torch converts to float type
import torch

b = torch.tensor([4, 5, 6])
b = b.float()
b.dtype  # torch.float32
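
As a side note not covered in the original code: although there is no .float64() method, a 64-bit float tensor can still be obtained, for example with .double() or .to(torch.float64). A minimal sketch:

import torch

b = torch.tensor([4, 5, 6])
print(b.double().dtype)           # torch.float64
print(b.to(torch.float64).dtype)  # torch.float64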

An np.float64 array is still 64-bit after being converted to torch with torch.from_numpy:

print(a.dtype)
c = torch.from_numpy(a)
c.dtype

float64

torch.float64

Do not use the built-in float in place of torch.float, or unexpected errors may occur.

Multiplying torch.float32 and torch.float64 tensors raises an error, so take care to specify or convert to a specific float type before multiplying.

The general principle of numpy and torch data type conversion is the same; the difference shows up in multiplication: mismatched torch float types cannot be multiplied, while mismatched np float types can be multiplied, with the result promoted to np.float64.
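
The sketch below (not from the original article) illustrates the safe pattern the text recommends: convert both tensors to the same float type before multiplying. Note that whether mixed torch.float32/torch.float64 multiplication actually raises an error depends on the PyTorch version; older releases raised a type error, while newer ones promote the result.

import torch

t32 = torch.tensor([1.0, 2.0, 3.0], dtype=torch.float32)
t64 = torch.tensor([1.0, 2.0, 3.0], dtype=torch.float64)

# align the dtypes explicitly, then multiply
result = t32.double() * t64
print(result.dtype)  # torch.float64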

Converting between numpy and tensor

Convert tensor to numpy

import torch

b = torch.tensor([4, 5, 6])
# b = b.float()
print(b.dtype)
c = b.numpy()
print(c.dtype)

torch.int64

int64

Convert numpy to tensor

import torch
import numpy as np

b = np.array([1, 2, 3])
# b = b.astype(np.float)
print(b.dtype)
c = torch.from_numpy(b)
print(c.dtype)

int32

torch.int32

As you can see, the default int of torch is 64-bit, while the default int of numpy here is 32-bit (numpy's default integer is platform dependent; it is 32-bit on Windows).

Supplement: torch.from_numpy VS torch.Tensor

Recently, while building a dataset, I noticed that when converting input images to tensors, I could either use torch.Tensor to force a numpy array into a tensor, or use torch.from_numpy to convert it. So what is the difference between torch.Tensor and torch.from_numpy? If torch.Tensor can handle it, isn't keeping torch.from_numpy redundant?

Answer

The difference is that torch.from_numpy is safer to use; torch.Tensor does not behave as expected with non-float types.

Explanation

In fact, the difference between the two is significant. Roughly speaking, torch.Tensor behaves like a C-style (int) cast, while torch.from_numpy behaves like C++'s static_cast. As we all know, forcibly converting int64 to int32 goes from a wider type to a narrower one, with the hidden danger that the high bits are erased: not only can precision be lost, the sign can even flip.
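
To make the analogy concrete, here is a small illustration (not from the original article) of what forcing int64 down to int32 can do; the high bits are discarded, so both the value and the sign can change:

import numpy as np

big = np.int64(2**31 + 5)     # does not fit in 32 bits
small = big.astype(np.int32)  # high bits are silently dropped
print(big, small)             # 2147483653 -2147483643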

The same kind of problem exists between torch.Tensor and torch.from_numpy.

Take a look at the torch.Tensor documentation, which clearly states:

torch.Tensor is an alias for the default tensor type (torch.FloatTensor).

And the torch.from_numpy documentation describes it as:

The returned tensor and ndarray share the same memory. Modifications to the tensor will be reflected in the ndarray and vice versa. The returned tensor is not resizable.

In other words,

1. When the source of the conversion is a float32 array, torch.Tensor and torch.from_numpy share the same memory, and the dtype of the result is torch.float32.

2. When the source of the conversion is not a float32 array, torch.Tensor gives torch.float32 (copying the data), while torch.from_numpy keeps the source dtype.

Isn't that amazing? Here is a simple example:

import torch
import numpy as np

s1 = np.arange(10, dtype=np.float32)
s2 = np.arange(10)  # the default dtype is int64

# example 1
o11 = torch.Tensor(s1)
o12 = torch.from_numpy(s1)
o11.dtype  # torch.float32
o12.dtype  # torch.float32
# modify a value
o11[0] = 12
o12[0]  # tensor(12.)

# example 2
o21 = torch.Tensor(s2)
o22 = torch.from_numpy(s2)
o21.dtype  # torch.float32
o22.dtype  # torch.int64
# modify a value
o21[0] = 12
o22[0]  # tensor(0)

That's all for "how to convert numpy and torch data types". Thank you for reading. If you want to know more about the industry, you can follow the industry information channel, where the editor will update different knowledge points for you every day.
