This article explains the reasons for using float64 training in PyTorch. The content is straightforward and easy to follow, and I hope it helps clear up your doubts about the question "what are the reasons for using float64 training in PyTorch?"
First of all, we need to know that PyTorch trains models in single precision (float32) by default.
The reason is:
Training with float16 costs model accuracy, while training with double precision (float64) doubles the memory pressure without bringing much improvement in accuracy.
Recently I needed to train a model with the double data type. Concretely, this requires setting the data type of all the model's weight parameters, as well as of the input data, to torch.float64.
You can make the model parameters float64 with a single torch call, by setting the default dtype before building the model:
torch.set_default_dtype(torch.float64)
The input can be converted with:
tensor.type(torch.float64)
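Putting both pieces together, here is a minimal sketch; the tiny nn.Linear model and random input are made up for illustration only:

import torch
import torch.nn as nn

# Option 1: set the default dtype before building the model, so that
# parameters and newly created tensors come out as float64.
torch.set_default_dtype(torch.float64)
model = nn.Linear(4, 2)        # parameters are float64
x = torch.randn(8, 4)          # float64 as well
print(model(x).dtype)          # torch.float64

# Option 2: convert an existing float32 model and its input explicitly.
torch.set_default_dtype(torch.float32)     # restore the default
model = nn.Linear(4, 2).double()           # same as .to(torch.float64)
x = torch.randn(8, 4).type(torch.float64)  # or .to(torch.float64)
print(model(x).dtype)                      # torch.float64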
Addendum: the essential difference between float32 and float64
First, we need to know what bits and bytes are. A bit is a single binary digit; a byte is a group of bits, and bytes are the unit behind KB, MB, and GB. The relationship is 8 bits = 1 byte.
So what is the difference between float32 and float64? They occupy 32 bits (4 bytes) and 64 bits (8 bytes) in memory, respectively. More bits means higher floating-point precision, but it also affects the computational efficiency of deep learning.
float64 uses twice as much memory as float32 and four times as much as float16.
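You can confirm the per-element sizes directly in PyTorch:

import torch

# Bytes occupied by a single element of each dtype.
print(torch.zeros(1, dtype=torch.float16).element_size())  # 2
print(torch.zeros(1, dtype=torch.float32).element_size())  # 4
print(torch.zeros(1, dtype=torch.float64).element_size())  # 8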
For example, for the CIFAR-10 dataset (60000 images of 32 × 32 × 3), representing it in float64 requires 60000 × 32 × 32 × 3 × 8 / 1024³ ≈ 1.4 GB, so just loading the dataset into memory takes about 1.4 GB.
With float32 you only need about 0.7 GB, and with float16 only about 0.35 GB.
The amount of memory consumed has a serious impact on the running efficiency of the system. (That is why dataset files store the data as uint8, keeping the files as small as possible.)
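The estimates above can be reproduced with a quick back-of-the-envelope calculation (plain Python, no real dataset needed):

num_values = 60000 * 32 * 32 * 3   # scalar values in CIFAR-10 (images only)
for name, bytes_per_value in [("uint8", 1), ("float16", 2), ("float32", 4), ("float64", 8)]:
    print(f"{name}: {num_values * bytes_per_value / 1024**3:.2f} GB")
# uint8: 0.17 GB, float16: 0.34 GB, float32: 0.69 GB, float64: 1.37 GB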
That is all the content of this article, "What are the reasons for using float64 training in PyTorch?" Thank you for reading! I hope the shared content helps you; if you want to learn more, welcome to follow the industry information channel!