How to set random number seeds in PyTorch so that the results can be reproduced

2025-02-27 Update From: SLTechnology News&Howtos > Development

This article explains how to set random number seeds in PyTorch so that experimental results can be reproduced. I hope you find it a useful reference.

Model training involves a large number of random operations, so running the same code twice can produce different results. To obtain repeatable experimental results, we need to set a fixed seed for every random number generator involved.
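A minimal illustration of the idea, using only Python's standard library `random` module: reseeding the generator makes it emit the same sequence again.

```python
import random

random.seed(0)
a = [random.random() for _ in range(3)]

random.seed(0)                      # reseed with the same value...
b = [random.random() for _ in range(3)]

print(a == b)                       # ...and the sequence repeats: True
```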

CUDNN

cudnn optimizes convolution operations, trading a little accuracy for computational efficiency. If you need to guarantee repeatability, use the following settings:

```python
from torch.backends import cudnn

cudnn.benchmark = False     # if benchmark=True, deterministic will be False
cudnn.deterministic = True
```

In practice, however, this setting has little effect on accuracy, producing differences only a few decimal places out. So unless exact reproducibility matters to you, it is not really recommended, because it reduces computational efficiency.
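On newer PyTorch releases there is also a global switch that goes further than the cudnn flags: it raises an error whenever a nondeterministic operation runs, instead of silently producing irreproducible results. This API postdates the PyTorch versions discussed later in this article (it was added around 1.8, with the query function around 1.10), so treat this as a version-dependent sketch:

```python
import torch

# PyTorch 1.8+: request deterministic implementations everywhere;
# ops without one will raise RuntimeError instead of running nondeterministically.
torch.use_deterministic_algorithms(True)

# PyTorch 1.10+: query the flag
print(torch.are_deterministic_algorithms_enabled())  # True
```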

PyTorch

```python
torch.manual_seed(seed)           # set random seed for CPU
torch.cuda.manual_seed(seed)      # set random seed for the current GPU
torch.cuda.manual_seed_all(seed)  # set random seed for all GPUs
```

Python & NumPy

If random preprocessing is used when reading data (such as RandomCrop, RandomHorizontalFlip, etc.), then the random number generators for Python and NumPy also need to be seeded.

```python
import random
import numpy as np

random.seed(seed)
np.random.seed(seed)
```

DataLoader

If the DataLoader uses multiple worker processes (num_workers > 0), the final results will differ between runs because the order in which the data is read changes.

In other words, changing the num_workers parameter will also affect the experimental results.

No complete solution to this problem has been found yet, but as long as num_workers (the number of worker processes) is kept fixed, the experimental results can basically be reproduced.
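One related mitigation on more recent PyTorch versions (the `generator` argument to DataLoader, an assumption beyond what this article's original versions had) is to drive shuffling from an explicitly seeded `torch.Generator`, so the shuffle order itself is reproducible across runs:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

data = TensorDataset(torch.arange(10).float())

g = torch.Generator()
g.manual_seed(0)
loader = DataLoader(data, batch_size=2, shuffle=True, generator=g)
order1 = [batch[0].tolist() for batch in loader]

g.manual_seed(0)          # reset the generator, then iterate again
order2 = [batch[0].tolist() for batch in loader]

print(order1 == order2)   # True: same seed, same shuffle order
```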

Addendum: pitfalls encountered when fixing the random seed in PyTorch

1. Initial seed fixing:

```python
def setup_seed(seed):
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.cuda.manual_seed(seed)
    np.random.seed(seed)
    random.seed(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.enabled = False
    torch.backends.cudnn.benchmark = False
    # torch.backends.cudnn.benchmark = True  # for accelerating the run

setup_seed(2019)
```

2. Continue by adding the following code:

```python
tensor_dataset = ImageList(opt.training_list, transform)

def _init_fn(worker_id):
    random.seed(10 + worker_id)
    np.random.seed(10 + worker_id)
    torch.manual_seed(10 + worker_id)
    torch.cuda.manual_seed(10 + worker_id)
    torch.cuda.manual_seed_all(10 + worker_id)

dataloader = DataLoader(tensor_dataset,
                        batch_size=opt.batchSize,
                        shuffle=True,
                        num_workers=opt.workers,
                        worker_init_fn=_init_fn)
```

3. After the above operations, most of the loaded data turned out to be consistent across repeated experiments.
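A quick way to check that seed fixing of this shape is working is to run a tiny forward pass twice and compare the outputs. The model and shapes below are illustrative, CPU-only stand-ins, not part of the original experiment:

```python
import random
import numpy as np
import torch
import torch.nn as nn

def setup_seed(seed):
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)   # no-op on CPU-only machines
    np.random.seed(seed)
    random.seed(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

def run_once():
    setup_seed(2019)
    model = nn.Linear(4, 2)   # weights come from the seeded global RNG
    x = torch.randn(3, 4)
    return model(x).detach()

print(torch.equal(run_once(), run_once()))  # True: identical runs
```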

However, some data was still inconsistent. It later turned out to be a problem with the PyTorch version: upgrading from 0.3.1 to 1.1.0 solved it.

4. Although the above steps solved the problem, setting cudnn.benchmark to False slowed training to about 1/3 of the original speed. After further exploration, the final solution was to change step 1 to the following, placing this code as close to the beginning of the main program as possible, for example:

```python
import os
import random
import numpy as np
import torch
import torch.nn as nn
from torch.nn import init
import pdb
import torch.nn.parallel
import torch.nn.functional as F
import torch.backends.cudnn as cudnn
import torch.optim as optim
import torch.utils.data
from torch.utils.data import DataLoader, Dataset
import sys

gpu_id = "3,2"   # GPU ids to use
os.environ["CUDA_VISIBLE_DEVICES"] = gpu_id
print('GPU:', gpu_id)

def setup_seed(seed):
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.cuda.manual_seed(seed)
    np.random.seed(seed)
    random.seed(seed)
    cudnn.deterministic = True
    # cudnn.benchmark = False
    # cudnn.enabled = False

setup_seed(2019)
```

Thank you for reading this article carefully. I hope "how to set random number seeds in PyTorch so that the results can be reproduced" has been helpful to you.
