
What are the differences between to (device) and cuda () in pytorch

2025-01-17 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article explains the difference between to(device) and cuda() in PyTorch. The content is straightforward and clearly organized; I hope it helps resolve your doubts as you study the topic.

Principle

.to(device) can specify either the CPU or a GPU:

    # single GPU or CPU
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    model.to(device)

    # multiple GPUs
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model, device_ids=[0, 1, 2])
    model.to(device)

.cuda() can only specify GPUs:

    # specify a single GPU
    os.environ['CUDA_VISIBLE_DEVICES'] = '1'
    model.cuda()

    # multiple GPUs
    os.environ['CUDA_VISIBLE_DEVICES'] = '0,1,2,3'
    device_ids = [0, 1, 2, 3]
    net = torch.nn.DataParallel(net, device_ids=device_ids)
    net = torch.nn.DataParallel(net)  # defaults to all visible devices
    net = net.cuda()

The beginning of DataParallel's __init__ shows how the device_ids default is filled in:

    class DataParallel(Module):
        def __init__(self, module, device_ids=None, output_device=None, dim=0):
            super(DataParallel, self).__init__()
            if not torch.cuda.is_available():
                self.module = module
                self.device_ids = []
                return
            if device_ids is None:
                device_ids = list(range(torch.cuda.device_count()))
            if output_device is None:
                output_device = device_ids[0]
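As a minimal sketch of the portability difference described above (assuming only that PyTorch is installed): the .to(device) fallback idiom runs on any machine, while .cuda() raises an error on a CPU-only machine.

```python
import torch

# Pick the best available device; this is the portable idiom.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

t = torch.ones(2, 3).to(device)   # safe on both CPU-only and GPU machines
print(t.device.type)              # "cuda" if a GPU is visible, otherwise "cpu"

# By contrast, calling .cuda() is not portable: on a CPU-only machine
# it raises an error instead of falling back.
if not torch.cuda.is_available():
    try:
        t.cuda()
    except (RuntimeError, AssertionError):
        print("cuda() fails on a CPU-only machine")
```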

Supplement: using the to method in PyTorch to write device-agnostic code (CUDA/CPU)

Previous versions of PyTorch made it very difficult to write device-agnostic code (i.e., code that can run on a CUDA device or fall back to the CPU without modification).

The device-agnostic concept

That is, device-independent: the code you write can run on any device. (PS: this is my personal understanding; I did not find a professional definition online.)

PyTorch 0.4.0 makes code compatible

PyTorch 0.4.0 makes writing device-agnostic code easy in two ways:

The device property of a tensor provides a torch.device for every tensor (note: get_device only works for CUDA tensors).

The to method of Tensors and Modules can be used to easily move objects to different devices (replacing the earlier cpu() and cuda() methods).

We recommend the following pattern:

    # at the start of the script, create the device
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

    ...

    # whenever you get a new Tensor or Module:
    # if it is already on the target device, no copy is performed
    input = data.to(device)
    model = MyModule(...).to(device)

The above is the entire content of "What is the difference between to(device) and cuda() in pytorch". Thank you for reading! I hope the content shared here has been helpful; if you want to learn more, welcome to follow the industry information channel.





© 2024 shulou.com SLNews company. All rights reserved.
