What is the difference between to (device) and map_location=device in pytorch


This article explains the difference between to(device) and map_location=device in PyTorch. The content is detailed, easy to understand, and quick to put into practice, so it should have some reference value; after reading it you should have a clear picture of how the two are used, so let's take a look.

I. Brief introduction

Setting the map_location argument of torch.load() to cuda:device_id loads the model onto the given GPU device.

Calling model.to(torch.device('cuda')) converts the model's parameter tensors to CUDA tensors. Whether training ran on the CPU or the GPU, the saved model parameters are ordinary parameter tensors, not CUDA tensors, so model.to(torch.device('cpu')) is not needed when working on a CPU device.
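Putting the two together, here is a minimal sketch (assuming PATH points to a saved state_dict, and TheModelClass, *args, **kwargs are the same placeholders used in the examples below):

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# map_location tells torch.load() where to place the deserialized tensors
state_dict = torch.load(PATH, map_location=device)

# to(device) moves the parameter tensors of an already-constructed model
model = TheModelClass(*args, **kwargs)
model.load_state_dict(state_dict)
model.to(device)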

II. Examples

With the meaning of the two clarified, the following examples show how each is used.

1. Save on GPU, load on CPU

Save:

torch.save(model.state_dict(), PATH)

Load:

device = torch.device('cpu')
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH, map_location=device))

Explanation:

When loading a model on the CPU that was trained on the GPU, pass torch.device('cpu') as the map_location argument of torch.load(); the map_location argument dynamically remaps the tensors' underlying storage to the CPU device.
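If you want to confirm where the loaded parameters ended up, one quick check (a hypothetical addition, not part of the original example) is to look at the device of any parameter:

device = torch.device('cpu')
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH, map_location=device))
print(next(model.parameters()).device)  # expected output: cpu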

2. Save on GPU, load on GPU

Save:

torch.save(model.state_dict(), PATH)

Load:

Device = torch.device ("cuda") model = TheModelClass (* args, * * kwargs) model.load_state_dict (torch.load (PATH)) model.to (device) # Make sure to call input = input.to (device) on any input tensors that you feed to the model

Explanation:

When a model is trained on the GPU and also saved on the GPU, simply convert the initialized model into a CUDA-optimized model with model.to(torch.device('cuda')).

In addition, be sure to use .to(torch.device('cuda')) on all model inputs to prepare the data for the model.

Note that calling my_tensor.to(device) returns a new copy of my_tensor on the GPU.

It does not overwrite my_tensor.

Therefore, remember to reassign the tensor manually: my_tensor = my_tensor.to(torch.device('cuda'))
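For example, assuming input is a CPU tensor produced by your data pipeline (a hypothetical name), a typical inference step might look like:

device = torch.device('cuda')
model.eval()
with torch.no_grad():
    input = input.to(device)   # to(device) returns a GPU copy, so reassign it
    output = model(input)      # model and input are now both on the GPU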

3. Save on CPU, load on GPU

Save:

torch.save(model.state_dict(), PATH)

Load:

Device = torch.device ("cuda") model = TheModelClass (* args, * * kwargs) model.load_state_dict (torch.load (PATH, map_location= "cuda:0")) # Choose whatever GPU device number you wantmodel.to (device) # Make sure to call input = input.to (device) on any input tensors that you feed to the model

Explanation:

When loading a model on the GPU that was trained and saved on the CPU, set the map_location argument of torch.load() to cuda:device_id. This loads the model onto the given GPU device.

Next, be sure to call model.to(torch.device('cuda')) to convert the model's parameter tensors into CUDA tensors.

Finally, make sure to use .to(torch.device('cuda')) on all model inputs to prepare the data for the CUDA-optimized model.

Note that calling my_tensor.to(device) returns a new copy of my_tensor on the GPU.

It does not overwrite my_tensor.

Therefore, remember to reassign the tensor manually: my_tensor = my_tensor.to(torch.device('cuda'))
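If the machine has several GPUs and you want to target a specific one, the device id mentioned above can be spelled out explicitly; a small sketch, where gpu_id is a hypothetical variable:

gpu_id = 0  # choose whatever GPU device number you want
device = torch.device(f"cuda:{gpu_id}")
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH, map_location=device))
model.to(device)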

This concludes the article on "What is the difference between to(device) and map_location=device in PyTorch". Thank you for reading! You should now have a good understanding of the difference between the two; if you want to learn more, you are welcome to follow the industry information channel.
