
What is parallel data processing in PyTorch?


What is parallel data processing in PyTorch? This article walks through the question in detail, in the hope of helping readers who want to solve this problem find a simple and practical approach.

Using multiple GPUs with PyTorch is very simple. You can put a model on a GPU:

device = torch.device("cuda:0")
model.to(device)

Then you can copy all of your tensors to the GPU:

mytensor = my_tensor.to(device)

Note that calling my_tensor.to(device) returns a new copy of my_tensor on the GPU rather than rewriting my_tensor in place. You need to assign the result to a new variable and use that tensor on the GPU.
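A minimal sketch of this copy semantics, assuming a CUDA device is available; the variable names here are illustrative, not from the original text:

import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

cpu_tensor = torch.randn(3)          # lives on the CPU
gpu_tensor = cpu_tensor.to(device)   # returns a new copy on `device`
print(cpu_tensor.device)             # still cpu: the original is untouched
print(gpu_tensor.device)             # cuda:0 when a GPU is available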

It is natural to want to perform the forward and backward passes on multiple GPUs. However, PyTorch uses only one GPU by default. By wrapping your model with DataParallel, you can easily run your operations on multiple GPUs:

model = nn.DataParallel(model)

This is the core of the entire tutorial, which we will explain in more detail next.

Imports and parameters

Import PyTorch modules and define the parameters.

import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

# Parameters
input_size = 5
output_size = 2
batch_size = 30
data_size = 100

Device

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

Experimental (toy) data

Generate a toy dataset. You only need to implement __getitem__.

class RandomDataset(Dataset):
    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len

rand_loader = DataLoader(dataset=RandomDataset(input_size, data_size),
                         batch_size=batch_size, shuffle=True)

Simple model

To make a small demo, our model just takes an input, performs a linear operation, and gives an output. However, you can use DataParallel with any model (CNN, RNN, Capsule Net, etc.); see the sketch after the model definition below.

We place a print statement inside the model to monitor the size of the input and output tensors. Pay attention to what is printed for batch rank 0.

class Model(nn.Module):
    # Our model

    def __init__(self, input_size, output_size):
        super(Model, self).__init__()
        self.fc = nn.Linear(input_size, output_size)

    def forward(self, input):
        output = self.fc(input)
        print("\tIn Model: input size", input.size(),
              "output size", output.size())
        return output
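As noted above, DataParallel is not limited to linear models. Here is a minimal sketch of wrapping a small CNN in exactly the same way; the TinyCNN class and its layer sizes are illustrative assumptions, not part of the original tutorial:

import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    # Hypothetical model: one conv layer plus a classifier head,
    # sized for 3x32x32 images purely for illustration.
    def __init__(self):
        super(TinyCNN, self).__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 32 * 32, 10)

    def forward(self, x):
        x = self.conv(x)              # [N, 8, 32, 32]
        x = x.view(x.size(0), -1)     # flatten to [N, 8192]
        return self.fc(x)

cnn = TinyCNN()
if torch.cuda.device_count() > 1:
    cnn = nn.DataParallel(cnn)        # the same one-line wrap as before
cnn.to(torch.device("cuda:0" if torch.cuda.is_available() else "cpu"))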

Create a model and process data in parallel

This is the core of the whole tutorial. First, we need an instance of the model, and then we check whether we have multiple GPUs. If we have more than one GPU, we wrap our model with nn.DataParallel. Then we put the model onto the GPUs with model.to(device).

model = Model(input_size, output_size)
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
    model = nn.DataParallel(model)

model.to(device)

Output:

Let's use 2 GPUs!

Run the model:

Now we can see the sizes of the input and output tensors.

for data in rand_loader:
    input = data.to(device)
    output = model(input)
    print("Outside: input size", input.size(),
          "output_size", output.size())

Output:

In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([5, 5]) output size torch.Size([5, 2])
In Model: input size torch.Size([5, 5]) output size torch.Size([5, 2])
Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])

Results:

If you have no GPU or only one GPU, then when we feed in a batch of 30 inputs, the model receives 30 inputs and produces 30 outputs, as expected. But if you have multiple GPUs, you will get results like the following.
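For comparison, on a CPU-only or single-GPU machine the batch is never split, so (reading off the print statements above; this is a reconstruction, not output reproduced from the original article) each pass through the loop should print something like:

In Model: input size torch.Size([30, 5]) output size torch.Size([30, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])

with the final, smaller batch printing [10, 5] and [10, 2] instead.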

Multiple GPUs

If you have 2 GPUs, you will see:

# on 2 GPUs
Let's use 2 GPUs!
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([5, 5]) output size torch.Size([5, 2])
In Model: input size torch.Size([5, 5]) output size torch.Size([5, 2])
Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])

If you have three GPUs, you will see:

Let's use 3 GPUs!
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])

If you have eight GPUs, you will see:

Let's use 8 GPUs!
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])

DataParallel automatically splits your data and dispatches jobs to model replicas on multiple GPUs. After each replica finishes its job, DataParallel collects and merges the results before returning them to you.
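To make this scatter/gather behavior concrete, here is a minimal training-loop sketch built on the objects defined above (model, rand_loader, device, output_size); the MSE loss, SGD optimizer, and random targets are illustrative assumptions, not part of the original tutorial:

import torch
import torch.nn as nn

criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for data in rand_loader:
    input = data.to(device)
    # Dummy regression targets, just to have something to train against.
    target = torch.randn(input.size(0), output_size, device=device)
    output = model(input)      # DataParallel scatters the batch, gathers outputs
    loss = criterion(output, target)
    optimizer.zero_grad()
    loss.backward()            # gradients are reduced back onto the source model
    optimizer.step()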

This is the answer to the question of parallel data processing in PyTorch. I hope the above content is of some help to you. If you still have unresolved questions, you can follow the industry information channel for more related knowledge.
