Many beginners do not know how to use PyTorch to train a classifier. This article therefore walks through the whole workflow step by step; I hope it helps you solve the problem.
I. Data
In general, when you work with image, text, audio, or video data, you can use standard Python packages to load the data into a NumPy array, and then convert that array to a torch.*Tensor.
For images, you can use Pillow or OpenCV
For audio, you can use SciPy and librosa
For text, you can use raw Python or Cython loading, or NLTK and spaCy
Specifically for vision, there is a package called torchvision, which contains data loaders for common datasets such as ImageNet, CIFAR10, and MNIST (torchvision.datasets) as well as image transforms (torchvision.transforms); batches are then served by torch.utils.data.DataLoader.
This provides great convenience and avoids writing "boilerplate code".
For this tutorial, we will use the CIFAR10 dataset, which has ten classes: 'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'. The images in CIFAR-10 are of size 3x32x32, i.e. three RGB color channels, each channel 32x32 pixels.
[Figure 1: sample CIFAR-10 images]
II. Train an image classifier
We will do the following steps in order:
1. Load and normalize the CIFAR10 training and test datasets using torchvision
2. Define a convolutional neural network
3. Define a loss function
4. Train the network on the training data
5. Test the network on the test data
1. Load and normalize CIFAR10
Using torchvision, it is extremely easy to load CIFAR10:

import torch
import torchvision
import torchvision.transforms as transforms
The output of the torchvision datasets are PILImage images in the range [0, 1]. We convert them to tensors normalized to the range [-1, 1]: with a per-channel mean and std of 0.5, a pixel value p is mapped to (p - 0.5) / 0.5, so 0 becomes -1 and 1 stays 1.
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck')
Output:
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data/cifar-10-python.tar.gz
Files already downloaded and verified
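As a quick sanity check (an addition, not part of the original walkthrough), you can pull one batch from the loader and confirm both the tensor shape and the normalized value range:

images, labels = next(iter(trainloader))
print(images.shape)   # torch.Size([4, 3, 32, 32]): batch of 4 RGB 32x32 images
print(images.min().item(), images.max().item())  # roughly -1.0 and 1.0 after Normalize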
Let's show some of the training images.
import matplotlib.pyplot as plt
import numpy as np

# function to show an image
def imshow(img):
    img = img / 2 + 0.5     # unnormalize from [-1, 1] back to [0, 1]
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)

# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
[Figure 2: a grid of four training images]
Output:
cat car dog cat
2. Define a convolutional neural network
Copy the neural network from the earlier Neural Networks chapter and modify it to take 3-channel images (it was previously defined for 1-channel images).
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
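To see where the 16 * 5 * 5 in fc1 comes from, trace the spatial dimensions through the layers (a walkthrough added here for clarity; the numbers follow directly from the layer definitions above):

# input:              3 x 32 x 32
# after conv1 (5x5):  6 x 28 x 28   (32 - 5 + 1 = 28)
# after pool (2x2):   6 x 14 x 14
# after conv2 (5x5):  16 x 10 x 10  (14 - 5 + 1 = 10)
# after pool (2x2):   16 x 5 x 5    -> flattened to 16 * 5 * 5 = 400 features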
3. Define a loss function and optimizer
Let's use a classification cross-entropy loss and SGD with momentum as the optimizer.
import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
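One detail worth noting (an aside, not from the original text): nn.CrossEntropyLoss combines LogSoftmax and NLLLoss, so it expects the network's raw class scores plus integer class labels, not probabilities. A tiny sketch with made-up values:

dummy_scores = torch.randn(4, 10)          # batch of 4, one raw score per class
dummy_labels = torch.tensor([3, 0, 9, 1])  # ground-truth class indices
print(criterion(dummy_scores, dummy_labels))  # a scalar loss tensor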
4. Train the network

This is when things start to get interesting. We simply loop over our data iterator, feed the inputs to the network, and optimize.
for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')
Output:
[1, 2000] loss: 2.211
[1, 4000] loss: 1.837
[1, 6000] loss: 1.659
[1, 8000] loss: 1.570
[1, 10000] loss: 1.521
[1, 12000] loss: 1.451
[2, 2000] loss: 1.411
[2, 4000] loss: 1.393
[2, 6000] loss: 1.348
[2, 8000] loss: 1.340
[2, 10000] loss: 1.363
[2, 12000] loss: 1.320
Finished Training
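A common next step (not covered in this article) is to checkpoint the trained weights so you don't have to retrain. A minimal sketch using PyTorch's standard state_dict mechanism, with './cifar_net.pth' as an arbitrary example path:

# save the trained parameters
torch.save(net.state_dict(), './cifar_net.pth')

# later, restore them into a fresh instance of the same architecture
net2 = Net()
net2.load_state_dict(torch.load('./cifar_net.pth'))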
5. Test the network on the test set
We have trained the network for two passes over the training dataset, but we need to check whether the network has learned anything at all.
We will check this by predicting the class label that the neural network outputs and checking it against the ground-truth label. If the prediction is correct, we add the sample to the list of correct predictions.
Okay, first step: let's display an image from the test set to get familiar with it.
dataiter = iter(testloader)
images, labels = next(dataiter)

# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
[Figure 3: a grid of four test images]
Output:
GroundTruth: cat ship ship plane
Now let's see what the neural network thinks these examples are:

outputs = net(images)
The outputs are scores ("energies") for the ten classes: the higher the score for a class, the more the network thinks the image is of that class. So let's print the label with the highest score:
_, predicted = torch.max(outputs, 1)

print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
                              for j in range(4)))
Output:
Predicted: cat car car ship
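If you prefer probabilities over raw scores (a side note, not in the original text), you can pass the outputs through a softmax, after which each row sums to 1:

probs = F.softmax(outputs, dim=1)  # F was imported above as torch.nn.functional
print(probs.sum(dim=1))            # each of the 4 rows sums to 1.0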
The results look pretty good. Let's see how the network performs on the whole dataset.
correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))
Output:
Accuracy of the network on the 10000 test images: 53%
That looks way better than chance, which is 10% accuracy (randomly picking one class out of 10). It seems the network has learned something.
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(4):
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1

for i in range(10):
    print('Accuracy of %5s : %2d %%' % (
        classes[i], 100 * class_correct[i] / class_total[i]))
Output:
Accuracy of plane: 52%
Accuracy of car: 73%
Accuracy of bird: 34%
Accuracy of cat: 54%
Accuracy of deer: 48%
Accuracy of dog: 26%
Accuracy of frog: 68%
Accuracy of horse: 51%
Accuracy of ship: 63%
Accuracy of truck: 60%
So what's next? How do we run these neural networks on the GPU?

III. Train on the GPU
Just as you transfer a tensor onto the GPU, you transfer the neural network onto the GPU.
If CUDA is available, let's first define our device as the first visible cuda device.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# assuming we are on a CUDA machine, this should print a CUDA device:
print(device)
Output:
cuda:0
The rest of this section assumes that device is a CUDA device. The .to(device) call then recursively goes over all modules and converts their parameters and buffers to CUDA tensors:
net.to(device)
Remember that you will have to send the inputs and targets to the GPU at every step as well:

inputs, labels = inputs.to(device), labels.to(device)
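Putting the pieces together, the training loop from step 4 needs only these two extra lines (a sketch, assuming device, net, trainloader, criterion, and optimizer are defined as above):

net.to(device)  # move the model's parameters and buffers to the GPU

for epoch in range(2):
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data
        # move the batch onto the same device as the model
        inputs, labels = inputs.to(device), labels.to(device)

        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()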
Why don't you notice a massive speedup compared to the CPU? Because your network is really small.
Exercise: try increasing the width of your network (the second argument of the first nn.Conv2d and the first argument of the second nn.Conv2d need to be the same number) and see what kind of speedup you get.
Goals achieved:
Understand PyTorch's tensor library and neural networks at a high level
Train a small neural network to classify images
IV. Train on multiple GPUs
If you want to see even more massive speedup using all of your GPUs, check out Data Parallelism (https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html).
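As a taste of what that tutorial covers (a hedged sketch, not the linked tutorial's exact code), the simplest form of data parallelism wraps the model in nn.DataParallel, which splits each batch along dimension 0 across the available GPUs:

if torch.cuda.device_count() > 1:
    # replicate the model across all visible GPUs
    net = nn.DataParallel(net)
net.to(device)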
After reading the above, have you mastered how to use PyTorch to train a classifier? If you want to learn more, you are welcome to follow this channel. Thank you for reading!