
How to Recognize Handwritten Digits with the LeNet Model in PyTorch


Shulou(Shulou.com)06/02 Report--

This article walks through the relevant knowledge points of recognizing handwritten digits with the LeNet model in PyTorch. The content is detailed, easy to understand, and the steps are practical, so it should have some reference value. If you are interested, follow along to learn how to recognize handwritten digits with the LeNet model in PyTorch.

LeNet network

In the LeNet network, the spatial resolution is preserved through the first convolution layer (thanks to its padding) and is halved by each pooling layer; the second, unpadded convolution shrinks the feature map only slightly. The implementation is as follows:

from PIL import Image
import cv2
import matplotlib.pyplot as plt
import torchvision
from torchvision import transforms
import torch
from torch.utils.data import DataLoader
import torch.nn as nn
import numpy as np
import tqdm as tqdm

class LeNet(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        # conv/pool hyperparameters follow the classic LeNet layout, which the
        # 16*5*5 input size of the first fully connected layer implies
        self.sequential = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2), nn.Sigmoid(),
            nn.AvgPool2d(kernel_size=2, stride=2),
            nn.Conv2d(6, 16, kernel_size=5), nn.Sigmoid(),
            nn.AvgPool2d(kernel_size=2, stride=2),
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Sigmoid(),
            nn.Linear(120, 84), nn.Sigmoid(),
            nn.Linear(84, 10))

    def forward(self, x):
        return self.sequential(x)

class MLP(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.sequential = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 120), nn.Sigmoid(),
            nn.Linear(120, 84), nn.Sigmoid(),
            nn.Linear(84, 10))

    def forward(self, x):
        return self.sequential(x)

epochs = 15
batch = 64   # batch size (exact value lost in the source; 64 assumed)
loss = nn.CrossEntropyLoss()
model = LeNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.9)  # lr value lost in the source; 0.9 is a common choice for sigmoid LeNet
device = torch.device('cuda')
root = r"./"
trans_compose = transforms.Compose([transforms.ToTensor()])
train_data = torchvision.datasets.MNIST(root, train=True, transform=trans_compose, download=True)
test_data = torchvision.datasets.MNIST(root, train=False, transform=trans_compose, download=True)
train_loader = DataLoader(train_data, batch_size=batch, shuffle=True)
test_loader = DataLoader(test_data, batch_size=batch, shuffle=False)
model.to(device)
loss.to(device)
# model.apply(init_weights)
for epoch in range(epochs):
    train_loss = 0
    test_loss = 0
    correct_train = 0
    correct_test = 0
    for index, (x, y) in enumerate(train_loader):
        x = x.to(device)
        y = y.to(device)
        predict = model(x)
        L = loss(predict, y)
        optimizer.zero_grad()
        L.backward()
        optimizer.step()
        train_loss = train_loss + L
        correct_train += (predict.argmax(dim=1) == y).sum()
    acc_train = correct_train / (batch * len(train_loader))
    with torch.no_grad():
        for index, (x, y) in enumerate(test_loader):
            [x, y] = [x.to(device), y.to(device)]
            predict = model(x)
            L1 = loss(predict, y)
            test_loss = test_loss + L1
            correct_test += (predict.argmax(dim=1) == y).sum()
        acc_test = correct_test / (batch * len(test_loader))
    print(f'epoch:{epoch}, train_loss:{train_loss/batch}, test_loss:{test_loss/batch}, acc_train:{acc_train}, acc_test:{acc_test}')

Training results:

Epoch:12,train_loss:2.235553741455078,test_loss:0.3947642743587494,acc_train:0.9879833459854126,acc_test:0.9851238131523132

Epoch:13,train_loss:2.028963804244995,test_loss:0.3220392167568207,acc_train:0.9891499876976013,acc_test:0.9875199794769287

Epoch:14,train_loss:1.8020273447036743,test_loss:0.34837451577186584,acc_train:0.9901833534240723,acc_test:0.98702073097229
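
Before moving on, it is worth sanity-checking the resolution behaviour described above: with the layer settings in the listing, a 28 x 28 input keeps its resolution through the first (padded) convolution and is halved by every pooling layer. A minimal sketch, assuming the LeNet class defined above:

# Sketch: trace how a fake 28x28 MNIST-sized image changes shape through LeNet.
X = torch.rand(1, 1, 28, 28)
for layer in LeNet().sequential:
    X = layer(X)
    print(layer.__class__.__name__, 'output shape:', tuple(X.shape))

The listing also defines an MLP with the same 10-way output; since the training loop only refers to model, swapping model = LeNet() for model = MLP() is enough to train that baseline for comparison.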

Generalization ability test

Take an image containing many handwritten digits and split it into crops that each contain a single digit for testing.

images_np = cv2.imread("/content/R-C.png", cv2.IMREAD_GRAYSCALE)
h, w = images_np.shape
images_np = np.array(255 * torch.ones(h, w)) - images_np  # invert the image (equivalent to 255 - images_np)
images = Image.fromarray(images_np)
plt.figure(1)
plt.imshow(images)
test_images = []
for i in range(10):
    for j in range(16):
        test_images.append(images_np[h//10*i:h//10 + h//10*i, w//16*j:w//16*j + w//16])
sample = test_images[77]
sample_tensor = torch.tensor(sample).unsqueeze(0).unsqueeze(0).type(torch.FloatTensor).to(device)  # add batch and channel dimensions
sample_tensor = torch.nn.functional.interpolate(sample_tensor, (28, 28))
predict = model(sample_tensor)
output = predict.argmax()
print(output)
plt.figure(2)
plt.imshow(np.array(sample_tensor.squeeze().to('cpu')))
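
To double-check that the grid slicing actually isolates one digit per crop, a few of the 160 crops can be displayed side by side. This small sketch is not part of the original code and assumes test_images has been filled by the loop above:

# Sketch: spot-check a few crops, one from each of the first five rows.
fig, axes = plt.subplots(1, 5, figsize=(10, 2))
for k, ax in zip([0, 17, 34, 51, 68], axes):
    ax.imshow(test_images[k], cmap='gray')
    ax.set_title(f'crop {k}')
    ax.axis('off')
plt.show()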

For crop 77 the model predicts 4, which is correct. Note the inversion step in the code: without it, the model predicts 0 for the same crop, which is wrong.

The model expects a single-channel grayscale image as input; the crop looks yellow in the visualization only because of matplotlib's default colormap, but the underlying data is black and white. The need for the inversion step shows how important data preprocessing is: a lot of data cannot be used for inference directly until it has been cleaned up to match what the model was trained on.
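
If you prefer to keep such inference-time preprocessing in one place, the inversion and resizing can be bundled into a torchvision transform pipeline. The sketch below is only an illustration under extra assumptions not made in the article: it expects raw, non-inverted uint8 crops and, unlike the code above, also rescales pixels to [0, 1] the way ToTensor does during training.

# Sketch: one possible preprocessing pipeline for raw (non-inverted) uint8 crops.
infer_transform = transforms.Compose([
    transforms.ToPILImage(),               # uint8 HxW array -> PIL 'L' image
    transforms.Resize((28, 28)),           # match the MNIST input size
    transforms.ToTensor(),                 # -> float tensor in [0, 1], shape (1, 28, 28)
    transforms.Lambda(lambda t: 1.0 - t),  # invert: dark-on-light -> light-on-dark like MNIST
])

# Hypothetical usage: raw_crop is a uint8 numpy array holding one digit
# x = infer_transform(raw_crop).unsqueeze(0).to(device)
# digit = model(x).argmax().item()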

Now test the accuracy over all of the images used for the generalization test:

correct = 0
i = 0    # expected digit for the current row of the grid
cnt = 1
for sample in test_images:
    sample_tensor = torch.tensor(sample).unsqueeze(0).unsqueeze(0).type(torch.FloatTensor).to(device)
    sample_tensor = torch.nn.functional.interpolate(sample_tensor, (28, 28))
    predict = model(sample_tensor)
    output = predict.argmax()
    if output == i:
        correct += 1
    if cnt % 16 == 0:   # 16 crops per row, so the expected digit advances every 16 samples
        i += 1
    cnt += 1
acc_g = correct / len(test_images)
print(f'acc_g: {acc_g}')
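
The i/cnt bookkeeping can also be avoided by deriving the expected digit straight from the crop index. An equivalent sketch, assuming the same 10 x 16 grid with row r holding digit r:

# Sketch: same generalization-accuracy check, computing the expected digit
# from the crop index instead of maintaining separate counters.
correct = 0
for idx, sample in enumerate(test_images):
    expected = idx // 16   # 16 crops per row; row r contains digit r
    x = torch.tensor(sample).unsqueeze(0).unsqueeze(0).type(torch.FloatTensor).to(device)
    x = torch.nn.functional.interpolate(x, (28, 28))
    if model(x).argmax().item() == expected:
        correct += 1
print(f'acc_g: {correct / len(test_images)}')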

Without the color inversion, acc_g = 0.15.

acc_g: 0.50625

Advantages of PyTorch

1. PyTorch is a simple, efficient, and fast framework.
2. Its design aims for minimal encapsulation.
3. Its design matches the way people think, letting users focus on implementing their ideas.
4. As with Google's TensorFlow, the backing of FAIR is enough to ensure PyTorch keeps receiving development and updates.
5. The PyTorch authors maintain a forum where users can communicate and ask questions.
6. It is easy to get started.

About "Pytorch write numbers how to identify the LeNet model" is introduced here, more related content can search previous articles, hope to help you answer questions, please support the website!
