
What are the goals and benefits of PyTorch convolutional neural network transfer learning?

2025-04-09 Update From: SLTechnology News&Howtos

Shulou (Shulou.com) 05/31 Report --

This article gives a detailed introduction to "What are the goals and benefits of PyTorch convolutional neural network transfer learning?" The content is laid out step by step and the details are handled carefully. I hope this article helps resolve your doubts; let's dig in and learn something new together with the editor.

I. Classical convolutional neural networks

On the PyTorch website we can find many classic convolutional neural networks. Here is a brief introduction to how the classic convolutional neural networks developed.

1. First of all, AlexNet can be considered the pioneering work of convolutional neural networks (it was the 2012 ImageNet champion). Briefly, its convolution kernels have some drawbacks: large strides and no padding make the feature extraction aggressive, so some important features are easily missed.

2. Next came the VGG network, whose convolution kernels are all 3*3. One advantage is that after each pooling layer the number of channels doubles, which lets the network retain more features; this is a distinguishing characteristic of VGG.
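As a rough illustration of this pattern, a pair of VGG-style stages might look like the sketch below (a minimal example only, not the actual torchvision VGG definition; the 64 and 128 channel counts are just illustrative):

import torch.nn as nn

# VGG-style sketch: 3*3 convolutions throughout, and the channel count
# doubles (64 -> 128) after a max-pooling stage halves the spatial size.
vgg_like = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2, stride=2),  # halve the spatial size...
    nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),  # ...double the channels
    nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2, stride=2),
)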

Some time afterwards, a problem arose. We all know that in deep learning, as networks grow and training continues, the results should keep getting better. Yet researchers found that as the VGG network was made deeper, the results were actually worse than before. At that point people wondered whether deep learning could only develop this far and had hit a bottleneck.

3. Then, with the proposal of the residual network (ResNet), the problem above was solved. The advantage of this network is that it retains the original features: if the features extracted after a convolution are no better than the original ones, the original features are kept instead. The following is the ResNet network model.
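To make the shortcut idea concrete, here is a minimal sketch of a residual block (an illustration of the idea only, not the exact torchvision implementation):

import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    # The input x is added back onto the convolved features, so if the new
    # features are not better than the originals, the originals survive.
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # shortcut connection preserves the original features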

Here are some training comparisons:

II. Goals of Transfer Learning

First of all, the goal of transfer learning is to train our own model starting from the weight and bias parameters that others have already trained.
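For example, torchvision ships pretrained versions of the classic models; the pretrained flag downloads weights trained on ImageNet (a minimal sketch; note that newer torchvision versions use a weights= argument instead):

import torchvision as tv

# Download a ResNet-18 whose weights were already trained on ImageNet.
model = tv.models.resnet18(pretrained=True)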

III. Benefits

The amount of data needed for training in deep learning is very large. When we only have a small amount of data, the weight parameters we train ourselves will not be very good, so at that point we can reuse the weight and bias parameters someone else has already trained to improve the accuracy of our model.

IV. Steps

Transfer learning can be roughly divided into three steps (sketched in code after the list):

1. Load the pretrained model

2. Freeze the layers

3. Replace the fully connected layer
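In terms of the torchvision API, the three steps boil down to the sketch below (a condensed version of the full code in section V, assuming resnet152 and 10 output classes):

import torch.nn as nn
import torchvision as tv

# 1. Load the pretrained model.
model = tv.models.resnet152(pretrained=True)

# 2. Freeze the layers so the pretrained weights are not updated.
for param in model.parameters():
    param.requires_grad = False

# 3. Replace the fully connected layer; the new layer's parameters
#    are created fresh and therefore stay trainable.
model.fc = nn.Linear(model.fc.in_features, 10)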

V. Code

The code below uses resnet152.

import torch
import torch.nn as nn
import torchvision as tv
import torchvision.transforms as transforms
from torch import optim
from torch.utils import data

model_name = 'resnet'
feature_extract = True

train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
    print("no gpu")
else:
    print("is gpu")
device = torch.device("cuda:0" if torch.cuda.is_available() else 'cpu')

def set_parameter_requires_grad(model, feature_extract):
    # Freeze all parameters so the pretrained weights are not updated.
    if feature_extract:
        for param in model.parameters():
            param.requires_grad = False

def initialize_model(model_name, num_classes, feature_extract, use_pretrained=True):
    model_ft = None
    input_size = 0
    if model_name == "resnet":
        model_ft = tv.models.resnet152(pretrained=use_pretrained)  # download the pretrained model
        set_parameter_requires_grad(model_ft, feature_extract)  # freeze the layers
        num_ftrs = model_ft.fc.in_features  # replace the fully connected layer
        model_ft.fc = nn.Sequential(nn.Linear(num_ftrs, num_classes),
                                    nn.LogSoftmax(dim=1))
        input_size = 224  # input dimension
    return model_ft, input_size

model_ft, input_size = initialize_model(model_name, 10, feature_extract, use_pretrained=True)
model_ft = model_ft.to(device)

# Only parameters that still require gradients (the new fc layer) are optimized.
params_to_update = model_ft.parameters()
if feature_extract:
    params_to_update = []
    for name, param in model_ft.named_parameters():
        if param.requires_grad:
            params_to_update.append(param)
            print("\t", name)
else:
    for name, param in model_ft.named_parameters():
        if param.requires_grad:
            print("\t", name)

opt = optim.Adam(params_to_update, lr=0.01)
loss_fn = nn.NLLLoss()  # pairs with the LogSoftmax output

if __name__ == '__main__':
    transform = transforms.Compose([
        # image augmentation
        transforms.Resize(1024),  # upscale CIFAR-10's 32x32 images
        transforms.RandomHorizontalFlip(),  # random horizontal flip
        transforms.RandomCrop(224),  # random crop to the network's input size
        transforms.ColorJitter(brightness=0.5, contrast=0.5, hue=0.5),  # color jitter
        # convert to tensor and normalize
        transforms.ToTensor(),
        transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))
    ])
    trainset = tv.datasets.CIFAR10(
        root=r'E:\desktop\profile\cv3\dataset\cifar-10-batches-py',
        train=True,
        download=True,
        transform=transform
    )
    trainloader = data.DataLoader(
        trainset,
        batch_size=8,
        drop_last=True,
        shuffle=True,
        num_workers=4,
    )
    testset = tv.datasets.CIFAR10(
        root=r'E:\desktop\profile\cv3\dataset\cifar-10-batches-py',
        train=False,
        download=True,
        transform=transform
    )
    testloader = data.DataLoader(
        testset,
        batch_size=8,
        drop_last=True,
        shuffle=False,
        num_workers=4
    )
    for epoch in range(3):
        running_loss = 0
        for index, (inputs, labels) in enumerate(trainloader, 0):
            inputs = inputs.to(device)
            labels = labels.to(device)
            opt.zero_grad()
            h = model_ft(inputs)
            loss1 = loss_fn(h, labels)
            loss1.backward()
            opt.step()
            running_loss += loss1.item()
            if index % 10 == 9:  # report the average loss every 10 batches
                avg_loss = running_loss / 10.
                running_loss = 0
                print('avg_loss', avg_loss)
            if index % 100 == 99:  # evaluate on the test set every 100 batches
                correct = 0
                total = 0
                model_ft.eval()
                with torch.no_grad():
                    for images, labels in testloader:
                        outputs = model_ft(images.to(device))
                        _, predicted = torch.max(outputs.cpu(), 1)
                        total += labels.size(0)
                        correct += (predicted == labels).sum().item()
                model_ft.train()
                print('test set accuracy %d %%' % (100 * correct / total))

That is the end of the article "What are the goals and benefits of PyTorch convolutional neural network transfer learning?" To really master these knowledge points, you still need to practice with the code yourself. If you want to read more related articles, welcome to follow the industry information channel.
