The purpose of this article is to share an example analysis of feature map extraction in Python based on PyTorch. The editor finds it very practical and shares it here as a reference; follow along and have a look.
Brief introduction
In order to understand how a convolutional neural network operates, it is necessary to display its intermediate results visually.
It can be roughly divided into the following steps:
Extraction of a single image
Construction of Neural Network
Extraction of feature maps
Visual display
Extraction of a single image
The goal requires convolving a single picture, but in PyTorch data is normally read through the torch.utils.data.DataLoader class, so we need to write our own routine to read a single image.
def get_picture(picture_dir, transform):
    '''
    Reads pictures and converts their type to Tensor
    '''
    tmp = []
    img = skimage.io.imread(picture_dir)
    tmp.append(img)
    img = skimage.io.imread('./picture/4.jpg')
    tmp.append(img)
    img256 = [skimage.transform.resize(img, (256, 256)) for img in tmp]
    img256 = np.asarray(img256)
    img256 = img256.astype(np.float32)
    return transform(img256[0])
Note: the neural network expects four-dimensional input, while the picture returned here is three-dimensional, so a dimension has to be inserted with unsqueeze().
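For example, a quick shape check (a minimal sketch, assuming the get_picture() above and transform = transforms.ToTensor()):

img = get_picture('./picture/3.jpg', transform)
print(img.shape)         # torch.Size([3, 256, 256]) - three-dimensional
img = img.unsqueeze(0)   # insert a batch dimension at position 0
print(img.shape)         # torch.Size([1, 3, 256, 256]) - four-dimensional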
Construction of Neural Network
The network is based on LeNet, but for convenience of display its parameters are adjusted for a 256x256x3 input.
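With a 256x256 input, each MaxPool2d(2, 2) halves the spatial size: 256 becomes 128 after conv1's pooling and 64 after conv2's, so the flattened tensor entering the first fully connected layer holds 64 x 64 x 64 = 262,144 values.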
The network is built as follows:
class LeNet(nn.Module):
    '''
    This class inherits from torch.nn.Module to construct the LeNet neural network model
    '''
    def __init__(self):
        super(LeNet, self).__init__()
        # Layer 1: convolution, ReLU activation, pooling
        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 32, 5, 1, 2),              # input_size=(3, 256, 256), padding=2
            nn.ReLU(),                              # input_size=(32, 256, 256)
            nn.MaxPool2d(kernel_size=2, stride=2),  # output_size=(32, 128, 128)
        )
        # Layer 2: convolution, ReLU activation, pooling
        self.conv2 = nn.Sequential(
            nn.Conv2d(32, 64, 5, 1, 2),             # input_size=(32, 128, 128)
            nn.ReLU(),                              # input_size=(64, 128, 128)
            nn.MaxPool2d(2, 2),                     # output_size=(64, 64, 64)
        )
        # Fully connected layer (converts the multi-dimensional output of the
        # convolutional layers into one dimension)
        self.fc1 = nn.Sequential(
            nn.Linear(64 * 64 * 64, 128),  # linear transformation
            nn.ReLU(),                     # ReLU activation
        )
        # Output layer (processes the one-dimensional output of the fully connected layer)
        self.fc2 = nn.Sequential(
            nn.Linear(128, 84),
            nn.ReLU(),
        )
        # Classification layer (outputs the prediction)
        self.fc3 = nn.Linear(84, 62)

    # Defines the forward pass; x is both the input and the output
    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        # nn.Linear() takes one-dimensional values, so the multi-dimensional
        # tensor must be flattened first
        x = x.view(x.size()[0], -1)
        x = self.fc1(x)
        x = self.fc2(x)
        x = self.fc3(x)
        return x
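As a quick sanity check of the layer sizes (a minimal sketch, assuming the LeNet class above):

net = LeNet()
dummy = torch.randn(1, 3, 256, 256)  # a batch with one 256x256 RGB image
out = net(dummy)
print(out.shape)  # torch.Size([1, 62]) - one score per class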
Extraction of feature maps
Go directly to the code:
class FeatureExtractor(nn.Module):
    def __init__(self, submodule, extracted_layers):
        super(FeatureExtractor, self).__init__()
        self.submodule = submodule
        self.extracted_layers = extracted_layers

    def forward(self, x):
        outputs = []
        for name, module in self.submodule._modules.items():
            # The fully connected layers are not displayed here, so the
            # tensor is flattened before being passed to them
            if "fc" in name:
                x = x.view(x.size(0), -1)
            print(module)
            x = module(x)
            print(name)
            if name in self.extracted_layers:
                outputs.append(x)
        return outputs
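Used on a dummy input, the extractor returns one tensor per requested layer (a minimal sketch, assuming the classes above):

net = LeNet()
extractor = FeatureExtractor(net, ["conv1", "conv2"])
features = extractor(torch.randn(1, 3, 256, 256))
print(len(features))      # 2 - one tensor per layer in extracted_layers
print(features[0].shape)  # torch.Size([1, 32, 128, 128]) - conv1 output
print(features[1].shape)  # torch.Size([1, 64, 64, 64])   - conv2 output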
Visual display
The feature maps are displayed using matplotlib.
The code is as follows:
# Feature map visualization
for i in range(32):
    ax = plt.subplot(6, 6, i + 1)
    ax.set_title('Feature {}'.format(i))
    ax.axis('off')
    plt.imshow(x[0].data.numpy()[0, i, :, :], cmap='jet')
plt.show()
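The same feature maps can also be captured with PyTorch's forward hooks instead of iterating over _modules by hand; a minimal sketch under that assumption, reusing the LeNet class above:

features = {}

def make_hook(name):
    def hook(module, inputs, output):
        features[name] = output.detach()  # store this layer's output
    return hook

net = LeNet()
net.conv1.register_forward_hook(make_hook('conv1'))
net.conv2.register_forward_hook(make_hook('conv2'))
net(torch.randn(1, 3, 256, 256))  # one forward pass fills the dict
print(features['conv1'].shape)    # torch.Size([1, 32, 128, 128])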
Complete code
The complete code is pasted below:
import os
import torch
import torchvision as tv
import torchvision.transforms as transforms
import torch.nn as nn
import torch.optim as optim
import argparse
import skimage.data
import skimage.io
import skimage.transform
import numpy as np
import matplotlib.pyplot as plt

# Define whether to use the GPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load training and testing datasets.
pic_dir = './picture/3.jpg'

# Define the data preprocessing method (converts numpy-array-like input into a pytorch Tensor)
transform = transforms.ToTensor()


def get_picture(picture_dir, transform):
    '''
    Reads a picture and converts its type to Tensor
    '''
    img = skimage.io.imread(picture_dir)
    img256 = skimage.transform.resize(img, (256, 256))
    img256 = np.asarray(img256)
    img256 = img256.astype(np.float32)
    return transform(img256)


def get_picture_rgb(picture_dir):
    '''
    Displays the three RGB color channels of the picture
    '''
    img = skimage.io.imread(picture_dir)
    img256 = skimage.transform.resize(img, (256, 256))
    skimage.io.imsave('./picture/4.jpg', img256)

    # Display
    # for i in range(3):
    #     img = img256[:, :, i]
    #     ax = plt.subplot(1, 3, i + 1)
    #     ax.set_title('Feature {}'.format(i))
    #     ax.axis('off')
    #     plt.imshow(img)

    # r = img256.copy()
    # r[:, :, 0:2] = 0
    # ax = plt.subplot(1, 4, 1)
    # ax.set_title('B Channel')
    # # ax.axis('off')
    # plt.imshow(r)

    # g = img256.copy()
    # g[:, :, 0] = 0
    # g[:, :, 2] = 0
    # ax = plt.subplot(1, 4, 2)
    # ax.set_title('G Channel')
    # # ax.axis('off')
    # plt.imshow(g)

    # b = img256.copy()
    # b[:, :, 1:3] = 0
    # ax = plt.subplot(1, 4, 3)
    # ax.set_title('R Channel')
    # # ax.axis('off')
    # plt.imshow(b)

    # img = img256.copy()
    # ax = plt.subplot(1, 4, 4)
    # ax.set_title('image')
    # # ax.axis('off')
    # plt.imshow(img)

    img = img256.copy()
    ax = plt.subplot()
    ax.set_title('image')
    # ax.axis('off')
    plt.imshow(img)
    plt.show()


class LeNet(nn.Module):
    '''
    This class inherits from torch.nn.Module to construct the LeNet neural network model
    '''
    def __init__(self):
        super(LeNet, self).__init__()
        # Layer 1: convolution, ReLU activation, pooling
        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 32, 5, 1, 2),              # input_size=(3, 256, 256), padding=2
            nn.ReLU(),                              # input_size=(32, 256, 256)
            nn.MaxPool2d(kernel_size=2, stride=2),  # output_size=(32, 128, 128)
        )
        # Layer 2: convolution, ReLU activation, pooling
        self.conv2 = nn.Sequential(
            nn.Conv2d(32, 64, 5, 1, 2),             # input_size=(32, 128, 128)
            nn.ReLU(),                              # input_size=(64, 128, 128)
            nn.MaxPool2d(2, 2),                     # output_size=(64, 64, 64)
        )
        # Fully connected layer (converts the multi-dimensional output of the
        # convolutional layers into one dimension)
        self.fc1 = nn.Sequential(
            nn.Linear(64 * 64 * 64, 128),  # linear transformation
            nn.ReLU(),                     # ReLU activation
        )
        # Output layer (processes the one-dimensional output of the fully connected layer)
        self.fc2 = nn.Sequential(
            nn.Linear(128, 84),
            nn.ReLU(),
        )
        # Classification layer (outputs the prediction)
        self.fc3 = nn.Linear(84, 62)

    # Defines the forward pass; x is both the input and the output
    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        # nn.Linear() takes one-dimensional values, so the multi-dimensional
        # tensor must be flattened first
        x = x.view(x.size()[0], -1)
        x = self.fc1(x)
        x = self.fc2(x)
        x = self.fc3(x)
        return x


# Intermediate feature extraction
class FeatureExtractor(nn.Module):
    def __init__(self, submodule, extracted_layers):
        super(FeatureExtractor, self).__init__()
        self.submodule = submodule
        self.extracted_layers = extracted_layers

    def forward(self, x):
        outputs = []
        print(self.submodule._modules.items())
        for name, module in self.submodule._modules.items():
            if "fc" in name:
                print(name)
                x = x.view(x.size(0), -1)
            print(module)
            x = module(x)
            print(name)
            if name in self.extracted_layers:
                outputs.append(x)
        return outputs


def get_feature():
    # Input data
    img = get_picture(pic_dir, transform)
    # Insert a batch dimension
    img = img.unsqueeze(0)
    img = img.to(device)

    # Feature output
    net = LeNet().to(device)
    # net.load_state_dict(torch.load('./model/net_050.pth'))
    exact_list = ["conv1", "conv2"]
    myexactor = FeatureExtractor(net, exact_list)
    x = myexactor(img)

    # Feature map visualization
    for i in range(32):
        ax = plt.subplot(6, 6, i + 1)
        ax.set_title('Feature {}'.format(i))
        ax.axis('off')
        plt.imshow(x[0].data.numpy()[0, i, :, :], cmap='jet')
    plt.show()


# Training
if __name__ == "__main__":
    get_picture_rgb(pic_dir)
    # get_feature()

Thank you for reading! This is the end of this article on "Example analysis of Python feature map extraction based on PyTorch". I hope the above content is of some help to you and that you can learn more from it. If you think the article is good, please share it so more people can see it!