This article explains in detail how to use MobileNet and Keras for transfer learning. The editor shares it here for reference; I hope you come away with a solid understanding of the topic after reading it.
I'll start with an example of using MobileNet to classify images of dogs. Then I'll show an example where it misclassifies an image of a blue tit. Finally, I will retrain MobileNet using transfer learning so that it correctly classifies that same input image. Only two classes are used in the process, but this can be extended to as many as you want, limited only by the hardware and time available to you.
The original MobileNet paper is available at https://arxiv.org/pdf/1704.04861.pdf
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications.
We use MobileNet because its architecture is lightweight. It uses depthwise separable convolutions, which essentially means it performs a single convolution on each colour channel rather than combining all three and flattening the result. This has the effect of filtering the input channels. Or, as the authors of the paper explain: "For MobileNets the depthwise convolution applies a single filter to each input channel. The pointwise convolution then applies a 1 x 1 convolution to combine the outputs of the depthwise convolution. A standard convolution both filters and combines inputs into a new set of outputs in one step. The depthwise separable convolution splits this into two layers, a separate layer for filtering and a separate layer for combining. This factorization has the effect of drastically reducing computation and model size."
The difference between pointwise and depthwise convolution
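To make the filter-then-combine split concrete, here is a minimal Keras sketch of a single depthwise separable block. The layer sizes and the use of plain ReLU are illustrative assumptions, not MobileNet's exact configuration.

from tensorflow.keras import Input, Model, layers

# Illustrative depthwise separable block (hypothetical sizes, not MobileNet's exact layout).
inputs = Input(shape=(224, 224, 3))
x = layers.DepthwiseConv2D(kernel_size=3, padding='same')(inputs)  # one filter per input channel
x = layers.BatchNormalization()(x)
x = layers.ReLU()(x)
x = layers.Conv2D(64, kernel_size=1)(x)  # 1 x 1 pointwise convolution combines the channels
x = layers.BatchNormalization()(x)
x = layers.ReLU()(x)
block = Model(inputs, x)
block.summary()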
So the overall architecture of MobileNet is as follows, with 30 layers:
Convolutional layer with stride 2
Depthwise layer
Pointwise layer that doubles the number of channels
Depthwise layer with stride 2
Pointwise layer that doubles the number of channels
And so on.
Complete architecture of MobileNet
It is also very low-maintenance and performs well at high speed. There are many flavours of pre-trained model, where the size of the network in memory and on disk is proportional to the number of parameters used. The speed and power consumption of the network are proportional to the number of MACs (multiply-accumulates), a measure of the number of fused multiply-and-add operations.
Now let's look at the code!
All my code: https://github.com/ferhat00/Deep-Learning/tree/master/Transfer%20Learning%20CNN
Let's load the necessary packages and libraries.
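A minimal set of imports for the snippets that follow, assuming a TensorFlow 2.x / tf.keras environment (the exact imports in the linked repository may differ):

import numpy as np
from tensorflow.keras.applications.mobilenet import MobileNet, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam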
We import a pre-trained model from Keras.
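For example, the full network with its 1000-class ImageNet head and pre-trained weights can be loaded in one line (a sketch; the variable name mobile is my own):

# MobileNet with the ImageNet classification head and pre-trained weights.
mobile = MobileNet(weights='imagenet')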
Let's try some tests on the images of different breeds of dogs.
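A small helper along these lines loads an image, resizes it to MobileNet's 224 x 224 input, applies the MobileNet preprocessing, and decodes the top ImageNet predictions (the helper name and the file name are placeholders introduced for illustration):

def predict_image(img_path):
    # Load and resize to MobileNet's expected 224 x 224 input.
    img = image.load_img(img_path, target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)   # add a batch dimension
    x = preprocess_input(x)         # scale pixels the way MobileNet expects
    preds = mobile.predict(x)
    return decode_predictions(preds)  # (class_id, name, probability) tuples

print(predict_image('german_shepherd.jpg'))  # placeholder file name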
Output: the top ImageNet predictions for each dog image.
So far so good. It classifies each breed of dog very well, so let's try it on a kind of bird, the blue tit.
Blue tit
Output:
You can see that it cannot recognize the blue tit. It wrongly classifies the image as a chickadee, a bird native to North America that differs only subtly:
Chickadee
Now let's modify the MobileNet architecture, retrain the top few layers, and apply transfer learning. To do this, we need to train it on some images. I will train it on images of blue tits and crows. Rather than downloading the images manually, we can use a Google Images search to pull them. For this, there is a handy package we can import.
See https://github.com/hardikvasa/google-images-download
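A sketch of pulling training images with the google_images_download package (the keywords and limit are illustrative, and the package's scraping may not work against the current Google Images site):

from google_images_download import google_images_download

# Download a batch of images for each class into ./downloads/<keyword>/
response = google_images_download.googleimagesdownload()
arguments = {"keywords": "blue tit,crow", "limit": 100, "print_urls": False}
paths = response.download(arguments)
print(paths)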
Let's reuse MobileNet now. It is quite lightweight (only about 17 MB), so we can add a few new layers on top and train just those. Note that I will train only two classes here: blue tit and crow.
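One way to set this up (a sketch; the dense layer sizes are assumptions, not necessarily what the repository uses) is to take the MobileNet base without its ImageNet head, pool its features, and stack a new dense head ending in a 2-way softmax for blue tit vs. crow:

# Base network: MobileNet without its 1000-class top, pre-trained on ImageNet.
base_model = MobileNet(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)  # new dense layers to be trained (sizes are illustrative)
x = Dense(512, activation='relu')(x)
preds = Dense(2, activation='softmax')(x)  # 2 classes: blue tit, crow

model = Model(inputs=base_model.input, outputs=preds)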
Let's examine the model architecture
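For instance, listing the layers shows where the pre-trained base ends and the new head begins (a quick sketch; model.summary() gives the same information with parameter counts):

for i, layer in enumerate(model.layers):
    print(i, layer.name)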
We will use the pre-trained weights the model learned on the ImageNet dataset. We make sure all of these pre-trained weights are non-trainable and train only the last few dense layers.
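A minimal way to do that, assuming the base_model variable from the sketch above:

# Freeze every pre-trained layer; only the newly added pooling/dense head stays trainable.
for layer in base_model.layers:
    layer.trainable = False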
Now let's load the training data into an ImageDataGenerator. Point it at the data directory and it will automatically feed the training data in batches, which simplifies the code.
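A sketch of the input pipeline, assuming the downloaded images sit in one sub-folder per class under a placeholder ./train/ directory:

# Apply the same MobileNet preprocessing to the training images.
train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)

train_generator = train_datagen.flow_from_directory(
    './train/',                # placeholder path: one sub-folder per class
    target_size=(224, 224),
    color_mode='rgb',
    batch_size=32,
    class_mode='categorical',
    shuffle=True)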
Compile the model, then train it. Training takes less than two minutes on a GTX 1070 GPU.
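For example (the optimizer choice, epoch count and step calculation below are assumptions, not necessarily the settings behind the two-minute figure):

model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy'])

steps_per_epoch = train_generator.n // train_generator.batch_size
model.fit(train_generator, steps_per_epoch=steps_per_epoch, epochs=5)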
The model has now been trained. Let's test a few independent input images to check its predictions.
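A sketch of running a single held-out image through the fine-tuned model (the file name is a placeholder; train_generator.class_indices gives the mapping from class name to output index):

img = image.load_img('crow_test.jpg', target_size=(224, 224))  # placeholder file name
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

pred = model.predict(x)
print(pred)                            # softmax probabilities for the two classes
print(train_generator.class_indices)   # e.g. which index is 'blue tit' and which is 'crow'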
Output:
As you can see, it correctly predicts the crow image (the blue tit image is commented out in the test code).
Crow
This can be extended further: more training images and a larger number of classes would help the classifier generalize better, but this is about the lightest and quickest way to do transfer learning with a CNN. It ultimately depends on the speed and accuracy you need, the hardware you want to run the model on, and how much time you have available.
That covers how to use MobileNet and Keras for transfer learning. I hope the above content has been helpful and that you have learned something from it. If you found the article useful, feel free to share it so more people can see it.