Shulou — SLTechnology News & Howtos > Servers
2025-01-16 Update · Shulou (Shulou.com) 06/01 Report
This article explains how to do image classification and prediction with Python under a Serverless framework. The content is simple and clear, and easy to follow and reproduce.
Using an interesting Python library, we will quickly build an image-classification function on a cloud function service, then combine it with an API gateway to expose an HTTP API — an "image classification API" with a Serverless architecture.
First, let me introduce the dependency library we need: ImageAI. The official documentation describes it as follows:
ImageAI is a python library designed to enable developers to build applications and systems with deep learning and computer vision capabilities in a few simple lines of code.
In keeping with its design principle of simplicity, ImageAI supports state-of-the-art machine learning algorithms for image prediction, custom image prediction, object detection, video detection, video object tracking, and image prediction training. It currently supports image prediction and training with four different algorithms trained on the ImageNet-1000 dataset, as well as object detection, video detection, and object tracking using a RetinaNet trained on the COCO dataset. Eventually, ImageAI will provide broader and more specialized support for computer vision, including but not limited to image recognition in special environments and domains.
In other words, this library can handle basic image recognition and video object extraction for us. Although it ships with some datasets and models, we can also train and customize it further according to our own needs. The official documentation gives a simple demo:
```python
# -*- coding: utf-8 -*-
from imageai.Prediction import ImagePrediction

# Model load
prediction = ImagePrediction()
prediction.setModelTypeAsResNet()
prediction.setModelPath("resnet50_weights_tf_dim_ordering_tf_kernels.h5")
prediction.loadModel()

predictions, probabilities = prediction.predictImage("./picture.jpg", result_count=5)
for eachPrediction, eachProbability in zip(predictions, probabilities):
    print(str(eachPrediction) + " : " + str(eachProbability))
```
When we run it against the sample image picture.jpg (a photo of a laptop in the original article), the result is:
```
laptop : 71.43893241882324
notebook : 16.265612840652466
modem : 4.899394512176514
hard_disc : 4.007557779550552
mouse : 1.2981942854821682
```
If you find the resnet50_weights_tf_dim_ordering_tf_kernels.h5 model too large and too slow to load, you can choose a different model according to your needs:
SqueezeNet (file size: 4.82 MB, shortest prediction time, moderate accuracy)
ResNet50 by Microsoft Research (file size: 98 MB, fast prediction time and high accuracy)
InceptionV3 by Google Brain team (file size: 91.6 MB, slower prediction time and higher accuracy)
DenseNet121 by Facebook AI Research (file size: 31.6MB, slow prediction time, highest accuracy)
The models can be downloaded from the GitHub releases page: https://github.com/OlafenwaMoses/ImageAI/releases/tag/1.0
Or refer to the official ImageAI document: https://imageai-cn.readthedocs.io/zh_CN/latest/ImageAI_Image_Prediction.html
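As an illustration of the size/accuracy trade-off above, the four models can be encoded in a small lookup that picks the largest one fitting a download-size budget. This helper is an assumption for demonstration, not part of ImageAI or the original article; the setter names correspond to the `ImagePrediction` methods described in the ImageAI documentation.

```python
# Hypothetical helper: map the models listed above to their ImageAI
# setter methods and approximate file sizes, sorted by size ascending.
MODELS = [
    # (name, ImagePrediction setter, file size in MB)
    ("SqueezeNet",  "setModelTypeAsSqueezeNet",  4.82),
    ("DenseNet121", "setModelTypeAsDenseNet",    31.6),
    ("InceptionV3", "setModelTypeAsInceptionV3", 91.6),
    ("ResNet50",    "setModelTypeAsResNet",      98.0),
]

def pick_model(max_size_mb):
    """Return (name, setter) for the largest model within the size budget."""
    candidates = [(n, s) for n, s, size in MODELS if size <= max_size_mb]
    return candidates[-1] if candidates else None

print(pick_model(50))   # picks DenseNet121 under a 50 MB budget
```

For example, a 50 MB budget selects DenseNet121; a 4 MB budget selects nothing, since even SqueezeNet is 4.82 MB.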
Making the project Serverless
Write the entry method and initialize the project according to the requirements of Function Compute. At the same time, create a model folder under the current project and copy the model file into it:
Overall process of the project:
Implementation code:
```python
# -*- coding: utf-8 -*-
from imageai.Prediction import ImagePrediction
import json
import uuid
import base64
import random

# Response wrapper
class Response:
    def __init__(self, start_response, response, errorCode=None):
        self.start = start_response
        responseBody = {
            'Error': {"Code": errorCode, "Message": response}
        } if errorCode else {
            'Response': response
        }
        # Attach a uuid by default, to make requests easy to trace
        responseBody['ResponseId'] = str(uuid.uuid1())
        print("Response:", json.dumps(responseBody))
        self.response = json.dumps(responseBody)

    def __iter__(self):
        status = '200'
        response_headers = [('Content-type', 'application/json; charset=UTF-8')]
        self.start(status, response_headers)
        yield self.response.encode("utf-8")

# Random string generator
randomStr = lambda num=5: "".join(random.sample('abcdefghijklmnopqrstuvwxyz', num))

# Model load (outside the handler, so warm invocations can reuse it)
print("Init model")
prediction = ImagePrediction()
prediction.setModelTypeAsResNet()
print("Load model")
prediction.setModelPath("/mnt/auto/model/resnet50_weights_tf_dim_ordering_tf_kernels.h5")
prediction.loadModel()
print("Load complete")

def handler(environ, start_response):
    try:
        request_body_size = int(environ.get('CONTENT_LENGTH', 0))
    except ValueError:
        request_body_size = 0
    requestBody = json.loads(environ['wsgi.input'].read(request_body_size).decode("utf-8"))

    # Save the uploaded image
    print("Get picture")
    imageName = randomStr(10)
    imageData = base64.b64decode(requestBody["image"])
    imagePath = "/tmp/" + imageName
    with open(imagePath, 'wb') as f:
        f.write(imageData)

    # Run the prediction
    print("Predicting ...")
    result = {}
    predictions, probabilities = prediction.predictImage(imagePath, result_count=5)
    for eachPrediction, eachProbability in zip(predictions, probabilities):
        result[str(eachPrediction)] = str(eachProbability)
    return Response(start_response, result)
```
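Before deploying, the WSGI plumbing can be exercised locally. The sketch below is an assumption for illustration, not from the article: it uses the stdlib `wsgiref` helpers with a stub handler shaped like the one above, since the real handler needs the model file and the NAS mount.

```python
import json
from wsgiref.util import setup_testing_defaults

def stub_handler(environ, start_response):
    # Same request/response shape as the Function Compute handler above,
    # minus the model load and prediction.
    start_response('200', [('Content-type', 'application/json; charset=utf-8')])
    yield json.dumps({'Response': {'ok': True}}).encode('utf-8')

environ = {}
setup_testing_defaults(environ)     # fill in a minimal WSGI environ
captured = {}

def start_response(status, headers):
    captured['status'] = status
    captured['headers'] = headers

body = b"".join(stub_handler(environ, start_response))
print(captured['status'], body.decode('utf-8'))
```

Running this prints the status and the JSON body, confirming the iterator-based response pattern works before any cloud deployment.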
Required dependencies:
```
tensorflow==1.13.1
numpy==1.19.4
scipy==1.5.4
opencv-python==4.4.0.46
pillow==8.0.1
matplotlib==3.3.3
h5py==3.1.0
keras==2.4.3
imageai==2.1.5
```
Write the configuration files required for deployment:
```yaml
ServerlessBookImageAIDemo:
  Component: fc
  Provider: alibaba
  Access: release
  Properties:
    Region: cn-beijing
    Service:
      Name: ServerlessBook
      Description: Serverless Book case
      Log: Auto
      Nas: Auto
    Function:
      Name: serverless_imageAI
      Description: Picture target detection
      CodeUri:
        Src: ./src
        Excludes:
          - src/.fun
          - src/model
      Handler: index.handler
      Environment:
        - Key: PYTHONUSERBASE
          Value: /mnt/auto/.fun/python
      MemorySize: 3072
      Runtime: python3
      Timeout: 60
      Triggers:
        - Name: ImageAI
          Type: HTTP
          Parameters:
            AuthType: ANONYMOUS
            Methods:
              - GET
              - POST
              - PUT
            Domains:
              - Domain: Auto
```
In both the code and the configuration you can see the directory /mnt/auto/. This is the path where the NAS volume is mounted; you only need to write it into the code in advance. The next step is to create the NAS volume and configure its mount point.
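Since forgetting to sync the model to NAS is easy, a small guard can fail fast with a clear message instead of a deep TensorFlow error when the weights file is absent from the mount. This helper is hypothetical (not part of the original code); the mount root and file name are arguments rather than hard-coded assumptions.

```python
import os

def resolve_model_path(mount_root, model_file):
    """Return the model path under the NAS mount, failing fast if absent.

    mount_root is the mount point (e.g. /mnt/auto) and model_file is the
    weights file name; both are illustrative parameters.
    """
    path = os.path.join(mount_root, "model", model_file)
    if not os.path.isfile(path):
        raise FileNotFoundError(
            f"model not found at {path}; did the NAS sync of ./src/model run?")
    return path
```

The handler's module-level load could then call this once at init and surface a readable error in the function logs.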
Project deployment and testing
After completing the above steps, you can deploy the project with:

```shell
s deploy
```

When the deployment completes, you can see the result.
After the deployment is complete, install the dependencies with:

```shell
s install docker
```
When the installation finishes, you will see a .fun directory generated under the project directory. It contains the dependency files packaged through Docker — exactly the dependencies we declared in the requirements.txt file.
When that is done, package and upload the dependency directory to NAS:

```shell
s nas sync ./src/.fun
```

After it succeeds, upload the model directory as well:
```shell
s nas sync ./src/model
```
When you are finished, you can view the directory details with:

```shell
s nas ls --all
```
When that is done, we can write a test script, reusing the same test picture as before:
```python
import json
import urllib.request
import base64
import time

with open("picture.jpg", 'rb') as f:
    data = base64.b64encode(f.read()).decode()

url = 'http://35685264-1295939377467795.test.functioncompute.com/'

timeStart = time.time()
print(urllib.request.urlopen(urllib.request.Request(
    url=url,
    data=json.dumps({'image': data}).encode("utf-8")
)).read().decode("utf-8"))
print("Time:", time.time() - timeStart)
```
You can see the results:
{"Response": {"laptop": "71.43893837928772", "notebook": "16.265614330768585", "modem": "4.899385944008827", "hard_disc": "4.007565602660179", "mouse": "1.2981869280338287"}, "ResponseId": "1d74ae7e-298a-11eb-8374-024215000701"} Time: 29.16020894050598
As you can see, Function Compute returns the expected result, but the overall time is far longer than expected: nearly 30 seconds. At this point, let's run the test script again:
{"Response": {"laptop": "71.43893837928772", "notebook": "16.265614330768585", "modem": "4.899385944008827", "hard_disc": "4.007565602660179", "mouse": "1.2981869280338287"}, "ResponseId": "4b8be48a-298a-11eb-ba97-024215000501"} Time: 1.1511380672454834
As you can see, the second execution took only 1.15 seconds — a full 28 seconds faster than the first.
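The cold/warm gap can be quantified directly from the two measurements above:

```python
cold_start = 29.16   # first request, in seconds (from the test above)
warm_start = 1.15    # second request, in seconds

saved = cold_start - warm_start
print(f"warm start saves {saved:.2f} s "
      f"(about {cold_start / warm_start:.0f}x faster)")
```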
Project optimization
From the last round of testing, we can see that the gap between the first and the second invocation comes mainly from the very long time the function spends loading the model on a cold start.
We can verify this with a simple local test:
```python
# -*- coding: utf-8 -*-
import time

timeStart = time.time()

# Model load
from imageai.Prediction import ImagePrediction
prediction = ImagePrediction()
prediction.setModelTypeAsResNet()
prediction.setModelPath("resnet50_weights_tf_dim_ordering_tf_kernels.h5")
prediction.loadModel()
print("Load Time:", time.time() - timeStart)

timeStart = time.time()
predictions, probabilities = prediction.predictImage("./picture.jpg", result_count=5)
for eachPrediction, eachProbability in zip(predictions, probabilities):
    print(str(eachPrediction) + " : " + str(eachProbability))
print("Predict Time:", time.time() - timeStart)
```
Execution result:
```
Load Time: 5.549695014953613
laptop : 71.43893241882324
notebook : 16.265612840652466
modem : 4.899394512176514
hard_disc : 4.007557779550552
mouse : 1.2981942854821682
Predict Time: 0.8137111663818359
```
As you can see, loading the imageai module and the model file takes about 5.5 seconds in total, while the prediction itself takes less than 1 second. Machines in Function Compute are generally less powerful than my local machine, so to avoid a long response time from loading the model on every request, the deployed code places the model-loading process outside the entry method. The advantage is that not every invocation incurs a cold start: as long as the instance is reused, objects created at initialization — the loaded model, the imported dependencies — can be reused without being loaded again.
So in a real project, to keep instances from repeatedly loading and creating resources under frequent requests, we can move those resources into the initialization phase. This greatly improves the overall performance of the project, and combined with the provisioned capacity that vendors offer, the negative impact of function cold starts can be essentially eliminated.
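The optimization described above — create expensive resources once at module initialization and reuse them across invocations — can be sketched independently of ImageAI. The `load_model` stub below is an assumption standing in for the real 5-second model load:

```python
import time

def load_model():
    """Stand-in for the expensive ImageAI model load."""
    time.sleep(0.2)                # pretend this takes several seconds
    return {"name": "resnet50"}

MODEL = load_model()               # runs once per container (cold start)

def handler(event):
    # Warm invocations reuse MODEL instead of reloading it.
    return MODEL["name"]

t0 = time.time()
handler(None)
print(f"warm call took {time.time() - t0:.3f}s")   # no reload, near-instant
```

Only the first invocation of a container pays the `load_model` cost; every later request runs just the handler body.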
Thank you for reading. That concludes "how to use Python to do image classification and prediction under a Serverless framework". I hope this article has given you a deeper understanding of the topic; more related articles will follow.