Many newcomers are unsure how to handle image classification and prediction with just a little Python code. To help with that, this article walks through the process in detail; readers who need it can follow along and hopefully take something away.
Image classification is a hot topic in the field of artificial intelligence. Broadly speaking, it means distinguishing different types of targets according to the features reflected in image data. Image classification uses a computer to analyze an image quantitatively and assign each pixel or region to one of several categories, instead of relying on human visual interpretation.
We also encounter image classification in everyday life: for example, we often identify a flower by photographing it, or match a face to a person. Typically, such recognition or classification tools collect data on the client side and compute the result on the server side.
In this article we will use an interesting Python library to quickly build an image classification function on a cloud function, and combine it with an API gateway to expose it as an API, producing an "image classification API" on a Serverless architecture.
Getting started with ImageAI
First, we need a dependency library: ImageAI.
What is ImageAI? The official document describes it as follows:
ImageAI is a Python library designed to enable developers to build applications and systems with deep learning and computer vision capabilities in a few simple lines of code. Based on the principle of simplicity, ImageAI supports state-of-the-art machine learning algorithms for image prediction, custom image prediction, object detection, video detection, video object tracking and image prediction training. ImageAI currently supports image prediction and training using four different machine learning algorithms trained on the ImageNet-1000 dataset. ImageAI also supports object detection, video detection, and object tracking using RetinaNet trained on the COCO dataset. Eventually, ImageAI will provide broader and more professional support for computer vision, including but not limited to image recognition in special environments and domains.
Simply put, the ImageAI library lets users perform basic image recognition and video object extraction out of the box. Although ImageAI ships with pre-trained models and datasets, we can also carry out additional training and customization according to our own needs.
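For instance, if the bundled ImageNet classes are not enough, ImageAI 2.x exposes a custom training module. The following is a minimal sketch based on that API; the "pets" dataset directory and the class count are hypothetical, and the data must be laid out in train/ and test/ subfolders, one per class.

from imageai.Prediction.Custom import ModelTraining

model_trainer = ModelTraining()
model_trainer.setModelTypeAsResNet()
# "pets" is a hypothetical dataset folder containing train/<class>/ and test/<class>/ images
model_trainer.setDataDirectory("pets")
# num_objects is the number of classes; num_experiments is the number of training epochs
model_trainer.trainModel(num_objects=4, num_experiments=100, enhance_data=True, batch_size=32, show_network_summary=True)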
Its official code gives a simple Demo:
from imageai.Prediction import ImagePrediction
import os

execution_path = os.getcwd()

prediction = ImagePrediction()
prediction.setModelTypeAsResNet()
prediction.setModelPath(os.path.join(execution_path, "resnet50_weights_tf_dim_ordering_tf_kernels.h5"))
prediction.loadModel()

predictions, probabilities = prediction.predictImage(os.path.join(execution_path, "1.jpg"), result_count=5)
for eachPrediction, eachProbability in zip(predictions, probabilities):
    print(eachPrediction + " : " + str(eachProbability))
We can do a preliminary run locally, using the following image as 1.jpg:
The results can be obtained:
convertible : 52.459537982940674
sports_car : 37.61286735534668
pickup : 3.175118938088417
car_wheel : 1.8175017088651657
minivan : 1.7487028613686562

Let ImageAI run on the cloud (deployed on a Serverless architecture)
With the above Demo working, we can consider deploying this module to a cloud function:
First, create a Python project locally: mkdir imageDemo
Create a new file: vim index.py
Because of the particular form a cloud function takes, we modify the Demo slightly:
Put the initialization code at the outer (module) level
Put the prediction part, which needs to run on each trigger, inside the entry method (in this case, main_handler)
The combination of cloud function and API gateway does not handle binary files well, so the image is transmitted as base64 here
The input parameter is defined as {"picture": <base64 of the picture>}, and the output parameter is defined as {"prediction": <the image classification result>}
The code for the implementation is as follows:
from imageai.Prediction import ImagePrediction
import os, base64, json, random

execution_path = os.getcwd()

# Initialization runs at module level, so it only executes on cold start
prediction = ImagePrediction()
prediction.setModelTypeAsSqueezeNet()
prediction.setModelPath(os.path.join(execution_path, "squeezenet_weights_tf_dim_ordering_tf_kernels.h5"))
prediction.loadModel()

def main_handler(event, context):
    # The API gateway delivers the HTTP body as a string carrying {"picture": <base64>}
    imgData = base64.b64decode(json.loads(event["body"])["picture"])
    # Cloud functions can only write to /tmp, so save the image there under a random name
    fileName = '/tmp/' + ''.join(random.sample('zyxwvutsrqponmlkjihgfedcba', 5))
    with open(fileName, 'wb') as f:
        f.write(imgData)
    resultData = {}
    predictions, probabilities = prediction.predictImage(fileName, result_count=5)
    for eachPrediction, eachProbability in zip(predictions, probabilities):
        resultData[eachPrediction] = eachProbability
    return {"prediction": resultData}
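Before deploying, we can give main_handler a quick local smoke test by faking the event the API gateway would deliver. This is a minimal sketch assuming index.py and a test image 1.jpg sit in the current directory, the model file has already been downloaded there, and /tmp exists (i.e. a Linux or macOS machine):

import json, base64
from index import main_handler

with open("1.jpg", "rb") as f:
    body = json.dumps({"picture": base64.b64encode(f.read()).decode()})

# The API gateway trigger passes the HTTP request body under event["body"]
print(main_handler({"body": body}, None))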
After creating the file, download the model it depends on:
SqueezeNet (file size: 4.82 MB, shortest prediction time, moderate accuracy)
ResNet50 by Microsoft Research (file size: 98 MB, fast prediction time and high accuracy)
InceptionV3 by Google Brain team (file size: 91.6 MB, slower prediction time and higher accuracy)
DenseNet121 by Facebook AI Research (file size: 31.6 MB, slow prediction time, highest accuracy)
Since this is only for testing, we can choose the smaller model, SqueezeNet:
Copy the model file address in the official document:
Download it directly using wget:
wget https://github.com/OlafenwaMoses/ImageAI/releases/download/1.0/squeezenet_weights_tf_dim_ordering_tf_kernels.h5
Next, do a dependency installation:
Since Tencent Cloud Serverless products do not support installing dependencies online in the Python runtime, you need to package the dependencies manually and upload them. Many Python dependencies compile binary components during installation, so packages built in one environment may not work in another.
Therefore, the best approach is to package on the matching operating system and language version. Here we package the dependencies in a CentOS + Python 3.6 environment.
For many macOS and Windows users this is not a friendly process, so to make things easier I have built an online dependency-packaging tool on a Serverless architecture, which you can use directly for packaging:
After the compressed package is generated, download it, decompress it, and put it into your own project:
The final step is to create a serverless.yaml
ImageDemo:
  component: "@serverless/tencent-scf"
  inputs:
    name: imageDemo
    codeUri: ./
    handler: index.main_handler
    runtime: Python3.6
    region: ap-guangzhou
    description: image recognition / classification Demo
    memorySize: 256
    timeout: 10
    events:
      - apigw:
          name: imageDemo_apigw_service
          parameters:
            protocols:
              - http
            serviceName: serverless
            description: image recognition / classification Demo API
            environment: release
            endpoints:
              - path: /image
                method: ANY
After that, run sls --debug to deploy. During deployment you will be prompted to log in by scanning a QR code; after logging in, just wait, and once the deployment completes you will see the deployed address.
Basic test
We test it with Python; the interface address is the address just output by the deployment plus /image, for example:
import json
import urllib.request
import base64

with open("1.jpg", 'rb') as f:
    base64_data = base64.b64encode(f.read())
    s = base64_data.decode()

url = 'http://service-9p7hbgvg-1256773370.gz.apigw.tencentcs.com/release/image'
print(urllib.request.urlopen(urllib.request.Request(
    url=url,
    data=json.dumps({'picture': s}).encode("utf-8")
)).read().decode("utf-8"))
Search for a picture on the Internet:
Get the running result:
{"prediction": {"cheetah": 83.12643766403198, "Irish_terrier": 2.315458096563816, "lion": 1.847699833470726, "teddy": 1.6655176877975464, "baboon": 1.5562783926725388}}
This result shows that the basic classification/prediction of the image succeeded. To check the latency of the interface, we can modify the program slightly:
import json
import urllib.request
import base64, time

for i in range(0, 10):
    start_time = time.time()
    with open("1.jpg", 'rb') as f:
        base64_data = base64.b64encode(f.read())
        s = base64_data.decode()
    url = 'http://service-9p7hbgvg-1256773370.gz.apigw.tencentcs.com/release/image'
    print(urllib.request.urlopen(urllib.request.Request(
        url=url,
        data=json.dumps({'picture': s}).encode("utf-8")
    )).read().decode("utf-8"))
    print("cost: ", time.time() - start_time)
Output result:
{"prediction": {"cheetah": 83.12643766403198, "Irish_terrier": 2.315458096563816, "lion": 1.847699833470726, "teddy": 1.6655176877975464, "baboon": 1.5562783926725388} cost: 2.1161561012268066 {"prediction": {"cheetah": 83.126437603198, "Irish_terrier": 2.315458096563816, "lion": 1.847699833470726, "teddy": 1.6655176877975464, "baboon": 1.55627839725388} cost: 1.125925304932 "prediction": {cheetah: {83.643736403198 " "Irish_terrier": 2.315458096563816, "lion": 1.8476998334707726, "teddy": 1.6655176877975464, "baboon": 1.5562783926725388} cost: 1.33227705950537 {"prediction": {"cheetah": 83.12643766403198, "Irish_terrier": 2.315458096563816, "lion": 1.8476998433470726, "teddy": 1.6655176877975464, "baboon": 1.55627839725388}} cost: 1.3562259674072266 {prediction ": {" cheetah ": 83.12643766403198," Irish_terrier ": 2.31580965638726" lion ": 1.847647433470726 "teddy": 1.6655176877975464, "baboon": 1.55627839725388} cost: 1.0821418762207 {"prediction": {"cheetah": 83.12643766403198, "Irish_terrier": 2.315458096563816, "lion": 1.8476998433470726, "teddy": 1.55176877975464, "baboon": 1.55627839725388} cost: 1.4290671348175777 {"prediction": {"cheetah": 83.12643766403198, "Irish_terrier": 2.15458096563816, "lion": 1.847699833470726, "teddy": 1.551768775464} "baboon": 1.5562783926725388} cost: 1.5917718410491943 {"prediction": {"cheetah": 83.12643766403198, "Irish_terrier": 2.315458096563816, "lion": 1.847699833470726, "teddy": 1.6655176877975464, "baboon": 1.5562783926725388} cost: 1.17279005065918 {"prediction": {"cheetah": 83.126437663198, "Irish_terrier": 2.315804596563816, "lion": 1.847699833470726, "teddy": 1.6655176877975464 "baboon": 1.5562783926725388} cost: 2.962592840194702 {"prediction": {"cheetah": 83.12643766403198, "Irish_terrier": 2.315458096563816, "lion": 1.847699833470726, "teddy": 1.6655176877975464, "baboon": 1.5562783926725388}} cost: 1.22480010986312
From this set of data we can see that the time per request is mostly around 1-1.5 seconds, with the occasional slower request.
Of course, if you want to test the performance of the interface further, you could, for example, run a concurrency test to see how it behaves under concurrent requests; a minimal sketch is given below.
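Here is a minimal sketch of such a concurrency test using the standard library's ThreadPoolExecutor; the endpoint URL and the 1.jpg test image are the same ones used above, and the worker and request counts are arbitrary choices.

import json, time, base64, urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = 'http://service-9p7hbgvg-1256773370.gz.apigw.tencentcs.com/release/image'

with open("1.jpg", 'rb') as f:
    PAYLOAD = json.dumps({'picture': base64.b64encode(f.read()).decode()}).encode("utf-8")

def call_once(_):
    # Send one request and return its latency in seconds
    start = time.time()
    urllib.request.urlopen(urllib.request.Request(url=URL, data=PAYLOAD)).read()
    return time.time() - start

# Fire 10 requests with 5 concurrent workers and print each latency
with ThreadPoolExecutor(max_workers=5) as pool:
    for cost in pool.map(call_once, range(10)):
        print("cost:", cost)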
At this point, we have built a small Python image recognition/classification tool on a Serverless architecture.
It is fair to say that many AI-related applications suit the Serverless architecture. This article implemented an image classification/prediction interface using an existing dependency library. ImageAI offers a relatively high degree of freedom and can be used to train custom models according to your own needs. Hopefully this article serves as a starting point that encourages more people to deploy their own "artificial intelligence" APIs on a Serverless architecture.