
How to realize a face recognition function in NodeJS

2025-01-18 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

In this issue, the editor will show you how to implement a face recognition function in Node.js. The article is rich in content and approaches the topic from a professional point of view; I hope you can get something out of reading it.

Since Baidu provides a Node.js version of its SDK, we can use the SDK to get into development quickly. First, go to Baidu AI:

https://ai.baidu.com/

Log in to the console and click Face Recognition in the left-hand menu to start creating an application.

After the application is created successfully, enter its management page, where you can get the APPID, API Key, and Secret Key needed for development. After obtaining these three values, download the Node.js version of the face recognition SDK:

https://ai.baidu.com/sdk#bfr

With all the preparatory work complete, let's create a new Node.js project and put the extracted SDK into it.

You can see that we have put the unzipped SDK into our project. Now change into the SDK directory in the terminal and install the SDK's dependencies with:

npm install

Then go back to the project root and install the whole SDK as a dependency there, with:

npm install baidu-aip-sdk

With the SDK introduced into our local project, the setup is done. Let's take a look at the SDK documentation:

We can write a route to test the first interface: face detection.

First, store the three keys we obtained in config.js.

Then create a client object; the service interfaces are called through it.

The SDK already wraps the request module for sending network requests, and we can set request parameters according to our own needs.
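As a sketch (assuming the baidu-aip-sdk npm package and placeholder keys), config.js and the client creation might look like this:

```javascript
// config.js — the three values from the Baidu console (placeholders here):
// module.exports = {
//   APP_ID: 'your-app-id',
//   API_KEY: 'your-api-key',
//   SECRET_KEY: 'your-secret-key'
// };

// app.js — create the face client from those keys.
const AipFaceClient = require('baidu-aip-sdk').face;
const { APP_ID, API_KEY, SECRET_KEY } = require('./config');

const client = new AipFaceClient(APP_ID, API_KEY, SECRET_KEY);

// The SDK wraps the request module; its options can be tuned globally,
// for example a connection timeout:
const HttpClient = require('baidu-aip-sdk').HttpClient;
HttpClient.setRequestOptions({ timeout: 5000 });
```

The keys above are placeholders; substitute the values from your own application's management page.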

Next, we can call face detection. The SDK has encapsulated this interface for us. Its parameters are divided into required and optional ones. What's the difference? We can take a look at the API request parameters:

You can see that we must pass two parameters: image and imageType:

image is the picture itself, which can be a URL, a Base64 string, or a FACE_TOKEN; imageType declares which of those it is, with available values URL, BASE64, and FACE_TOKEN.

The other three parameters are optional, so let's first see how the interface is called without them:

client.detect(image, imageType)

We call this method, passing a network image URL as image and URL as imageType, to see the effect.

Calling the API, we can see that the image is parsed and a series of data is returned.

What are all these parameters for? We can check the description of the return parameters in the official documentation:

Because there are too many return parameters to screenshot one by one, here is the Node SDK document, which you can consult yourself:

https://cloud.baidu.com/doc/FACE/Face-Node-SDK.html#.E4.BA.BA.E8.84.B8.E6.A3.80.E6.B5.8B

With the documentation, we can know the meaning of the return parameters.

face_num: the number of faces detected in the picture
face_list: the detailed information for each detected face
location: the position of the face within the picture; the client can render effects over the face based on this
face_token: a unique identifier generated when the image is parsed; it can be passed directly in later calls in place of the image
face_probability: the probability that the region is a human face, with 1 being the maximum
angle: the degree of rotation of the face
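A minimal sketch of reading those fields from a detect response; the sample object below is abbreviated and hypothetical, following the documented shape, and the helper function is our own:

```javascript
// Abbreviated, hypothetical detect response following the documented shape.
const sampleResponse = {
  error_code: 0,
  error_msg: 'SUCCESS',
  result: {
    face_num: 1,
    face_list: [
      {
        face_token: 'abc123',
        location: { left: 117, top: 131, width: 172, height: 170, rotation: 4 },
        face_probability: 1,
        angle: { yaw: -0.34, pitch: 7.43, roll: 5.73 }
      }
    ]
  }
};

// Pull out the fields discussed above from a detect response.
function summarizeDetect(res) {
  if (res.error_code !== 0) return null; // non-zero error_code means failure
  return res.result.face_list.map(f => ({
    token: f.face_token,
    probability: f.face_probability,
    box: f.location
  }));
}

console.log(summarizeDetect(sampleResponse));
```

In a real route you would apply a function like this to the object resolved by client.detect.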

The result returns the location of the face relative to the image, so the client can highlight the face based on that information. Now, what about the optional parameters? Let's look at the request parameters again:

We have three optional parameters:

face_field: specifies which information to return
max_face_num: the maximum number of faces to detect
face_type: the type of photo

When we call the detect method with optional parameters, we pass them in an options object as follows:

client.detect(image, imageType, options)

We use this form to ask for age in the return, cap detection at two faces, and set the photo type to a life photo.

We can take a look at the results:

We can see that an extra age field is returned, containing the estimated age from parsing.
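A sketch of building the options object for that call — the parameter names follow the documented API, while the helper function and its argument names are our own:

```javascript
// Hypothetical helper for assembling detect's optional parameters.
function buildDetectOptions({ fields, maxFaces, faceType }) {
  return {
    face_field: fields.join(','),   // e.g. 'age' or 'age,beauty'
    max_face_num: String(maxFaces), // at most this many faces are returned
    face_type: faceType             // 'LIVE' for an ordinary life photo
  };
}

const options = buildDetectOptions({ fields: ['age'], maxFaces: 2, faceType: 'LIVE' });
// client.detect(imageUrl, 'URL', options)  // the actual call needs credentials
console.log(options);
```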

So the first interface works, but a question remains: we only called the detect method, so how did the parsed information come back? The complete process should actually be:

1. Obtain an access_token credential using the API Key and Secret Key.
2. Initiate an HTTP request to the face detection API with that token.
3. Receive the data after successful detection.

But obtaining the access_token and initiating the HTTP request are already encapsulated in the SDK, so we can use it directly. We can print the request parameters inside the encapsulated HTTP request we just called, then initiate the request again:

We can see that the face detection request is fully encapsulated: you only need to call the detect method, and the SDK handles all the intermediate steps for you.
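As a sketch of the first step the SDK performs, the token endpoint follows Baidu's documented client_credentials OAuth flow; the keys below are placeholders and the helper is our own:

```javascript
// Build the URL used to exchange the API Key and Secret Key for an
// access_token, per Baidu's client_credentials OAuth flow.
function tokenUrl(apiKey, secretKey) {
  return 'https://aip.baidubce.com/oauth/2.0/token' +
    '?grant_type=client_credentials' +
    `&client_id=${apiKey}` +
    `&client_secret=${secretKey}`;
}

console.log(tokenUrl('MY_API_KEY', 'MY_SECRET_KEY'));
```

The SDK requests this URL for you and caches the resulting token before calling the detection endpoint.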

Face registration

Next, let's look at the face registration API. Face registration binds a face to the groupId and userId we set, and is called like this:

client.addUser(image, imageType, groupId, userId)

Let's first take a look at the list of parameters that need to be carried to initiate the request:

The first two parameters are the same as before; group_id is the group id and user_id is the user id. We can write an interface to see the effect:

We call the client.addUser() method to register a face. Let's look at the result:

As usual, here is the documentation's explanation of the return parameters:

Like face detection, registration accepts optional parameters such as user data and image quality control. If a face is registered with user data, that data is bound to it. I won't go into detail here; consult the documentation and test it yourself.
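A sketch of a registration call with those optional parameters — user_info and quality_control follow the documented names, while the id check is our own guard based on the documented character rules:

```javascript
// group_id and user_id may only contain letters, digits, and underscores
// (per the documented rules) — a small guard before registering:
function isValidId(id) {
  return /^[A-Za-z0-9_]+$/.test(id);
}

const groupId = 'group_1';
const userId = 'user_1';

if (isValidId(groupId) && isValidId(userId)) {
  // client.addUser(image, 'URL', groupId, userId, {
  //   user_info: 'demo user',     // optional data bound to the face
  //   quality_control: 'NORMAL'   // reject low-quality images
  // });
}

console.log(isValidId('group_1'), isValidId('group-1'));
```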

Face search

We registered a face with the last interface; now we can call the face search interface to see whether that face can be found. First, the method to call:

client.search(image, imageType, groupIdList)

The third parameter deserves explanation: it specifies which group to search for the face in; to search several groups at once, separate their ids with commas. As usual, let's look at the request parameters for this method:

We call the face search API without optional parameters to test:

Next, call it to see whether the face search works:

You can see that we found the face in group 1. You can also test what happens when you query a group where the face does not exist.
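A sketch of searching across groups — the comma-joined list matches the documented format, and the helper name is our own:

```javascript
// Join group ids into the comma-separated list that search expects.
function groupList(groups) {
  return groups.join(',');
}

// client.search(imageUrl, 'URL', groupList(['group_1', 'group_2']))
console.log(groupList(['group_1', 'group_2']));
```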

The above is how to implement a face recognition function in Node.js, as shared by the editor. If you happen to have similar questions, the analysis above may help you understand; if you want to learn more, you are welcome to follow the industry information channel.
