
How to realize the function of face Detection in Android

2025-01-30 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article introduces how to implement face detection in Android, walking through a practical example. The steps are simple and quick to follow, and I hope this article helps you solve the problem.

1. Project configuration

First, to add the Vision library to your project, you need to import Play Services 8.1 or higher. This tutorial imports only the Play Services Vision library. Open your project's build.gradle file and add the following dependency.

compile 'com.google.android.gms:play-services-vision:8.1.0'

Once you have included Play Services in your project, close build.gradle and open AndroidManifest.xml. Add the following data to your manifest file to declare the face-detection dependency, so the Vision library knows your application will use it.
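The manifest entry referred to above is a meta-data element inside the application element. The snippet below is the standard form for the Vision library, assuming you only need the face detector:

```xml
<!-- Declares that this app uses the face detector, so Play Services
     can download the required native libraries ahead of time. -->
<meta-data
    android:name="com.google.android.gms.vision.DEPENDENCIES"
    android:value="face" />
```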

Once you have finished configuring AndroidManifest.xml, close the file. Next, create a new class file, FaceOverlayView.java. This class extends View and performs the face detection logic, displays the analyzed image, and draws information over the image to illustrate the results.

Now we add the member variables and implement the constructors. The Bitmap object stores the bitmap data to be analyzed, and the SparseArray stores the faces found in the image.

public class FaceOverlayView extends View {

    private Bitmap mBitmap;
    private SparseArray<Face> mFaces;

    public FaceOverlayView(Context context) {
        this(context, null);
    }

    public FaceOverlayView(Context context, AttributeSet attrs) {
        this(context, attrs, 0);
    }

    public FaceOverlayView(Context context, AttributeSet attrs, int defStyleAttr) {
        super(context, attrs, defStyleAttr);
    }
}

Then we add a setBitmap(Bitmap bitmap) method to the FaceOverlayView class. For now it only stores the bitmap object; we will use it to analyze the bitmap data later.

public void setBitmap(Bitmap bitmap) {
    mBitmap = bitmap;
}

Next, you need a bitmap image. I've added one to the sample project on GitHub, but you can use any picture you like and see whether it works. Once you have chosen a picture, put it in the res/raw directory. This tutorial assumes the image is named face.jpg.

Once the picture is in the res/raw directory, open the res/layout/activity_main.xml file and reference a FaceOverlayView object in the layout so that it is displayed in MainActivity.
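A minimal activity_main.xml along these lines would work. The package prefix com.example.facedetection is a placeholder (use your own package where FaceOverlayView lives), while the face_overlay id matches the lookup MainActivity performs:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- The fully-qualified class name below is a placeholder;
     replace it with the package containing FaceOverlayView. -->
<com.example.facedetection.FaceOverlayView
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/face_overlay"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
```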

After defining the layout, open MainActivity and get a reference to the FaceOverlayView instance in the onCreate() function. Read face.jpg from the raw folder through an input stream and decode it into bitmap data. Once you have the bitmap, set it on the custom view by calling the setBitmap method of FaceOverlayView.

public class MainActivity extends AppCompatActivity {

    private FaceOverlayView mFaceOverlayView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        mFaceOverlayView = (FaceOverlayView) findViewById(R.id.face_overlay);

        InputStream stream = getResources().openRawResource(R.raw.face);
        Bitmap bitmap = BitmapFactory.decodeStream(stream);
        mFaceOverlayView.setBitmap(bitmap);
    }
}

2. Detecting faces

Now that your project is set up, it's time to start detecting faces. Define a FaceDetector object in the setBitmap(Bitmap bitmap) method using FaceDetector.Builder, which lets you set several parameters that control the speed of face detection and what other data the FaceDetector produces.

The right settings depend on what your application does. For example, enabling facial landmark search makes face detection considerably slower; as with most things in programming, each option has trade-offs. To learn more about FaceDetector.Builder, see the official documentation on the Android developer website.

FaceDetector detector = new FaceDetector.Builder(getContext())
        .setTrackingEnabled(false)
        .setLandmarkType(FaceDetector.ALL_LANDMARKS)
        .setMode(FaceDetector.FAST_MODE)
        .build();

You also need to check whether the FaceDetector is operational. The first time a user runs face detection on a device, Play Services must download a set of small native libraries to handle the application's requests. Although this usually happens before the application starts, you should still handle the failure case.

If the FaceDetector is operational, convert the bitmap data into a Frame object and pass it to the detect function for face analysis. When the analysis is done, release the detector to prevent a memory leak. Finally, call the invalidate() function to trigger a view refresh.

if (!detector.isOperational()) {
    // Handle the contingency
} else {
    Frame frame = new Frame.Builder().setBitmap(bitmap).build();
    mFaces = detector.detect(frame);
    detector.release();
}
invalidate();

Now that the faces in the picture have been found, you can use the data — for example, to draw a box around each detected face. After the invalidate() call, all the necessary drawing logic goes in the onDraw(Canvas canvas) function: make sure the bitmap and face data are valid, draw the bitmap onto the canvas, and then draw a box around each face.

Because different devices have different resolutions, you need to scale the bitmap so that the picture is always displayed correctly.

@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);

    if ((mBitmap != null) && (mFaces != null)) {
        double scale = drawBitmap(canvas);
        drawFaceBox(canvas, scale);
    }
}

The drawBitmap(Canvas canvas) method draws the image scaled to fit the canvas and returns the scale factor for later use.

private double drawBitmap(Canvas canvas) {
    double viewWidth = canvas.getWidth();
    double viewHeight = canvas.getHeight();
    double imageWidth = mBitmap.getWidth();
    double imageHeight = mBitmap.getHeight();

    double scale = Math.min(viewWidth / imageWidth, viewHeight / imageHeight);

    Rect destBounds = new Rect(0, 0, (int) (imageWidth * scale), (int) (imageHeight * scale));
    canvas.drawBitmap(mBitmap, null, destBounds, null);
    return scale;
}

The drawFaceBox(Canvas canvas, double scale) method is more interesting. The detected faces are stored in mFaces as position data, and this method draws a green rectangle at each detected face position, based on that position's width and height.

You need to define your own Paint object, iterate over the SparseArray to get each face's position, width, and height, and then use that information to draw rectangles on the canvas.

private void drawFaceBox(Canvas canvas, double scale) {
    // paint should be defined as a member variable rather than
    // being created on each onDraw request, but left here for
    // emphasis.
    Paint paint = new Paint();
    paint.setColor(Color.GREEN);
    paint.setStyle(Paint.Style.STROKE);
    paint.setStrokeWidth(5);

    float left = 0;
    float top = 0;
    float right = 0;
    float bottom = 0;

    for (int i = 0; i < mFaces.size(); i++) {
        Face face = mFaces.valueAt(i);

        left = (float) (face.getPosition().x * scale);
        top = (float) (face.getPosition().y * scale);
        right = (float) scale * (face.getPosition().x + face.getWidth());
        bottom = (float) scale * (face.getPosition().y + face.getHeight());

        canvas.drawRect(left, top, right, bottom, paint);
    }
}

Run your application now and you will see every detected face surrounded by a rectangle. Note that the face detection API version we are using is quite new, so it may not detect every face. You can tweak the settings in FaceDetector.Builder to give it more to work with, but I can't guarantee that will help.

3. Understanding facial landmarks

Facial landmarks are special points on a face. The face detection API does not rely on landmarks to detect a face; rather, landmarks can only be detected after a face has been found. That is why landmark detection is an optional setting, enabled through FaceDetector.Builder.

You can use landmark information as an additional data source, for example to find where a subject's eyes are so you can respond accordingly in your application. Twelve landmarks can potentially be detected: the left and right eyes, the left and right ears, the left and right ear tips, the base of the nose, the left and right cheeks, the left and right corners of the mouth, and the bottom of the mouth.

Which landmarks are detected depends on the angle of the detected face. For example, if someone is facing sideways, only one of their eyes may be visible, which means the other eye will not be detected. The table below outlines which landmarks should be detectable, based on the Euler Y angle of the face (turned left or right).

Euler angle Y    Visible landmarks
< -36°           left eye, left mouth corner, left ear, nose, left cheek
-36° to -12°     left mouth corner, nose, bottom of mouth, right eye, left eye, left cheek, left ear tip
-12° to 12°      right eye, left eye, nose, left cheek, right cheek, left mouth corner, right mouth corner, bottom of mouth
12° to 36°       right mouth corner, nose, bottom of mouth, left eye, right eye, right cheek, right ear tip
> 36°            right eye, right mouth corner, right ear, nose, right cheek
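The angle bands in the table above can be captured in a small helper. This is an illustrative sketch only — the class and method names are my own and not part of the Vision API — classifying a face's Euler Y angle into the pose bands the table uses:

```java
// Hypothetical helper (not part of the Play Services Vision API):
// maps a face's Euler Y angle to the pose bands from the table above.
public class EulerBands {

    public static String band(float eulerY) {
        if (eulerY < -36f) return "left profile";
        if (eulerY < -12f) return "slightly left";
        if (eulerY <= 12f) return "frontal";
        if (eulerY <= 36f) return "slightly right";
        return "right profile";
    }

    public static void main(String[] args) {
        // A frontal face should fall in the middle band.
        System.out.println(band(0f));   // frontal
        // A strongly turned face falls in a profile band.
        System.out.println(band(-40f)); // left profile
    }
}
```

You could feed face.getEulerY() into such a helper to decide which landmarks it is reasonable to expect before reading them.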

If you have turned on facial landmark detection, the landmark information is easy to use: just call the getLandmarks() function on a Face to get a list of landmarks you can work with directly.

In this tutorial, add a new method, drawFaceLandmarks(Canvas canvas, double scale), that draws a small circle over each detected facial landmark, and call it from onDraw(Canvas canvas) in place of drawFaceBox. The method takes each landmark's position as the center, applies the bitmap scale, and draws a circle around the point.

private void drawFaceLandmarks(Canvas canvas, double scale) {
    Paint paint = new Paint();
    paint.setColor(Color.GREEN);
    paint.setStyle(Paint.Style.STROKE);
    paint.setStrokeWidth(5);

    for (int i = 0; i < mFaces.size(); i++) {
        Face face = mFaces.valueAt(i);

        for (Landmark landmark : face.getLandmarks()) {
            int cx = (int) (landmark.getPosition().x * scale);
            int cy = (int) (landmark.getPosition().y * scale);
            canvas.drawCircle(cx, cy, 10, paint);
        }
    }
}

After calling this method, you should see each facial landmark circled by a small green circle.

4. Additional facial data

Face positions and facial landmarks are very useful, but the Face class also has built-in methods that provide more detection data. Through the return values of getIsSmilingProbability(), getIsLeftEyeOpenProbability(), and getIsRightEyeOpenProbability() (each ranging from 0.0 to 1.0), you can tell whether a person is smiling and whether each eye is open; the closer a value is to 1.0, the more likely it is.

You can also read the Euler Y and Z angles of each detected face. The Z angle is always reported, but to receive the X angle as well you must use accurate mode during detection. Here is an example of how to log these values.

private void logFaceData() {
    float smilingProbability;
    float leftEyeOpenProbability;
    float rightEyeOpenProbability;
    float eulerY;
    float eulerZ;

    for (int i = 0; i < mFaces.size(); i++) {
        Face face = mFaces.valueAt(i);

        smilingProbability = face.getIsSmilingProbability();
        leftEyeOpenProbability = face.getIsLeftEyeOpenProbability();
        rightEyeOpenProbability = face.getIsRightEyeOpenProbability();
        eulerY = face.getEulerY();
        eulerZ = face.getEulerZ();

        Log.e("Tuts+ Face Detection", "Smiling: " + smilingProbability);
        Log.e("Tuts+ Face Detection", "Left eye open: " + leftEyeOpenProbability);
        Log.e("Tuts+ Face Detection", "Right eye open: " + rightEyeOpenProbability);
        Log.e("Tuts+ Face Detection", "Euler Y: " + eulerY);
        Log.e("Tuts+ Face Detection", "Euler Z: " + eulerZ);
    }
}

This concludes "how to realize the function of face detection in Android". Thank you for reading.



