
How to implement face detection and real-time camera key point detection with OpenCV + MediaPipe


This article demonstrates how to implement face detection and real-time camera key point detection with OpenCV and MediaPipe. The content is easy to follow and clearly organized, and should help resolve your doubts; let's work through it step by step below.
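The walkthrough below assumes that opencv-python, mediapipe, matplotlib, tqdm, numpy and open3d are already installed (for example via pip). A quick, optional sanity check is sketched here as a suggestion; it is not part of the original tutorial:

# Verify that the main dependencies import correctly and report their versions
import cv2, mediapipe, open3d
print(cv2.__version__, mediapipe.__version__, open3d.__version__)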

Key point detection for a single face

Define a visual image function

Import the 3D facial key point detection model

Import visualization functions and visualization styles

Read the image

Feed the image into the model to obtain the prediction result

Convert BGR to RGB

Feed the RGB image into the model to obtain the prediction result

Count the number of detected faces

Visualize the face key point detection result

Draw the face and key-region contours and return annotated_image

Draw facial contours, eyebrows, eye sockets, lips

Visualize the face mesh, contours and irises in 3D coordinates

import cv2 as cv
import mediapipe as mp
from tqdm import tqdm
import time
import matplotlib.pyplot as plt

# Define a visual image function
def look_img(img):
    img_RGB = cv.cvtColor(img, cv.COLOR_BGR2RGB)
    plt.imshow(img_RGB)
    plt.show()

# Import the 3D facial key point detection model
mp_face_mesh = mp.solutions.face_mesh
# help(mp_face_mesh.FaceMesh)
model = mp_face_mesh.FaceMesh(
    static_image_mode=True,        # True: still images / False: real-time camera stream
    refine_landmarks=True,         # use the Attention Mesh model
    min_detection_confidence=0.5,  # confidence threshold; the closer to 1, the stricter
    min_tracking_confidence=0.5,   # tracking threshold
)

# Import visualization functions and visualization styles
mp_drawing = mp.solutions.drawing_utils
mp_drawing_styles = mp.solutions.drawing_styles

# Read the image
img = cv.imread('img.png')
# look_img(img)

# Feed the image into the model to obtain the prediction result
# BGR to RGB
img_RGB = cv.cvtColor(img, cv.COLOR_BGR2RGB)
# Feed the RGB image into the model to obtain the prediction result
results = model.process(img_RGB)

# Number of detected faces
print(len(results.multi_face_landmarks))

# Visualize the face key point detection result
# Draw the face and key-region contours and return annotated_image
annotated_image = img.copy()
if results.multi_face_landmarks:                          # if at least one face was detected
    for face_landmarks in results.multi_face_landmarks:   # iterate over every face
        # Draw the face mesh
        mp_drawing.draw_landmarks(
            image=annotated_image,
            landmark_list=face_landmarks,
            connections=mp_face_mesh.FACEMESH_TESSELATION,
            # landmark_drawing_spec sets the key point style; None is the default style (key points hidden)
            # landmark_drawing_spec=mp_drawing.DrawingSpec(thickness=1, circle_radius=2, color=(66, 77, 229)),
            landmark_drawing_spec=None,
            connection_drawing_spec=mp_drawing_styles.get_default_face_mesh_tesselation_style())
        # Draw facial contours, eyebrows, eye sockets, lips
        mp_drawing.draw_landmarks(
            image=annotated_image,
            landmark_list=face_landmarks,
            connections=mp_face_mesh.FACEMESH_CONTOURS,
            # landmark_drawing_spec=mp_drawing.DrawingSpec(thickness=1, circle_radius=2, color=(66, 77, 229)),
            landmark_drawing_spec=None,
            connection_drawing_spec=mp_drawing_styles.get_default_face_mesh_contours_style())
        # Draw the iris (pupil) regions
        mp_drawing.draw_landmarks(
            image=annotated_image,
            landmark_list=face_landmarks,
            connections=mp_face_mesh.FACEMESH_IRISES,
            landmark_drawing_spec=mp_drawing.DrawingSpec(thickness=1, circle_radius=2, color=(128, 256, 229)),
            # landmark_drawing_spec=None,
            connection_drawing_spec=mp_drawing_styles.get_default_face_mesh_iris_connections_style())

cv.imwrite('test.jpg', annotated_image)
look_img(annotated_image)

# Visualize the face mesh, contours and irises in 3D coordinates
mp_drawing.plot_landmarks(results.multi_face_landmarks[0], mp_face_mesh.FACEMESH_TESSELATION)
mp_drawing.plot_landmarks(results.multi_face_landmarks[0], mp_face_mesh.FACEMESH_CONTOURS)
mp_drawing.plot_landmarks(results.multi_face_landmarks[0], mp_face_mesh.FACEMESH_IRISES)
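Each entry in results.multi_face_landmarks stores normalized landmarks (x and y in [0, 1] relative to the image width and height, z as relative depth); with refine_landmarks=True each face has 478 points. Here is a minimal sketch of reading them directly, assuming results and img from the code above; the snippet is an illustration, not part of the original tutorial:

# Convert the normalized landmarks of the first detected face to pixel coordinates
h, w = img.shape[:2]
face = results.multi_face_landmarks[0]
for idx, lm in enumerate(face.landmark[:5]):   # first five points, for illustration only
    x_px, y_px = int(lm.x * w), int(lm.y * h)
    print(idx, x_px, y_px, round(lm.z, 4))     # z is relative depth, not pixels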

Face detection in a single image

You can build an interactive 3D model by calling Open3D; part of the code is similar to the example above.

import cv2 as cv
import mediapipe as mp
import numpy as np
from tqdm import tqdm
import time
import matplotlib.pyplot as plt

# Define a visual image function
def look_img(img):
    img_RGB = cv.cvtColor(img, cv.COLOR_BGR2RGB)
    plt.imshow(img_RGB)
    plt.show()

# Import the 3D facial key point detection model
mp_face_mesh = mp.solutions.face_mesh
# help(mp_face_mesh.FaceMesh)
model = mp_face_mesh.FaceMesh(
    static_image_mode=True,        # True: still images / False: real-time camera stream
    refine_landmarks=True,         # use the Attention Mesh model
    max_num_faces=40,              # detect up to 40 faces
    min_detection_confidence=0.2,  # confidence threshold; the closer to 1, the stricter
    min_tracking_confidence=0.5,   # tracking threshold
)

# Import visualization functions and visualization styles
mp_drawing = mp.solutions.drawing_utils
# mp_drawing_styles = mp.solutions.drawing_styles
draw_spec = mp_drawing.DrawingSpec(thickness=2, circle_radius=1, color=[223, 155, 6])

# Read the image
img = cv.imread('../facial 3D key point detection/dkx.jpg')
# width = img1.shape[1]
# height = img1.shape[0]
# img = cv.resize(img1, (width*10, height*10))
# look_img(img)

# Feed the image into the model to obtain the prediction result
# BGR to RGB
img_RGB = cv.cvtColor(img, cv.COLOR_BGR2RGB)
# Feed the RGB image into the model to obtain the prediction result
results = model.process(img_RGB)

# # Number of detected faces
# print(len(results.multi_face_landmarks))

if results.multi_face_landmarks:
    for face_landmarks in results.multi_face_landmarks:
        mp_drawing.draw_landmarks(
            image=img,
            landmark_list=face_landmarks,
            connections=mp_face_mesh.FACEMESH_CONTOURS,
            landmark_drawing_spec=draw_spec,
            connection_drawing_spec=draw_spec)
else:
    print('no face detected')

look_img(img)

mp_drawing.plot_landmarks(results.multi_face_landmarks[0], mp_face_mesh.FACEMESH_TESSELATION)
mp_drawing.plot_landmarks(results.multi_face_landmarks[1], mp_face_mesh.FACEMESH_CONTOURS)
mp_drawing.plot_landmarks(results.multi_face_landmarks[1], mp_face_mesh.FACEMESH_IRISES)

# Interactive 3D visualization
coords = np.array(results.multi_face_landmarks[0].landmark)
# print(len(coords))
# print(coords)

def get_x(each):
    return each.x

def get_y(each):
    return each.y

def get_z(each):
    return each.z

# Get the x, y and z coordinates of all key points
points_x = np.array(list(map(get_x, coords)))
points_y = np.array(list(map(get_y, coords)))
points_z = np.array(list(map(get_z, coords)))

# Stack the coordinates of the three axes
points = np.vstack((points_x, points_y, points_z)).T
print(points.shape)

import open3d
point_cloud = open3d.geometry.PointCloud()
point_cloud.points = open3d.utility.Vector3dVector(points)
open3d.visualization.draw_geometries([point_cloud])

This opens a 3D point cloud view that can be rotated by dragging the mouse.
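If the plain white point cloud is hard to read, the points can optionally be colored by depth before the viewer opens. This is a minimal sketch assuming points is the (N, 3) landmark array built in the code above; it is an optional extension, not part of the original tutorial:

# Color each landmark by its normalized depth (blue at small z, red at large z)
z = points[:, 2]
z_norm = (z - z.min()) / (z.max() - z.min() + 1e-8)             # scale depth to [0, 1]
colors = np.stack([z_norm, np.zeros_like(z_norm), 1 - z_norm], axis=1)
point_cloud = open3d.geometry.PointCloud()
point_cloud.points = open3d.utility.Vector3dVector(points)
point_cloud.colors = open3d.utility.Vector3dVector(colors)      # per-point RGB in [0, 1]
open3d.visualization.draw_geometries([point_cloud])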

Camera real-time key point detection

Define a visual image function

Import the 3D facial key point detection model

Import visualization functions and visualization styles

Define a function that processes a single frame

The main code is similar to the single-image example above.

import cv2 as cv
import mediapipe as mp
from tqdm import tqdm
import time
import matplotlib.pyplot as plt

# Import the 3D facial key point detection model
mp_face_mesh = mp.solutions.face_mesh
# help(mp_face_mesh.FaceMesh)
model = mp_face_mesh.FaceMesh(
    static_image_mode=False,       # True: still images / False: real-time camera stream
    refine_landmarks=True,         # use the Attention Mesh model
    max_num_faces=5,               # detect up to 5 faces
    min_detection_confidence=0.5,  # confidence threshold; the closer to 1, the stricter
    min_tracking_confidence=0.5,   # tracking threshold
)

# Import visualization functions and visualization styles
mp_drawing = mp.solutions.drawing_utils
mp_drawing_styles = mp.solutions.drawing_styles

# Function that processes a single frame
def process_frame(img):
    # Record the time at which processing of this frame starts
    start_time = time.time()
    img_RGB = cv.cvtColor(img, cv.COLOR_BGR2RGB)
    results = model.process(img_RGB)
    if results.multi_face_landmarks:
        for face_landmarks in results.multi_face_landmarks:
            # Draw the face mesh
            mp_drawing.draw_landmarks(
                image=img,
                landmark_list=face_landmarks,
                connections=mp_face_mesh.FACEMESH_TESSELATION,
                # landmark_drawing_spec sets the key point style; None is the default style (key points hidden)
                # landmark_drawing_spec=mp_drawing.DrawingSpec(thickness=1, circle_radius=2, color=(66, 77, 229)),
                landmark_drawing_spec=None,
                connection_drawing_spec=mp_drawing_styles.get_default_face_mesh_tesselation_style())
            # Draw facial contours, eyebrows, eye sockets, lips
            mp_drawing.draw_landmarks(
                image=img,
                landmark_list=face_landmarks,
                connections=mp_face_mesh.FACEMESH_CONTOURS,
                # landmark_drawing_spec=mp_drawing.DrawingSpec(thickness=1, circle_radius=2, color=(66, 77, 229)),
                landmark_drawing_spec=None,
                connection_drawing_spec=mp_drawing_styles.get_default_face_mesh_contours_style())
            # Draw the iris (pupil) regions
            mp_drawing.draw_landmarks(
                image=img,
                landmark_list=face_landmarks,
                connections=mp_face_mesh.FACEMESH_IRISES,
                # landmark_drawing_spec=mp_drawing.DrawingSpec(thickness=1, circle_radius=2, color=(0, 1, 128)),
                landmark_drawing_spec=None,
                connection_drawing_spec=mp_drawing_styles.get_default_face_mesh_iris_connections_style())
    else:
        img = cv.putText(img, 'NO FACE DETECTED', (25, 50),
                         cv.FONT_HERSHEY_SIMPLEX, 1.25, (218, 112, 214), 1, 8)
    # Record the time at which processing of this frame ends
    end_time = time.time()
    # Frames processed per second (FPS)
    FPS = 1 / (end_time - start_time)
    scaler = 1
    img = cv.putText(img, 'FPS ' + str(int(FPS)), (25 * scaler, 100 * scaler),
                     cv.FONT_HERSHEY_SIMPLEX, 1.25 * scaler, (218, 112, 214), 1, 8)
    return img

# Open the camera
cap = cv.VideoCapture(0)
cap.open(0)

# Loop until break is triggered
while cap.isOpened():
    success, frame = cap.read()
    # if not success:
    #     print('ERROR')
    #     break
    frame = process_frame(frame)
    # Show the processed three-channel image
    cv.imshow('my_window', frame)
    if cv.waitKey(1) & 0xff == ord('q'):
        break

cap.release()
cv.destroyAllWindows()
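When reading from a webcam it often feels more natural to mirror the image so the window behaves like a selfie preview. This is an optional tweak, not part of the original code; it would sit right after cap.read() in the loop above:

# Optional: flip the frame horizontally for a mirror-like preview before processing it
frame = cv.flip(frame, 1)
frame = process_frame(frame)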

That covers the full content of this article on implementing face detection and real-time camera key point detection with OpenCV + MediaPipe. Thank you for reading! I hope what was shared here is helpful; if you want to learn more, you are welcome to follow the industry information channel!
