
How to Implement Gesture Detection Based on Mediapipe+Opencv


Today I will show you how to implement gesture detection based on Mediapipe+Opencv. I think the content of this article is quite good, so I am sharing it with you; friends who need it can learn from it, and I hope it will be helpful. Let's go through it together.

I. Preface

This project implements gesture detection based on Mediapipe+Opencv. While working toward gesture recognition, I realized that gesture detection is also quite important, so I implemented it along the way.

II. Environment configuration

Software:

Anaconda3 + PyCharm 2019

Environment:

opencv-python >= 4.5.5

mediapipe >= 0.8.9.1

Note: be sure to turn off any VPN or proxy ("scientific Internet access") before downloading, since the packages are fetched from a domestic mirror.
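After installing, you can quickly verify the environment with a short check (a minimal sketch; both packages expose a standard __version__ attribute):

import cv2
import mediapipe as mp

print("opencv-python:", cv2.__version__)  # should print a version >= 4.5.5
print("mediapipe:", mp.__version__)       # should print a version >= 0.8.9.1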

III. Full source code

The code is relatively short, with only one source file, MediapipeHandTracking.py, so I will post it here directly.

MediapipeHandTracking.py program structure:

Step 1: save the mediapipe hand-tracking solution into mpHands, hands, and mpDraw.

Step 2: parameter setting

Step 3: in a loop, read frames from the video stream into img, pass img to the hands.process function to get the result, then draw the result onto img and display it.

MediapipeHandTracking.py source code and comments:

import cv2
import mediapipe as mp
import time

# Step 1: save the hand-tracking solution in mediapipe to mpHands, hands, mpDraw
mpHands = mp.solutions.hands  # entry point of the mediapipe hands API
hands = mpHands.Hands(min_detection_confidence=0.5, min_tracking_confidence=0.5)  # minimum detection and tracking confidence
mpDraw = mp.solutions.drawing_utils  # drawing toolkit for the mediapipe solution

# Step 2: set parameters
handLmsStyle = mpDraw.DrawingSpec(color=(0, 0, 255), thickness=3)  # color and thickness of the hand keypoints
handConStyle = mpDraw.DrawingSpec(color=(0, 255, 0), thickness=5)  # color and thickness of the hand connections
pTime = 0  # pTime and cTime below are used to compute the FPS of the video stream
cTime = 0
cap = cv2.VideoCapture(0)  # open camera 0, usually the built-in camera

# Step 3: read the video stream into img in a loop, feed it to hands.process, draw the result onto img and display it
while True:
    ret, img = cap.read()  # read a frame into img; ret records whether the read succeeded
    if ret:
        imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # the model was trained on RGB images
        result = hands.process(imgRGB)  # feed the RGB image to the hand model and save the result
        # print(result.multi_hand_landmarks)  # uncomment to inspect the raw landmark output
        imgHeight = img.shape[0]  # height of the camera frame
        imgWidth = img.shape[1]   # width of the camera frame
        if result.multi_hand_landmarks:  # enter only if multi_hand_landmarks is not empty
            for handLms in result.multi_hand_landmarks:  # iterate over each detected hand in the frame
                mpDraw.draw_landmarks(img, handLms, mpHands.HAND_CONNECTIONS, handLmsStyle, handConStyle)  # draw the keypoints and connections
                for i, lm in enumerate(handLms.landmark):  # i is the keypoint index, lm its normalized coordinates
                    xPos = int(lm.x * imgWidth)   # pixel x of keypoint i
                    yPos = int(lm.y * imgHeight)  # pixel y of keypoint i
                    cv2.putText(img, str(i), (xPos - 25, yPos + 5), cv2.FONT_HERSHEY_SIMPLEX, 0.4, (0, 0, 255), 2)  # label the keypoint with its index
                    cv2.circle(img, (xPos, yPos), 10, (166, 0, 0), cv2.FILLED)  # draw a filled circle on the keypoint
                    # print(i, xPos, yPos)  # uncomment to print the keypoint coordinates
        cTime = time.time()
        fps = 1 / (cTime - pTime)
        pTime = cTime
        cv2.putText(img, f"FPS: {int(fps)}", (30, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 3)  # draw the FPS onto the frame
        cv2.imshow('img', img)  # show the frame
    if cv2.waitKey(1) == ord('q'):  # press q in the video window to quit
        break

IV. Environment configuration

1. Create a new environment Gesture in Anaconda3

Open Anaconda Prompt and enter:

conda create -n Gesture python=3.8

2. Activate the Gesture environment and download the opencv-python package

Activate environment: conda activate Gesture

Download the opencv-python package: pip install opencv-python -i https://pypi.tuna.tsinghua.edu.cn/simple/

3. Download the mediapipe package

pip install mediapipe -i https://pypi.tuna.tsinghua.edu.cn/simple/

4. Open PyCharm and import the environment into the project

Configure the code runtime environment

5. Run the program:

Open the folder containing MediapipeHandTracking.py in PyCharm and run it.

Running result

VI. Program application extensions

1. The positions and order of the hand keypoints are fixed and known.

This property can be used for ROI extraction: crop the hand region out of the image and then perform other image operations on it.
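As a minimal sketch of that idea (the names img, imgWidth, imgHeight, and handLms come from the main loop above; the 20-pixel padding is an arbitrary choice), you could compute a bounding box over all 21 keypoints of a hand and crop it:

# inside the "for handLms in result.multi_hand_landmarks:" loop
xs = [int(lm.x * imgWidth) for lm in handLms.landmark]   # pixel x of each keypoint
ys = [int(lm.y * imgHeight) for lm in handLms.landmark]  # pixel y of each keypoint
pad = 20  # arbitrary margin around the hand
x1, y1 = max(min(xs) - pad, 0), max(min(ys) - pad, 0)
x2, y2 = min(max(xs) + pad, imgWidth), min(max(ys) + pad, imgHeight)
handROI = img[y1:y2, x1:x2]  # cropped hand region
if handROI.size:  # guard against an empty crop
    cv2.imshow('hand ROI', handROI)  # or run any other image operation on it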

This feature can also be used to respond to events with gestures. For example, you can agree that an event is triggered when keypoint 4 (the thumb tip) and keypoint 8 (the index fingertip) touch, and so on, enabling gesture-driven "AI +" operations.
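A minimal sketch of that pinch trigger (keypoints 4 and 8 are the thumb tip and index fingertip in MediaPipe's hand model; the 40-pixel threshold is an assumption to tune for your camera):

# inside the "for handLms in result.multi_hand_landmarks:" loop; add "import math" at the top of the file
thumbTip = handLms.landmark[4]  # keypoint 4: thumb tip
indexTip = handLms.landmark[8]  # keypoint 8: index fingertip
dx = (thumbTip.x - indexTip.x) * imgWidth
dy = (thumbTip.y - indexTip.y) * imgHeight
if math.hypot(dx, dy) < 40:  # assumed pixel threshold for "touching"
    cv2.putText(img, "PINCH", (30, 100), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 255), 3)
    # trigger your own event here, e.g. simulate a click or toggle a control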

2. Combine with other AI solutions

For example, a pose-detection AI can render a person as a stick figure; there is plenty of room for further development here.

3. Full-body detection source code:

import cv2
import time
import mediapipe as mp

mp_drawing = mp.solutions.drawing_utils
mp_holistic = mp.solutions.holistic
holistic = mp_holistic.Holistic(min_detection_confidence=0.5, min_tracking_confidence=0.5)
handLmsStyle = mp_drawing.DrawingSpec(color=(0, 0, 255), thickness=0)  # color and thickness of the keypoints
handConStyle = mp_drawing.DrawingSpec(color=(0, 255, 0), thickness=4)  # color and thickness of the connections
cap = cv2.VideoCapture(0)
while True:
    ret, image = cap.read()
    if ret:
        image = cv2.flip(image, 1)  # mirror the frame
        imageRGB = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # the model expects RGB input
        results = holistic.process(imageRGB)
        if results.pose_landmarks:
            mp_drawing.draw_landmarks(image, results.face_landmarks, mp_holistic.FACEMESH_CONTOURS, handLmsStyle, handConStyle)
            mp_drawing.draw_landmarks(image, results.left_hand_landmarks, mp_holistic.HAND_CONNECTIONS)
            mp_drawing.draw_landmarks(image, results.right_hand_landmarks, mp_holistic.HAND_CONNECTIONS)
            mp_drawing.draw_landmarks(image, results.pose_landmarks, mp_holistic.POSE_CONNECTIONS)
        cv2.imshow("img", image)  # display the annotated BGR frame with correct colors
    if cv2.waitKey(1) == ord("q"):  # press q to quit
        break
holistic.close()

The running effect is as follows:

Showing off my handsome roommate.

That's all about how to implement gesture detection based on Mediapipe+Opencv. For more related content, you can search for previous articles or browse the related ones. I hope this was helpful, and thank you for your support!
