How to use opencv for gesture recognition in Python

2025-04-01 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

This article explains in detail how to use opencv for gesture recognition in Python. The editor finds it very practical and shares it here as a reference; I hope you get something out of it after reading.

Principle

First the hand itself is detected; once a hand is found, Hand Landmarks is run on it.

This locates the 21 key points of the hand, and from the coordinates of those 21 points we can then infer the gesture, or what the hand is doing.
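MediaPipe numbers the 21 hand landmarks in a fixed order (0 is the wrist, 4 the thumb tip, 8 the index fingertip, and so on). As a small sketch, assuming the landmarks arrive as a plain list of normalized (x, y) pairs, naming the indices up front makes later gesture logic readable; the `fingertip` helper and the sample `points` list below are hypothetical illustrations, not part of the original program.

```python
# MediaPipe Hands landmark indices (fixed order, 0-20)
WRIST = 0
THUMB_TIP, INDEX_TIP, MIDDLE_TIP, RING_TIP, PINKY_TIP = 4, 8, 12, 16, 20

def fingertip(landmarks, tip_index):
    """Return the (x, y) pair of one fingertip from a 21-point landmark list."""
    if len(landmarks) != 21:
        raise ValueError("expected 21 hand landmarks")
    return landmarks[tip_index]

# hypothetical landmark list: every point at the wrist except a raised index fingertip
points = [(0.5, 0.9)] * 21
points[INDEX_TIP] = (0.55, 0.30)
print(fingertip(points, INDEX_TIP))  # → (0.55, 0.3)
```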

Program part

First, install OpenCV:

pip install opencv-python

Second, install MediaPipe:

pip install mediapipe

Program

First, import the two libraries:

import cv2
import mediapipe as mp

Then open the camera:

cap = cv2.VideoCapture(0)

Function body part

while True:
    ret, img = cap.read()  # read the current frame
    if ret:
        cv2.imshow('img', img)  # display the frame just read
        if cv2.waitKey(1) == ord('q'):  # press the q key to exit the program
            break

Complete program

import cv2
import mediapipe as mp
import time

cap = cv2.VideoCapture(1)
mpHands = mp.solutions.hands
hands = mpHands.Hands()
mpDraw = mp.solutions.drawing_utils
handLmsStyle = mpDraw.DrawingSpec(color=(0, 0, 255), thickness=3)
handConStyle = mpDraw.DrawingSpec(color=(0, 255, 0), thickness=5)
pTime = 0
cTime = 0

while True:
    ret, img = cap.read()
    if ret:
        imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        result = hands.process(imgRGB)
        # print(result.multi_hand_landmarks)
        imgHeight = img.shape[0]
        imgWidth = img.shape[1]
        if result.multi_hand_landmarks:
            for handLms in result.multi_hand_landmarks:
                mpDraw.draw_landmarks(img, handLms, mpHands.HAND_CONNECTIONS,
                                      handLmsStyle, handConStyle)
                for i, lm in enumerate(handLms.landmark):
                    xPos = int(lm.x * imgWidth)
                    yPos = int(lm.y * imgHeight)
                    # cv2.putText(img, str(i), (xPos - 25, yPos + 5),
                    #             cv2.FONT_HERSHEY_SIMPLEX, 0.4, (0, 0, 255), 2)
                    # if i == 4:
                    #     cv2.circle(img, (xPos, yPos), 20, (166, 56, 56), cv2.FILLED)
                    # print(i, xPos, yPos)
        cTime = time.time()
        fps = 1 / (cTime - pTime)
        pTime = cTime
        cv2.putText(img, f"FPS: {int(fps)}", (30, 50),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 3)
        cv2.imshow('img', img)
        if cv2.waitKey(1) == ord('q'):
            break
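One simple use of the normalized (lm.x, lm.y) coordinates collected in the loop above is a pinch detector: when the thumb tip (landmark 4) and the index fingertip (landmark 8) are close together, treat it as a pinch. This is a minimal sketch, not part of the original program; the distance threshold of 0.05 in normalized coordinates is a hypothetical value you would tune for your camera.

```python
import math

def is_pinch(landmarks, threshold=0.05):
    """True if thumb tip (4) and index fingertip (8) are within `threshold`
    of each other, in normalized image coordinates."""
    x1, y1 = landmarks[4]
    x2, y2 = landmarks[8]
    return math.hypot(x2 - x1, y2 - y1) < threshold

# hypothetical landmark lists for testing
open_hand = [(0.0, 0.0)] * 21
open_hand[4] = (0.30, 0.50)   # thumb tip far from...
open_hand[8] = (0.60, 0.20)   # ...index fingertip

pinching = list(open_hand)
pinching[8] = (0.31, 0.51)    # index fingertip moved next to the thumb tip

print(is_pinch(open_hand))  # → False
print(is_pinch(pinching))   # → True
```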

In this way we can display the key points of the hand, and their coordinates, on the computer; gesture recognition or other operations can then be judged from the coordinates of those key points.
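For example, a crude way to judge a gesture from the 21 coordinates is to count extended fingers: in image coordinates y grows downward, so with the hand upright, the tip of an extended finger sits above (has smaller y than) the PIP joint two points below it. The sketch below assumes exactly that upright pose and ignores the thumb for simplicity; the gesture names are illustrative, not from the original article.

```python
def count_extended_fingers(landmarks):
    """Count fingers (thumb excluded) whose tip is above its PIP joint.
    Assumes an upright hand and normalized (x, y) coordinates with y pointing down."""
    tips = (8, 12, 16, 20)   # index, middle, ring, pinky fingertips
    pips = (6, 10, 14, 18)   # the corresponding PIP joints
    return sum(1 for tip, pip in zip(tips, pips)
               if landmarks[tip][1] < landmarks[pip][1])

def name_gesture(landmarks):
    """Very rough label based only on the finger count."""
    n = count_extended_fingers(landmarks)
    return {0: "fist", 4: "open palm"}.get(n, f"{n} fingers up")

# hypothetical open palm: every fingertip well above its PIP joint
palm = [(0.5, 0.9)] * 21
for tip, pip in zip((8, 12, 16, 20), (6, 10, 14, 18)):
    palm[pip] = (0.5, 0.6)
    palm[tip] = (0.5, 0.3)
print(name_gesture(palm))  # → open palm

fist = [(0.5, 0.9)] * 21   # all points bunched together: nothing extended
print(name_gesture(fist))  # → fist
```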

Another example of gesture recognition is attached

'''
@Time:    2021-2-6 15:41
@Author:  WGS
@remarks:
'''
"""Read frames from a video and save them as pictures"""
import cv2
import numpy as np

# cap = cv2.VideoCapture("C:/Users/lenovo/Videos/wgs.mp4")  # read a file
cap = cv2.VideoCapture(0)  # read the camera

# skin detection
def A(img):
    YCrCb = cv2.cvtColor(img, cv2.COLOR_BGR2YCR_CB)  # convert to YCrCb space
    (y, cr, cb) = cv2.split(YCrCb)  # split out the Y, Cr, Cb channels
    cr1 = cv2.GaussianBlur(cr, (5, 5), 0)
    _, skin = cv2.threshold(cr1, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # Otsu thresholding
    res = cv2.bitwise_and(img, img, mask=skin)
    return res

def B(img):
    # binaryimg = cv2.Canny(Laplacian, 50, 200)  # binarization, Canny detection
    h = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)  # find the contours
    contour = h[0]
    contour = sorted(contour, key=cv2.contourArea, reverse=True)  # sort contours by area
    # contourmax = contour[0][:, 0, :]  # keep the contour point coordinates
    bg = np.ones(dst.shape, np.uint8) * 255  # white canvas (dst is the global from the loop below)
    ret = cv2.drawContours(bg, contour[0], -1, (0, 0, 0), 3)  # draw a black outline
    return ret

while True:
    ret, frame = cap.read()
    # the following three lines can be adjusted for your computer
    src = cv2.resize(frame, (400, 350), interpolation=cv2.INTER_CUBIC)  # window size
    cv2.rectangle(src, (90, 60), (300, 300), (0, 255, 0))  # frame the capture area
    roi = src[60:300, 90:300]  # get the gesture region
    res = A(roi)  # skin color detection
    cv2.imshow("0", roi)
    gray = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
    dst = cv2.Laplacian(gray, cv2.CV_16S, ksize=3)
    Laplacian = cv2.convertScaleAbs(dst)
    contour = B(Laplacian)  # contour processing
    cv2.imshow("2", contour)
    key = cv2.waitKey(50) & 0xFF
    if key == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()

This concludes "How to use opencv for gesture recognition in Python". I hope the above content was helpful and that you learned something from it; if you think the article is good, please share it for more people to see.




© 2024 shulou.com SLNews company. All rights reserved.
