How to Implement Gesture Recognition with Python


This article explains how to implement gesture recognition with Python. The explanation is kept simple and clear, so it should be easy to follow and learn from.

Get video (camera)

There is not much to say about this part: it simply opens the video source, either a file or the camera.

import cv2

cap = cv2.VideoCapture("C:/Users/lenovo/Videos/1.mp4")   # read from a file
# cap = cv2.VideoCapture(0)                              # read from the camera

while True:
    ret, frame = cap.read()
    key = cv2.waitKey(50) & 0xFF
    if key == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
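The minimal loop above reads frames and waits for the q key but does not display anything yet (the full code at the end of the article does). As a small sketch of my own, not part of the original tutorial, you could also check that the source opened and show each frame as it is read:

import cv2

cap = cv2.VideoCapture(0)             # or a video file path
if not cap.isOpened():
    raise RuntimeError("could not open the video source")

while True:
    ret, frame = cap.read()
    if not ret:                       # end of file or camera error
        break
    cv2.imshow("frame", frame)        # display the current frame
    if cv2.waitKey(50) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()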

Skin color detection

The elliptical skin color detection model is used here.

In RGB space, the skin color of the human face is strongly affected by brightness, so it is hard to separate skin pixels from non-skin pixels: after processing in that space, the skin pixels end up as scattered points with many non-skin pixels mixed in among them, which makes skin region calibration (face, eyes, and so on) difficult. If you convert from RGB to YCrCb space, you can ignore the influence of Y (luminance), because the Cr and Cb components are only weakly affected by brightness and skin tones cluster well there. The three-dimensional space is then reduced to the two-dimensional CrCb plane, where skin pixels form a definite shape: a face shows up as a face-shaped region, an arm as an arm-shaped region.
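For reference, here is a minimal sketch of the elliptical CrCb skin model itself. Note that the function A used in this article actually thresholds the Cr channel with Otsu's method rather than testing against an ellipse; the ellipse center, axes, and angle below are commonly quoted values, not taken from this article, and should be treated as assumptions to tune for your own data.

import cv2
import numpy as np

def ellipse_skin_mask(img):
    # Reference skin-tone ellipse drawn in the 256x256 CrCb plane
    # (parameters are commonly quoted values, not from this article).
    crcb_hist = np.zeros((256, 256), dtype=np.uint8)
    cv2.ellipse(crcb_hist, (113, 155), (23, 15), 43, 0, 360, 255, -1)

    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCR_CB)
    (y, cr, cb) = cv2.split(ycrcb)

    # A pixel counts as skin if its (Cr, Cb) pair falls inside the ellipse.
    mask = np.zeros(img.shape[:2], dtype=np.uint8)
    mask[crcb_hist[cr, cb] > 0] = 255
    return cv2.bitwise_and(img, img, mask=mask)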

def A(img):
    YCrCb = cv2.cvtColor(img, cv2.COLOR_BGR2YCR_CB)   # convert to YCrCb space
    (y, cr, cb) = cv2.split(YCrCb)                     # split out the Y, Cr, Cb channels
    cr1 = cv2.GaussianBlur(cr, (5, 5), 0)
    _, skin = cv2.threshold(cr1, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # Otsu thresholding
    res = cv2.bitwise_and(img, img, mask=skin)
    return res

Contour processing

Contour processing mainly uses two functions, cv2.findContours and cv2.drawContours. How to use them is easy to look up, so it is not repeated here. The main problem in this step is that many contours are extracted, but we only need the contour of the hand, so the sorted function is used to find the largest contour.
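One caveat worth knowing: the return value of cv2.findContours differs between OpenCV versions. OpenCV 3.x returns (image, contours, hierarchy), while 4.x returns (contours, hierarchy), so indexing h[0] as in the function below assumes OpenCV 4.x. A small version-agnostic sketch for picking the largest contour (the helper name is mine, not from the article):

import cv2

def largest_contour(binary_img):
    # cv2.findContours returns (contours, hierarchy) in OpenCV 4.x
    # and (image, contours, hierarchy) in 3.x; take the right element.
    found = cv2.findContours(binary_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contours = found[0] if len(found) == 2 else found[1]
    if not contours:
        return None
    return max(contours, key=cv2.contourArea)  # the hand should be the largest region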

def B(img):
    # binaryimg = cv2.Canny(Laplacian, 50, 200)   # binarization with Canny detection
    h = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)  # find the contours
    contour = h[0]
    contour = sorted(contour, key=cv2.contourArea, reverse=True)  # sort the contours by area
    # contourmax = contour[0][:, 0, :]   # keep the contour point coordinates
    bg = np.ones(dst.shape, np.uint8) * 255   # white background the same size as dst (the Laplacian result from the main loop)
    ret = cv2.drawContours(bg, contour[0], -1, (0, 0, 0), 3)  # draw the outline in black
    return ret

Full code

"""Read frames from the video and save them as pictures"""
import cv2
import numpy as np

cap = cv2.VideoCapture("C:/Users/lenovo/Videos/1.mp4")   # read from a file
# cap = cv2.VideoCapture(0)                              # read from the camera

# skin color detection
def A(img):
    YCrCb = cv2.cvtColor(img, cv2.COLOR_BGR2YCR_CB)   # convert to YCrCb space
    (y, cr, cb) = cv2.split(YCrCb)                     # split out the Y, Cr, Cb channels
    cr1 = cv2.GaussianBlur(cr, (5, 5), 0)
    _, skin = cv2.threshold(cr1, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # Otsu thresholding
    res = cv2.bitwise_and(img, img, mask=skin)
    return res

# contour processing
def B(img):
    # binaryimg = cv2.Canny(Laplacian, 50, 200)   # binarization with Canny detection
    h = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)  # find the contours
    contour = h[0]
    contour = sorted(contour, key=cv2.contourArea, reverse=True)  # sort the contours by area
    # contourmax = contour[0][:, 0, :]   # keep the contour point coordinates
    bg = np.ones(dst.shape, np.uint8) * 255   # white background the same size as dst (the Laplacian result from the main loop)
    ret = cv2.drawContours(bg, contour[0], -1, (0, 0, 0), 3)  # draw the outline in black
    return ret

while True:
    ret, frame = cap.read()
    # the following three lines can be adjusted for your own computer
    src = cv2.resize(frame, (400, 350), interpolation=cv2.INTER_CUBIC)   # window size
    cv2.rectangle(src, (90, 60), (300, 300), (0, 255, 0))                # draw the capture box
    roi = src[60:300, 90:300]                                            # get the gesture region

    res = A(roi)   # skin color detection
    cv2.imshow("0", roi)

    gray = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
    dst = cv2.Laplacian(gray, cv2.CV_16S, ksize=3)
    Laplacian = cv2.convertScaleAbs(dst)

    contour = B(Laplacian)   # contour processing
    cv2.imshow("2", contour)

    key = cv2.waitKey(50) & 0xFF
    if key == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Thank you for reading. The above is the content of "How to Implement Gesture Recognition with Python". After studying this article, you should have a deeper understanding of how to implement gesture recognition in Python; the specific usage still needs to be verified in practice.
