This article focuses on how to use OpenCV and dlib to collect (extract) facial parts from an image. The method introduced here is simple, fast, and practical, so interested readers may want to follow along and try it out.
1. Result images
Let's start with the final, tested result:
Each facial part can also be labeled separately first:
2. Principle
The main facial landmarks are: the mouth, right eyebrow, left eyebrow, right eye, left eye, nose, and jaw line.
This section extracts each of these parts.
You can see from the figure that, assuming the array is indexed from 0, the 68 points break down as follows (a short slicing sketch follows this list):
The mouth: points [48, 68]; the inner lips: [60, 68].
The right eyebrow: points [17, 22].
The left eyebrow: points [22, 27].
The right eye: [36, 42].
The left eye: [42, 48].
The nose: [27, 35].
The jaw line: [0, 17].
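To make those ranges concrete, here is a minimal slicing sketch. The PART_IDXS mapping below is hypothetical and simply mirrors the list above; imutils ships an equivalent built-in dictionary, which the full script later uses.

import numpy as np

# Hypothetical mapping of part name -> (start, end) range into the 68 points.
# imutils.face_utils provides an equivalent built-in ordered dictionary.
PART_IDXS = {
    "mouth": (48, 68),
    "inner_mouth": (60, 68),
    "right_eyebrow": (17, 22),
    "left_eyebrow": (22, 27),
    "right_eye": (36, 42),
    "left_eye": (42, 48),
    "nose": (27, 35),
    "jaw": (0, 17),
}

# Stand-in for a real detection result: 68 (x, y) landmark coordinates.
shape = np.zeros((68, 2), dtype="int")

for name, (start, end) in PART_IDXS.items():
    pts = shape[start:end]  # the points belonging to this one part
    print(name, "uses", len(pts), "points")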
Now that the subscripts are known, array slices plus a different color for each part are enough to mark every region. The imutils package helps write this code more elegantly: its face_utils module already encapsulates the needed helpers.
The lips and other closed parts are drawn as filled convex hulls, while the jaw line is drawn as a series of line segments (a minimal drawing sketch follows).
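A rough sketch of that drawing rule is shown below. It is only an illustration, not the exact imutils implementation; draw_part, overlay, and pts are names made up here.

import cv2
import numpy as np

def draw_part(overlay, name, pts, color=(19, 199, 109)):
    # pts is an (n, 2) integer array holding the points of one facial part.
    pts = np.asarray(pts, dtype="int32")
    if name == "jaw":
        # The jaw is an open curve: connect consecutive points with line segments.
        for k in range(1, len(pts)):
            p1 = (int(pts[k - 1][0]), int(pts[k - 1][1]))
            p2 = (int(pts[k][0]), int(pts[k][1]))
            cv2.line(overlay, p1, p2, color, 2)
    else:
        # Closed parts (lips, eyes, eyebrows, nose) are drawn as a filled convex hull.
        hull = cv2.convexHull(pts)
        cv2.drawContours(overlay, [hull], -1, color, -1)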
The result of facial landmark detection is 68 (x, y) coordinates, which are handled in two steps (see the sketch below):
(1) First convert them to a NumPy array that OpenCV can work with.
(2) Then use array slices to mark the different facial structures with different colors.
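Here is a minimal sketch of those two steps. The color_parts helper is hypothetical; detector and predictor are the dlib objects created in the full script below, and colors is any list of BGR tuples.

from imutils import face_utils
import cv2

def color_parts(image, detector, predictor, colors):
    # Hypothetical helper illustrating steps (1) and (2); the full script
    # below does the same thing with extra visualization and saving.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    for rect in detector(gray, 1):
        # (1) dlib's landmark object -> NumPy array of 68 (x, y) points
        shape = face_utils.shape_to_np(predictor(gray, rect))
        # (2) slice the array per part and draw each part in its own color
        for idx, (name, (i, j)) in enumerate(face_utils.FACIAL_LANDMARKS_IDXS.items()):
            for (x, y) in shape[i:j]:
                cv2.circle(image, (x, y), 1, colors[idx % len(colors)], -1)
    return image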
3. Source code

# dlib must be installed; imutils should be the latest version
# Usage:
# python detect_face_parts.py --shape-predictor shape_predictor_68_face_landmarks.dat --image images/girl.jpg
from imutils import face_utils
import numpy as np
import argparse
import imutils
import dlib
import cv2
import shutil
import os

# Build the command line arguments
# --shape-predictor  (required) path to the shape predictor model
# --image            (required) path to the image to be detected
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--shape-predictor", required=True,
                help="path to facial landmark predictor")
ap.add_argument("-i", "--image", required=True,
                help="path to input image")
args = vars(ap.parse_args())

# Recreate the temp directory used to save the per-part images
temp_dir = "temp"
shutil.rmtree(temp_dir, ignore_errors=True)
os.makedirs(temp_dir)

# Initialize dlib's HOG-based face detector and the shape predictor
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(args["shape_predictor"])

# Load the image to be detected, resize it, and convert it to grayscale
image = cv2.imread(args["image"])
image = imutils.resize(image, width=500)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces in the grayscale image
rects = detector(gray, 1)

# Loop over the detected faces
num = 0
for (i, rect) in enumerate(rects):
    # Run facial landmark detection on the face region and convert
    # the 68 detected points into a NumPy array
    shape = predictor(gray, rect)
    shape = face_utils.shape_to_np(shape)

    # Loop over each facial part independently
    for (name, (i, j)) in face_utils.FACIAL_LANDMARKS_IDXS.items():
        # Copy the original image so each part is drawn on a clean clone,
        # and write the part's name on it
        clone = image.copy()
        cv2.putText(clone, name, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                    0.7, (0, 0, 255), 2)

        # Loop over the points belonging to this facial part and draw them
        for (x, y) in shape[i:j]:
            cv2.circle(clone, (x, y), 1, (0, 0, 255), -1)

        # To actually extract each facial region, compute the bounding box
        # of that part's coordinates and extract it with NumPy array slicing
        (x, y, w, h) = cv2.boundingRect(np.array([shape[i:j]]))
        roi = image[y:y + h, x:x + w]

        # Resize the ROI to a width of 250 for better visualization
        roi = imutils.resize(roi, width=250, inter=cv2.INTER_CUBIC)

        # Show the isolated facial part
        cv2.imshow("ROI", roi)
        cv2.imshow("Image", clone)
        cv2.waitKey(0)

        # Save the annotated clone of this part to the temp directory
        num = num + 1
        p = os.path.sep.join([temp_dir, "{}.jpg".format(str(num).zfill(8))])
        print('p:', p)
        cv2.imwrite(p, clone)

    # Use the visualize_facial_landmarks function to create a transparent
    # overlay for each facial region
    output = face_utils.visualize_facial_landmarks(image, shape)
    cv2.imshow("Image", output)
    cv2.waitKey(0)

At this point, I believe you have a deeper understanding of how to use OpenCV and dlib to collect facial parts. Why not try it out for yourself?
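One small follow-up on the transparent overlay: in the imutils versions I have seen, visualize_facial_landmarks also accepts an alpha blending factor (and an optional list of per-part BGR colors), so the strength of the overlay can be tuned. Treat the keyword argument in this sketch as an assumption to check against your installed imutils; show_overlay is a made-up helper name.

from imutils import face_utils
import cv2

def show_overlay(image, shape, alpha=0.6):
    # image and shape are the BGR image and 68x2 landmark array from the
    # script above. alpha (assumed keyword of visualize_facial_landmarks)
    # controls how strongly the colored regions blend over the image.
    output = face_utils.visualize_facial_landmarks(image, shape, alpha=alpha)
    cv2.imshow("Overlay", output)
    cv2.waitKey(0)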