How to Implement Face Tracking with Python and OpenCV

This article introduces how to implement face tracking with Python, OpenCV, and dlib. It walks through a simple, practical approach step by step, with complete code you can run yourself.
Preface
Face processing is a hot topic in artificial intelligence: computer vision algorithms can automatically extract a large amount of information from a face, such as identity, intent, and emotion. Object tracking, in turn, attempts to estimate the trajectory of a target over an entire video sequence when only its initial position is known. Combining the two enables many interesting applications. Face tracking remains challenging because of factors such as appearance change, occlusion, fast motion, motion blur, and scale change.
Introduction to face tracking technology
Visual trackers based on the discriminative correlation filter (DCF) offer excellent performance and high computational efficiency, and they can be used in real-time applications. The DCF tracker is a very popular bounding-box-based tracking method.
A DCF-based tracker is implemented in the dlib library and can easily be used for object tracking. In this article, we will use it to track faces and user-selected objects. The method is also known as the Discriminative Scale Space Tracker (DSST). The tracker only needs the initial bounding box of the target in the first frame; from then on, it automatically predicts the target's trajectory.
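Before applying it to faces, the core dlib tracking API can be summarized in a minimal sketch. Note that the video file name and the initial bounding box coordinates below are hypothetical placeholders, not values from this article:

import cv2
import dlib

tracker = dlib.correlation_tracker()
capture = cv2.VideoCapture("video.mp4")  # hypothetical input video

ret, frame = capture.read()
# Hypothetical initial bounding box (left, top, right, bottom) of the target
tracker.start_track(frame, dlib.rectangle(100, 100, 200, 200))

while True:
    ret, frame = capture.read()
    if not ret:
        break
    tracker.update(frame)          # update the tracker with the new frame
    pos = tracker.get_position()   # predicted bounding box in this frame
    cv2.rectangle(frame, (int(pos.left()), int(pos.top())),
                  (int(pos.right()), int(pos.bottom())), (0, 255, 0), 3)
    cv2.imshow("DSST tracking sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

capture.release()
cv2.destroyAllWindows()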
Face tracking using dlib DCF-based tracker
For face tracking, we first initialize with the dlib face detector and then track the detected face with dlib's DCF-based DSST tracker. Create the tracker with the following call:
tracker = dlib.correlation_tracker()
This initializes the tracker with default values (filter_size = 6, num_scale_levels = 5, scale_window_size = 23, regularizer_space = 0.001, nu_space = 0.025, regularizer_scale = 0.001, nu_scale = 0.025, scale_pyramid_alpha = 1.020). Higher values of filter_size and num_scale_levels increase tracking accuracy but require more computing power; the recommended values for filter_size are 5, 6, and 7, and for num_scale_levels 4, 5, and 6.
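As a hedged example, assuming your dlib build exposes these constructor parameters from Python (recent releases document them on dlib.correlation_tracker), a tracker that trades some speed for accuracy might be created like this:

# Assumption: the Python binding of dlib.correlation_tracker accepts these
# keyword arguments; if your version does not, fall back to the default constructor.
tracker = dlib.correlation_tracker(filter_size=7, num_scale_levels=6)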
Use tracker.start_track() to start tracking. Before tracking starts, we need to perform face detection and pass the location of the detected face to this method:
if tracking_face is False:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Try to detect a face to initialize the tracker
    rects = detector(gray, 0)
    # Check whether a face was detected
    if len(rects) > 0:
        # Start tracking
        tracker.start_track(frame, rects[0])
        tracking_face = True
Once a face has been detected, the face tracker begins tracking the contents of the bounding box. To update the position of the tracked object, call the tracker.update() method:
tracker.update(frame)
The tracker.update() method updates the tracker and returns a measure of its confidence. This confidence score can be used to decide when to reinitialize the tracker with face detection.
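For example, a minimal sketch of using this confidence to trigger re-detection (the threshold of 7 below is an arbitrary illustrative assumption, not a value prescribed by dlib or by this article):

confidence = tracker.update(frame)
# Assumed threshold for illustration only; tune it for your own video source
if confidence < 7:
    tracking_face = False  # fall back to face detection on the next frame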
To get the location of the tracked object, you need to call the tracker.get_position() method:
pos = tracker.get_position()
The tracker.get_position() method returns the location of the tracked object. Finally, draw the predicted position of the face:
cv2.rectangle(frame, (int(pos.left()), int(pos.top())), (int(pos.right()), int(pos.bottom())), (0, 255, 0), 3)
When running, the algorithm tracks the detected face frame by frame, and you can press the 1 key at any time to reinitialize tracking.
Complete code
The complete code is shown below; it also provides the option to reinitialize the tracker when the 1 key is pressed.
import cv2
import dlib

def draw_text_info():
    # Position of the text
    menu_pos_1 = (10, 20)
    menu_pos_2 = (10, 40)
    # Draw menu information
    cv2.putText(frame, "Use '1' to re-initialize tracking", menu_pos_1, cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255))
    if tracking_face:
        cv2.putText(frame, "tracking the face", menu_pos_2, cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0))
    else:
        cv2.putText(frame, "detecting a face to initialize tracking...", menu_pos_2, cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255))

# Create video capture object
capture = cv2.VideoCapture(0)
# Load face detector
detector = dlib.get_frontal_face_detector()
# Initialize tracker
tracker = dlib.correlation_tracker()
# Whether a face is currently being tracked
tracking_face = False

while True:
    # Capture video frame
    ret, frame = capture.read()
    # Draw basic information
    draw_text_info()
    if tracking_face is False:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Try to detect a face to initialize the tracker
        rects = detector(gray, 0)
        # Start tracking if a face is detected
        if len(rects) > 0:
            tracker.start_track(frame, rects[0])
            tracking_face = True
    if tracking_face is True:
        # Update the tracker and print the confidence measure
        print(tracker.update(frame))
        # Get the position of the tracked object
        pos = tracker.get_position()
        # Draw the position of the tracked object
        cv2.rectangle(frame, (int(pos.left()), int(pos.top())), (int(pos.right()), int(pos.bottom())), (0, 255, 0), 3)
    # Capture keyboard events
    key = 0xFF & cv2.waitKey(1)
    # Re-initialize the tracker when '1' is pressed
    if key == ord("1"):
        tracking_face = False
    # Exit when 'q' is pressed
    if key == ord('q'):
        break
    # Display the result
    cv2.imshow("Face tracking using dlib frontal face detector and correlation filters for tracking", frame)

# Release all resources
capture.release()
cv2.destroyAllWindows()

Object tracking using dlib DCF-based tracker
In addition to faces, the dlib DCF-based tracker can be used to track arbitrary objects. Next, we use the mouse to select the object to track and listen for keyboard events: pressing 1 starts tracking the object inside the selected bounding box; pressing 2 clears the bounding box, stops the tracking algorithm, and waits for the user to select another bounding box.
For example, if we are interested in tracking a cat rather than a person, we first draw a rectangle around the cat with the mouse, then press 1 to start tracking it; to track a different object, we press 2, draw a new rectangle, and track that instead. The algorithm then tracks the selected object and displays the result in real time.
Complete code
The complete code is as follows:
import cv2
import dlib

def draw_text_info():
    # Position of the text
    menu_pos_1 = (10, 20)
    menu_pos_2 = (10, 40)
    menu_pos_3 = (10, 60)
    # Menu items
    info_1 = "Use left click of the mouse to select the object to track"
    info_2 = "Use '1' to start tracking, '2' to reset tracking and 'q' to exit"
    # Draw menu information
    cv2.putText(frame, info_1, menu_pos_1, cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255))
    cv2.putText(frame, info_2, menu_pos_2, cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255))
    if tracking_state:
        cv2.putText(frame, "tracking", menu_pos_3, cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0))
    else:
        cv2.putText(frame, "not tracking", menu_pos_3, cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255))

# Structure used to hold the coordinates of the object to be tracked
points = []

def mouse_event_handler(event, x, y, flags, param):
    # Reference to the global variable
    global points
    # Record the upper-left coordinates of the object to be tracked
    if event == cv2.EVENT_LBUTTONDOWN:
        points = [(x, y)]
    # Record the lower-right coordinates of the object to be tracked
    elif event == cv2.EVENT_LBUTTONUP:
        points.append((x, y))

# Create video capture object
capture = cv2.VideoCapture(0)
# Window name
window_name = "Object tracking using dlib correlation filter algorithm"
# Create window
cv2.namedWindow(window_name)
# Bind mouse events
cv2.setMouseCallback(window_name, mouse_event_handler)
# Initialize tracker
tracker = dlib.correlation_tracker()
tracking_state = False

while True:
    # Capture video frame
    ret, frame = capture.read()
    # Draw menu items
    draw_text_info()
    # Set and draw the rectangle; the object inside it will be tracked
    if len(points) == 2:
        cv2.rectangle(frame, points[0], points[1], (0, 0, 255), 3)
        dlib_rectangle = dlib.rectangle(points[0][0], points[0][1], points[1][0], points[1][1])
    if tracking_state is True:
        # Update the tracker and print the confidence measure
        print(tracker.update(frame))
        # Get the position of the tracked object
        pos = tracker.get_position()
        # Draw the position of the tracked object
        cv2.rectangle(frame, (int(pos.left()), int(pos.top())), (int(pos.right()), int(pos.bottom())), (0, 255, 0), 3)
    # Capture keyboard events
    key = 0xFF & cv2.waitKey(1)
    # Press '1' to start tracking
    if key == ord("1"):
        if len(points) == 2:
            # Start tracking the selected rectangle
            tracker.start_track(frame, dlib_rectangle)
            tracking_state = True
            points = []
    # Press '2' to stop tracking
    if key == ord("2"):
        points = []
        tracking_state = False
    # Press 'q' to exit
    if key == ord('q'):
        break
    # Display the resulting image
    cv2.imshow(window_name, frame)

# Release all resources
capture.release()
cv2.destroyAllWindows()

This concludes the study of how to implement face tracking with Python and OpenCV; we hope it has resolved your doubts. Combining theory with practice is the best way to learn, so go and try it yourself!