This article introduces how to build a people counter with OpenCV and Python. The content is detailed, easy to understand, and straightforward to follow along with, so it should have real reference value. I believe you will learn something from it, so let's take a look.
1. Understand object detection and object tracking
Before continuing with the rest of this tutorial, you must understand the fundamental difference between object detection and object tracking.
When we apply object detection, we are determining where in the image/frame an object is. An object detector is usually more computationally expensive, and therefore slower, than an object tracking algorithm. Examples of object detection algorithms include Haar cascades, HOG + Linear SVM, and deep learning-based detectors such as Faster R-CNN, YOLO, and Single Shot Detectors (SSDs).
On the other hand, the object tracker will accept the input (x, y) coordinates of the position of the object in the image and will:
1. Assign a unique ID to this particular object
2. Track the object as it moves around the video stream, predicting the object's new position in the next frame based on various attributes of the frame (gradients, optical flow, etc.)
Examples of object tracking algorithms include MedianFlow, MOSSE, GOTURN, kernel correlation filter and discriminant correlation filter.
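As a minimal sketch of how an object tracker is used in practice, here is the pattern with dlib's correlation tracker (the one we rely on later in this post); the video path and initial bounding box below are hypothetical:

import cv2
import dlib

# hypothetical input video and initial (startX, startY, endX, endY) box
vs = cv2.VideoCapture("example.mp4")
initial_box = (100, 80, 220, 300)

# seed the tracker with the object's position in the first frame
tracker = dlib.correlation_tracker()
grabbed, frame = vs.read()
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # dlib expects RGB images
tracker.start_track(rgb, dlib.rectangle(*initial_box))

# on every subsequent frame the tracker predicts the object's new position
while True:
    grabbed, frame = vs.read()
    if not grabbed:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tracker.update(rgb)
    pos = tracker.get_position()
    print(int(pos.left()), int(pos.top()), int(pos.right()), int(pos.bottom()))
vs.release()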
2. Combining object detection and object tracking
A high-accuracy object tracking pipeline combines the concepts of object detection and object tracking into a single algorithm, usually divided into two phases:
1. Phase 1, detection: during the detection phase we run our more computationally expensive object detector to (1) detect whether new objects have entered our field of view and (2) see whether we can find objects that were "lost" during the tracking phase. For each detected object we create or update an object tracker with the new bounding box coordinates. Because our object detector is more computationally expensive, we only run this phase once every N frames.
2. Phase 2, tracking: when we are not in the "detecting" phase we are in the "tracking" phase. For each of our detected objects, we create an object tracker to follow the object as it moves around the frame. Our object tracker should be faster and more efficient than the object detector. We continue tracking until we reach the N-th frame, then re-run our object detector, and the whole process repeats.
The advantage of this hybrid method is that we can apply a highly accurate object detection method without too much computational burden. We will implement such a tracking system to set up our personnel counter.
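In skeleton form, the hybrid loop we will build later in this post looks roughly like the following; detect() and track() are placeholder stubs standing in for the real MobileNet SSD and the dlib correlation trackers:

# bare-bones sketch of the detect-every-N-frames / track-in-between pattern
SKIP_FRAMES = 30

def detect(frame):
    # placeholder for the expensive detector (the MobileNet SSD below)
    return []

def track(frame, trackers):
    # placeholder for the cheap per-frame tracker updates
    return trackers

trackers = []
for frame_number in range(300):      # stand-in for reading frames from a video
    frame = None                     # stand-in for the actual image
    if frame_number % SKIP_FRAMES == 0:
        # phase 1: run the detector and rebuild the tracker list
        trackers = detect(frame)
    else:
        # phase 2: only update the existing, cheaper trackers
        trackers = track(frame, trackers)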
3. Project structure
Let's review the project structure for today's blog post. Once you grab the code, you can inspect the directory structure with the tree command:
The two most important directories:
1. pyimagesearch/: this module contains the centroid tracking algorithm, which is covered in the "Combining object tracking algorithms" section below.
2. mobilenet_ssd/: contains the Caffe deep learning model files.
The core of today's project is contained in the people_counter.py script-- this is where we will spend most of our time. Today we will also review the trackableobject.py script.
4. Combining object tracking algorithms
To implement our personnel counter, we will use both OpenCV and dlib. We use OpenCV for standard computer vision / image processing functions, as well as deep learning object detectors for population statistics.
We will then use dlib for its correlation filter implementation. We could use OpenCV here as well, but for this project the dlib object tracking implementation is a bit easier to work with.
In addition to dlib's object tracking implementation, we will also use our own centroid tracking implementation. A full review of the centroid tracking algorithm is beyond the scope of this post, but I provide a brief overview of it below.
In step # 1, we accept a set of bounding boxes and calculate their corresponding centroids (that is, the center of the bounding box):
The first step in building simple centroid-based object tracking with Python is to accept bounding box coordinates and use them to compute centroids.
The bounding box itself can be provided in any of the following ways:
1. Target detector (such as HOG + Linear SVM, Faster R-CNN, SSDs, etc.)
2. Or object trackers (such as correlation filters)
In the figure above, you can see that we have two objects to track during the initial iteration of the algorithm.
In step # 2, we calculate the Euclidean distance between any new centroid (yellow) and the existing centroid (purple):
There are three objects in this image. We need to calculate the Euclidean distance between each pair of original centroids (purple) and new centroids (yellow).
The centroid tracking algorithm assumes that the centroid pair with the minimum Euclidean distance between them must be the same object ID.
In the example image above we have two existing centroids (purple) and three new centroids (yellow), which implies that a new object has been detected (since there is one more new centroid than existing centroids).
The arrows indicate that we compute the Euclidean distance between every purple centroid and every yellow centroid. Once we have the Euclidean distances, we try to associate object IDs in step #3:
You can see that our centroid tracker has chosen to associate the centroids that minimize their respective Euclidean distances. But what about the point in the lower left? It was not associated with anything, so what should we do with it? To answer that question we perform step #4, registering the new object:
Registration means that we add new objects to our tracking object list in the following ways:
1. Assign it a new object ID
2. Storing the centroid of the new object's bounding box coordinates
If the object is lost or out of view, we can simply unregister the object (step # 5).
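Here is a tiny, self-contained illustration of steps #2 through #4 using SciPy; all of the centroid values are made up:

from scipy.spatial import distance as dist
import numpy as np

# two existing object centroids (purple) and three new centroids (yellow)
objectCentroids = np.array([[100, 120], [300, 200]])
inputCentroids = np.array([[105, 125], [310, 205], [50, 400]])

# step #2: pairwise Euclidean distances (rows = existing, cols = new)
D = dist.cdist(objectCentroids, inputCentroids)

# step #3: greedily associate each existing object with its closest new centroid
rows = D.min(axis=1).argsort()
cols = D.argmin(axis=1)[rows]
print(list(zip(rows, cols)))    # [(0, 0), (1, 1)] -> IDs 0 and 1 keep their IDs

# step #4: any new centroid left unmatched is registered as a new object
unused = set(range(D.shape[1])).difference(set(cols))
print(unused)                   # {2} -> the lower-left point becomes a new ID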
5. Creating a trackable object
In order to track and count objects in a video stream, we need a simple way to store information about the object itself, including:
Object ID
Its previous centroids (so we can easily calculate the direction the object is moving in)
Whether the object has been counted
To accomplish all of this, we can define a TrackableObject class; open the trackableobject.py file and insert the following code:
class TrackableObject:
    def __init__(self, objectID, centroid):
        # store the object ID, then initialize a list of centroids
        # using the current centroid
        self.objectID = objectID
        self.centroids = [centroid]
        # initialize a boolean used to indicate if the object has
        # already been counted or not
        self.counted = False
The TrackableObject constructor accepts an objectID and a centroid and stores them. The centroids variable is a list because it will hold the object's centroid location history. The constructor also initializes counted to False, indicating that the object has not been counted yet.
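A quick, hypothetical usage of the class defined above (the centroid values are invented for illustration):

# object ID 1 is first seen at (x, y) = (200, 250)
to = TrackableObject(1, (200, 250))

# on later frames we append new centroids to build up its movement history
to.centroids.append((202, 244))
to.centroids.append((201, 239))

print(to.objectID)     # 1
print(to.centroids)    # [(200, 250), (202, 244), (201, 239)]
print(to.counted)      # False, until the counting logic flips it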
6. Implementing our people counter with OpenCV + Python

Let's begin by importing the necessary packages:

# import the necessary packages
from pyimagesearch.centroidtracker import CentroidTracker
from pyimagesearch.trackableobject import TrackableObject
from imutils.video import VideoStream
from imutils.video import FPS
import numpy as np
import argparse
import imutils
import time
import dlib
import cv2
From the pyimagesearch module, we import custom CentroidTracker and TrackableObject classes.
The VideoStream and FPS modules in imutils.video will help us use the webcam and calculate the estimated frames per second (FPS) throughput.
We need imutils for its OpenCV convenience functions.
The dlib library will be used for its correlation tracker implementation.
OpenCV will be used for deep neural network reasoning, opening video files, writing video files, and displaying output frames on our screen.
Now that all the tools are within reach, let's parse the command line arguments:
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--prototxt", required=True,
    help="path to Caffe 'deploy' prototxt file")
ap.add_argument("-m", "--model", required=True,
    help="path to Caffe pre-trained model")
ap.add_argument("-i", "--input", type=str,
    help="path to optional input video file")
ap.add_argument("-o", "--output", type=str,
    help="path to optional output video file")
ap.add_argument("-c", "--confidence", type=float, default=0.4,
    help="minimum probability to filter weak detections")
ap.add_argument("-s", "--skip-frames", type=int, default=30,
    help="# of skip frames between detections")
args = vars(ap.parse_args())
We have six command-line arguments that allow us to pass information from the terminal to our personnel counter script at run time:
--prototxt: the path to the Caffe "deploy" prototxt file.
--model: the path to the pre-trained Caffe CNN model.
--input: optional path to an input video file. If no path is specified, your webcam will be used.
--output: optional path to an output video file. If no path is specified, no video will be recorded.
--confidence: default 0.4; the minimum probability threshold that helps filter out weak detections.
--skip-frames: the number of frames to skip before running our object detector again on the tracked objects. Keep in mind that object detection is computationally expensive, but it helps our trackers re-acquire objects. By default we skip 30 frames between detections with the OpenCV DNN module and our single-shot detector model.
Now that our script can dynamically process command-line arguments at run time, let's prepare our SSD:
# initialize the list of class labels MobileNet SSD was trained to detect
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
    "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
    "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
    "sofa", "train", "tvmonitor"]
# load our serialized model from disk
print("[INFO] loading model...")
net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"])
First we initialize CLASSES, the list of classes the SSD supports. We are only interested in the "person" class, but you could count other moving objects as well.
We load the pre-trained MobileNet SSD for detecting objects (but again, we are only interested in detecting and tracking people, not any other class).
We can initialize our video stream:
# if a video path was not supplied, grab a reference to the webcam
if not args.get("input", False):
    print("[INFO] starting video stream...")
    vs = VideoStream(src=0).start()
    time.sleep(2.0)
# otherwise, grab a reference to the video file
else:
    print("[INFO] opening video file...")
    vs = cv2.VideoCapture(args["input"])
First we handle the case where we read from a webcam video stream; otherwise we capture frames from a video file. We still have a few initializations to perform before we start looping over frames:
# initialize the video writer (we'll instantiate later if need be)
writer = None
# initialize the frame dimensions (we'll set them as soon as we read
# the first frame from the video)
W = None
H = None
# instantiate our centroid tracker, then initialize a list to store
# each of our dlib correlation trackers, followed by a dictionary to
# map each unique object ID to a TrackableObject
ct = CentroidTracker(maxDisappeared=40, maxDistance=50)
trackers = []
trackableObjects = {}
# initialize the total number of frames processed thus far, along
# with the total number of objects that have moved either up or down
totalFrames = 0
totalDown = 0
totalUp = 0
# start the frames per second throughput estimator
fps = FPS().start()
The rest of the initialization includes:
writer: our video writer. We will instantiate this object later if we are writing to video.
W and H: our frame dimensions. We will need to pass these to cv2.VideoWriter.
ct: our CentroidTracker.
trackers: a list for storing the dlib correlation trackers.
trackableObjects: a dictionary that maps an objectID to a TrackableObject.
totalFrames: the total number of frames processed.
totalDown and totalUp: the total number of objects/people that have moved down or up.
fps: our frames-per-second estimator for benchmarking.
Now that we have all the initialization done, let's loop through the incoming frames:
# loop over frames from the video stream
while True:
    # grab the next frame and handle if we are reading from either
    # VideoCapture or VideoStream
    frame = vs.read()
    frame = frame[1] if args.get("input", False) else frame
    # if we are viewing a video and we did not grab a frame then we
    # have reached the end of the video
    if args["input"] is not None and frame is None:
        break
    # resize the frame to have a maximum width of 500 pixels (the
    # less data we have, the faster we can process it), then convert
    # the frame from BGR to RGB for dlib
    frame = imutils.resize(frame, width=500)
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # if the frame dimensions are empty, set them
    if W is None or H is None:
        (H, W) = frame.shape[:2]
    # if we are supposed to be writing a video to disk, initialize
    # the writer
    if args["output"] is not None and writer is None:
        fourcc = cv2.VideoWriter_fourcc(*"MJPG")
        writer = cv2.VideoWriter(args["output"], fourcc, 30,
            (W, H), True)
We begin looping over frames. At the top of the loop we grab the next frame; if we have reached the end of the video, we break out of the loop.
The frame is then preprocessed. This includes resizing it and swapping color channels, since dlib requires RGB images. We grab the frame dimensions for the video writer, and if an output path was supplied via the command-line arguments, we instantiate the video writer there.
Now let's use SSD to detect people:
    # initialize the current status along with our list of bounding
    # box rectangles returned by either (1) our object detector or
    # (2) the correlation trackers
    status = "Waiting"
    rects = []
    # check to see if we should run a more computationally expensive
    # object detection method to aid our tracker
    if totalFrames % args["skip_frames"] == 0:
        # set the status and initialize our new set of object trackers
        status = "Detecting"
        trackers = []
        # convert the frame to a blob and pass the blob through the
        # network and obtain the detections
        blob = cv2.dnn.blobFromImage(frame, 0.007843, (W, H), 127.5)
        net.setInput(blob)
        detections = net.forward()
We initialize the state to Waiting. Possible states include:
Waiting: in this state, we are waiting for detection and tracking personnel.
Detecting: we are actively running the MobileNet SSD detector.
Tracking: people are being tracked in frames, and we are calculating totalUp and totalDown.
Our rects list will be populated either by detection or by tracking. We go ahead and initialize rects as an empty list.
It is important to understand that deep learning object detectors are very expensive to calculate, especially if you are running them on CPU.
To avoid running our object detector on every frame, and to speed up our tracking pipeline, we skip N frames (set by the command-line argument --skip-frames, which defaults to 30). Only every N frames do we perform object detection with the SSD; in between, we simply track the moving objects.
Using the modulo operator, we ensure that the code in the if statement is executed only once every N frames. Inside the if statement we update the status to Detecting and initialize a new, empty list of trackers.
Next we perform inference with the object detector. We first create a blob from the image, then pass the blob through the network to obtain the detections. We now loop over each detection, hoping to find objects belonging to the person class:
        # loop over the detections
        for i in np.arange(0, detections.shape[2]):
            # extract the confidence (i.e., probability) associated
            # with the prediction
            confidence = detections[0, 0, i, 2]
            # filter out weak detections by requiring a minimum
            # confidence
            if confidence > args["confidence"]:
                # extract the index of the class label from the
                # detections list
                idx = int(detections[0, 0, i, 1])
                # if the class label is not a person, ignore it
                if CLASSES[idx] != "person":
                    continue
Looping over the detections, we extract the confidence and filter out weak results as well as anything that is not a person.
Now we can compute a bounding box for each person and begin correlation tracking:
                # compute the (x, y)-coordinates of the bounding box
                # for the object
                box = detections[0, 0, i, 3:7] * np.array([W, H, W, H])
                (startX, startY, endX, endY) = box.astype("int")
                # construct a dlib rectangle object from the bounding
                # box coordinates and then start the dlib correlation
                # tracker
                tracker = dlib.correlation_tracker()
                rect = dlib.rectangle(startX, startY, endX, endY)
                tracker.start_track(rgb, rect)
                # add the tracker to our list of trackers so we can
                # utilize it during skip frames
                trackers.append(tracker)
We compute our bounding box, then instantiate our dlib correlation tracker, pass the object's bounding box coordinates to dlib.rectangle, and store the result as rect. We then start tracking and append the tracker to the trackers list. That wraps up all of the operations we perform once every N skip-frames. Let's handle the typical case, tracking, in the else block:
    # otherwise, we should utilize our object *trackers* rather than
    # object *detectors* to obtain a higher frame processing throughput
    else:
        # loop over the trackers
        for tracker in trackers:
            # set the status of our system to be 'tracking' rather
            # than 'waiting' or 'detecting'
            status = "Tracking"
            # update the tracker and grab the updated position
            tracker.update(rgb)
            pos = tracker.get_position()
            # unpack the position object
            startX = int(pos.left())
            startY = int(pos.top())
            endX = int(pos.right())
            endY = int(pos.bottom())
            # add the bounding box coordinates to the rectangles list
            rects.append((startX, startY, endX, endY))
Most frames do not fall on a skip-frame multiple. On those frames we use our trackers, rather than the detector, to follow objects. We loop over the available trackers, update the status to Tracking, and grab each object's position. We extract the position coordinates and populate our rects list with them. Now let's draw the horizontal visualization line (which people must cross in order to be counted) and use the centroid tracker to update our object centroids:
    # draw a horizontal line in the center of the frame -- once an
    # object crosses this line we will determine whether they were
    # moving 'up' or 'down'
    cv2.line(frame, (0, H // 2), (W, H // 2), (0, 255, 255), 2)
    # use the centroid tracker to associate the (1) old object
    # centroids with (2) the newly computed object centroids
    objects = ct.update(rects)
We draw a horizontal line that we will use to visualize people "crossing"; once people cross this line, we increment their respective counters. We then call the update method of our CentroidTracker instance with the rects list, regardless of whether the boxes were generated by object detection or object tracking. Our centroid tracker associates object IDs with object locations.
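To make the direction test concrete before we look at the code, here is a small worked example with made-up numbers (assume the frame was resized so that H is 375, putting the counting line at H // 2 = 187):

import numpy as np

H = 375                          # assumed frame height after resizing
y_history = [250, 244, 239]      # previous centroid y-values (below the line)
current_y = 180                  # current centroid y-value (above the line)

# negative direction means the centroid is moving up the frame
direction = current_y - np.mean(y_history)    # 180 - 244.33 = -64.33
if direction < 0 and current_y < H // 2:
    print("count this person as moving up")   # totalUp += 1

With that intuition in place, the next code block reviews the logic that determines whether a person is moving up or down in the frame: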
    # loop over the tracked objects
    for (objectID, centroid) in objects.items():
        # check to see if a trackable object exists for the current
        # object ID
        to = trackableObjects.get(objectID, None)
        # if there is no existing trackable object, create one
        if to is None:
            to = TrackableObject(objectID, centroid)
        # otherwise, there is a trackable object so we can utilize it
        # to determine direction
        else:
            # the difference between the y-coordinate of the *current*
            # centroid and the mean of *previous* centroids will tell
            # us in which direction the object is moving (negative for
            # 'up' and positive for 'down')
            y = [c[1] for c in to.centroids]
            direction = centroid[1] - np.mean(y)
            to.centroids.append(centroid)
            # check to see if the object has been counted or not
            if not to.counted:
                # if the direction is negative (indicating the object
                # is moving up) AND the centroid is above the center
                # line, count the object
                if direction < 0 and centroid[1] < H // 2:
                    totalUp += 1
                    to.counted = True
                # if the direction is positive (indicating the object
                # is moving down) AND the centroid is below the
                # center line, count the object
                elif direction > 0 and centroid[1] > H // 2:
                    totalDown += 1
                    to.counted = True
        # store the trackable object in our dictionary
        trackableObjects[objectID] = to
        # draw both the ID of the object and the centroid of the
        # object on the output frame
        text = "ID {}".format(objectID)
        cv2.putText(frame, text, (centroid[0] - 10, centroid[1] - 10),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
        cv2.circle(frame, (centroid[0], centroid[1]), 4, (0, 255, 0), -1)
    # construct a tuple of information we will be displaying on the frame
    info = [
        ("Up", totalUp),
        ("Down", totalDown),
        ("Status", status),
    ]
    # loop over the info tuples and draw them on our frame
    for (i, (k, v)) in enumerate(info):
        text = "{}: {}".format(k, v)
        cv2.putText(frame, text, (10, H - ((i * 20) + 20)),
            cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    # check to see if we should write the frame to disk
    if writer is not None:
        writer.write(frame)
    # show the output frame
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF
    # if the 'q' key was pressed, break from the loop
    if key == ord("q"):
        break
    # increment the total number of frames processed thus far and
    # then update the FPS counter
    totalFrames += 1
    fps.update()
# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))
# check to see if we need to release the video writer pointer
if writer is not None:
    writer.release()
# if we are not using a video file, stop the camera video stream
if not args.get("input", False):
    vs.stop()
# otherwise, release the video file pointer
else:
    vs.release()
# close any open windows
cv2.destroyAllWindows()

7. centroidtracker.py
(1) the centroid tracker is one of the most reliable trackers.
(2) for simplicity, the centroid tracker calculates the centroid of the bounding box.
(3) That is, the bounding box gives the (x, y)-coordinates of the object in the image.
(4) Once our SSD produces those coordinates, the tracker computes the centroid (center) of the bounding box, in other words, the center of the object.
(5) each detected specific object is then assigned a unique ID to track the frame sequence.
from scipy.spatial import distance as dist
from collections import OrderedDict
import numpy as np

class CentroidTracker:
    def __init__(self, maxDisappeared=50, maxDistance=50):
        # initialize the next unique object ID, along with two ordered
        # dictionaries used to map a given object ID to its centroid
        # and to the number of consecutive frames it has been marked
        # as "disappeared", respectively
        self.nextObjectID = 0
        self.objects = OrderedDict()
        self.disappeared = OrderedDict()
        # store the maximum number of consecutive frames a given
        # object is allowed to be marked as "disappeared" before we
        # deregister the object from tracking
        self.maxDisappeared = maxDisappeared
        # store the maximum distance between centroids to associate
        # an object -- if the distance is larger than this maximum we
        # start marking the object as "disappeared"
        self.maxDistance = maxDistance

    def register(self, centroid):
        # when registering an object we use the next available object
        # ID to store the centroid
        self.objects[self.nextObjectID] = centroid
        self.disappeared[self.nextObjectID] = 0
        self.nextObjectID += 1

    def deregister(self, objectID):
        # to deregister an object ID we delete it from both of our
        # respective dictionaries
        del self.objects[objectID]
        del self.disappeared[objectID]

    def update(self, rects):
        # check to see if the list of input bounding box rectangles
        # is empty
        if len(rects) == 0:
            # loop over any existing tracked objects and mark them
            # as disappeared
            for objectID in list(self.disappeared.keys()):
                self.disappeared[objectID] += 1
                # if we have reached the maximum number of consecutive
                # frames a given object has been marked as missing,
                # deregister it
                if self.disappeared[objectID] > self.maxDisappeared:
                    self.deregister(objectID)
            # return early as there are no centroids or tracking
            # information to update
            return self.objects
        # initialize an array of input centroids for the current frame
        inputCentroids = np.zeros((len(rects), 2), dtype="int")
        # loop over the bounding box rectangles
        for (i, (startX, startY, endX, endY)) in enumerate(rects):
            # use the bounding box coordinates to derive the centroid
            cX = int((startX + endX) / 2.0)
            cY = int((startY + endY) / 2.0)
            inputCentroids[i] = (cX, cY)
        # if we are currently not tracking any objects, take the input
        # centroids and register each of them
        if len(self.objects) == 0:
            for i in range(0, len(inputCentroids)):
                self.register(inputCentroids[i])
        # otherwise, we are currently tracking objects so we need to
        # try to match the input centroids to existing object centroids
        else:
            # grab the set of object IDs and corresponding centroids
            objectIDs = list(self.objects.keys())
            objectCentroids = list(self.objects.values())
            # compute the distance between each pair of object
            # centroids and input centroids, respectively -- our goal
            # is to match an input centroid to an existing object
            # centroid
            D = dist.cdist(np.array(objectCentroids), inputCentroids)
            # in order to perform this matching we must (1) find the
            # smallest value in each row and then (2) sort the row
            # indexes based on their minimum values so that the row
            # with the smallest value is at the *front* of the index
            # list
            rows = D.min(axis=1).argsort()
            # next, we perform a similar process on the columns by
            # finding the smallest value in each column and then
            # sorting using the previously computed row index list
            cols = D.argmin(axis=1)[rows]
            # in order to determine if we need to update, register,
            # or deregister an object we need to keep track of which
            # of the row and column indexes we have already examined
            usedRows = set()
            usedCols = set()
            # loop over the combination of the (row, column) index
            # tuples
            for (row, col) in zip(rows, cols):
                # if we have already examined either the row or the
                # column value before, ignore it
                if row in usedRows or col in usedCols:
                    continue
                # if the distance between centroids is greater than
                # the maximum distance, do not associate the two
                # centroids to the same object
                if D[row, col] > self.maxDistance:
                    continue
                # otherwise, grab the object ID for the current row,
                # set its new centroid, and reset the disappeared
                # counter
                objectID = objectIDs[row]
                self.objects[objectID] = inputCentroids[col]
                self.disappeared[objectID] = 0
                # indicate that we have examined each of the row and
                # column indexes, respectively
                usedRows.add(row)
                usedCols.add(col)
            # compute both the row and column indexes we have NOT yet
            # examined
            unusedRows = set(range(0, D.shape[0])).difference(usedRows)
            unusedCols = set(range(0, D.shape[1])).difference(usedCols)
            # if the number of object centroids is equal to or greater
            # than the number of input centroids, we need to check and
            # see if some of these objects have potentially disappeared
            if D.shape[0] >= D.shape[1]:
                # loop over the unused row indexes
                for row in unusedRows:
                    # grab the object ID for the corresponding row
                    # index and increment the disappeared counter
                    objectID = objectIDs[row]
                    self.disappeared[objectID] += 1
                    # check to see if the number of consecutive frames
                    # the object has been marked "disappeared" warrants
                    # deregistering the object
                    if self.disappeared[objectID] > self.maxDisappeared:
                        self.deregister(objectID)
            # otherwise, if the number of input centroids is greater
            # than the number of existing object centroids, we need to
            # register each new input centroid as a trackable object
            else:
                for col in unusedCols:
                    self.register(inputCentroids[col])
        # return the set of trackable objects
        return self.objects

trackableobject.py

class TrackableObject:
    def __init__(self, objectID, centroid):
        # store the object ID, then initialize a list of centroids
        # using the current centroid
        self.objectID = objectID
        self.centroids = [centroid]
        # initialize a boolean used to indicate if the object has
        # already been counted or not
        self.counted = False

8. Running results
Open the terminal and execute the following command:
python people_counter.py --prototxt mobilenet_ssd/MobileNetSSD_deploy.prototxt \
    --model mobilenet_ssd/MobileNetSSD_deploy.caffemodel \
    --input videos/example_01.mp4 --output output/output_01.avi
Our people counter is counting the number of people who are:
Entering the department store (moving down in the frame)
Leaving the store (moving up in the frame)
At the end of the first video, you will see seven people entering and three people leaving.
In addition, examining the terminal output you will find that our people counter runs in real time, reaching up to 34 frames per second, even though we are using a deep learning object detector for more accurate person detections.
Our 34 FPS frame rate is achieved through our two-stage process: detecting people every 30 frames and then applying a faster and more efficient object tracking algorithm to all frames in between.
9. Improving our people counter application
To build our OpenCV people counter we used dlib's correlation tracker. This method is easy to use and requires very little code.
However, our implementation is a bit inefficient: in order to track multiple objects we need to create multiple instances of the correlation tracker object, and then, when we need to compute the objects' positions in subsequent frames, we have to loop over all N object trackers and grab their updated positions.
All of these calculations will occur in the main thread of execution of our script, which slows down our FPS rate.
Therefore, an easy way to improve performance is to use dlib's multi-object tracking with multiprocessing, which can increase our FPS rate by 45%! Note: OpenCV also implements multi-object tracking, but not multiprocessing (at least at the time of this writing). OpenCV's multi-object method is certainly easier to use, but without multiprocessing it doesn't help much in this case.
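A rough sketch of that idea is shown below, with each dlib correlation tracker living in its own process and communicating through queues. The function name, queue handling, and the commented-out usage are illustrative assumptions, not a drop-in implementation:

from multiprocessing import Process, Queue
import dlib

def start_tracker(box, rgb, inputQueue, outputQueue):
    # create and seed the correlation tracker inside this dedicated process
    tracker = dlib.correlation_tracker()
    tracker.start_track(rgb, dlib.rectangle(*box))
    # keep pulling frames from the input queue and pushing positions back
    while True:
        rgb = inputQueue.get()
        if rgb is None:          # sentinel value used to shut the process down
            break
        tracker.update(rgb)
        pos = tracker.get_position()
        outputQueue.put((int(pos.left()), int(pos.top()),
            int(pos.right()), int(pos.bottom())))

# illustrative usage: spawn one process per detected person, then on every
# subsequent frame put the new RGB frame on each input queue and read the
# updated bounding boxes back from the output queues
# iq, oq = Queue(), Queue()
# p = Process(target=start_tracker,
#     args=((startX, startY, endX, endY), rgb, iq, oq))
# p.daemon = True
# p.start()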
Finally, for higher tracking accuracy (but at the expense of speed without fast GPU), you can study object trackers based on deep learning, such as Deep SORT.
This concludes the article on how to build a people counter with OpenCV and Python. Thank you for reading! I hope it has given you a solid understanding of the approach.