How to implement road vehicle counting with OpenCV


Many newcomers are unclear about how to use OpenCV to count road vehicles. To help with that, this article walks through the approach in detail; readers who need it can follow along, and I hope you find it useful.

The code is as follows:

import os
import logging
import logging.handlers
import random

import numpy as np
import skvideo.io
import cv2
import matplotlib.pyplot as plt

import utils

# without this some strange errors happen
cv2.ocl.setUseOpenCL(False)
random.seed(123)

# ============================================================================
IMAGE_DIR = "./out"
VIDEO_SOURCE = "input.mp4"
SHAPE = (720, 1280)  # HxW
# ============================================================================


def train_bg_subtractor(inst, cap, num=500):
    '''
    BG subtractor needs to process some amount of frames
    before it starts giving results.
    '''
    print('Training BG Subtractor...')
    i = 0
    for frame in cap:
        inst.apply(frame, None, 0.001)
        i += 1
        if i >= num:
            return cap


def main():
    log = logging.getLogger("main")

    # creating MOG2 bg subtractor with 500 frames in cache
    # and shadow detection
    bg_subtractor = cv2.createBackgroundSubtractorMOG2(
        history=500, detectShadows=True)

    # Set up image source.
    # You can also use cv2.VideoCapture; for some reason it was not working for me.
    cap = skvideo.io.vreader(VIDEO_SOURCE)

    # skipping 500 frames to train bg subtractor
    train_bg_subtractor(bg_subtractor, cap, num=500)

    frame_number = -1
    for frame in cap:
        if not frame.any():
            log.error("Frame capture failed, stopping...")
            break

        frame_number += 1
        utils.save_frame(frame, "./out/frame_%04d.png" % frame_number)
        fg_mask = bg_subtractor.apply(frame, None, 0.001)
        utils.save_frame(fg_mask, "./out/fg_mask_%04d.png" % frame_number)


if __name__ == "__main__":
    log = utils.init_logging()

    if not os.path.exists(IMAGE_DIR):
        log.debug("Creating image directory `%s`...", IMAGE_DIR)
        os.makedirs(IMAGE_DIR)

    main()
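The script imports a small utils module that is not listed in this article. As a minimal sketch (assumed stand-ins, not the original helpers), the functions it relies on could look roughly like this:

# utils.py -- assumed stand-ins for the helpers used in this article
import logging
import math

import cv2


def init_logging():
    main_logger = logging.getLogger()
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(
        fmt='%(asctime)s %(levelname)-8s [%(name)s] %(message)s'))
    main_logger.addHandler(handler)
    main_logger.setLevel(logging.DEBUG)
    return main_logger


def save_frame(frame, file_name, flip=True):
    # skvideo delivers RGB frames; cv2.imwrite expects BGR, hence the channel flip
    if flip and frame.ndim == 3:
        frame = frame[:, :, ::-1]
    cv2.imwrite(file_name, frame)


def get_centroid(x, y, w, h):
    return (x + int(w / 2), y + int(h / 2))


def distance(p1, p2, x_weight=1.0, y_weight=1.0):
    # weighted Euclidean distance between the first two coordinates
    return math.sqrt(((p2[0] - p1[0]) ** 2) / x_weight +
                     ((p2[1] - p1[1]) ** 2) / y_weight)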

After processing, we get the following foreground image

Foreground image after removing background

We can see that there is some noise in the foreground image, which can be removed with standard filtering techniques.

Filter

For our current situation, we will need the following filter functions: Threshold, Erode, Dilate, Opening, Closing.
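For reference, here is a brief sketch of what these operations look like in OpenCV; the kernel size, the threshold value of 240, and the placeholder file name are illustrative choices, not prescribed by the article:

import cv2
import numpy as np

mask = cv2.imread("fg_mask.png", cv2.IMREAD_GRAYSCALE)  # any 8-bit foreground mask (placeholder)
kernel = np.ones((3, 3), np.uint8)

_, th = cv2.threshold(mask, 240, 255, cv2.THRESH_BINARY)     # Threshold: keep only strong foreground
eroded = cv2.erode(th, kernel, iterations=1)                 # Erode: shrink blobs, remove specks
dilated = cv2.dilate(th, kernel, iterations=1)               # Dilate: grow blobs
opened = cv2.morphologyEx(th, cv2.MORPH_OPEN, kernel)        # Opening: erosion followed by dilation
closed = cv2.morphologyEx(th, cv2.MORPH_CLOSE, kernel)       # Closing: dilation followed by erosion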

First we use "Closing" to fill gaps inside the regions, then "Opening" to remove isolated pixels, and then "Dilate" to thicken the objects. The code is as follows:

def filter_mask(img):
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2, 2))
    # Fill any small holes
    closing = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
    # Remove noise
    opening = cv2.morphologyEx(closing, cv2.MORPH_OPEN, kernel)
    # Dilate to merge adjacent blobs
    dilation = cv2.dilate(opening, kernel, iterations=2)
    # threshold to drop weak (shadow) pixels
    dilation[dilation < 240] = 0
    return dilation

The processed foreground is shown below.

Object detection using contours

We will use the cv2.findContours function to detect contours. The parameters we can choose are:

cv2.RETR_EXTERNAL - retrieve only the external contours.
cv2.CHAIN_APPROX_TC89_L1 - use the Teh-Chin chain approximation algorithm (faster).

The code is as follows:

def get_centroid(x, y, w, h):
    x1 = int(w / 2)
    y1 = int(h / 2)
    cx = x + x1
    cy = y + y1
    return (cx, cy)


def detect_vehicles(fg_mask, min_contour_width=35, min_contour_height=35):
    matches = []
    # finding external contours
    # ([-2:] keeps this compatible with both OpenCV 3.x and 4.x return values)
    contours, hierarchy = cv2.findContours(
        fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_L1)[-2:]
    # filtering by width and height
    for (i, contour) in enumerate(contours):
        (x, y, w, h) = cv2.boundingRect(contour)
        contour_valid = (w >= min_contour_width) and (h >= min_contour_height)
        if not contour_valid:
            continue
        # getting center of the bounding box
        centroid = get_centroid(x, y, w, h)
        matches.append(((x, y, w, h), centroid))
    return matches
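As a quick sanity check (a sketch, not part of the original script), a single frame can be pushed through the trained background subtractor, filter_mask and detect_vehicles, and the detections drawn back onto it; the output file name is arbitrary:

frame = next(skvideo.io.vreader(VIDEO_SOURCE)).copy()   # grab a single RGB frame
fg_mask = bg_subtractor.apply(frame, None, 0.001)       # raw foreground mask
fg_mask[fg_mask < 240] = 0                              # drop weak/shadow pixels
fg_mask = filter_mask(fg_mask)
for (x, y, w, h), centroid in detect_vehicles(fg_mask):
    cv2.rectangle(frame, (x, y), (x + w - 1, y + h - 1), (0, 255, 0), 2)
    cv2.circle(frame, centroid, 2, (0, 0, 255), -1)
utils.save_frame(frame, "./out/debug_detections.png")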

Establish a data processing framework

We all know that in ML and CV there is no single algorithm that can handle every problem. Even if such an algorithm existed, we would not use it, because it is hard to make effective at scale. A well-known example: Netflix ran a $1 million prize for the best movie recommendation algorithm. A team did win it, but their solution could not run at the scale Netflix needed, so it was of little use to the company, even though the prize was still paid out.

Next, let's build a framework for the current problem that makes processing the data more convenient.

class PipelineRunner(object):
    '''
    Very simple pipeline.

    Just run passed processors in order, passing context from one to another.

    You can also set the log level for processors.
    '''

    def __init__(self, pipeline=None, log_level=logging.DEBUG):
        self.pipeline = pipeline or []
        self.context = {}
        self.log = logging.getLogger(self.__class__.__name__)
        self.log.setLevel(log_level)
        self.log_level = log_level
        self.set_log_level()

    def set_context(self, data):
        self.context = data

    def add(self, processor):
        if not isinstance(processor, PipelineProcessor):
            raise Exception(
                'Processor should be an instance of PipelineProcessor.')
        processor.log.setLevel(self.log_level)
        self.pipeline.append(processor)

    def remove(self, name):
        for i, p in enumerate(self.pipeline):
            if p.__class__.__name__ == name:
                del self.pipeline[i]
                return True
        return False

    def set_log_level(self):
        for p in self.pipeline:
            p.log.setLevel(self.log_level)

    def run(self):
        for p in self.pipeline:
            self.context = p(self.context)

        self.log.debug("Frame #%d processed.", self.context['frame_number'])

        return self.context


class PipelineProcessor(object):
    '''
    Base class for processors.
    '''

    def __init__(self):
        self.log = logging.getLogger(self.__class__.__name__)

First, we define the list of processors and the order in which they run; each processor does part of the work, and executing them in order produces the final result.
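To make the mechanics concrete, here is a tiny sketch with a throwaway processor (not one from this article) showing how the context dictionary flows through the runner:

class AddStamp(PipelineProcessor):
    '''Toy processor: stamps the context with the frame number it saw.'''

    def __call__(self, context):
        context['stamp'] = 'processed frame #%d' % context['frame_number']
        return context


runner = PipelineRunner(pipeline=[AddStamp()], log_level=logging.INFO)
runner.set_context({'frame': None, 'frame_number': 0})
print(runner.run()['stamp'])   # -> processed frame #0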

We first create a contour detection processor, which simply combines the earlier background subtraction, filtering, and contour detection steps. The code is as follows:

class ContourDetection(PipelineProcessor):
    '''
    Detecting moving objects.

    Purpose of this processor is to subtract background, get moving objects
    and detect them with the cv2.findContours method, and then filter them
    by width and height.

    bg_subtractor - background subtractor instance.
    min_contour_width - min bounding rectangle width.
    min_contour_height - min bounding rectangle height.
    save_image - if True will save detected objects mask to file.
    image_dir - where to save images (must exist).
    '''

    def __init__(self, bg_subtractor, min_contour_width=35,
                 min_contour_height=35, save_image=False, image_dir='images'):
        super(ContourDetection, self).__init__()

        self.bg_subtractor = bg_subtractor
        self.min_contour_width = min_contour_width
        self.min_contour_height = min_contour_height
        self.save_image = save_image
        self.image_dir = image_dir

    def filter_mask(self, img, a=None):
        '''
        These filters are hand-picked just based on visual tests.
        '''
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2, 2))
        # Fill any small holes
        closing = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
        # Remove noise
        opening = cv2.morphologyEx(closing, cv2.MORPH_OPEN, kernel)
        # Dilate to merge adjacent blobs
        dilation = cv2.dilate(opening, kernel, iterations=2)
        return dilation

    def detect_vehicles(self, fg_mask, context):
        matches = []
        # finding external contours
        # ([-2:] keeps this compatible with both OpenCV 3.x and 4.x)
        contours, hierarchy = cv2.findContours(
            fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_L1)[-2:]
        for (i, contour) in enumerate(contours):
            (x, y, w, h) = cv2.boundingRect(contour)
            contour_valid = (w >= self.min_contour_width) and (
                h >= self.min_contour_height)
            if not contour_valid:
                continue
            centroid = utils.get_centroid(x, y, w, h)
            matches.append(((x, y, w, h), centroid))
        return matches

    def __call__(self, context):
        frame = context['frame'].copy()
        frame_number = context['frame_number']

        fg_mask = self.bg_subtractor.apply(frame, None, 0.001)
        # just thresholding values (drops shadow pixels)
        fg_mask[fg_mask < 240] = 0
        fg_mask = self.filter_mask(fg_mask, frame_number)

        if self.save_image:
            utils.save_frame(fg_mask, self.image_dir +
                             "/mask_%04d.png" % frame_number, flip=False)

        context['objects'] = self.detect_vehicles(fg_mask, context)
        context['fg_mask'] = fg_mask

        return context

Now let's create a processor that links the objects detected on different frames, builds paths from them, and counts the vehicles that reach the exit zone. The code is as follows:

class VehicleCounter(PipelineProcessor):
    '''
    Counting vehicles that entered the exit zone.

    Purpose of this class: based on detected objects and a local cache,
    create object paths and count those that entered the exit zone
    defined by the exit masks.

    exit_masks - list of the exit masks.
    path_size - max number of points in a path.
    max_dst - max distance between two points.
    '''

    def __init__(self, exit_masks=[], path_size=10, max_dst=30,
                 x_weight=1.0, y_weight=1.0):
        super(VehicleCounter, self).__init__()

        self.exit_masks = exit_masks

        self.vehicle_count = 0
        self.path_size = path_size
        self.pathes = []
        self.max_dst = max_dst
        self.x_weight = x_weight
        self.y_weight = y_weight

    def check_exit(self, point):
        for exit_mask in self.exit_masks:
            try:
                if exit_mask[point[1]][point[0]] == 255:
                    return True
            except:
                return True
        return False

    def __call__(self, context):
        objects = context['objects']
        context['exit_masks'] = self.exit_masks
        context['pathes'] = self.pathes
        context['vehicle_count'] = self.vehicle_count
        if not objects:
            return context

        points = np.array(objects)[:, 0:2]
        points = points.tolist()

        # add new points if pathes is empty
        if not self.pathes:
            for match in points:
                self.pathes.append([match])

        else:
            # link new points with old pathes based on minimum distance
            # between points
            new_pathes = []

            for path in self.pathes:
                _min = 999999
                _match = None
                for p in points:
                    if len(path) == 1:
                        # distance from last point to current
                        d = utils.distance(p[0], path[-1][0])
                    else:
                        # based on 2 prev points predict next point and
                        # calculate distance from predicted next point
                        # to current
                        xn = 2 * path[-1][0][0] - path[-2][0][0]
                        yn = 2 * path[-1][0][1] - path[-2][0][1]
                        d = utils.distance(
                            p[0], (xn, yn),
                            x_weight=self.x_weight,
                            y_weight=self.y_weight)

                    if d < _min:
                        _min = d
                        _match = p

On the first frame, every detected point starts a new path. After that, if the length of a path is 1, we take the distance from its last point to each candidate point directly. If the length is greater than 1, then the last two points in the path are used: the new point is predicted on the same line, and the minimum distance between that predicted point and the current point is found. For example (made-up coordinates), if the last two centroids on a path are (140, 120) and (150, 124), the predicted next point is (2*150 - 140, 2*124 - 120) = (160, 128), and candidates are compared against (160, 128).

The point with the minimum distance is appended to the end of the current path and removed from the candidate list. Any points left after matching are added as new paths. In the process, we also limit the number of points in each path. The matching and counting code continues as follows:

                if _match and _min <= self.max_dst:
                    points.remove(_match)
                    path.append(_match)
                    new_pathes.append(path)

                # do not drop the path if the current frame has no match
                if _match is None:
                    new_pathes.append(path)

            self.pathes = new_pathes

            # add remaining points as new pathes
            if len(points):
                for p in points:
                    # skip points that are already inside the exit zone
                    if self.check_exit(p[1]):
                        continue
                    self.pathes.append([p])

        # save only the last N points of each path
        for i, _ in enumerate(self.pathes):
            self.pathes[i] = self.pathes[i][-self.path_size:]

        # count a vehicle when the previous point of its path is outside
        # the exit zone and the current point is inside it
        for path in self.pathes:
            d = path[-2:]
            if (
                # need at least two points to count
                len(d) >= 2 and
                # prev point not in exit zone
                not self.check_exit(d[0][1]) and
                # current point in exit zone
                self.check_exit(d[1][1]) and
                # path len is bigger than min
                self.path_size <= len(path)
            ):
                self.vehicle_count += 1

        context['pathes'] = self.pathes
        context['objects'] = objects
        context['vehicle_count'] = self.vehicle_count

        return context
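VehicleCounter expects each exit mask to hold the value 255 inside the exit zone. As a minimal sketch of how everything could be wired together (the exit polygon coordinates and the y_weight value here are illustrative assumptions, not taken from this article), the mask can be drawn with cv2.fillPoly and the processors handed to PipelineRunner:

# illustrative exit polygon (image coordinates), not from the article
EXIT_PTS = np.array([[[732, 720], [732, 590], [1280, 500], [1280, 720]]])

exit_mask = np.zeros(SHAPE, dtype='uint8')
cv2.fillPoly(exit_mask, EXIT_PTS, 255)     # 255 inside the exit zone

pipeline = PipelineRunner(pipeline=[
    ContourDetection(bg_subtractor=bg_subtractor,
                     save_image=True, image_dir=IMAGE_DIR),
    VehicleCounter(exit_masks=[exit_mask], y_weight=2.0),
], log_level=logging.DEBUG)

frame_number = -1
for frame in cap:
    if not frame.any():
        break
    frame_number += 1
    pipeline.set_context({'frame': frame, 'frame_number': frame_number})
    context = pipeline.run()

print("Total vehicles counted:", context['vehicle_count'])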
