An Example Analysis of a Raspberry Pi Smart Car Using a Camera and OpenCV for Object Tracking

2025-03-28 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

Many beginners are unsure how to combine a Raspberry Pi smart car with a camera and OpenCV for object tracking. This article summarizes the problem and walks through a working solution; I hope it helps you solve it yourself.

After several days of collecting material, I found that this can be done with OpenCV and Python. So today I will show you how to install OpenCV 3.0 and how to use it to make my car track an object.

Admittedly, several of my previous OpenCV installations died at the cmake compilation step, but a problem is there to be solved. After working through several posts I finally found a reliable one, and it took a whole afternoon to install successfully. The installation tutorial is too long to repeat here (and could easily be mistaken for plagiarized headline content), so I have posted it in the comments section. With OpenCV installed, the next question was how to achieve object tracking. I searched the project listings on GitHub with the keywords "track car raspberry" and found one built on a Raspberry Pi plus an Arduino; fortunately, the Arduino was only used to drive the stepper motors. I ported the motor-control part over to the Raspberry Pi's GPIO, and after a day of debugging, a modified Raspberry Pi object-tracking car was running. That said, this is only a prototype: the steering is not sensitive enough and the tracking still needs further optimization. My own skills are limited, so I hope you will study and improve it with me.

Now let's talk about the detect.py source code for object tracking. How does detect.py track an object? First, it captures a single frame and determines the object to track. Once the target is locked, the car keeps following it: forward, backward, left, right and stop actions are defined in the source, and as the locked object moves, the car responds according to the object's position in the frame, i.e. it follows the object's movement.
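The steering decision described above reduces to a pure function: a central dead zone where the car drives straight, side zones that trigger a turn, and the frame border where the car stops. Here is a minimal sketch of that logic (the steer name and its return values are my own, not from detect.py; which side maps to which turn depends on your motor wiring and camera orientation):

```python
def steer(position_x, width=320, tracking_width=40):
    """Map the tracked window's left-edge x position to a driving action."""
    left_edge = (width - tracking_width) // 2   # 140 with the defaults
    right_edge = left_edge + tracking_width     # 180

    if position_x <= 0 or position_x >= width:
        return 'stop'      # object lost at the frame border
    if position_x < left_edge:
        return 'right'     # object left of the dead zone
    if position_x > right_edge:
        return 'left'      # object right of the dead zone
    return 'forward'       # object centred: drive straight

print(steer(160))  # forward
print(steer(0))    # stop
```

In detect.py this same zoning is implemented inline in check_for_direction(), which calls the GPIO motor functions directly instead of returning a label.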

Here is the detect.py source code:

# import some necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
import cv2
import serial
import syslog
import time
import numpy as np
import RPi.GPIO as GPIO

# define the size of the frame to be captured
width = 320
height = 240
tracking_width = 40
tracking_height = 40
auto_mode = 0

# define the car's forward, backward, left, right and stop functions
def t_stop():
    GPIO.output(11, False)
    GPIO.output(12, False)
    GPIO.output(15, False)
    GPIO.output(16, False)

def t_up():
    GPIO.output(11, True)
    GPIO.output(12, False)
    GPIO.output(15, True)
    GPIO.output(16, False)
    time.sleep(0.05)
    GPIO.output(11, False)
    GPIO.output(12, False)
    GPIO.output(15, False)
    GPIO.output(16, False)
    time.sleep(0.3)

def t_down():
    GPIO.output(11, False)
    GPIO.output(12, True)
    GPIO.output(15, False)
    GPIO.output(16, True)

def t_left():
    GPIO.output(11, False)
    GPIO.output(12, True)
    GPIO.output(15, True)
    GPIO.output(16, False)
    time.sleep(0.05)
    GPIO.output(11, False)
    GPIO.output(12, False)
    GPIO.output(15, False)
    GPIO.output(16, False)
    time.sleep(0.3)

def t_right():
    GPIO.output(11, True)
    GPIO.output(12, False)
    GPIO.output(15, False)
    GPIO.output(16, True)
    time.sleep(0.05)
    GPIO.output(11, False)
    GPIO.output(12, False)
    GPIO.output(15, False)
    GPIO.output(16, False)
    time.sleep(0.3)

def t_open():
    GPIO.setup(22, GPIO.OUT)
    GPIO.output(22, GPIO.LOW)

def t_close():
    GPIO.setup(22, GPIO.IN)

def check_for_direction(position_x):
    GPIO.setmode(GPIO.BOARD)
    GPIO.setwarnings(False)
    GPIO.setup(11, GPIO.OUT)
    GPIO.setup(12, GPIO.OUT)
    GPIO.setup(15, GPIO.OUT)
    GPIO.setup(16, GPIO.OUT)
    GPIO.setup(38, GPIO.OUT)
    if position_x == 0 or position_x == width:
        print('out of bound')
        t_stop()
    elif position_x < (width - tracking_width) // 2:
        print('move right')
        t_right()
    elif position_x > (width - tracking_width) // 2 + tracking_width:
        print('move left')
        t_left()
    else:
        # print('move front')
        t_up()

# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (width, height)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(width, height))
rawCapture2 = PiRGBArray(camera, size=(width, height))

# allow the camera to warm up
time.sleep(0.1)

# set the ROI (Region of Interest): a tracking_width x tracking_height
# window centred in the frame
c = width // 2 - tracking_width // 2
r = height // 2 - tracking_height // 2
w = tracking_width
h = tracking_height
track_window = (c, r, w, h)

# capture a single frame to initialize tracking
camera.capture(rawCapture2, format='bgr')

# create mask and normalized histogram of the ROI's hue channel
roi = rawCapture2.array[r:r + h, c:c + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, np.array((0., 30., 32.)), np.array((180., 255., 255.)))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 80, 1)

# capture frames from the camera
for frame in camera.capture_continuous(rawCapture, format='bgr', use_video_port=True):
    # grab the raw NumPy array representing the image
    image = frame.array

    # back-project the ROI histogram onto the new frame and
    # let meanShift move the window onto the tracked object
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    dst = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    ret, track_window = cv2.meanShift(dst, track_window, term_crit)
    x, y, w, h = track_window
    cv2.rectangle(image, (x, y), (x + w, y + h), 255, 2)
    cv2.putText(image, 'Tracked', (x - 25, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)

    # show the frame
    cv2.imshow("Raspberry Pi RC Car", image)
    key = cv2.waitKey(1) & 0xFF

    # steer the car according to the object's x position
    check_for_direction(x)
    time.sleep(0.01)

    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)
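Under the hood, cv2.meanShift repeats a simple step: compute the centroid of the back-projection weights inside the current window, re-centre the window on that centroid, and stop when the window no longer moves. Here is a toy one-dimensional version of that loop (mean_shift_1d and the synthetic data are my own illustration, not OpenCV's implementation):

```python
import numpy as np

def mean_shift_1d(weights, start, win, iters=20):
    """Slide a window of width `win` over a 1-D likelihood profile
    toward its local mode; returns the final left edge."""
    x = start
    for _ in range(iters):
        w = weights[x:x + win]
        if w.sum() == 0:
            break                       # no evidence under the window
        centroid = int(round(np.average(np.arange(x, x + win), weights=w)))
        new_x = centroid - win // 2     # re-centre the window on the centroid
        if new_x == x:
            break                       # converged
        x = new_x
    return x

# likelihood peak over pixels 200-239, window starts off-centre at 170
bp = np.zeros(320)
bp[200:240] = 1.0
print(mean_shift_1d(bp, 170, 40))  # settles within a pixel or two of 200
```

In detect.py, calcBackProject produces the 2-D analogue of this weight profile from the ROI's hue histogram, which is why the tracked object needs a hue that stands out from the background.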

After reading the above, have you mastered how a Raspberry Pi smart car combined with a camera and OpenCV can perform object tracking? If you want to learn more skills or dig deeper, you are welcome to follow this channel. Thank you for reading!
