This article shows how to build a squat detector with OpenCV and Tensorflow.
During the quarantine period our opportunities for sport are very limited, which is not good. When exercising at home we have to stay highly focused just to count how much we do each day, so we would like an automatic system to do the counting for us. A squat has clear phases and large changes in body position, so counting squats should be relatively simple.
Let's try to build it together.
Data acquisition
A Raspberry Pi with a camera makes it very convenient to capture pictures. After shooting the images, we can use OpenCV to write them to the file system.
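The article does not include the capture code itself; here is a minimal sketch of that step, where the device index, the frame limit, and the output path are my assumptions:

import cv2 as cv

# Grab frames from the camera and write them to disk.
cap = cv.VideoCapture(0)
for i in range(300):
    ok, frame = cap.read()
    if not ok:
        break
    cv.imwrite(f"frames/{i:05d}.png", frame)
cap.release()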
Motion recognition
Initially we intended to use image segmentation to extract the person. But image segmentation is a very expensive operation, especially with the Raspberry Pi's limited resources.
In addition, segmentation ignores one fact: what we have is a sequence of image frames, not a single picture. That sequence has useful properties, and we will exploit them later.
So we started with OpenCV's background subtraction, which gives reliable results.
Background subtraction
First, create a background subtractor:
backSub = cv.createBackgroundSubtractorMOG2()
Add image frames to it:
mask = backSub.apply(frame)
Finally, we can get a picture with the outline of the body:
Then dilate the mask to make the outline stand out:
mask = cv.dilate(mask, None, iterations=3)
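Putting these calls together, a pass over the captured frames might look like the following sketch (the file layout is my assumption):

import cv2 as cv

# Run background subtraction over all captured frames and save the masks.
backSub = cv.createBackgroundSubtractorMOG2()
for i in range(300):
    frame = cv.imread(f"frames/{i:05d}.png")
    if frame is None:
        break
    mask = backSub.apply(frame)                  # foreground mask for this frame
    mask = cv.dilate(mask, None, iterations=3)   # thicken the silhouette
    cv.imwrite(f"masks/{i:05d}.png", mask)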
Applying this to all image frames gives us the pose in each one. We then classify each pose as standing, squatting, or nothing.
Next, we need to extract the person from the image, and OpenCV can find the contours for us:
cnts, _ = cv.findContours(img, cv.RETR_CCOMP, cv.CHAIN_APPROX_SIMPLE)
This works more or less for extracting the person's largest contour, but unfortunately the result is not stable: the largest detected contour may, for example, cover only the body and miss the feet.
Still, having a series of images helps. We usually do squats in the same place, so we can assume that all movement happens inside some stable area. To find it, we iteratively build a bounding rectangle, extending it with the largest contour's bounding rectangle whenever necessary (see the sketch after the example below).
Here's an example:
The largest contour is red.
The contour's bounding rectangle is blue.
The bounding rectangle of the whole area is green.
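Here is a sketch of that accumulation, reading the dilated masks saved in the previous step; the union helper is my assumption, not the author's exact code:

import cv2 as cv

def union(a, b):
    # Smallest rectangle covering both (x, y, w, h) rectangles.
    x = min(a[0], b[0])
    y = min(a[1], b[1])
    w = max(a[0] + a[2], b[0] + b[2]) - x
    h = max(a[1] + a[3], b[1] + b[3]) - y
    return (x, y, w, h)

area = None
for i in range(300):
    mask = cv.imread(f"masks/{i:05d}.png", cv.IMREAD_GRAYSCALE)
    if mask is None:
        break
    cnts, _ = cv.findContours(mask, cv.RETR_CCOMP, cv.CHAIN_APPROX_SIMPLE)
    if not cnts:
        continue
    rect = cv.boundingRect(max(cnts, key=cv.contourArea))  # largest contour
    area = rect if area is None else union(area, rect)     # grow the stable area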
With this contour extraction and bounding-rectangle accumulation in place, we are well prepared for further processing.
Classification
Next we extract the bounding rectangle from each image and resize the crop to a 64x64 square.
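A sketch of that crop-and-resize step, using the accumulated area rectangle from above (the helper name is my assumption):

import cv2 as cv

def to_classifier_input(mask, area, size=64):
    # Crop the stable area from the mask and scale it to the 64x64
    # square the classifier expects. `area` is an (x, y, w, h) rectangle.
    x, y, w, h = area
    roi = mask[y:y + h, x:x + w]
    return cv.resize(roi, (size, size))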
The following masks are used as classifier input:
Standing posture:
Squat position:
Next we will use Keras and Tensorflow for classification.
Initially we used the classic LeNet-5 model, which worked well. Then, after reading some articles about LeNet-5 variants, we decided to try simplifying the architecture.
It turned out that the accuracy of the simplified CNN is almost the same in our case:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Convolution2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.optimizers import SGD

model = Sequential([
    Convolution2D(8, (5, 5), activation='relu', input_shape=input_shape),
    MaxPooling2D(),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(3, activation='softmax'),
])
model.compile(loss="categorical_crossentropy", optimizer=SGD(lr=0.01), metrics=["accuracy"])
Accuracy is 86% after 10 epochs, 94% after 20, and 96% after 30. Training longer may actually reduce accuracy because of overfitting, so next we apply the model in real life.
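The article omits the training call itself; a minimal sketch, assuming the masks are already loaded as an (n, 64, 64, 1) float array X with integer labels y (the names, label encoding, and batch size are my assumptions):

from tensorflow.keras.utils import to_categorical

# y holds integer class labels for standing / squatting / nothing.
model.fit(X, to_categorical(y, num_classes=3), epochs=30, batch_size=32)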
Model application
We will run it on the Raspberry Pi.
Load the model:
import tensorflow as tf

with open(MODEL_JSON, 'r') as f:
    model_data = f.read()

model = tf.keras.models.model_from_json(model_data)
model.load_weights(MODEL_H5)
graph = tf.get_default_graph()
And use it to classify the squat masks:
img = cv.imread(path + f, cv.IMREAD_GRAYSCALE)
img = np.reshape(img, [1, 64, 64, 1])
with graph.as_default():
    c = model.predict_classes(img)
return c[0] if c else None
On the Raspberry Pi, one classification call on a 64x64 input takes about 60-70 milliseconds, so it is almost real time.
Finally, let's integrate all of the above into one application with the following endpoints (a minimal server sketch follows the list):
GET / - the application page (more on this below)
GET /status - get the current state, the squat count, and the frame count
POST /start - start the exercise
POST /stop - finish the exercise
GET /stream - the video stream from the camera
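The article does not say which web framework it uses; here is a minimal Flask sketch of the endpoint surface, with the in-memory state dictionary as an assumption (the page and the video stream are left out):

from flask import Flask, jsonify

app = Flask(__name__)
# Assumed state; in the real app these would be updated by the
# capture-and-classification loop described above.
state = {"running": False, "squats": 0, "frames": 0}

@app.route("/status")
def status():
    return jsonify(state)      # current state, squat count, frame count

@app.route("/start", methods=["POST"])
def start():
    state["running"] = True    # begin counting squats
    return "", 204

@app.route("/stop", methods=["POST"])
def stop():
    state["running"] = False   # finish the exercise
    return "", 204

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)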
That is how a squat detector can be built with OpenCV and Tensorflow.