This article introduces how to implement mask recognition with OpenCV. Many people run into this kind of problem in real projects, so let me walk you through how to handle these situations. I hope you will read it carefully and get something out of it!
Yesterday I came across an open-source project on GitHub that uses deep learning to detect whether you are wearing a mask, which looked quite fun, so I downloaded the trained model and planned to run it with OpenCV's dnn module. However, after forward propagation the inference output prob is a 1x5972x2 Mat, which is unlike any inference result I had encountered before. After trying several ways to decode it I still could not interpret it correctly, and searching online turned up nothing relevant; few people seem to run this model with OpenCV, most run it with a deep-learning framework instead. So for now I have put that model aside and will work out how to decode its output another time.
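For context, this is roughly how I tried to run the downloaded model with the dnn module. It is only a sketch: the model file name, the scaling and the input blob size below are placeholders of mine, not the project's actual values.

// Sketch only: "face_mask_model.pb", the 1/255 scale and the 260x260 input size are
// assumptions, not taken from the open-source project.
dnn::Net net = dnn::readNet("face_mask_model.pb");   // hypothetical model file
Mat image = imread("test.jpg");
Mat blob = dnn::blobFromImage(image, 1.0 / 255.0, Size(260, 260), Scalar(), true, false);
net.setInput(blob);
Mat prob = net.forward();
// here prob came back with dims 1 x 5972 x 2, which is what I could not decode
cout << prob.size[0] << " x " << prob.size[1] << " x " << prob.size[2] << endl;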
Still, I wanted to test whether I was wearing a mask, partly out of curiosity and partly because failing to decode the open-source project's pretrained model annoyed me enough to try building one myself. Since I am not deeply versed in deep learning, my idea was to use OpenCV's dnn module for face detection and localization, and then OpenCV's ml module to classify whether a mask is worn.
So the first step is to train the classifier we need. I chose the SVM classifier in OpenCV's ml module to train the mask-recognition classifier. The code for the training part is as follows:
string positive_path = "D:\\opencv_c++\\opencv_tutorial\\data\\test\\positive\\";
string negative_path = "D:\\opencv_c++\\opencv_tutorial\\data\\test\\negative\\";
vector<string> positive_images_str, negative_images_str;
glob(positive_path, positive_images_str);
glob(negative_path, negative_images_str);
vector<Mat> positive_images, negative_images;
for (int i = 0; i < positive_images_str.size(); i++)
{
    Mat positive_image = imread(positive_images_str[i]);
    positive_images.push_back(positive_image);
}
for (int j = 0; j < negative_images_str.size(); j++)
{
    Mat negative_image = imread(negative_images_str[j]);
    negative_images.push_back(negative_image);
}
string savePath = "face_mask_detection.xml";
trainSVM(positive_images, negative_images, savePath);

First, all of the training images are read in, including the positive samples (faces with a mask) and the negative samples (faces without a mask). The positive and negative sample sets are then each packed into a vector and passed to the training function trainSVM(), which is defined in the header file "face_mask.h".

During training, the images are not flattened and fed in directly. Instead, HOG features are extracted from each sample image, a feature descriptor is computed for each HOG feature, and the SVM classifier is trained on those descriptors. Note that the HOG features are not extracted from the complete sample image: the face region is extracted first, and HOG feature extraction and description are performed on that face region. At the same time, the positive and negative sample sets have to be labeled, with positive samples labeled 1 and negative samples labeled -1. The code is as follows:

for (int i = 0; i < positive_num; i++)
{
    Mat positive_face;
    Rect positive_faceBox;
    if (faceDetected(positive_images[i], positive_face, positive_faceBox))
    {
        resize(positive_face, positive_face, Size(64, 128));
        Mat gray;
        cvtColor(positive_face, gray, COLOR_BGR2GRAY);
        vector<float> descriptor;
        hog_train->compute(gray, descriptor);
        train_descriptors.push_back(descriptor);
        labels.push_back(1);
    }
}
for (int j = 0; j < negative_num; j++)
{
    Mat negative_face;
    Rect negative_faceBox;
    if (faceDetected(negative_images[j], negative_face, negative_faceBox))
    {
        resize(negative_face, negative_face, Size(64, 128));
        Mat gray;
        cvtColor(negative_face, gray, COLOR_BGR2GRAY);
        vector<float> descriptor;
        hog_train->compute(gray, descriptor);
        train_descriptors.push_back(descriptor);
        labels.push_back(-1);
    }
}
int width = train_descriptors[0].size();
int height = train_descriptors.size();
Mat train_data = Mat::zeros(Size(width, height), CV_32F);
for (int r = 0; r < height; r++)
{
    for (int c = 0; c < width; c++)
    {
        train_data.at<float>(r, c) = train_descriptors[r][c];
    }
}
auto train_svm = ml::SVM::create();
train_svm->trainAuto(train_data, ml::ROW_SAMPLE, labels);
train_svm->save(path);
train_svm->clear();
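The HOG descriptor hog_train and the containers train_descriptors and labels are declared elsewhere in trainSVM() and are not shown above. As a minimal sketch, assuming the 64x128 window implied by the resize() calls and OpenCV's default HOG block/cell layout, they could be set up like this:

// Hypothetical setup for the objects used in the training loops above (not part of the
// original listing). A 64x128 window with 16x16 blocks, 8x8 stride, 8x8 cells and
// 9 orientation bins yields a 3780-dimensional descriptor per face image.
Ptr<HOGDescriptor> hog_train = makePtr<HOGDescriptor>(
    Size(64, 128), Size(16, 16), Size(8, 8), Size(8, 8), 9);
vector<vector<float>> train_descriptors;    // one HOG descriptor per detected training face
vector<int> labels;                         // 1 = wearing a mask, -1 = not wearing a mask
int positive_num = positive_images.size();  // sample counts used by the loops above
int negative_num = negative_images.size();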
The face-extraction function faceDetected() is defined in the header file "face.h"; it uses the opencv_face_detector_uint8.pb face-detection model.
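faceDetected() itself is not listed here. A plausible sketch, assuming the usual pairing of opencv_face_detector_uint8.pb with an opencv_face_detector.pbtxt config and a 0.5 confidence threshold (both of which are my assumptions), keeping only the highest-confidence face:

// Sketch of a possible faceDetected(): crops the highest-confidence detection.
// The .pbtxt config name and the 0.5 threshold are assumptions.
bool faceDetected(Mat &image, Mat &face, Rect &faceBox)
{
    static dnn::Net faceNet = dnn::readNetFromTensorflow(
        "opencv_face_detector_uint8.pb", "opencv_face_detector.pbtxt");
    Mat blob = dnn::blobFromImage(image, 1.0, Size(300, 300),
                                  Scalar(104, 177, 123), false, false);
    faceNet.setInput(blob);
    Mat detection = faceNet.forward();
    // detection is 1x1xNx7: [image_id, class_id, confidence, x1, y1, x2, y2]
    Mat detMat(detection.size[2], detection.size[3], CV_32F, detection.ptr<float>());
    float bestConf = 0.f;
    for (int i = 0; i < detMat.rows; i++)
    {
        float conf = detMat.at<float>(i, 2);
        if (conf > 0.5f && conf > bestConf)
        {
            bestConf = conf;
            int x1 = int(detMat.at<float>(i, 3) * image.cols);
            int y1 = int(detMat.at<float>(i, 4) * image.rows);
            int x2 = int(detMat.at<float>(i, 5) * image.cols);
            int y2 = int(detMat.at<float>(i, 6) * image.rows);
            faceBox = Rect(Point(x1, y1), Point(x2, y2)) & Rect(0, 0, image.cols, image.rows);
        }
    }
    if (bestConf == 0.f || faceBox.area() == 0)
        return false;
    face = image(faceBox).clone();
    return true;
}

Keeping only the single best detection also explains the limitation mentioned later: when several faces appear in one image, only the most confident one gets checked.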
At this point, the SVM classifier for detecting whether a mask is worn has been trained, and we obtain the model file face_mask_detection.xml.
Next, we load the XML file and run detection on an input image. The detection function FaceMaskDetect() is defined in the "face_mask.h" header file.
auto detecModel = ml::SVM::load("face_mask_detection.xml");
Mat test_image = imread("D:/BaiduNetdiskDownload/mask detection dataset/val/test_00004577.jpg");
FaceMaskDetect(test_image, detecModel);
imshow("test_image", test_image);
waitKey(0);
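FaceMaskDetect() is also only referenced, not listed. Given the training code above, a sketch of what it presumably does (detect the face, compute the same 64x128 HOG descriptor, predict with the SVM, and draw the result) might look like this; the label strings and colors follow the screenshots described below, everything else just mirrors the training pipeline:

// Sketch of a possible FaceMaskDetect(); the signature, font and text position are assumptions.
void FaceMaskDetect(Mat &image, Ptr<ml::SVM> &svm)
{
    Mat face;
    Rect faceBox;
    if (!faceDetected(image, face, faceBox))
        return;
    resize(face, face, Size(64, 128));
    Mat gray;
    cvtColor(face, gray, COLOR_BGR2GRAY);
    HOGDescriptor hog(Size(64, 128), Size(16, 16), Size(8, 8), Size(8, 8), 9);
    vector<float> descriptor;
    hog.compute(gray, descriptor);
    Mat sample(1, (int)descriptor.size(), CV_32F, descriptor.data());
    float response = svm->predict(sample);
    if (response > 0)   // trained with label 1 = wearing a mask
    {
        rectangle(image, faceBox, Scalar(0, 255, 0), 2);
        putText(image, "Face Mask", faceBox.tl(), FONT_HERSHEY_SIMPLEX, 0.8, Scalar(0, 255, 0), 2);
    }
    else                // label -1 = no mask
    {
        rectangle(image, faceBox, Scalar(0, 0, 255), 2);
        putText(image, "Not Face Mask", faceBox.tl(), FONT_HERSHEY_SIMPLEX, 0.8, Scalar(0, 0, 255), 2);
    }
}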
At this point the whole pipeline from training to testing is in place. Let's look at how it performs:
First, an image without a mask. If no mask is detected, the face is framed in red and labeled "Not Face Mask" in red:
If a mask is being worn, the face is framed in green and labeled "Face Mask":
In terms of results, the test images are not from the training set, and the recognition rate on photos containing a single face is acceptable, though certainly not as accurate as the neural-network model from the open-source project. Also, my positive-to-negative sample ratio is about 1:2, with a little more than 400 training images in total, which is nothing compared with the more than 8,000 images in the open-source project's training set.
However, because the face-detection step does not handle multiple faces in the same image, when an image contains several faces only the one with the highest confidence is checked for a mask, so this part still needs further optimization.
Of course, detecting a single image is not very interesting. We can also hook up a camera for real-time detection. The demo code is as follows:
VideoCapture capture;
capture.open(0);
if (!capture.isOpened())
{
    cout << "could not open the camera" << endl;
    return -1;
}
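The capture loop itself is cut off in the snippet above. A minimal completion, assuming the same detecModel loaded earlier, a mirrored preview, and Esc to quit (all my assumptions):

// Minimal completion of the camera demo; window name, flip and exit key are assumptions.
Mat frame;
while (capture.read(frame))
{
    flip(frame, frame, 1);               // mirror the webcam image
    FaceMaskDetect(frame, detecModel);   // same classifier as in the single-image test
    imshow("face mask detection", frame);
    char key = (char)waitKey(1);
    if (key == 27)                       // Esc to quit
        break;
}
capture.release();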