This article mainly introduces how to draw real-time face frames in Qt. Many people have doubts about how to do this in daily work, so the editor has consulted various materials and sorted out a simple, easy-to-use method. I hope it helps answer your doubts about drawing real-time face frames in Qt. Now please follow the editor and study it!
I. Preface
After face recognition, all face frames need to be drawn on the real-time video. There are several options when recognizing faces: one is to recognize only the largest face, which is mainly used in face-scanning access control; the other is to recognize all faces, which is mainly used in face recognition cameras, where every face in the picture is recognized and sent to the server. The data of a face frame is essentially four parameters: the positions of the top-left and bottom-right corners, which can equally be expressed as x, y, width, height. Some implementations also report a tilt angle, but that is not very significant. Face recognition itself is generally very fast; even the OpenCV used for learning responds in milliseconds, and the main time-consuming step is feature value extraction. The requirement is therefore usually to draw 25-30 video frames per second per channel plus the face frames, and of course there may be multiple face boxes per frame.
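Since the box data is just two corner points (or x, y, width, height), it maps directly onto QRect. Below is a minimal sketch of that conversion, assuming the detector reports top-left and bottom-right corners; FaceBox and toFaceRects are hypothetical names used only for illustration and are not part of any specific SDK.

//Minimal sketch: converting detector output (top-left / bottom-right corners)
//into the QRect form used for drawing. Names here are illustrative only.
#include <QList>
#include <QPoint>
#include <QRect>

struct FaceBox {
    int x1, y1;   //top-left corner
    int x2, y2;   //bottom-right corner
};

static QList<QRect> toFaceRects(const QList<FaceBox> &boxes)
{
    QList<QRect> rects;
    foreach (const FaceBox &box, boxes) {
        //QRect can be built from two corner points or from x, y, width, height
        rects << QRect(QPoint(box.x1, box.y1), QPoint(box.x2, box.y2));
    }
    return rects;
}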
Drawing a face frame with Qt comes down to one core call: QPainter's drawRect method, to which you pass the area. If needed you can also set the border thickness and color, and for rounded corners note that the function to use is drawRoundedRect rather than drawRoundRect, which many people get wrong. Recent projects have more and more requirements for face frames. Previously users had to grab the picture and draw on it themselves; recently this function was built directly into the video control (the video control wraps several kernels such as ffmpeg, vlc, mpv and the Hikvision SDK), providing an interface to set the thickness and color of the frame as well as the collection of face rectangles. As long as users obtain the face regions from their own algorithm (the user's needs come first), they can pass them in through the setFaceRects function; to clear the faces, simply set the face box collection to empty. The overall overhead of this drawing is negligible in testing, and the real-time image drawn with QOpenGLWidget also supports drawing the face frames.
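To make the core call concrete, here is a minimal standalone sketch of a widget that keeps a list of face rectangles and paints them with a configurable pen. The setFaceRects method here mirrors the interface described above, but this small widget is only an illustration, not the article's actual video control.

//Minimal sketch: drawing a list of face rectangles in paintEvent.
#include <QWidget>
#include <QPainter>
#include <QPen>
#include <QList>

class FaceOverlayWidget : public QWidget {
public:
    using QWidget::QWidget;

    void setFaceRects(const QList<QRect> &rects) {
        faceRects = rects;   //pass an empty list to clear the boxes
        update();            //schedule a repaint
    }

protected:
    void paintEvent(QPaintEvent *) override {
        QPainter painter(this);
        QPen pen(QColor(255, 0, 0));   //border color
        pen.setWidth(2);               //border thickness
        painter.setPen(pen);
        foreach (const QRect &rect, faceRects) {
            //use drawRoundedRect (not the obsolete drawRoundRect) for rounded corners
            painter.drawRoundedRect(rect, 5, 5);
            //painter.drawRect(rect);  //plain rectangle variant
        }
    }

private:
    QList<QRect> faceRects;
};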
II. Functional features
The supported functions include face recognition, face comparison, face search, liveness detection and so on.
The online version also supports recognition of ID cards, driver's licenses, vehicle licenses, bank cards and other documents.
The online version supports the Baidu and Megvii protocols; the offline version supports Baidu and can be customized.
In addition to the x86 architecture, embedded Linux platforms such as Cortex-A9 and the Raspberry Pi are also supported.
Each function returns not only its result but also its execution time.
Multithreaded processing; the current processing type is controlled by a type field (a sketch of this arrangement follows the list).
Supports retrieving the most similar images for a single input image.
Supports generating face feature (eigenvalue) files from all images in a specified directory.
The number of pictures waiting to be processed in the queue can be limited.
A success or failure signal is emitted for each execution.
The results of a face search include the original image, the image with the highest similarity, the similarity score and so on.
Face comparison accepts either two pictures or two feature values.
A custom client/server protocol is defined for the related functions, which interact over TCP.
The custom face recognition protocol is well suited to scenarios where a central server sends requests to several devices.
Each module is a separate class with clean code and thorough comments.
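The sketch below illustrates, under stated assumptions, the kind of queue/worker arrangement described in the list: one worker thread, a processing type switch, a bounded queue of pending images, and a success/failure signal with the execution time. The class name, enum values and methods are invented for illustration and are not the project's real API.

//Hypothetical sketch of a type-controlled worker thread with a bounded queue.
#include <QThread>
#include <QImage>
#include <QQueue>
#include <QMutex>
#include <QElapsedTimer>

class FaceTaskThread : public QThread {
    Q_OBJECT
public:
    enum TaskType { Detect, Compare, Search, Liveness };

    void setTaskType(TaskType type) { taskType = type; }
    void setMaxQueueCount(int count) { maxQueueCount = count; }

    void addImage(const QImage &image) {
        QMutexLocker locker(&mutex);
        if (queue.count() < maxQueueCount) {   //drop frames beyond the queue limit
            queue.enqueue(image);
        }
    }

signals:
    void receiveResult(bool ok, int elapsedMs);   //success/failure + execution time

protected:
    void run() override {
        while (!isInterruptionRequested()) {
            QImage image;
            {
                QMutexLocker locker(&mutex);
                if (!queue.isEmpty()) {
                    image = queue.dequeue();
                }
            }
            if (image.isNull()) {
                msleep(10);   //nothing pending, wait a little
                continue;
            }
            QElapsedTimer timer;
            timer.start();
            bool ok = process(image, taskType);   //dispatch on the current type
            emit receiveResult(ok, int(timer.elapsed()));
        }
    }

    bool process(const QImage &image, TaskType type) {
        Q_UNUSED(image); Q_UNUSED(type);
        return true;   //placeholder: call the detection/compare/search routine here
    }

private:
    TaskType taskType = Detect;
    int maxQueueCount = 5;
    QQueue<QImage> queue;
    QMutex mutex;
};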
III. Effect picture
IV. Core code

bool FFmpegWidget::eventFilter(QObject *watched, QEvent *event)
{
    if (watched == osdWidget && event->type() == QEvent::Paint) {
        if (drawImage) {
            QPainter painter;
            painter.begin(osdWidget);
            painter.setRenderHints(QPainter::Antialiasing | QPainter::SmoothPixmapTransform);
            //draw the border
            drawBorder(&painter);
            if (thread->getIsInit()) {
                //draw the background picture
                drawImg(&painter, image);
                //draw the face frames
                drawFace(&painter);
                //draw the OSD labels
                drawOSD(&painter, osd1Visible, osd1FontSize, osd1Text, osd1Color, osd1Image, osd1Format, osd1Position);
                drawOSD(&painter, osd2Visible, osd2FontSize, osd2Text, osd2Color, osd2Image, osd2Format, osd2Position);
            } else {
                //draw the background
                if (!isDrag) {
                    drawBg(&painter);
                }
            }
            painter.end();
        }
    }
    return QWidget::eventFilter(watched, event);
}

void FFmpegWidget::drawBorder(QPainter *painter)
{
    if (borderWidth == 0) {
        return;
    }
    painter->save();
    QPen pen;
    pen.setWidth(borderWidth);
    pen.setColor(hasFocus() ? focusColor : borderColor);
    painter->setPen(pen);
    painter->drawRect(rect());
    painter->restore();
}

void FFmpegWidget::drawBg(QPainter *painter)
{
    painter->save();
    //draw text if the background picture is empty, otherwise draw the background picture
    if (bgImage.isNull()) {
        painter->setFont(this->font());
        painter->setPen(palette().foreground().color());
        painter->drawText(rect(), Qt::AlignCenter, bgText);
    } else {
        //center the image
        int x = rect().center().x() - bgImage.width() / 2;
        int y = rect().center().y() - bgImage.height() / 2;
        QPoint point(x, y);
        painter->drawImage(point, bgImage);
    }
    painter->restore();
}

void FFmpegWidget::drawImg(QPainter *painter, QImage img)
{
    if (img.isNull()) {
        return;
    }
    painter->save();
    int offset = borderWidth * 1 + 0;
    img = img.scaled(width() - offset, height() - offset, Qt::KeepAspectRatio, Qt::SmoothTransformation);
    if (fillImage) {
        QRect rect(offset / 2, offset / 2, width() - offset, height() - offset);
        painter->drawImage(rect, img);
    } else {
        //automatically center
        int x = rect().center().x() - img.width() / 2;
        int y = rect().center().y() - img.height() / 2;
        QPoint point(x, y);
        painter->drawImage(point, img);
    }
    painter->restore();
}

void FFmpegWidget::drawFace(QPainter *painter)
{
    if (faceRects.count() == 0) {
        return;
    }
    painter->save();
    //set the color of the face frame
    QPen pen;
    pen.setWidth(faceBorder);
    pen.setColor(faceColor);
    painter->setPen(pen);
    //take out the face frame areas one by one and draw them
    foreach (QRect rect, faceRects) {
        painter->drawRect(rect);
    }
    painter->restore();
}

void FFmpegWidget::drawOSD(QPainter *painter,
                           bool osdVisible,
                           int osdFontSize,
                           const QString &osdText,
                           const QColor &osdColor,
                           const QImage &osdImage,
                           const FFmpegWidget::OSDFormat &osdFormat,
                           const FFmpegWidget::OSDPosition &osdPosition)
{
    if (!osdVisible) {
        return;
    }
    painter->save();
    //offset the label position as much as possible to avoid being blocked by the border
    QRect osdRect(rect().x() + (borderWidth * 2), rect().y() + (borderWidth * 2), width() - (borderWidth * 5), height() - (borderWidth * 5));
    int flag = Qt::AlignLeft | Qt::AlignTop;
    QPoint point = QPoint(osdRect.x(), osdRect.y());
    if (osdPosition == OSDPosition_Left_Top) {
        flag = Qt::AlignLeft | Qt::AlignTop;
        point = QPoint(osdRect.x(), osdRect.y());
    } else if (osdPosition == OSDPosition_Left_Bottom) {
        flag = Qt::AlignLeft | Qt::AlignBottom;
        point = QPoint(osdRect.x(), osdRect.height() - osdImage.height());
    } else if (osdPosition == OSDPosition_Right_Top) {
        flag = Qt::AlignRight | Qt::AlignTop;
        point = QPoint(osdRect.width() - osdImage.width(), osdRect.y());
    } else if (osdPosition == OSDPosition_Right_Bottom) {
        flag = Qt::AlignRight | Qt::AlignBottom;
        point = QPoint(osdRect.width() - osdImage.width(), osdRect.height() - osdImage.height());
    }
    if (osdFormat == OSDFormat_Image) {
        painter->drawImage(point, osdImage);
    } else {
        QDateTime now = QDateTime::currentDateTime();
        QString text = osdText;
        if (osdFormat == OSDFormat_Date) {
            text = now.toString("yyyy-MM-dd");
        } else if (osdFormat == OSDFormat_Time) {
            text = now.toString("HH:mm:ss");
        } else if (osdFormat == OSDFormat_DateTime) {
            text = now.toString("yyyy-MM-dd HH:mm:ss");
        }
        //set color and font size
        QFont font;
        font.setPixelSize(osdFontSize);
        painter->setPen(osdColor);
        painter->setFont(font);
        painter->drawText(osdRect, flag, text);
    }
    painter->restore();
}

At this point, the study of how to draw real-time face frames in Qt is over. I hope it has resolved your doubts. Combining theory with practice is the best way to learn, so go and try it! If you want to keep learning more related knowledge, please continue to follow this site; the editor will keep working hard to bring you more practical articles!