2025-01-14 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
This article shows how to build a face recognition attendance system in Python. The editor thinks it is very practical and shares it here as a reference; follow along to have a look.
Practical applications of face recognition
Face recognition is currently being used to make the world safer, smarter and more convenient.
There are several use cases:
Search for missing persons
Retail crime
Safety identification
Identify accounts on social media
Attendance system
Identify the driver in the car
Depending on performance and complexity, there are several ways to perform facial recognition.
Traditional face recognition algorithms:
In the 1990s, holistic approaches were applied to face recognition. Handcrafted local descriptors became popular in the early 2000s, and local feature learning methods were adopted in the late 2000s. Algorithms that are widely used and implemented in OpenCV include:
Eigenfaces (1991)
Local Binary Patterns Histograms (LBPH) (1996)
Fisherfaces (1997)
Scale Invariant Feature Transform (SIFT) (1999)
Speed Up Robust Features (SURF) (2006)
Each algorithm follows a different approach to extracting image information and matching it against the input image.
Fisherfaces and Eigenfaces take similar approaches, as do SURF and SIFT.
LBPH is a simple but very effective method, though it is slower than modern face recognizers.
Compared with modern face recognition algorithms, these algorithms are not fast, and they cannot be trained from just a single picture of a person.
Face recognition with deep learning:
Some widely used face recognition systems based on deep learning are as follows:
DeepFace
DeepID series of systems
VGGFace
FaceNet
Face recognizers generally find important points in face images, such as the corners of the mouth, eyebrows, eyes, nose and lips. The coordinates of these points are called facial landmark points; the widely used dlib annotation scheme defines 68 of them. Different techniques for finding landmark points give different results.
Source: https://www.pinterest.com/mrmosherart/face-landmarks/
The steps involved in the face recognition model:
1. Face detection: locate the face, draw a bounding box around it, and retain the coordinates of the bounding box.
2. Face alignment: normalize the face to be consistent with the training database.
3. Feature extraction: extract facial features that will be used for training and recognition tasks.
4. Face recognition: match the face against one or more known faces in a prepared database.
In traditional face recognition methods, a separate module performs each of these four steps. In this article, you will see a library that performs all four steps together.
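The four-step pipeline above can be sketched in code. Everything here is an illustrative stand-in, not the face_recognition API: the detector pretends the face fills the central half of the frame, and the "encoder" just averages pixels instead of producing a real 128-d embedding.

```python
import numpy as np

# Illustrative four-step pipeline; these helpers are stand-ins,
# not the face_recognition API.
def detect_face(image):
    # Step 1: pretend the face occupies the central half of the frame.
    h, w = image.shape[:2]
    return (h // 4, 3 * w // 4, 3 * h // 4, w // 4)  # (top, right, bottom, left)

def align_face(image, box):
    # Step 2: crop to the detected box (a real aligner also rotates/scales).
    top, right, bottom, left = box
    return image[top:bottom, left:right]

def extract_features(face):
    # Step 3: a real encoder returns a 128-d embedding; we just average pixels.
    return np.array([face.mean()])

def recognize(encoding, database, threshold=10.0):
    # Step 4: match against the closest known encoding.
    names = list(database)
    dists = [np.linalg.norm(encoding - database[n]) for n in names]
    best = int(np.argmin(dists))
    return names[best] if dists[best] < threshold else 'unknown'

frame = np.full((100, 100), 120.0)
db = {'elon': np.array([120.0]), 'other': np.array([200.0])}
enc = extract_features(align_face(frame, detect_face(frame)))
who = recognize(enc, db)
print(who)  # elon
```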
Steps to build a face recognition system
Install libraries
We need to install two libraries to implement face recognition.
Dlib: dlib is a modern C++ toolkit that contains machine learning algorithms and tools for creating complex software in C++ to solve practical problems.
# installing dlib
pip install dlib
face_recognition: the face_recognition library, created and maintained by Adam Geitgey, wraps dlib's face recognition functionality.
# installing face_recognition
pip install face_recognition
OpenCV is used for some image preprocessing.
# installing opencv
pip install opencv-python
Import libraries
Now that you have installed all the important libraries, let's import them to build the system.
import cv2
import numpy as np
import face_recognition
Load image
After importing the libraries, you need to load the image.
The face_recognition library loads images in RGB channel order, while OpenCV displays images as BGR. To show the image correctly with OpenCV, convert it to BGR first.
imgelon_rgb = face_recognition.load_image_file('elon.jpg')
imgelon_bgr = cv2.cvtColor(imgelon_rgb, cv2.COLOR_RGB2BGR)
cv2.imshow('rgb', imgelon_rgb)
cv2.imshow('bgr', imgelon_bgr)
cv2.waitKey(0)
As you can see, the BGR-converted window looks natural under OpenCV, so always convert the channels before displaying.
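The RGB/BGR difference is only a channel reordering, which NumPy can do by reversing the last axis. A minimal sketch on a synthetic image (not tied to any particular photo):

```python
import numpy as np

# face_recognition returns RGB arrays; OpenCV expects BGR. Reversing the
# last axis swaps the channel order, the same reordering cvtColor performs
# with COLOR_RGB2BGR.
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[..., 0] = 255          # pure red in RGB order
bgr = rgb[..., ::-1]       # red now sits in the last (B, G, R -> R) channel
print(bgr[0, 0])           # [  0   0 255]
```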
Find the face location and draw a bounding box
You need to draw a bounding box around the face to show whether a face has been detected.
imgelon = face_recognition.load_image_file('elon.jpg')
#----- Finding face location for drawing bounding boxes -----
face = face_recognition.face_locations(imgelon)[0]  # (top, right, bottom, left)
copy = imgelon.copy()
#----- Drawing the rectangle -----
cv2.rectangle(copy, (face[3], face[0]), (face[1], face[2]), (255, 0, 255), 2)
cv2.imshow('copy', copy)
cv2.imshow('elon', imgelon)
cv2.waitKey(0)
Training images for face recognition
The library automatically finds faces and works on them directly, so you don't need to crop the face out of the picture.
Training:
At this stage, we convert the training image into an encoding and store the encoding under the person's name taken from the image file.
train_encode = face_recognition.face_encodings(imgelon)[0]
Test:
For testing, we load the image and convert it to an encoding, then match that encoding against the encodings stored during training. Matching is based on finding the maximum similarity: when a stored encoding matches the test image, we get the name associated with that training encoding.
# let's test an image
test = face_recognition.load_image_file('elon_2.jpg')
test_encode = face_recognition.face_encodings(test)[0]
print(face_recognition.compare_faces([train_encode], test_encode))
face_recognition.compare_faces returns a list of booleans: True where the person in the test image matches the stored encoding, False otherwise.
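Under the hood, compare_faces is essentially a Euclidean-distance test against a tolerance (0.6 by default in face_recognition). A minimal re-creation with NumPy, using synthetic 128-d encodings rather than real face embeddings:

```python
import numpy as np

# Minimal re-creation of face_recognition.compare_faces: a match is any
# known encoding within `tolerance` Euclidean distance of the candidate.
def compare_faces(known_encodings, candidate, tolerance=0.6):
    dists = np.linalg.norm(np.asarray(known_encodings) - candidate, axis=1)
    return list(dists <= tolerance)

train = np.zeros(128)
same = np.full(128, 0.01)      # distance ~0.11 -> within tolerance
different = np.full(128, 0.1)  # distance ~1.13 -> outside tolerance
print(compare_faces([train], same))
print(compare_faces([train], different))
```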
Construction of face recognition system
Import the necessary libraries
import cv2
import face_recognition
import os
import numpy as np
from datetime import datetime
Define the folder path where the training image dataset will be stored
path = 'student_images'
Note: for training, we only need to put the training image in the path directory, and the name of the image must be in person_name.jpg/jpeg format.
For example:
As you can see, my student_images path contains images of six people, so the model can only identify those six. You can add more pictures to this directory to let the model recognize more people.
Now create a list to store the person_name and the image array.
Iterate through all the image files that exist in the path directory, read the images, append an array of images to the image list, and append the file name to the classNames.
images = []
classNames = []
mylist = os.listdir(path)
for cl in mylist:
    curImg = cv2.imread(f'{path}/{cl}')
    images.append(curImg)
    classNames.append(os.path.splitext(cl)[0])
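The os.path.splitext call in the loop strips the extension so that the filename becomes the class label, which is why the images must be named person_name.jpg:

```python
import os

# os.path.splitext splits a filename into (root, extension); index [0]
# keeps only the root, turning 'elon_musk.jpg' into the label 'elon_musk'.
label = os.path.splitext('elon_musk.jpg')[0]
print(label)  # elon_musk
```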
Create a function to encode all the training images and store them in a variable encoded_face_train.
def findEncodings(images):
    encodeList = []
    for img in images:
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        encoded_face = face_recognition.face_encodings(img)[0]
        encodeList.append(encoded_face)
    return encodeList

encoded_face_train = findEncodings(images)
Create a function that will create an Attendance.csv file to store attendance time.
Note: here you need to manually create the Attendance.csv file and give the path in the function
def markAttendance(name):
    with open('Attendance.csv', 'r+') as f:
        myDataList = f.readlines()
        nameList = []
        for line in myDataList:
            entry = line.split(',')
            nameList.append(entry[0])
        if name not in nameList:
            now = datetime.now()
            time = now.strftime('%I:%M:%S %p')
            date = now.strftime('%d-%B-%Y')
            f.writelines(f'\n{name},{time},{date}')
We first check whether the attendee's name is already present in Attendance.csv. If it is not, we write the attendee's name, along with the time of the function call, to Attendance.csv.
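The dedup logic can be exercised without touching the filesystem. This sketch re-creates it over an in-memory buffer instead of Attendance.csv (an illustrative variant, not the function used in the system):

```python
import csv
import io
from datetime import datetime

# Re-creation of the markAttendance dedup logic over an in-memory buffer:
# a name is appended only if it does not already head a CSV row.
def mark_attendance(buffer, name):
    buffer.seek(0)
    seen = {row[0] for row in csv.reader(buffer) if row}
    if name not in seen:
        now = datetime.now()
        buffer.write(f'{name},{now.strftime("%I:%M:%S %p")},'
                     f'{now.strftime("%d-%B-%Y")}\n')

log = io.StringIO()
mark_attendance(log, 'ELON')
mark_attendance(log, 'ELON')   # second call is ignored: already logged
print(log.getvalue().count('ELON'))  # 1
```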
Read the webcam for real-time identification.
# take pictures from webcam
cap = cv2.VideoCapture(0)
while True:
    success, img = cap.read()
    imgS = cv2.resize(img, (0, 0), None, 0.25, 0.25)
    imgS = cv2.cvtColor(imgS, cv2.COLOR_BGR2RGB)
    faces_in_frame = face_recognition.face_locations(imgS)
    encoded_faces = face_recognition.face_encodings(imgS, faces_in_frame)
    for encode_face, faceloc in zip(encoded_faces, faces_in_frame):
        matches = face_recognition.compare_faces(encoded_face_train, encode_face)
        faceDist = face_recognition.face_distance(encoded_face_train, encode_face)
        matchIndex = np.argmin(faceDist)
        if matches[matchIndex]:
            name = classNames[matchIndex].upper()
            y1, x2, y2, x1 = faceloc
            # since we scaled down by 4 times
            y1, x2, y2, x1 = y1 * 4, x2 * 4, y2 * 4, x1 * 4
            cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.rectangle(img, (x1, y2 - 35), (x2, y2), (0, 255, 0), cv2.FILLED)
            cv2.putText(img, name, (x1 + 6, y2 - 5), cv2.FONT_HERSHEY_COMPLEX, 1, (255, 255, 255), 2)
            markAttendance(name)
    cv2.imshow('webcam', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
The frame is resized to 1/4 of its original size for recognition only; the output frame stays at the original size. Resizing increases the number of frames processed per second.
face_recognition.face_locations() is called on the resized image (imgS), so the face bounding-box coordinates must be multiplied by 4 to map back onto the output frame.
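The coordinate mapping is plain arithmetic: with a 0.25 scale factor, each coordinate is multiplied by 1/0.25 = 4. A small helper makes this explicit (the helper name is ours, not part of the tutorial code):

```python
# face_locations ran on a frame resized to 1/4, so each (top, right,
# bottom, left) coordinate is multiplied by 4 to land on the full frame.
def upscale_box(faceloc, scale=4):
    top, right, bottom, left = faceloc
    return (top * scale, right * scale, bottom * scale, left * scale)

print(upscale_box((30, 120, 90, 60)))  # (120, 480, 360, 240)
```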
**face_recognition.face_distance()** returns an array of distances between the test encoding and every encoding in our training directory.
The index of the minimum face distance gives the matching face.
When we find a matching name, we call the markAttendance function.
Use **cv2.rectangle()** to draw a bounding box, and **cv2.putText()** to put the matching name on the output frame.
Attendance report
Challenges faced by face recognition system
While building a face recognition system may seem easy, it is not for real-world images captured without constraints. Several challenges facing face recognition systems are as follows:
**Lighting:** lighting greatly changes the appearance of a face; even slight changes in lighting conditions have a significant impact on the results.
**Pose:** face recognition systems are highly sensitive to pose; if the database is trained only on frontal views, other poses may cause recognition errors or failures.
**Facial expressions:** different expressions of the same person are another factor to consider, though modern recognizers handle them easily.
**Low resolution:** the recognizer must be trained on images of good resolution, otherwise the model cannot extract features.
**Aging:** changes in the shape, lines and texture of a face with age are another challenge.
Thank you for reading! This concludes the article on how to build a face recognition attendance system in Python. I hope the content above is helpful and that you learned something new; if you found the article good, please share it so more people can see it!
Welcome to subscribe to "Shulou Technology Information" to get the latest news, interesting stories and hot topics in the IT industry, and to keep up with the hottest Internet news, technology news and IT industry trends.
© 2024 shulou.com SLNews company. All rights reserved.