How to Implement Face-Recognition Smile Detection in Python

This article introduces how to implement smile detection with face recognition in Python. The operation methods below are simple and easy to follow; I hope they help answer any doubts about the topic. Let's get started!
I. Experimental preparation
Environment building
pip install tensorflow==1.2.0
pip install keras==2.0.6
pip install dlib==19.6.1
pip install h5py==2.10
If you are creating a new virtual environment, you also need to install the following packages
pip install opencv_python==4.1.2.30
pip install pillow
pip install matplotlib
pip install h5py
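After installation, the pinned versions can be verified with a few imports (a minimal sanity-check sketch, not part of the original steps):

# sanity check: confirm the pinned packages import and report their versions
import tensorflow, keras, dlib, h5py, cv2
print(tensorflow.__version__)   # expect 1.2.0
print(keras.__version__)        # expect 2.0.6
print(dlib.__version__)         # expect 19.6.1
print(h5py.__version__)         # expect 2.10
print(cv2.__version__)          # expect 4.1.2 (from opencv_python==4.1.2.30)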
The GENKI-4K dataset is used. It can be downloaded from the dataset's homepage.
II. Image preprocessing
Open the dataset
We need to detect the face in each image and crop the picture to the face region.
The code is as follows:
import os                # file system operations
import dlib              # face recognition library
import numpy as np       # numerical processing library
import cv2               # OpenCV image processing library

# dlib face detector and 68-point landmark predictor
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('D:\\shape_predictor_68_face_landmarks.dat')

# path of the input images
path_read = "C:\\Users\\28205\\Documents\\Tencent Files\\2820535964\\FileRecv\\genki4k\\files"
num = 0
for file_name in os.listdir(path_read):
    # aa is the full path of the picture
    aa = path_read + "/" + file_name
    # the path of the image may contain non-English characters,
    # so read the bytes with numpy and decode with OpenCV
    img = cv2.imdecode(np.fromfile(aa, dtype=np.uint8), cv2.IMREAD_UNCHANGED)

    # get the width and height of the image
    img_shape = img.shape
    img_height = img_shape[0]
    img_width = img_shape[1]

    # path used to store the generated single-face images
    path_save = "C:\\Users\\28205\\Documents\\Tencent Files\\2820535964\\FileRecv\\genki4k\\files1"
    # dlib detection
    dets = detector(img, 1)
    print("number of faces:", len(dets))
    for k, d in enumerate(dets):
        if len(dets) > 1:
            continue
        num = num + 1
        # rectangle corners: (x, y) of top-left and bottom-right
        pos_start = tuple([d.left(), d.top()])
        pos_end = tuple([d.right(), d.bottom()])
        # size of the rectangle
        height = d.bottom() - d.top()
        width = d.right() - d.left()
        # generate an empty image based on the face size
        img_blank = np.zeros((height, width, 3), np.uint8)
        for i in range(height):
            if d.top() + i >= img_height:    # prevent crossing the bottom boundary
                continue
            for j in range(width):
                if d.left() + j >= img_width:    # prevent crossing the right boundary
                    continue
                img_blank[i][j] = img[d.top() + i][d.left() + j]
        img_blank = cv2.resize(img_blank, (200, 200), interpolation=cv2.INTER_CUBIC)
        # imencode/tofile handles non-English save paths correctly
        cv2.imencode('.jpg', img_blank)[1].tofile(path_save + "\\" + "file" + str(num) + ".jpg")
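A note on the I/O pattern above: because the dataset path may contain non-ASCII characters, the script avoids cv2.imread/cv2.imwrite and round-trips through numpy instead. A minimal sketch of the pattern, with hypothetical paths:

import numpy as np
import cv2

# read: load raw bytes with numpy, then decode, so non-ASCII paths work on Windows
data = np.fromfile("D:\\测试\\input.jpg", dtype=np.uint8)   # hypothetical path
img = cv2.imdecode(data, cv2.IMREAD_UNCHANGED)

# write: encode in memory, then write the bytes out with numpy
ok, buf = cv2.imencode(".jpg", img)
if ok:
    buf.tofile("D:\\测试\\output.jpg")                       # hypothetical path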
The running effect is as follows:
A total of 3,878 pictures are generated.
In some pictures no face is detected, so those are not cropped or saved; you can add pictures of your own if needed. To see how many were skipped, compare the input and output folders, as in the sketch below.
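A minimal sketch of that check, reusing path_read and path_save from the script above:

import os
# the difference between input and output counts is the number of skipped images
n_in = len(os.listdir(path_read))
n_out = len(os.listdir(path_save))
print("images without a detected face:", n_in - n_out)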
III. Partition the dataset
Code:
import os, shutil

# path of the original (cropped) dataset
original_dataset_dir = 'C:\\Users\\28205\\Documents\\Tencent Files\\2820535964\\FileRecv\\genki4k\\files1'
# root of the new, partitioned dataset
base_dir = 'C:\\Users\\28205\\Documents\\Tencent Files\\2820535964\\FileRecv\\genki4k\\files2'
os.mkdir(base_dir)

# directories for the training, validation and test images
train_dir = os.path.join(base_dir, 'train')
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
os.mkdir(test_dir)

train_smile_dir = os.path.join(train_dir, 'smile')
os.mkdir(train_smile_dir)
train_unsmile_dir = os.path.join(train_dir, 'unsmile')
os.mkdir(train_unsmile_dir)
validation_smile_dir = os.path.join(validation_dir, 'smile')
os.mkdir(validation_smile_dir)
validation_unsmile_dir = os.path.join(validation_dir, 'unsmile')
os.mkdir(validation_unsmile_dir)
test_smile_dir = os.path.join(test_dir, 'smile')
os.mkdir(test_smile_dir)
test_unsmile_dir = os.path.join(test_dir, 'unsmile')
os.mkdir(test_unsmile_dir)

# NOTE: some of the index ranges were unreadable in the source; the values
# below are reconstructed and may need adjusting to your own split
# copy smiling faces into the training directory
fnames = ['file{}.jpg'.format(i) for i in range(1, 900)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(train_smile_dir, fname)
    shutil.copyfile(src, dst)

# copy the next smiling faces into the validation directory
fnames = ['file{}.jpg'.format(i) for i in range(900, 1350)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(validation_smile_dir, fname)
    shutil.copyfile(src, dst)

# copy the next smiling faces into the test directory
fnames = ['file{}.jpg'.format(i) for i in range(1350, 1800)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(test_smile_dir, fname)
    shutil.copyfile(src, dst)

# copy non-smiling faces into the training directory
fnames = ['file{}.jpg'.format(i) for i in range(2127, 3000)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(train_unsmile_dir, fname)
    shutil.copyfile(src, dst)

# copy the next non-smiling faces into the validation directory
fnames = ['file{}.jpg'.format(i) for i in range(3000, 3500)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(validation_unsmile_dir, fname)
    shutil.copyfile(src, dst)

# copy the remaining non-smiling faces into the test directory
fnames = ['file{}.jpg'.format(i) for i in range(3500, 3878)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(test_unsmile_dir, fname)
    shutil.copyfile(src, dst)
The running effect is as follows:
IV. Using a CNN to recognize smiling and non-smiling faces
1. Create the model
Code:
# create the model
from keras import layers
from keras import models

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()  # view the network structure
Running effect:
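As a cross-check on the summary output: each 3x3 convolution (no padding) trims 2 pixels from the feature map and each 2x2 max-pooling halves it, so the spatial size runs 150 → 148 → 74 → 72 → 36 → 34 → 17 → 15 → 7, and Flatten therefore produces 7 × 7 × 128 = 6272 features feeding the 512-unit dense layer.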
2. Normalization
Code:
# normalization
from keras import optimizers
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])

from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1./255)
validation_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    # target directory
    train_dir,
    # all images are resized to 150x150
    target_size=(150, 150),
    batch_size=20,
    # binary labels, since we use the binary_crossentropy loss
    class_mode='binary')
validation_generator = validation_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary')
test_generator = test_datagen.flow_from_directory(
    test_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary')

for data_batch, labels_batch in train_generator:
    print('data batch shape:', data_batch.shape)
    print('labels batch shape:', labels_batch.shape)
    break
# class indices: {'smile': 0, 'unsmile': 1}

3. Data augmentation
Code:
# data augmentation
datagen = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')

# show how an image changes after data augmentation
import matplotlib.pyplot as plt
# this module contains image preprocessing utilities
from keras.preprocessing import image

fnames = [os.path.join(train_smile_dir, fname) for fname in os.listdir(train_smile_dir)]
img_path = fnames[3]
img = image.load_img(img_path, target_size=(150, 150))
x = image.img_to_array(img)
x = x.reshape((1,) + x.shape)
i = 0
for batch in datagen.flow(x, batch_size=1):
    plt.figure(i)
    imgplot = plt.imshow(image.array_to_img(batch[0]))
    i += 1
    if i % 4 == 0:
        break
plt.show()
Running effect:
4. Create a network
Code:
# create the network
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])

# normalization plus augmentation for the training data
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
# the validation data must not be augmented
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    # this is the target directory
    train_dir,
    # all images will be resized to 150x150
    target_size=(150, 150),
    batch_size=32,
    # binary labels, since we use the binary_crossentropy loss
    class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary')

history = model.fit_generator(
    train_generator,
    steps_per_epoch=100,
    epochs=60,
    validation_data=validation_generator,
    validation_steps=50)
model.save('smileAndUnsmile1.h5')

# plot the training curves
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
Running result:
Training is slow and takes a long time to complete.
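Once training finishes, the held-out test set can be scored with the generator created in the normalization step (a minimal sketch; test_generator as defined above):

# evaluate on the test generator; steps * batch_size should cover the test set
test_loss, test_acc = model.evaluate_generator(test_generator, steps=50)
print('test acc:', test_acc)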
5. Single picture test
Code:
# test the model on a single image
import cv2
from keras.preprocessing import image
from keras.models import load_model
import numpy as np

# load the trained model
model = load_model('smileAndUnsmile1.h5')

# local image path
img_path = 'test.jpg'
img = image.load_img(img_path, target_size=(150, 150))
img_tensor = image.img_to_array(img) / 255.0
img_tensor = np.expand_dims(img_tensor, axis=0)
prediction = model.predict(img_tensor)
print(prediction)
# 'smile' maps to class 0, so a score above 0.5 means non-smiling
if prediction[0][0] > 0.5:
    result = 'non-smiling face'
else:
    result = 'smiling face'
print(result)
Running result:
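To spot-check several photos at once, the same preprocessing can be wrapped in a short loop (a minimal sketch; the testImages folder name is hypothetical):

import os
for name in os.listdir('testImages'):   # hypothetical folder of test photos
    img = image.load_img(os.path.join('testImages', name), target_size=(150, 150))
    t = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)
    p = model.predict(t)[0][0]
    print(name, 'non-smiling face' if p > 0.5 else 'smiling face')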
6. Camera real-time test
Code:
# detect faces in video from the camera
import cv2
from keras.preprocessing import image
from keras.models import load_model
import numpy as np
import dlib
from PIL import Image

model = load_model('smileAndUnsmile1.h5')
detector = dlib.get_frontal_face_detector()
video = cv2.VideoCapture(0)
font = cv2.FONT_HERSHEY_SIMPLEX

def rec(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    dets = detector(gray, 1)
    if dets is not None:
        for face in dets:
            left = face.left()
            top = face.top()
            right = face.right()
            bottom = face.bottom()
            cv2.rectangle(img, (left, top), (right, bottom), (0, 255, 0), 2)
            # crop the face, convert to RGB and rescale, matching the training input
            img1 = cv2.resize(img[top:bottom, left:right], dsize=(150, 150))
            img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2RGB)
            img1 = np.array(img1) / 255.
            img_tensor = img1.reshape(1, 150, 150, 3)
            prediction = model.predict(img_tensor)
            if prediction[0][0] > 0.5:
                result = 'unsmile'
            else:
                result = 'smile'
            cv2.putText(img, result, (left, top), font, 2, (0, 255, 0), 2, cv2.LINE_AA)
        cv2.imshow('Video', img)

while video.isOpened():
    res, img_rd = video.read()
    if not res:
        break
    rec(img_rd)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
video.release()
cv2.destroyAllWindows()
Running result:
V. Using Dlib facial features to recognize smiling and non-smiling faces
Code:
import cv2               # OpenCV image processing library
import dlib              # face recognition library
import numpy as np       # numerical processing library

class face_emotion():
    def __init__(self):
        self.detector = dlib.get_frontal_face_detector()
        self.predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
        self.cap = cv2.VideoCapture(0)
        self.cap.set(3, 480)   # capture width; the value is cut off in the source, 480 assumed
        self.cnt = 0

    def learning_face(self):
        line_brow_x = []
        line_brow_y = []
        while self.cap.isOpened():
            flag, im_rd = self.cap.read()
            k = cv2.waitKey(1)
            # convert the frame to grayscale
            img_gray = cv2.cvtColor(im_rd, cv2.COLOR_RGB2GRAY)
            faces = self.detector(img_gray, 0)
            font = cv2.FONT_HERSHEY_SIMPLEX
            # if at least one face is detected
            if len(faces) != 0:
                # mark the 68 feature points of every face
                for i in range(len(faces)):
                    for k, d in enumerate(faces):
                        cv2.rectangle(im_rd, (d.left(), d.top()),
                                      (d.right(), d.bottom()), (0, 0, 255))
                        self.face_width = d.right() - d.left()
                        shape = self.predictor(im_rd, d)
                        # mouth width and mouth opening, relative to face width
                        mouth_width = (shape.part(54).x - shape.part(48).x) / self.face_width
                        mouth_height = (shape.part(66).y - shape.part(62).y) / self.face_width
                        brow_sum = 0    # accumulated eyebrow height
                        frown_sum = 0   # accumulated distance between the two brows
                        for j in range(17, 21):
                            brow_sum += (shape.part(j).y - d.top()) + (shape.part(j + 5).y - d.top())
                            frown_sum += shape.part(j + 5).x - shape.part(j).x
                            line_brow_x.append(shape.part(j).x)
                            line_brow_y.append(shape.part(j).y)
                        # fit a straight line through the brow points to get the slope
                        tempx = np.array(line_brow_x)
                        tempy = np.array(line_brow_y)
                        z1 = np.polyfit(tempx, tempy, 1)
                        self.brow_k = -round(z1[0], 3)
                        brow_height = (brow_sum / 10) / self.face_width  # eyebrow height ratio
                        brow_width = (frown_sum / 5) / self.face_width   # eyebrow separation ratio
                        # eye opening, averaged over both eyes
                        eye_sum = (shape.part(41).y - shape.part(37).y
                                   + shape.part(40).y - shape.part(38).y
                                   + shape.part(47).y - shape.part(43).y
                                   + shape.part(46).y - shape.part(44).y)
                        eye_hight = (eye_sum / 4) / self.face_width
                        # the source listing is truncated at this point; the decision
                        # rule below (including the eye threshold) is an assumed completion
                        if mouth_height >= 0.03 and eye_hight < 0.56:
                            cv2.putText(im_rd, "smile", (d.left(), d.bottom() + 20),
                                        font, 0.8, (0, 255, 0), 2, 4)
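The listing above breaks off in the source. Assuming the class is otherwise complete, it would typically be driven like this (a minimal sketch):

if __name__ == "__main__":
    my_face = face_emotion()   # open the camera and load the landmark predictor
    my_face.learning_face()    # run the landmark-based smile detection loop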