Dlib's facial landmark technology comes from an academic paper by a leading researcher in the field. It represents a face with 68 feature points covering the eyes, facial contour, nose bridge, eyebrows, and mouth.
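For reference, the 68 points are grouped by facial region. The sketch below lists the index ranges of each group following the common 68-point numbering convention (as exposed by the face_recognition library; the exact boundaries are quoted from that convention, and the lip points are split here into outer/inner rings rather than face_recognition's composite top_lip/bottom_lip keys):

```python
# Index ranges of the 68 landmark points, as (start, end) with end exclusive.
# Grouping follows the common 68-point convention used by dlib's predictor;
# left/right names match the face_recognition library's labeling.
LANDMARK_GROUPS = {
    'chin':          (0, 17),   # jawline / facial contour
    'left_eyebrow':  (17, 22),
    'right_eyebrow': (22, 27),
    'nose_bridge':   (27, 31),
    'nose_tip':      (31, 36),
    'left_eye':      (36, 42),
    'right_eye':     (42, 48),
    'outer_lip':     (48, 60),
    'inner_lip':     (60, 68),
}

# All groups together account for exactly 68 points
total = sum(end - start for start, end in LANDMARK_GROUPS.values())
print(total)  # 68
```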
import cv2
import face_recognition as fr

facial_features = ['chin', 'left_eyebrow', 'right_eyebrow', 'nose_bridge',
                   'nose_tip', 'left_eye', 'right_eye', 'top_lip', 'bottom_lip']

# Using the camera
cap = cv2.VideoCapture(0)

while True:
    ret, img = cap.read()
    if not ret:
        break
    # face_recognition expects RGB images; OpenCV captures in BGR order
    rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    # Face location
    face_locations = fr.face_locations(rgb, number_of_times_to_upsample=1)
    # Face feature point recognition
    face_landmarks_list = fr.face_landmarks(rgb, face_locations)

    for (top, right, bottom, left), face_landmarks in zip(face_locations, face_landmarks_list):
        # Depict facial features by connecting consecutive landmark points
        for facial_feature in facial_features:
            points = face_landmarks[facial_feature]
            for i in range(len(points) - 1):
                cv2.line(img, points[i], points[i + 1], (255, 255, 255))
        # Frame the face
        cv2.rectangle(img, (left, top), (right, bottom), (255, 0, 0), 2)

    cv2.imshow("Faces found", img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
The number in waitKey(1) is the maximum time, in milliseconds, that the program waits for key input after displaying a frame. During each loop iteration the frame is shown, then waitKey polls for up to 1 ms: if 'q' was pressed in that window, the if branch triggers and the loop exits; otherwise waitKey returns -1, the if condition fails, and the loop moves on to capture and display the next image.
If this parameter is set to zero, waitKey blocks indefinitely: after capturing and displaying a frame, the program stays at the if statement until some key is typed, and exits the loop only if that key is 'q'.
cv2.waitKey(1) is combined with 0xFF (binary 11111111) using the bitwise AND operator & because on some platforms the return value of cv2.waitKey(1) carries more than 8 bits, while only the lowest 8 bits hold the key code. The AND operation zeroes the remaining bits, so the higher bits cannot interfere with the comparison against ord('q').
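The effect of the mask can be checked without a camera. The sketch below simulates a waitKey-style return value whose high bits are set (the particular value 0x100071 is a hypothetical example of platform flags, not taken from OpenCV) and shows that masking with 0xFF recovers the plain key code:

```python
# 'q' has ASCII code 113 (0x71)
KEY_Q = ord('q')

# Simulated waitKey return values: one with hypothetical high-bit flags set,
# one clean. In both cases the low byte is the key code.
raw_with_flags = 0x100071
raw_plain = 113

for raw in (raw_with_flags, raw_plain):
    masked = raw & 0xFF          # keep only the lowest 8 bits
    print(hex(raw), "->", masked, masked == KEY_Q)

# Without the mask, the comparison would fail for the flagged value:
print(raw_with_flags == KEY_Q)   # False
print(raw_with_flags & 0xFF == KEY_Q)  # True
```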