Facial recognition script mod help

Hello, I am totally new to this. I have been following the Core Electronics videos and have a working tracking camera on my Pi 5. I was trying to modify the Python script from CORE to move a servo once it saw a face. I am able to move a servo on pin 14 in a separate script through the Pi, but the same does not work here… Any suggestion is welcome… I have been staring at this for hours… These are the sections I was trying to add to the CORE Facial Recognition Hardware script in place of an LED turning on.

from gpiozero import Servo
from time import sleep


# Initialize GPIO
myGPIO=14
servo = servo(myGPIO)

    # Control the GPIO pin based on face detection
        if authorized_face_detected:
           servo = Servo(myGPIO)
 
        while True:
            servo.mid()
            print("mid")
            sleep(2)
            servo.min()
            print("min")
            sleep(2)
            servo.mid()
            print("mid")
            sleep(2)
            servo.max()
            print("max")
            sleep(2) 
          
                  
        else:       
            sleep() # Turn off Pin
    
        return frame

Hey Chris.
Welcome to the forums.

Have you declared servo twice (or is that just a formatting issue)?
I can’t quite tell from this excerpt, but you may be declaring the servo object every time you detect a face. I’m not convinced this is the problem, but it’s where I would start.
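As a rough sketch of what I mean (assuming gpiozero and your pin 14), the Servo object would be created once at start-up and only the movement calls would sit inside the detection check:

from gpiozero import Servo
from time import sleep

servo = Servo(14)                    # created once, when the script starts

authorized_face_detected = True      # in your script this would come from the face-matching code
if authorized_face_detected:
    servo.mid()                      # reuse the existing servo object here, don't create a new one
    sleep(2)
    servo.max()
    sleep(2)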


Hey Chris,

Would you be able to share all the code you are using for this? It could be a formatting issue created when copying the code over to this forum post, but it looks like your tab indentation may give Python some trouble.

I second what Pixmusix has pointed out. Defining the ‘servo’ variable multiple times like you have is unusual behaviour.

At the end you return ‘frame’, which isn’t defined in the code section you have shared. If we could see the whole thing, we may be able to get to the bottom of this!

Thank you, I am quite a novice at coding. I have copied the two files and attached them as a zip file.
The first is the modified CORE facial recognition program; the goal there was to see an authorized face and have the servo do a sequence (an expression). The second file is a servo script I did run on the Pi 5 that controlled a servo, not perfectly, but it did work. I was trying to combine them and failed.
Any input is welcome, and thank you for your time!
Archive.zip (2.5 KB)
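For context, the standalone servo script was roughly along these lines (a sketch only; the exact file is in the zip above, but the idea is gpiozero’s Servo on pin 14 stepping through its positions):

from gpiozero import Servo
from time import sleep

servo = Servo(14)        # servo signal wire on GPIO 14

while True:
    servo.mid()          # centre position
    sleep(2)
    servo.min()          # one end of travel
    sleep(2)
    servo.max()          # other end of travel
    sleep(2)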

Hey @Chris296185,

Thanks for that code! It looks like you had it most of the way there but I think a few syntax errors would be causing you issues. I have included the modified code at the bottom of this post that I think will do what you need.

It looks like you used the line servo = Servo(myGPIO) twice: once at the start of the file and once every time a frame is processed. This statement sets up a variable called ‘servo’ as a Servo object on GPIO pin 14 and should only need to be run once at the start of your code. Python code is case sensitive, and it’s worth noting that the capitalisation of Servo is important for the ‘Servo(myGPIO)’ part of this line.
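In other words, the setup near the top of the file only needs these lines (capital-S Servo is the class from gpiozero; lowercase servo is the variable holding your object):

from gpiozero import Servo

myGPIO = 14
servo = Servo(myGPIO)    # run once at start-up; capital-S Servo is the class, lowercase servo is our object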

Python uses indentation to determine which lines of code belong inside a given conditional statement. For the section:

if authorized_face_detected:
           servo = Servo(myGPIO)
 
while True:
            servo.mid()

the code will only run the servo = Servo(myGPIO) line if a face is detected, but will run the while section regardless of the state of authorized_face_detected. It also seems like a while loop may not be needed here. I have changed this section to the following instead.

     if authorized_face_detected:
            print("Face Detected!!!")
            servo.mid()
            print("mid")
            sleep(2)
            servo.min()
            print("min")
            sleep(2)
            servo.mid()
            print("mid")
            sleep(2)
            servo.max()
            print("max")
            sleep(2) 

This code will run everything indented to the right of the if statement once whenever a face is detected, before continuing on.

You also had this servo movement repeated in the main while loop of the code. This would cause the servo to move every time the camera records a new frame, regardless of the faces detected, which I don’t think is your intention here. I have removed that section.
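With that removed, the main loop is left just grabbing frames and handing them to process_frame(), roughly like this (a trimmed sketch, not the full file):

while True:
    frame = picam2.capture_array()
    processed_frame = process_frame(frame)   # the servo only moves inside here, when an authorized face is found
    cv2.imshow('Video', processed_frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break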

Give this code a try and let me know how it goes! Programming can be really fun, and Python is a great entry point, but it does still have some rules, like indentation, to follow for best results.

Hope this helps! 🙂

facial_recognition_hardware_CEModified.zip (2.1 KB)


Ahh thank you, I can see I was adding too much in too many places. I am able to get a servo to run with the same sequence if executed alone, but within the facial recognition script it still fails to execute. I have even removed the servo commands and left just a print statement, and that still fails to work when I run it… so I am missing something in how it executes. The facial recognition works, identifies me, and the camera appears to be working… The quest continues… at least this is fun!


For anyone who may want it for servo control, built on the CORE LED control script: this is a working version. I did not say a perfect version… but it does work…

import face_recognition
import cv2
import numpy as np
from picamera2 import Picamera2
import time
import pickle
from gpiozero import Servo
from time import sleep

# Load pre-trained face encodings
print("[INFO] Loading encodings...")
with open("encodings.pickle", "rb") as f:
    data = pickle.loads(f.read())
known_face_encodings = data["encodings"]
known_face_names = data["names"]

# Initialize the camera
print("[INFO] Initializing camera...")
picam2 = Picamera2()
picam2.configure(picam2.create_preview_configuration(main={"format": 'XRGB8888', "size": (640, 480)}))  # Reduced resolution
picam2.start()

# Initialize GPIO and Servo
myGPIO = 14
servo = Servo(myGPIO)
servo.detach()  # Servo starts in detached (off) state

# Processing variables
cv_scaler = 6
face_locations = []
face_encodings = []
face_names = []
frame_count = 0
start_time = time.time()
fps = 0
process_every_n_frames = 5

# Authorized users
authorized_names = ["THE NAME YOU TRAINED IT ON"]

def process_frame(frame):
    global face_locations, face_encodings, face_names

    resized_frame = cv2.resize(frame, (0, 0), fx=(1/cv_scaler), fy=(1/cv_scaler))
    rgb_resized_frame = cv2.cvtColor(resized_frame, cv2.COLOR_BGR2RGB)

    face_locations = face_recognition.face_locations(rgb_resized_frame, model='hog')  # Use faster model
    face_encodings = face_recognition.face_encodings(rgb_resized_frame, face_locations)

    face_names = []
    authorized_face_detected = False

    for face_encoding in face_encodings:
        matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
        name = "Unknown"

        face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
        best_match_index = np.argmin(face_distances)
        if matches[best_match_index]:
            name = known_face_names[best_match_index]
            if name in authorized_names:
                authorized_face_detected = True

        face_names.append(name)

    # Trigger servo if authorized face detected
    if authorized_face_detected:
        print("HI YOU!!!")
        servo.mid()
        sleep(0.5)
        servo.min()
        sleep(0.5)
        servo.max()
        sleep(0.5)
        servo.detach()
    else:
        print("NOT YOU!!!")
        servo.detach()

    return frame

def draw_results(frame):
    for (top, right, bottom, left), name in zip(face_locations, face_names):
        top *= cv_scaler
        right *= cv_scaler
        bottom *= cv_scaler
        left *= cv_scaler

        cv2.rectangle(frame, (left, top), (right, bottom), (244, 42, 3), 3)
        cv2.rectangle(frame, (left - 3, top - 35), (right + 3, top), (244, 42, 3), cv2.FILLED)
        font = cv2.FONT_HERSHEY_DUPLEX
        cv2.putText(frame, name, (left + 6, top - 6), font, 1.0, (255, 255, 255), 1)

        if name in authorized_names:
            cv2.putText(frame, "Authorized", (left + 6, bottom + 23), font, 0.6, (0, 255, 0), 1)

    return frame

def calculate_fps():
    global frame_count, start_time, fps
    frame_count += 1
    elapsed_time = time.time() - start_time
    if elapsed_time > 1:
        fps = frame_count / elapsed_time
        frame_count = 0
        start_time = time.time()
    return fps

# Main loop
print("[INFO] Running facial recognition. Press 'q' to quit.")
try:
    while True:
        frame = picam2.capture_array()

        if frame_count % process_every_n_frames == 0:
            processed_frame = process_frame(frame)
        else:
            processed_frame = frame

        display_frame = draw_results(processed_frame)
        current_fps = calculate_fps()

        cv2.putText(display_frame, f"FPS: {current_fps:.1f}", (display_frame.shape[1] - 150, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)

        cv2.imshow('Video', display_frame)

        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

except KeyboardInterrupt:
    print("\n[INFO] Exiting...")

# Cleanup
cv2.destroyAllWindows()
picam2.stop()
servo.detach()