Try this
sudo pip3 install --no-cache-dir --upgrade face_recognition
What if I’m running Debian 12??
Hi Tim!
Can you help me set up the code? I just want to move my servo only when my own face is recognized. The thing is, when I set up the code the same way as yours, the servo also moves when it detects an unknown person's face. I hope you can help me with this, since I also need to drive a solenoid for my other project.
Hi,
When I try pip install face-recognition --no-cache-dir I get the following:
Command "/usr/bin/python -m pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-IBj2Nz --no-warn-script-location --no-binary :none: --only-binary :none: -i [Simple index] --extra-index-url [piwheels - Simple index] -- setuptools wheel cmake" failed with error code 2 in None
(above this there is a lot of other text, but I can only include a maximum of 10 links)
Please help me, this is for my final school project.
Hi Mali.
Welcome
Would you please clarify what version of pip you’re running?
You can find that with this command: pip --version
Hi, I think I eventually was able to install it with a method I found online, but now I'm encountering another problem. When trying to run facial_req I get the following:

mali@raspberrypi:~/facial_recognition $ python facial_req.py
[INFO] loading encodings + face detector...
VIDEOIO ERROR: V4L: index 2 is not correct!
Traceback (most recent call last):
  File "facial_req.py", line 38, in <module>
    frame = imutils.resize(frame, width=500)
  File "/usr/local/lib/python3.7/dist-packages/imutils/convenience.py", line 69, in resize
    (h, w) = image.shape[:2]
AttributeError: 'NoneType' object has no attribute 'shape'

Could you please help me with this?
Never mind, it works! I just had to change src=2 to src=0 on line 26. Great guide!!!
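For anyone who hits the same VIDEOIO error, this is a sketch of the change I made (the line number and variable name follow the tutorial's facial_req.py, so your copy may differ slightly):

from imutils.video import VideoStream

# was: vs = VideoStream(src=2).start()   # /dev/video2 doesn't exist, so vs.read() returned None
vs = VideoStream(src=0).start()          # use /dev/video0, the default camera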
Hi
I am a beginner on this site and also in programming,
and I hope I can find some help here.
I installed OpenCV on my Raspberry Pi 4 (4 GB). I wrote a simple Python script for detecting and recording video captured with an Avidsen camera (my network consists of the Raspberry Pi connected to my router over Ethernet, and the camera connected to the same router over Wi-Fi). The script works well (reading, detection and recording). However, I am confronted with a problem that I do not understand: frames are missing and the progression over time is not consistent.
For example, in a simple test where I hold my hand in front of the camera and count down 5 seconds with my fingers, the detection is done correctly. However, when I watch the film I see the finger representing the first second, then very quickly my hand withdrawing from the camera's field of view, without seeing the other seconds (sometimes, but rarely, I see the 3rd second).
For information, when this script is used on my Windows computer, the recording is consistent in time and frames. Maybe the problem is due to the Raspberry Pi or the OpenCV version in this configuration.
My OS on the Raspberry Pi:
PRETTY_NAME="Raspbian GNU/Linux 11 (bullseye)"
NAME="Raspbian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=raspbian
ID_LIKE=debian
My Python version: Python 3.9.2
My OpenCV version on my Raspbian platform when I use pkg-config:
libopencv-calib3d4.5:armhf 4.5.1+dfsg-5
libopencv-contrib4.5:armhf 4.5.1+dfsg-5
libopencv-core4.5:armhf 4.5.1+dfsg-5
libopencv-dnn4.5:armhf 4.5.1+dfsg-5
libopencv-features2d4.5:armhf 4.5.1+dfsg-5
libopencv-flann4.5:armhf 4.5.1+dfsg-5
libopencv-highgui4.5:armhf 4.5.1+dfsg-5
libopencv-imgcodecs4.5:armhf 4.5.1+dfsg-5
libopencv-imgproc4.5:armhf 4.5.1+dfsg-5
libopencv-ml4.5:armhf 4.5.1+dfsg-5
libopencv-objdetect4.5:armhf 4.5.1+dfsg-5
libopencv-photo4.5:armhf 4.5.1+dfsg-5
libopencv-shape4.5:armhf 4.5.1+dfsg-5
libopencv-stitching4.5:armhf 4.5.1+dfsg-5
libopencv-videoio4.5:armhf 4.5.1+dfsg-5
My source code
import cv2
import time
import datetime
import paramiko

# Connection details (redacted)
ssh_file = "/home/*****/****/*******"
url_cam = "rtsp://****:****@192.168.***.***:554/mode=real&idc=1&ids=1"
remote_path = '/home/****/*****/******/Images/'
host = "192.168.***.***"
user = "*****"
pswd = "*****"

ssh_client = paramiko.SSHClient()
ssh_client.connect(hostname=str(host), username=str(user), password=str(pswd))

cap = cv2.VideoCapture(url_cam)
fps = cap.get(cv2.CAP_PROP_FPS)
print(fps)
width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)    # float
height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)  # float

mog = cv2.createBackgroundSubtractorMOG2()

detection = False
detection_stopped_time = None
timer_started = False
SECONDS_TO_RECORD_AFTER_DETECTION = 5

frame_size = (int(cap.get(3)), int(cap.get(4)))
fourcc = cv2.VideoWriter_fourcc(*'XVID')
# fourcc = cv2.VideoWriter_fourcc(*'MPV4')

while True:
    cv2.namedWindow("Camera", cv2.WINDOW_NORMAL)
    cv2.resizeWindow("Camera", 1000, 500)

    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Background subtraction + morphology to isolate moving regions
    fgmask = mog.apply(gray)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    fgmask = cv2.erode(fgmask, kernel, iterations=1)
    fgmask = cv2.dilate(fgmask, kernel, iterations=1)
    contours, hierarchy = cv2.findContours(fgmask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    if len(contours) > 100:
        # Motion detected
        if detection:
            timer_started = False
        else:
            detection = True
            current_time = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S-%Y-%m-%d")
            out = cv2.VideoWriter(
                f"{current_time}.avi", fourcc, fps, (int(width), int(height)))
            print("Started Recording!")
            print(len(contours))
            sftp_client = ssh_client.open_sftp()
    elif detection:
        if timer_started:
            if time.time() - detection_stopped_time >= SECONDS_TO_RECORD_AFTER_DETECTION:
                print("boucle de fermeture")  # "closing loop"
                print(len(contours))
                detection = False
                timer_started = False
                out.release()
                print('Stop Recording!')
                local_file_path = f"{current_time}.avi"
                remote_file_path = f"{remote_path}{current_time}.avi"
                sftp_client.put(local_file_path, remote_file_path)
                sftp_client.close()
        else:
            timer_started = True
            detection_stopped_time = time.time()
            print("boucle ouverture en cours ")  # "opening loop in progress"

    if detection:
        out.write(frame)

    # Draw bounding boxes around the larger moving contours
    for contour in contours:
        if cv2.contourArea(contour) < 1000:
            continue
        x, y, w, h = cv2.boundingRect(contour)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("Camera", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
ssh_client.close()
Do you have any idea about this problem?
I am a novice on this particular topic…
Thank you for your feedback and happy new year 2024
Francois
Hi @Fr264583
Good to have you with us
Just wanting to make sure I understand your problem.
Are you saying that, while running your code, you get a good result on screen, but it's the recording output file from your cv2.VideoWriter
object that is missing data?
Did I get that right?
Pix
Hi
Yes, that's true. There are many frames missing from the film, regardless of the chosen format (avi, mp4, …).
You will find here a video with this problem.
2024-01-20-10-24-39-2024-01-20.zip (1.0 MB)
Thank you for your feedback
François
Cool. We’re on the same page.
Five seconds happens to be what you've set this variable to:
SECONDS_TO_RECORD_AFTER_DETECTION = 5
This variable is used in the if statement below, and the sftp client gets closed when the condition is True:
if time.time() - detection_stopped_time >= SECONDS_TO_RECORD_AFTER_DETECTION:
# Code omitted for clarity
sftp_client.close()
Maybe this is the problem? If time.time() - detection_stopped_time
ever goes negative, you would get about 5 seconds of footage every time, no matter what.
Are there mechanisms in your code to update detection_stopped_time such that it's always greater than or equal to time.time()?
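To make that countdown easier to reason about, here's a stripped-down sketch of the timer pattern your script uses (same variable names; the handle_motion() wrapper and its motion_detected flag are just for illustration):

import time

SECONDS_TO_RECORD_AFTER_DETECTION = 5
detection = False
timer_started = False
detection_stopped_time = None

def handle_motion(motion_detected):
    # Returns True while the script should still be recording.
    global detection, timer_started, detection_stopped_time
    if motion_detected:
        detection = True
        timer_started = False          # motion came back: cancel the countdown
    elif detection:
        if not timer_started:
            timer_started = True
            detection_stopped_time = time.time()   # start the countdown once
        elif time.time() - detection_stopped_time >= SECONDS_TO_RECORD_AFTER_DETECTION:
            detection = False          # countdown elapsed: stop recording
            timer_started = False
    return detection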
Pix
Hi Pixmusic
The line SECONDS_TO_RECORD_AFTER_DETECTION = 5
is there to keep the camera open after len(contours) < 100, but I nevertheless tried a simple test, increasing this variable to SECONDS_TO_RECORD_AFTER_DETECTION = 60.
Unfortunately the result is identical.
I think my problem is somewhere else.
Regards
François
When I use the same program on my Windows computer with Spyder, the result is correct: I see the 5 seconds counted by my fingers plus the 5 seconds after. Unfortunately it's not possible to put the file on this forum because uploads are limited to 8 MB and my file is 10 MB.
Regards
François
Ahhh Gotcha.
Measuring the size of your list of contours is smart.
Have you tried swapping the encoder?
For example does mp4 work?
fourcc = cv2.VideoWriter_fourcc(*'X264')
voObj = cv2.VideoWriter(f"{current_time}.avi", fourcc, fps, (int(width), int(height)))
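If X264 isn't available in the OpenCV build on the Pi, 'mp4v' into an .mp4 container is a common fallback worth trying (just a suggestion on my part, untested on your setup):

fourcc = cv2.VideoWriter_fourcc(*'mp4v')
voObj = cv2.VideoWriter(f"{current_time}.mp4", fourcc, fps, (int(width), int(height)))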
Hi
I compressed the captured video to reduce its size so I could share it with you.
This video was created by the same Python program used on the Raspberry Pi, which I ran on my Windows computer.
Video with windows 2024_01_22_10_00_39.zip (7.5 MB)
You will notice a complete and coherent film over time, unlike the film captured by the Raspberry Pi.
Do you have any idea regarding the problem on the Raspberry Pi?
Regards
François
Nice testing François!
… I’m sorry but I’d be guessing if I said I knew what was wrong.
However I can tell you what I’d try.
I'd load the program up whilst running Raspberry Pi Monitor, htop, or your favorite equivalent, to make sure you're not pushing the Pi too hard. I personally doubt that's it, but it's the next thing I'd check.
Hi Pixmusic
Great remark, because I use my Raspberry Pi for other things too (Jeedom, gateway for my server disk, printer gateway).
You will find here an htop screenshot taken during the recording phase.
That’s super cool; htop is Ⴆαҽ but htop isn’t clingy. The swp is chillin’. Memory looking happy.
I think I see something that might be a problem and is worth further investigation.
I apologize in advance if this is stuff you're already familiar with. I just don't know how much you know… ya know?
These bars below caught my eye.
The core is the part of your computer that runs the fetch–decode–execute cycle. It’s the thinky part.
The bars labeled 0, 1, 2, 3 are the four cores of your Pi's quad-core ARM Cortex-A72.
Long story short, cores stopped getting faster, so we "chucked a post-war Europe" and convinced everyone to have quadruplets.
The Pi is very clever; it will notice that you have many programs running at the same time and will divvy up the work between the four cores. However! Each of your cores individually isn't that fast relative to, say, the Intel i7 I'm writing this on.
Here's the kicker: for reasons I go into a little here, Python is locked onto one core at a time.
i.e. the Pi can see Python wants to spin up lots of threads and do many things, but it has to do ALL those things on a single core.
Below you can see how much CPU (as a percentage) the Python interpreter wants.
And here you can see core 1 being an absolute champion.
The Pi isn't even maxed out; the other three cores have room to spare.
However, the interpreter's Global Interpreter Lock protects "shared state" by only letting one thread run Python code at a time.
I think, based on what I see here, that the single core your Python process runs on is being overloaded.
I’m not saying this is definitely the cause of your problems but you should explore further.
I think you should run a super simple test.
What happens if you restart the Pi and do nothing else but run the Python script?
Does it run? What does htop show?
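If you want to see the single-core behaviour for yourself, here's a tiny experiment (nothing to do with your script, purely illustrative): run it and watch htop. The threaded version stays near one core's worth of CPU because of the interpreter lock; the multiprocessing version spreads across all four cores.

import multiprocessing
import threading

def burn(n=30_000_000):
    # Pure-Python busy loop: CPU-bound work that never releases the interpreter lock
    total = 0
    for i in range(n):
        total += i
    return total

def run_threads():
    workers = [threading.Thread(target=burn) for _ in range(4)]
    for w in workers: w.start()
    for w in workers: w.join()

def run_processes():
    workers = [multiprocessing.Process(target=burn) for _ in range(4)]
    for w in workers: w.start()
    for w in workers: w.join()

if __name__ == "__main__":
    run_threads()      # htop: roughly one core's worth of CPU in total
    run_processes()    # htop: all four cores busy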
Pix
Hello Tim,
I'm struggling with the last script. headshots_picam.py and train_model.py work very well,
but facial_req.py doesn't, because of vs = VideoStream(usePiCamera=True).start()
I have a Raspberry Pi 4 Model B with 4 GB and the Bookworm (12) OS.
I read everything here (I think) but didn't solve the error: ModuleNotFoundError: No module named 'picamera'
I also tried vs = VideoStream(src=0).start()
The camera works, as I use it in other scripts/examples.
Could you help me, please?
Ovidiu
Can you help me please?
Thank you Pixmusic for your clear explanations.
There are a lot of things that I don't know precisely, like Python, Linux and how the Raspberry Pi works, so any new information interests me.
It's a good idea to stop the other tasks running on my Raspberry Pi and test my script alone. I am thinking of making a new SD card without any other applications; that will be simpler for this test. I will let you know as soon as it is in place and tested. Tomorrow, I think.
Regards
François