I have recently got started with the Raspberry Pi and am still quite new to it. I have played with your OpenCV face tracking and object recognition guides. For a project, I would like to be able to track an object (specifically a ball) using a similar method to the face tracking. I am unsure how to implement this and thought you might be able to help!
If you utilise that, you will be able to adjust the pan-tilt script I have here and adapt it for your use. Then you will be pan-tilt tracking a ball all day.
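As a starting point, this rough sketch shows the kind of change involved (assuming a coloured ball picked out with an HSV mask, cv2.VideoCapture in place of the tutorial's camera setup, and the Pimoroni pantilthat library - the colour limits and the gain are placeholders you would need to tune):

import cv2
import numpy as np
import pantilthat

# Placeholder HSV range for the ball colour - tune these for your ball and lighting
LOWER = np.array([5, 120, 120])
UPPER = np.array([20, 255, 255])

cam_pan, cam_tilt = 90, 90   # same 0-180 convention as the face tracking script
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)   # biggest blob = the ball
        x, y, w, h = cv2.boundingRect(c)
        # Offset of the ball centre from the frame centre, as a fraction of the frame
        err_x = (x + w / 2) / frame.shape[1] - 0.5
        err_y = (y + h / 2) / frame.shape[0] - 0.5
        # Nudge the servos toward the ball (the 5-degree gain and the signs are guesses)
        cam_pan = max(0, min(180, cam_pan - err_x * 5))
        cam_tilt = max(0, min(180, cam_tilt + err_y * 5))
        pantilthat.pan(int(cam_pan) - 90)
        pantilthat.tilt(int(cam_tilt) - 90)
    cv2.imshow("ball tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()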
Hi, incredible guide, I learnt a lot. A newbie question if I may: is there any way to auto-centre the Pimoroni HAT when the program is initiated? Every time it starts, the HAT is looking at the ceiling.
Many thanks! Keep up the good work!
There are some lines early in the code that should do what you’re after:
# Default Pan/Tilt for the camera in degrees. I have set it up to roughly point at my face location when it starts the code.
# Camera range is from 0 to 180. Alter the values below to determine the starting point for your pan and tilt.
cam_pan = 40
cam_tilt = 20
Have you changed these to see if you can get it doing what you’re after? Hope this works!
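For reference, a minimal sketch of centring the HAT as soon as the script starts (assuming the Pimoroni pantilthat library, which expects angles from -90 to 90):

import pantilthat

# 90/90 in the script's 0-180 convention is the physical centre of travel
cam_pan = 90
cam_tilt = 90

# The pantilthat library itself takes -90 to 90, so subtract 90 before sending
pantilthat.pan(cam_pan - 90)    # 0 = centred horizontally
pantilthat.tilt(cam_tilt - 90)  # 0 = level, rather than pointing at the ceiling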
Thank you so much!
It worked beautifully!
May I ask for some help? I am trying to track an object, so I have been investigating your other tutorial about object classification. The issue is I don't know how to extract the x, y, w, h info from the detected object in: result, objectInfo = getObjects(img, 0.45, 0.2, objects=['bird']), so I can reuse your tracking function that works so well with the Pimoroni pan-tilt.
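For context, this is the sort of thing I am imagining (a rough sketch only; I am assuming objectInfo comes back as a list of [box, className] pairs where box is (x, y, w, h)):

# Assuming objectInfo looks like [[(x, y, w, h), 'bird'], ...]
result, objectInfo = getObjects(img, 0.45, 0.2, objects=['bird'])

if objectInfo:
    box, name = objectInfo[0]       # take the first detection
    x, y, w, h = box
    # Centre of the bounding box - this is what the pan-tilt maths needs
    cx = x + w // 2
    cy = y + h // 2
    print(name, cx, cy)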
If you could kindly refer me to a function to extract this info, it would be much appreciated.
Hi Tim, I am slowly learning my way around the Raspberry Pi and have had some good fun playing with your object detection tutorial and this face tracking one. I am currently trying to combine the two aforementioned scripts in order to do some object tracking, but I am not having much luck. Is there an easy way to do this?
I hope I’m posting this question in the right place. I’ve been working on a wearable camera tracking system for sportspeople (like the Osmo Pocket), which is a challenging environment. I have used OpenCV’s MOSSE algorithm with a Pi 3A+ to keep processor and battery size/weight to a minimum. It performs reasonably at 640x480 with a processing speed of 100 fps, but struggles to track when the targeted person or the camera wearer moves too quickly. Tracking performance is useless when the frame size is increased to 1024x720 or the video is recorded.
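For reference, the core of what I am running looks roughly like this (simplified; I am assuming an opencv-contrib build where the MOSSE tracker lives under cv2.legacy - on older builds it is cv2.TrackerMOSSE_create):

import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

ok, frame = cap.read()
bbox = cv2.selectROI("select target", frame, False)   # draw a box around the person once

tracker = cv2.legacy.TrackerMOSSE_create()
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)
    if found:
        x, y, w, h = [int(v) for v in bbox]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()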
The FOV and quality of 360° video cameras do not meet my requirements.
Do you have any suggestions for the most efficient CV library/code for detection and tracking, where the helmeted sportsperson and camera wearer are both moving at a range of 3-10 m from each other?
I’d say something from OpenCV would be some of the fastest openly available tracking software; anything beyond this would be optimised for a specific product, like the DJI range of tracking cameras. If you are running on the Pi 3, a bit more processing power might help. I’d check out the Oak-D Lite: it features an onboard video processing unit that's specialised at running vision models, and it lets you pipe the data out via USB (along with a recording via UVC).
Thanks for your comments, Liam. I will look into the Oak-1 as it is more suitable for my application. Since there is a trade-off between tracking performance and hardware size, I was hoping to improve performance with better code instead of better hardware. I'm a Python novice, but I will attempt threading on my Pi 3A+, i.e. separate threads for capture, write, track and show, which will hopefully improve tracking performance.
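The structure I have in mind is roughly this (a rough sketch only, using Python's built-in threading and queue modules; the tracker and writer themselves are left out):

import cv2
import queue
import threading

frames = queue.Queue(maxsize=2)   # small buffer so tracking always sees a recent frame

def capture(cap):
    # Grab frames in a dedicated thread so tracking never waits on the camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        try:
            frames.put_nowait(frame)
        except queue.Full:
            pass                  # drop frames if the tracker is still busy

cap = cv2.VideoCapture(0)
threading.Thread(target=capture, args=(cap,), daemon=True).start()

while True:
    frame = frames.get()          # latest frame handed over by the capture thread
    # ... run tracker.update(frame), write and draw here ...
    cv2.imshow("preview", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()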
The error message is telling you that CMake could not find a file named CMakeLists.txt in the current directory (/home/francois/build). CMake uses this file to figure out how to build your project and won’t know what to do without it.
Here is how you would resolve this:
First, ensure that there is a “CMakeLists.txt” in the directory where you’re invoking “cmake”.
If there is no “CMakeLists.txt” at all, then the project probably wasn’t designed to be built with CMake, so look for the project’s own build instructions instead. These might be in a README or INSTALL text file in the project, or in the project’s online documentation.
Hi Tim,
I’ve tweaked your awesome script to work on a Raspberry Pi 5 utilizing Bookworm OS. The facial recognition part is working as expected. I bought an Adafruit servo bonnet prior to finding your example and am trying to get it to work with the code. Can you clarify a few things?
Are the following lines just arbitrary variables?
cam_pan = 40
cam_tilt = 20
And are the following lines angle settings for the servos?
Turn the camera to the Start position (the data that pan() and tilt() functions expect to see are any numbers between -90 to 90 degrees).
pan(cam_pan-90)
tilt(cam_tilt-90)
For the Adafruit bonnet, setting the servo angles uses the following:
kit.servo[0].angle = 90 for pan
kit.servo[1].angle = 90 for tilt.
I haven’t been able to figure out how to utilize the Adafruit lingo beyond setting the initial servo angle.
Can you provide some guidance, please?
From the comments above these in the code, they look to be the angle in degrees.
The value can go from 0-180°
These functions set the position of the servos.
It looks like the functions are expecting a value from -90 to 90, so the -90 offset takes the 0 to 180° value and converts it into that range.
I take it you’re using an Adafruit 16-Channel PWM/Servo HAT instead of the Pimoroni HAT.
In that case, from the Adafruit Product Wiki, the angle required for the HAT is a positive integer, so you don’t need the cam_pan-90.
You should be able to replace the pan() function with this:
kit.servo[0].angle = cam_pan
You will also need to update this section in the for loop with the Adafruit language for it to work properly.
# Update the servos
pan(int(cam_pan-90))
tilt(int(cam_tilt-90))
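Putting that together, something like this might slot in (a sketch only, assuming the adafruit_servokit library with pan on channel 0 and tilt on channel 1, as in your snippet):

from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)      # the servo bonnet is a 16-channel board

cam_pan = 90                     # 0-180, same convention as the original script
cam_tilt = 90

# Replaces pan(int(cam_pan - 90)) and tilt(int(cam_tilt - 90)) in the loop -
# the bonnet takes 0-180 directly, so the -90 offset is not needed
kit.servo[0].angle = max(0, min(180, int(cam_pan)))    # pan
kit.servo[1].angle = max(0, min(180, int(cam_tilt)))   # tilt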
I hope this helps get you started with the project.
Hi Tim,
Just found your tutorial and thought it would be great to incorporate it into a Star Wars Pit Droid that I have printed. I am using the exact hardware that you used in your tutorial, but it seems some packages are missing (libpng12-dev, libjasper-dev). Maybe it's me, as I am relatively new to Pi programming, or it might just be that time has moved on with versions, etc.
You mentioned that you might make a script/batch file that runs through all the installation prerequisites. This sounds so handy; did you make such a file in the end?
Kind Regards
Adrian
After some light research, it seems these missing files are related to the packages that ship with specific Pi operating systems. Are you using an older version of Pi OS? We have a tutorial for this process here:
Otherwise, you can always manually install these files using the following commands; however, this is a little riskier, as the process could overwrite other required system files.
‘libjasper1’ is the current replacement for ‘libpng12-dev’ in the current Pi OS version, which would be the source of your issues. I would definitely recommend downgrading to Buster OS if possible!