Getting Started with YOLO Object and Animal Recognition on the Raspberry Pi

How does this project perform in real life? I used YOLO for an ESP32-CAM-based object detection project and found that performance was heavily dependent on lighting.

1 Like

Jaryd, thank you for the superb video/guide. Spot on!!!

Have you got YOLO11 converted to NCNN? Same tool?
Two years ago, this kind of FPS was only achievable with Coral-based projects. What the heck can I do with that device now!? Do I have to switch to the AI HAT?
What other Raspberry Pi computer vision stuff is around? Your favourites, anyway…

Please and thanks.

2 Likes

Hey @ahsrab292840 welcome to the forums!

This is going to depend a lot on your set-up. The default settings on the Pi 5 give about 5 FPS, and it isn’t very dependent on light. In the written guide we go through a few ways to increase its processing speed and get a good 30 FPS out of it, but it can be pushed even further.

This is also a higher-resolution and possibly more powerful model than what runs on your ESP32-CAM-based project.
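For a rough idea of the sort of levers involved (the exact settings are in the written guide; the model choice and values below are just illustrative assumptions), a smaller model and a lower inference resolution are the usual starting points:

from ultralytics import YOLO

# Illustrative only - the model name and imgsz value are assumptions,
# not the written guide's exact settings.
model = YOLO("yolov8n.pt")              # "n" (nano) is the smallest, fastest variant
results = model("test.jpg", imgsz=320)  # any test image; lower imgsz trades accuracy for FPS
results[0].show()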

1 Like

Hey @Aeon293088, thank you for the kind words!

We were able to do all the conversion with YOLO11 just fine, using the same tool. As of now, only models made by Ultralytics can be converted with their method (so YOLOv5, YOLOv8, and now YOLO11).
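For reference, here is a minimal sketch of that Ultralytics export method with YOLO11 (assuming the Python API rather than the CLI, and using the nano model purely as an example):

from ultralytics import YOLO

# Grab a YOLO11 PyTorch model and convert it to NCNN.
model = YOLO("yolo11n.pt")
model.export(format="ncnn")   # writes a "yolo11n_ncnn_model" folder next to the weights

# The exported folder then loads like any other model.
ncnn_model = YOLO("yolo11n_ncnn_model")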

The Coral could still give this a run for its money! But there is a lot of optimisation, and a few corners cut, to get this FPS out of the Pi 5’s silicon. The AI HAT is even more incredible: we were able to run some much larger and more powerful models at 30 FPS with only 10 watts of power! However, both the AI HAT and a Coral TPU have different workflows from what we cover in this guide; it’s not as simple as telling this script to do its processing on them.

As for more guides, we have a guide on using the AI HAT if you wish to check out the workflow for it. It is a little more involved than this guide but still straightforward.

We also have a pose estimation guide, where we make a Space Invaders game controlled with your nose. If you have done this object recognition guide, you have already set up 90% of what that one needs.

We also have a face recognition guide. It’s a bit low-tech compared to the YOLO models but can be useful nonetheless.

YOLO is really the cream of the crop here, and I would also check out their YOLO11 page to see all the model types that are available. This guide already set up most of the libraries needed to use them, so swapping should be as easy as changing the model name (the pose guide is a good example of how to do this). There are a few other tasks YOLO can do, like describing what’s in an image and segmenting the objects in it.
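To illustrate the "as easy as changing the model name" point, a quick sketch (these are the standard Ultralytics weight names; whether your project needs them is up to you):

from ultralytics import YOLO

# The same YOLO class runs the different task models - only the weights name changes.
detect_model   = YOLO("yolo11n.pt")       # object detection (the task in this guide)
pose_model     = YOLO("yolo11n-pose.pt")  # pose estimation (the Space Invaders guide)
seg_model      = YOLO("yolo11n-seg.pt")   # instance segmentation
classify_model = YOLO("yolo11n-cls.pt")   # whole-image classification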

If you have any more questions let us know!

2 Likes

Thanks Man.

I’ll tell ya, Jaryd: the goal is actually emotion detection on the Pi (to trigger a kinetic process, like your solenoid).

I’ve found some YOLOv8-based emotion detection, but haven’t “hit paydirt” yet.

I tried using YOLO-World and prompting “happy face”, “emotion:happy”, etc… no luck there.

What do you think? Emotion recognition is an uncanny place. . .

1 Like

I think your best route would be to find someone who has already trained a YOLO-based model for emotion detection. If you can get your hands on a model in the .pt format, you should be able to just download it, put it into the project folder with all the other files, and then change the model-loading line in the main script to:

# Load YOLOv8
model = YOLO("best.pt")

And it should run! You can change the model’s file name in the folder, and in this line, if you need to. I’m using “best.pt” as an example as it’s the default output name of a trained model.

I’d take a good look around, but here is a YOLOv5 implementation that might run a little slower than a v11 implementation - you’ll find the model in its models folder. Ignore its set-up and running instructions; you can use this guide and the method above instead.

There is also a file in this v11 repo called best.onnx. I think the set-up from this guide can run ONNX? If not, you should be able to use the NCNN conversion method to turn it into an NCNN model that it can run.
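If you want to try that best.onnx directly, a minimal sketch (assuming the file has been copied into the project folder; Ultralytics can usually load ONNX exports and may download some ONNX runtime bits the first time it runs):

from ultralytics import YOLO

# Assumption: best.onnx from the v11 repo sits in the project folder.
model = YOLO("best.onnx")
results = model("test.jpg")  # any test image path
results[0].show()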

Let us know how you go! It’s been on our list of videos to eventually get around to!

1 Like

Well, guess what (x2)!?

  1. The above code ran the best.onnx from that v11 repo - it just downloaded itself some ONNX stuff. 5 FPS.
  2. The modelling must have been very, very deep, because it only sees “Sad” even when I fake a smile…

I’m’a keep hunting.
I’ll try the ONNX to NCNN conversion, jfk.

Let me know if you come across anything,please and kindly

2 Likes

Thanks. Once I succeed in setting up my Raspberry Pi, I will give it a try.

2 Likes

Hello. I am a big fan of this project, but I was wondering if anybody knows how to get it to run as soon as the Pi is turned on? Thank you.

3 Likes

Hey @David293712
Welcome to the forum.
It’s not the only way, but try the search term “crontab”.
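For example, run crontab -e and add a line along these lines at the bottom (the paths and the script name here are placeholders - point them at your own virtual environment and script):

@reboot sleep 30 && /home/pi/yolo_object/bin/python /home/pi/yolo_object/yolo_detect.py

The sleep just gives the system time to finish booting first. Note that cron jobs don’t have a display attached, so if the script opens a preview window you may need a desktop autostart entry instead.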

2 Likes

When I try to install on a CM5 with 4 GB RAM and 32 GB eMMC, during the pip install ultralytics[export] step I get an error:
error: resolution-too-deep

× Dependency resolution exceeded maximum depth
╰─> Pip cannot resolve the current dependencies as the dependency graph is too complex for pip to solve efficiently.

hint: Try adding lower bounds to constrain your dependencies, for example: ‘package>=2.0.0’ instead of just ‘package’.

Is it easy to fix?

1 Like

Hi Dave.

Welcome :slight_smile:

I think I understand that error but to be sure I’d love to know exactly what command you ran that led to that error. Can you paste the command here? Preferably I’d love to see the full error as well.

I think it may be because I did not try it on a fresh install. Since then, I have been unable to flash the CM5 for some unknown reason, and progress is halted until rpiboot recognizes my CM5 IO board. Sigh. I will update if/when that happens. Sigh, again. Thanks for your time.

1 Like

I repeated the error on a fresh install on the CM5 with 4 GB RAM and 32 GB eMMC. Flashed it, set up VNC, expanded the filesystem and rebooted, then proceeded with your instructions. All went well until the Ultralytics install and then this happened (I cut the log off at the last successful download):

Downloading https://www.piwheels.org/simple/jax/jax-0.5.1-py3-none-any.whl (2.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.4/2.4 MB 4.2 MB/s eta 0:00:00
Collecting importlib_resources>=5.9.0 (from tensorflowjs>=2.0.0->ultralytics[export])
Downloading https://www.piwheels.org/simple/importlib-resources/importlib_resources-6.5.1-py3-none-any.whl (37 kB)
Downloading https://www.piwheels.org/simple/importlib-resources/importlib_resources-6.5.0-py3-none-any.whl (35 kB)
Downloading https://www.piwheels.org/simple/importlib-resources/importlib_resources-6.4.5-py3-none-any.whl (36 kB)
Downloading https://www.piwheels.org/simple/importlib-resources/importlib_resources-6.4.4-py3-none-any.whl (35 kB)
Downloading https://www.piwheels.org/simple/importlib-resources/importlib_resources-6.4.3-py3-none-any.whl (35 kB)
Downloading https://www.piwheels.org/simple/importlib-resources/importlib_resources-6.4.2-py3-none-any.whl (34 kB)
error: resolution-too-deep

× Dependency resolution exceeded maximum depth
╰─> Pip cannot resolve the current dependencies as the dependency graph is too complex for pip to solve efficiently.

hint: Try adding lower bounds to constrain your dependencies, for example: ‘package>=2.0.0’ instead of just ‘package’.

Link: https://pip.pypa.io/en/stable/topics/dependency-resolution/#handling-resolution-too-deep-errors
(yolo_object) Pi@CM5:~ $

Appears to be the same error as before.

Specifically, it stopped while executing this:

pip install ultralytics[export]

Is this what you typed into the console? :slight_smile:

Hey @Dave294448,

This package installs quite a lot of dependencies and can be prone to errors at this step. Sometimes you need to run the install command a few times to get it to work. If you run the command multiple times, does it fail at the same package installation each time, or does it progress further?
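In the meantime, one thing that might be worth a try (an assumption based on pip’s own hint rather than something we’ve verified on a CM5) is giving pip a lower bound for the package it was backtracking through:

pip install "ultralytics[export]" "importlib_resources>=6.4.0"

That just narrows the search space for the resolver; if it still fails, it at least rules that package out.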

Let us know how it goes!

1 Like

It failed each time at the same place.

Hi @Dave294448

I’ve just tried to install the package on our test Pi 5, and it looks like the issue isn’t limited to just you; it appears to be more widespread. We will have more of a look into it and get back to you with any updates we come across.

1 Like

Thank you.