Getting Started with YOLO Object and Animal Recognition on the Raspberry Pi

Hey Jim,

The code is in the written guide. Here it is though; it's short and sweet:

from ultralytics import YOLO

# Load a YOLOv8n PyTorch model
model = YOLO("yolov8n.pt")

# Export the model to NCNN format
model.export(format="ncnn", imgsz=640)  # creates 'yolov8n_ncnn_model'
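
Once the export finishes, you can load the NCNN model the same way as the .pt file. A quick sketch ("image.jpg" is just a placeholder for your own image or camera frame):

from ultralytics import YOLO

# Load the exported NCNN model (the folder created by the export above)
ncnn_model = YOLO("yolov8n_ncnn_model")

# Run inference exactly as you would with the PyTorch model
results = ncnn_model("image.jpg")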
1 Like

Ok thanks. New to Python. Didn’t realize the text was the whole thing. Thought it was a partial script.

3 Likes

Hi Jaryd!

Thanks for replying earlier. We realised the camera we used didn't work properly, so that's solved. However, we have now run into a new problem. We are trying out the code for YOLO World, and the Raspberry Pi 4 is taking a very long time to load. Is YOLO World too demanding for the Raspberry Pi 4?

Thanks so much!

1 Like

Hey Sarah, glad to hear you fixed that!

Where is it taking a long time to load? YOLO World is an extremely large model (roughly 10 times bigger than the normal model sizes), so the download can take a while. If it is slow to load the model after downloading, it may be a bit too much for a poor Pi 4, as even the Pi 5 can struggle with it.

I don’t know many people who have tried YOLO world on the Pi 4 so let us know your results!

1 Like

Hi Jaryd!

When using the code for YOLO World, it takes a long time to start up. Nothing occurs even if we wait 30 minutes; there is no download or anything. So we may have to go another route, unfortunately.

We are going to try again with YOLOv8 because that was working. However, we wanted to ask if it is possible to set classes in your code, so that we can specify which detections happen, for example only backpack and person and nothing else.

Thanks so much!

2 Likes

Hey Sarah,

Sounds like YOLO World may be just a little too much for a Pi 4, thank you for letting us know!

In terms of specifying certain objects for detection, there are a few potential ways. I seem to have misplaced my microSD card with YOLO installed so I can't double-check, but one of these should work.

When you declare the model, you should be able to set the classes with:

# Load YOLOv8
model = YOLO("yolov8n.pt")
model.classes = [0, 24]

If that doesn't work, you might be able to specify it on the inference step inside the while True loop:

    # Run YOLO model on the captured frame and store the results
    results = model(frame, classes=[0, 24])

Here we are telling the model to only give us classes 0 and 24, which are people and backpacks. You can find the full class list with the associated numbers on Ultralytics' site.
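
If you'd rather print that list locally, the loaded model carries it too; model.names is the Ultralytics attribute mapping each class ID to its name:

# Print every class ID and its name from the loaded model
for class_id, name in model.names.items():
    print(class_id, name)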

Also, if you aren't using YOLOv8 for a particular reason, give YOLO11 a go! It came out a couple of months back and is about 10% faster and smaller. You only have to change the model name from v8 to:

# Load YOLO11
model = YOLO("yolo11n.pt")
model.classes = [0, 24]

Note that there is no "v11" in the name, just "yolo11"; they seem to have dropped the "v" once they hit double digits.

Let us know how this goes, best of luck!

3 Likes

Thank you all for the information, but it seems my knowledge is insufficient to load the forklift model found here:

using model = YOLO("best.pt").

I recently reinstalled the operating system on my Raspberry Pi. I’ve created a virtual environment and installed:

sudo apt install python3-pip -y
pip install -U pip
pip install ultralytics[export]

The suggested program works perfectly. However, I would now like to use a model that only detects whether a forklift is present in the camera image, instead of the full set of classes.

I’m encountering errors because the forklift model is designed for YOLOv5, while the framework I installed is for YOLOv8. I’m feeling quite lost. :frowning:

Hey Milan,

The Ultralytics library that we set up and run in the demo is a framework that should run nearly every YOLO model, including YOLOv5. The GitHub repo you linked is an entirely different framework to set up and run. You should, however, be able to grab the actual forklift-trained YOLO model from it and run it with the framework we set up in the guide, but I snooped around and couldn't actually find which file it was - it's not well-documented :frowning: .

Earlier you linked to some Hugging Face forklift models, and those would work better here. There are nano, small and medium sizes from what I can see, they are all YOLOv8, and they are trained to detect only "forklift" and "person".

If you head to the "Files and versions" tab of one of those models, you should be able to see a file called "best.pt". Download it and put it into the project folder with all the other files. Then change the line in the main script to:

# Load the custom forklift model
model = YOLO("best.pt")

And it should run! You can rename the model file in the folder, and update this line to match, if you need to.

As for which size model you should use, I would start with the nano and see if it performs well enough for your needs; if it doesn't, try a larger model. The larger models will run slower but should give slightly more accurate detections.

And remember, you should also be able to convert these models to NCNN for that free speed boost!
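
The conversion itself is the same as the snippet at the top of this thread, just pointed at your model file; something like:

from ultralytics import YOLO

# Load the custom forklift model
model = YOLO("best.pt")

# Export to NCNN format (creates a 'best_ncnn_model' folder)
model.export(format="ncnn", imgsz=640)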

Hope this helps, let us know if you run into any issues!

2 Likes

I am just trying to run the YOLO World demo code, but nothing happens and I get a "process ended with exit code -9" every time. Also, the program runs fine if I comment out the model.set_classes line.

1 Like

Hey @tom289085,

"Exit code -9" typically just indicates that the system has shut the process down (a -9 exit code corresponds to SIGKILL, which is what the kernel's out-of-memory killer sends, among other things). It could happen for a variety of reasons, including:

  • System (or process) running out of memory
  • System (or process) trying to find a file that doesn’t exist

It is not limited to the above, but these are some I have seen before.

I would think the "model.set_classes" line has a large effect on system memory, which would explain why dropping it helps. Can I confirm you are using at least a 4GB variant of the Pi?
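
If you want to keep an eye on memory while the script runs, a quick check from a second terminal (standard Linux tools, nothing YOLO-specific):

# Show total, used and available RAM
free -h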

1 Like

I was under the impression that I had bought a 4GB Pi, but it is actually a 2GB model. Anyway, I ran it again and now the program runs. It detects the objects I specified; however, the frame will not pop up (frame as in what the camera is seeing, hope you understand what I mean). One more question I want to ask: how do I get access to the list of object IDs? Thank you!

1 Like

Oh, never mind, I found out how to get access to the ID list.

1 Like

Hey @tom289085,

Glad to hear you got it working! Let us know if you have any other questions.
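
For anyone else who lands here with the same question, here's a quick sketch of pulling the IDs out of a detection result (boxes.cls is the Ultralytics attribute holding each detection's class index):

# After running inference, e.g. results = model(frame)
for box in results[0].boxes:
    class_id = int(box.cls)  # numeric class ID of this detection
    print(class_id, model.names[class_id])  # ID and its human-readable name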

Hey,
I am using a Raspberry Pi 4 and I am unable to use YOLO World. When running the NCNN conversion, I get this:

%Run 'ncnn covertion.py'
Ultralytics 8.3.75 :rocket: Python-3.11.2 torch-2.6.0+cpu CPU (Cortex-A72)
YOLOv8n summary (fused): 168 layers, 3,151,904 parameters, 0 gradients, 8.7 GFLOPs

PyTorch: starting from 'yolov8n.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (6.2 MB)

TorchScript: starting export with torch 2.6.0+cpu…

Process ended with exit code -4.

Hey @p289569, welcome to the forums!

Unfortunately, we have since found through testing that the Pi 4 can be a little too slow for YOLO World. It really struggles even on the Pi 5, which is about 2-3 times faster than the Pi 4.

However, the regular YOLO models should work.

And can I clarify that you are trying to run the NCNN conversion on YOLO World? Unfortunately, it cannot be converted to NCNN with the instructions in the guide. However, you should be able to convert a regular YOLO model just fine on the Pi 4.

Cheers!
-Jaryd


Good evening everyone.
Please :folded_hands: help me, I keep getting these errors when trying to run the NCNN conversion.
Is there any way around them?

Hey Testimony, welcome to the forums!

I haven't seen this error before, but after some digging I've found a similar issue that someone else was having. It was caused by Ultralytics rolling out a new version between the initial installation and a later attempt to download more dependencies, although this should rarely be an issue, which makes it strange.

We have found that the Ultralytics packages can be tricky to repair manually, and a fresh install is usually the best option. I would suggest creating a new virtual environment (with a different name), carefully following the installation instructions again, and seeing if this fixes your issue.
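
As a rough sketch of what that looks like (the environment name here is just an example; use the exact flags and package list from the guide):

# Create and activate a fresh virtual environment
python3 -m venv yolo_env_fresh
source yolo_env_fresh/bin/activate

# Reinstall the packages from the guide
pip install -U pip
pip install ultralytics[export]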

Let us know how it goes!
-Jaryd

1 Like

Thanks for the amazing article! It's still the best one I have found about the RPi and object recognition so far.
I wanted to ask: what would be the best way to train my own model? My camera is mounted in a somewhat unusual location and angle, so the regular models tend not to work well.

Hey Karl, welcome to the forums!

Thank you for the kind words!

Training your own model is quite involved and can be a bit tricky, but it's not impossible. Ultralytics have a guide on how to do it. To train a model you will also need some more serious hardware. We tried training a model on a Pi 5 as a joke, and after 3 days it had only completed 0.2% of the training :joy: You are probably going to need a decent GPU to train a model in a reasonable amount of time; on an RTX 4070 it took 2-5 hours. You can, however, pause the training and resume it later if your hardware will need a few days.
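
For reference, here's a minimal sketch of what training looks like with the Ultralytics API (my_dataset.yaml, the epoch count and the image size are placeholders; see their training guide for the full details):

from ultralytics import YOLO

# Start from a pretrained nano model rather than training from scratch
model = YOLO("yolov8n.pt")

# Train on your own dataset; my_dataset.yaml is a placeholder for your dataset config
model.train(data="my_dataset.yaml", epochs=100, imgsz=640)

# An interrupted run can usually be picked up again with resume=True, e.g.
# model = YOLO("runs/detect/train/weights/last.pt")
# model.train(resume=True)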

Another thing to check out would be Roboflow. They have some free options that let you train models on their servers, which might be a slightly easier route. I'd check out both and see what looks best for your case!

We have it in our pipeline to make a guide on how to train a model, but it's quite a challenge to distill such an involved process into an approachable guide.

Best of luck!

3 Likes