The code is in the written guide, but here it is as well since it's short and sweet:
from ultralytics import YOLO
# Load a YOLOv8n PyTorch model
model = YOLO("yolov8n.pt")
# Export the model to NCNN format
model.export(format="ncnn", imgsz=640) # creates 'yolov8n_ncnn_model'
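If you want to sanity-check the export afterwards, a quick sketch like this should load and run the resulting folder (the image path here is just a placeholder, swap in your own image or camera frame):
from ultralytics import YOLO
# Load the exported NCNN model folder created by the export step above
ncnn_model = YOLO("yolov8n_ncnn_model")
# Run inference on an example image
results = ncnn_model("image.jpg")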
Thanks for replying earlier. We realised the camera we used didn't work properly, so that's solved. However, now we have run into a new problem. We are trying out the code for YOLO World, and the Raspberry Pi 4 is taking a very long time to load. Is YOLO World too complicated for the Raspberry Pi 4?
Where is it taking a long time to load? YOLO World is an extremely large model (roughly 10 times bigger than the usual model sizes), so the download can take a while. If it takes a long time to load the model after downloading, it may be a bit too much for a poor Pi 4, as even the Pi 5 can struggle with it.
I don't know many people who have tried YOLO World on the Pi 4, so let us know your results!
When using the code for YOLO World, it takes a long time at startup. Nothing happens even if we wait 30 minutes; there is no download or anything. So we may have to go another route, unfortunately.
We are going to try again with YOLOv8 because that was working. However, we wanted to ask if it is possible to set classes in your code, so that we can specify which detections happen, for example only backpack and person and nothing else.
Sounds like YOLO World may be just a little too much for a Pi 4. Thank you for letting us know!
In terms of specifying certain objects for detection, there are a few potential ways. I seem to have misplaced my microSD card with YOLO installed, so I can't double-check, but one of these should work.
When you declare the model, you should be able to set the classes with:
# Load YOLOv8
model = YOLO("yolov8n.pt")
# Restrict detections to class IDs 0 (person) and 24 (backpack)
model.classes = [0, 24]
If that doesn't work, you should be able to specify it at the inference step inside the while True loop:
# Run the YOLO model on the captured frame and store the results
results = model(frame, classes=[0, 24])
Here we are telling it to only give us classes 0 and 24, which are people and backpacks. You can find the full class list with the associated numbers on Ultralytics' site.
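If you'd rather check the class IDs on the Pi itself, the loaded model carries its own class list, so a quick print like this should show you the numbers (this assumes the model is already loaded as above):
# Print the model's class index-to-name mapping, e.g. {0: 'person', ..., 24: 'backpack', ...}
print(model.names)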
Also, if you aren't using YOLOv8 for a particular reason, give YOLO11 a go! It came out a couple of months back and is about 10% faster and smaller. You only have to change the model name from v8 to:
# Load YOLO11
model = YOLO("yolo11n.pt")
# Restrict detections to class IDs 0 (person) and 24 (backpack)
model.classes = [0, 24]
Note that there is no "v11" in the name, just "yolo11"; they seem to have removed the "v" once they hit double digits.
The suggested program works perfectly. However, now I would like to use only the library that detects whether a forklift is present in the image from the camera, instead of the full library.
I'm encountering errors because the forklift model is designed for YOLOv5, while the framework I installed is for YOLOv8. I'm feeling quite lost.
The Ultralytics library that we set up and run in the demo is a framework that should run nearly every YOLO model, including YOLOv5. The GitHub repo you linked is an entirely different framework to set up and run. You should, however, be able to grab the actual forklift-trained YOLO model from it and run it with the framework we set up in the guide, but I snooped around and couldn't actually find which one it was; it's not well documented.
Earlier you linked to some Hugging Face forklift models, and those would work better here. There are nano, small and medium sizes from what I can see there, they are all YOLOv8, and they are trained to detect only "forklift" and "person".
If you head to the "Files and versions" tab of one of those models, you should be able to see a file called "best.pt". Download it and put it into the project folder with all the other files. Then change the line in the main script to:
# Load the custom forklift model
model = YOLO("best.pt")
And it should run! You can rename the model file in the folder (and in this line) if you need to.
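If you just want a yes/no on whether a forklift is in the frame, a rough sketch like this should work inside your capture loop (the check assumes the model was trained with a class literally named "forklift", so print model.names first to confirm):
# Run the forklift model on the captured frame
results = model(frame)
# Loop over the detected boxes and flag any forklift detections
for box in results[0].boxes:
    class_name = model.names[int(box.cls)]
    if class_name == "forklift":
        print("Forklift detected with confidence", float(box.conf))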
As for which model size you should use, I would start with the nano and see if it performs well enough for your needs; if it doesn't, try a larger model. The larger models will run slower but should give slightly more accurate detections.
And remember, you should also be able to convert these models to NCNN for that free speed boost!
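For example, the same export step from the guide should work on the forklift model too; this is just a sketch assuming "best.pt" sits in your project folder:
from ultralytics import YOLO
# Load the forklift model and export it to NCNN format
model = YOLO("best.pt")
model.export(format="ncnn", imgsz=640) # creates 'best_ncnn_model'
# Load the exported NCNN model for faster inference on the Pi
ncnn_model = YOLO("best_ncnn_model")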
Hope this helps, let us know if you run into any issues!