What should I do if I want to use this camera with the Raspberry Pi Pico?
Implementation will depend on your application.
Maybe/probably too memory heavy for the Pico.
Although now that I've written that, someone will for sure figure out how to do it.
Hey @ahsrab292840,
@Pixmusix is on the ball here. This camera is only compatible with Raspberry Pi computers like the Pi 5, Pi 4, and Zero 2W. The Pico is a microcontroller and isn't powerful enough to run it, nor does it have the software to do so.
Hey @Scott294214,
It's quite a jump from coding an ESP32 to running machine vision, but it's not impossible. If you wanted to use a pre-trained model from the internet (like ones to detect people, animals, cars and buses - common things), then it's actually really easy to do so.
It sounds like you have a good set-up with the speed of the conveyor belt to run some computer vision; however, identifying your own custom objects can be quite an involved job (but again, not impossible!). The first step would be to get your hands on a model trained to identify the things you want to identify. Here is a good repository of custom community models to check out in case someone has done something similar, but it sounds like you are trying to identify something really unique, so the odds aren't great there.
You can also train a model yourself, but it needs some serious hardware to do so (with an RTX 4070TI it takes me about 2-5 hours to train a model). You can use services like Roboflow which let you use their hardware to do it online, AND they have free options available. It is quite an involved process of getting lots of images of what you want to identify, labelling them, and then training the model, but they have a guide here. Again, it is involved, but it can be quite fun. Also, I would go for a YoloV11 or V8 implementation of the model; the video goes over v11 - this is just the architecture of the object recognition model.
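If you'd rather train with the Ultralytics tooling directly instead of Roboflow's hosted option, the core of it is only a few lines. This is a minimal sketch assuming a dataset already exported in YOLO format with a data.yaml describing your image paths and class names (the filenames here are just placeholders):

```python
# Minimal training sketch using the Ultralytics package (pip install ultralytics).
# Assumes a YOLO-format dataset with a data.yaml listing train/val paths and class names.
from ultralytics import YOLO

# Start from pre-trained weights; swap in "yolo11n.pt" for a YOLOv11 nano model.
model = YOLO("yolov8n.pt")

# Train on your custom dataset; epochs and imgsz are common starting values, not tuned settings.
model.train(data="data.yaml", epochs=100, imgsz=640)
```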
Once you have your model I would then test it out. That Roboflow video might have some testing instructions in it, but if you really want to test, you can run it on a regular computer with a webcam attached. Ultralytics (the peeps who currently develop Yolo models) have a guide on it, but ChatGPT or Claude or any LLM should also be able to help you set this up.
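If it helps, here is roughly what that webcam test looks like in code. It's a minimal sketch assuming the Ultralytics package and the weights file from your training run (Ultralytics saves it as best.pt by default, but check your run folder):

```python
# Quick webcam sanity check for a trained model (minimal sketch, not production code).
from ultralytics import YOLO

model = YOLO("best.pt")  # path to your trained weights

# source=0 is the default webcam; show=True draws boxes in a preview window.
# stream=True processes frames one at a time instead of buffering them all.
for result in model.predict(source=0, show=True, stream=True):
    pass  # each result holds the detections for one frame
```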
If it all looks good there, you can then deploy it on some hardware for your final setup! I would actually recommend against using the AI camera, as it is very hard to use your own custom model with it. The Pi 5 (a small computer that runs Linux and that you can plug cameras and other hardware into) by itself is powerful enough to run a Yolo model at a low framerate - no additional hardware or bells or whistles needed. It is also the easiest option to add your own custom model to by a VERY LARGE MARGIN. We have a guide on setting it all up, and you can just drag and drop in your custom model to use instead.
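On the Pi 5 side, the rough shape of it looks like the sketch below. This is only an illustration under the assumption that the ultralytics, picamera2 and opencv-python packages are installed; the guide linked above is the proper reference for the full setup:

```python
# Rough sketch of running a custom YOLO model on a Pi 5 with a Pi camera module.
import cv2
from picamera2 import Picamera2
from ultralytics import YOLO

model = YOLO("best.pt")  # path to your custom trained weights

picam2 = Picamera2()
picam2.configure(picam2.create_preview_configuration(
    main={"format": "RGB888", "size": (640, 480)}))
picam2.start()

while True:
    frame = picam2.capture_array()                 # grab a frame as a numpy array
    results = model.predict(frame, verbose=False)  # run detection on the frame
    annotated = results[0].plot()                  # frame with boxes and labels drawn
    cv2.imshow("Detections", annotated)
    if cv2.waitKey(1) == ord("q"):                 # press q to quit
        break
```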
This is just laying out the roadmap for going down the machine vision path. It is a robust solution, but it can be quite involved, so you may wish to search for another solution if it's more than you need.
Cheers!
-Jaryd
Hello everyone. I have a trained model with over 4k images that I trained on Google Colab, and I was about to convert it to IMX500 format from there. HOWEVER, it was a YOLOv8n, and when running inference, it was honestly lackluster and didn't detect everything it should.
2 questions:
If I train it on Yolov8m or larger, will that be suitable for the IMX500 conversion? I tried it with the same process, and it didn't allow for the upgrade. Is it the model or an issue on my end?
Could the nano model be hurting it, and should I upgrade it, as stated above?
All help is greatly appreciated.
Hey @David293712,
An issue you might be facing is the limited memory on the AI camera to load the model onto. I can't remember exactly how much it has, but I think the biggest model I could load was about 8 MB. I think your medium-sized model may be a little too large, but the small one might work? I was previously able to get Yolov8s models working.
Also, solid work on converting it to the format for the IMX500, it's a bit involved!
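If it helps, a quick way to compare the variants before attempting another conversion is to print their summaries with Ultralytics. This is just a rough sketch; parameter count and on-disk size are only a proxy for what actually fits on the camera after conversion and quantisation, so treat that ~8 MB figure as a soft budget:

```python
# Compare YOLOv8 variants to get a feel for which might fit on the IMX500.
# Parameter counts are only a rough proxy for the converted model's footprint.
from ultralytics import YOLO

for weights in ("yolov8n.pt", "yolov8s.pt", "yolov8m.pt"):
    print(weights)
    YOLO(weights).info()  # prints layer count, parameter count and GFLOPs
```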