I’m looking at a way to capture images and perform OCR on packaged goods moving along a conveyor line for a University project I am working on.
My current setup:
Original Pi Camera (OmniVision OV5647 sensor) with IR LEDs (I removed the IR LEDs due to reflections in the images)
Floodlight on a stand
The current flow is:
Take a frame from the camera
Detect the object (using TFLite)
Crop the image to the region I want to OCR (proportionally to the detected box)
Preprocess the image for OCR
Perform OCR using Tesseract
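Steps 3 and 4 of that flow (the proportional crop and the OCR preprocessing) can be sketched roughly like this. The function names, the box format, and the plain-NumPy global threshold are my assumptions for illustration, not the actual code:

```python
import numpy as np

def crop_proportional(frame, box, rel_region):
    """Crop a sub-region of a detected box, given proportionally.

    box:        (x, y, w, h) in pixels from the detector (assumed format).
    rel_region: (rx, ry, rw, rh) as fractions of the box,
                e.g. (0.1, 0.6, 0.8, 0.3) for a label near the bottom.
    """
    x, y, w, h = box
    rx, ry, rw, rh = rel_region
    x0 = x + int(rx * w)
    y0 = y + int(ry * h)
    return frame[y0:y0 + int(rh * h), x0:x0 + int(rw * w)]

def preprocess_for_ocr(crop):
    """Grayscale + crude global threshold: Tesseract generally prefers
    high-contrast black-on-white text. A real pipeline would likely use
    OpenCV (adaptive/Otsu thresholding) instead of this naive version."""
    gray = crop.mean(axis=2)   # naive grayscale: mean over colour channels
    thresh = gray.mean()       # crude global threshold
    return np.where(gray > thresh, 255, 0).astype(np.uint8)
```

The binarised crop would then be handed to Tesseract (e.g. via pytesseract) in the final step.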
Currently, object detection works from a video stream, but the setup isn't yet usable: frame rate, processing time and image quality are all poor. The conveyor isn't fast, but I'm hoping to get processing under a second per frame to leave room for additional functions.
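Before committing to new hardware, it can help to measure where that second is actually going by timing each stage. A minimal sketch using only the standard library; the stage names and sleeps are placeholders for the real detector and OCR calls:

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def stage(name):
    """Record wall-clock time for one pipeline stage."""
    t0 = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - t0

# Usage inside the main loop (sleeps stand in for real work):
with stage("detect"):
    time.sleep(0.01)    # stand-in for the TFLite detector
with stage("ocr"):
    time.sleep(0.005)   # stand-in for Tesseract

total = sum(timings.values())
print({k: f"{v * 1000:.1f} ms" for k, v in timings.items()},
      f"total {total * 1000:.1f} ms")
```

Knowing whether detection, preprocessing or OCR dominates makes it much easier to judge whether a Coral/OAK-D (detection) or a better camera (OCR quality) is the right upgrade.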
Due to my current hardware limitations, I’ve been looking into upgrading to the new Pi and getting a better camera, though I’m not sure about the trade-off between frame rate, quality and processing speed.
Was thinking about:
Pi 4 8GB + whatever is good/best for cooling
Pi HD camera with 16mm lens
portable power supply for testing in the field (USB C PD ideally)
But I’m not sure if it’ll achieve what I’m looking at doing.
I’m open to any other advice you may have on this too; I’m currently writing my code in Python.
I’d definitely recommend checking out the OAK-D Lite, or utilising a more powerful Raspberry Pi single-board computer. The great thing about the OAK-D Lite is that it offloads the AI processing from the Raspberry Pi, letting you reach higher frame rates, and it has a 4K camera, which is perfect for getting crisp images of text.
For your project, I would also be tempted to retrain a COCO-based model with the objects you want identified. You can build custom models for free using Edge Impulse - https://www.edgeimpulse.com/.
With a trained model, you will be able to simplify your flow to: identify the object, take a snap, then pull the text data from it.
Thanks, I definitely think this will be the way to go even if I do upgrade the Pi. I purchased a Google Coral a short while ago. It’s meant to show up next week sometime so I haven’t yet been able to do any tests on the improvements I get on performance.
I haven’t worked with the Coral myself, but word on the street is that it’s tens, hundreds, or even thousands of times faster, depending on what it’s replacing (CPU or GPU). Stock is hard to come by, which is why we no longer carry it in our store and I linked the OAK-D instead, but I’m confident you’ll love the Coral.
Saw the video of the OAK-D Lite earlier but didn’t realise it could run so much of the processing onboard. I’ve been using the TFLite Model Maker to train my models, which has been quite successful, and it appears the models can be converted for the OAK-D, albeit with a bit of stuffing around.
I am trying to detect the packages as well as differentiate between them (primarily by shape, as there are many colours). So far I’ve been using LabelImg, but I’ve started using CVAT (with object tracking across video frames), plus a Python script to make the annotation files compatible. I’m open to other tools, but I don’t really understand what Edge Impulse will allow me to do; if you could elaborate on that I’d really appreciate it.
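For context, the kind of conversion that compatibility script does is typically turning LabelImg’s Pascal VOC XML into another format, such as YOLO-style text lines. A simplified sketch of that conversion (not the exact script; the class list and in-memory XML handling are assumptions):

```python
import xml.etree.ElementTree as ET

def voc_to_yolo(xml_text, class_names):
    """Convert one Pascal VOC annotation (as an XML string) to YOLO lines:
    'class_id cx cy w h', all normalised to [0, 1] by image size."""
    root = ET.fromstring(xml_text)
    img_w = int(root.find("size/width").text)
    img_h = int(root.find("size/height").text)
    lines = []
    for obj in root.iter("object"):
        cls = class_names.index(obj.find("name").text)
        b = obj.find("bndbox")
        xmin, ymin, xmax, ymax = (
            float(b.find(tag).text) for tag in ("xmin", "ymin", "xmax", "ymax")
        )
        cx = (xmin + xmax) / 2 / img_w   # box centre, normalised
        cy = (ymin + ymax) / 2 / img_h
        bw = (xmax - xmin) / img_w       # box size, normalised
        bh = (ymax - ymin) / img_h
        lines.append(f"{cls} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}")
    return lines
```

A real script would additionally walk the annotation directory and write one .txt per image, but the per-box maths above is the part that usually trips people up.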
The project should be finished prior to mid next year. Hopefully it’ll be polished and will make a nice write up by then.
Yeah, that’s perfectly understandable; it wasn’t easy to find one. I do wish I had asked earlier, as the OAK-D solves the second problem I have, which is the camera. It seems like overkill to have both, but I may go that way if the camera continues to be a problem.
I won’t hold you to it, but do you know if there are likely to be any further supply issues with the OAK-D/Pi 4 in the near future? I’m hoping to do some testing of that and the Pi 3’s performance before making that decision.
Unfortunately, the shortage has been super hard to predict. Stock levels of Pi 4s are currently not great, but we are expecting more in early-to-mid 2023. 2GB kits are currently in stock here in Newcastle:
My advice when prototyping is to err on the side of picking up gear, but everyone has different attitudes on this. I managed to snag a Jetson Nano when the shortage was just getting going, and while I only needed it for experimentation, it’s reassuring to know I have one on the shelf if I need it.