As the saying goes “Long time listener, first time caller”…
I’m working on a project to separate, identify and sort Lego parts from a source container of mixed parts. I’m leveraging the mechanics of previous projects that have been posted online (such as Daniel West’s Universal Lego Sorter link), and have separated the design into stages that will each need motors and/or sensors. The current plan is to have a Raspberry Pi CM4/carrier board combo run as the core of the project.
Stage 1: Source bin and feeder to stage 2
My current thinking is to use a linear actuator to push one side of an angled container whilst a vibrating motor shakes the contents to reduce friction (a volume of particles displays liquid-like properties when vibrated, e.g. sand). The output will be a small quantity of parts pushed into stage 2 for physical separation. A sensor in stage 2 will act as one of a series of on/off switches for stage 1.
Stage 2: Separator.
A v-shaped channel, with a fixed support at the exit end and suspended via springs at the opposite end, using vibration to shake pieces down and onto a conveyor (Stage 3). I think the way to go is to use a break beam sensor to detect parts dropping into this section and send an off signal to the motors in the previous stage. Another break beam sensor at the exit end signals a stop for the vibrating motors in stage 2 and activates stage 3.
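Before wiring up real GPIO, the sensor-to-motor interlocks across stages 1–3 could be sketched as a small pure function, which makes the handoff logic easy to test on its own. The function and field names here are my own invention, not from any particular library:

```python
# Sketch of the stage 1/2/3 interlock logic described above.
# For a break-beam sensor, True means "beam broken" (a part is present).

def stage_motor_commands(drop_beam_broken: bool, exit_beam_broken: bool) -> dict:
    """Decide which stage motors should run, given the two stage-2 sensors.

    drop_beam_broken: sensor at the stage-2 entry (parts dropping in from stage 1)
    exit_beam_broken: sensor at the stage-2 exit (a part reaching the conveyor)
    """
    return {
        # Stage 1 feeds only while nothing is dropping into stage 2.
        "stage1_feeder": not drop_beam_broken,
        # Stage 2 vibrates until a part reaches the exit beam.
        "stage2_vibration": not exit_beam_broken,
        # Stage 3 conveyor starts once a part trips the stage-2 exit beam.
        "stage3_conveyor": exit_beam_broken,
    }
```

With both beams clear, `stage_motor_commands(False, False)` turns the feeder and vibration on and leaves the conveyor off; tripping the exit beam flips vibration off and the conveyor on.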
Stage 3: Identification
I’m thinking a conveyor through a camera box, exiting to a funnel with a variable position exit chute.
Another 2 break beam sensors (one to detect a part in the camera box, another to detect its exit into the output chute), and a Raspberry Pi camera to capture an image for identification (this will be sent over the network, via a queuing broker, to a machine learning server).
Once the ID query returns a result (either matching the sort parameters or a default null), the conveyor restarts to eject the part.
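The query/response round trip could use small JSON payloads over the broker. As a sketch (the topic names, field names, and `part_to_bin` mapping below are all my assumptions, not an established schema):

```python
import base64
import json
import uuid

# Hypothetical MQTT topics for the identification round trip.
REQUEST_TOPIC = "lego/identify/request"
RESPONSE_TOPIC = "lego/identify/response"

def build_identify_request(jpeg_bytes: bytes) -> str:
    """Package a captured frame as a JSON payload for the ML server."""
    return json.dumps({
        # request_id lets an async response be matched back to its request
        "request_id": str(uuid.uuid4()),
        "image_b64": base64.b64encode(jpeg_bytes).decode("ascii"),
    })

def choose_bin(response_json: str, part_to_bin: dict, reject_bin: int) -> int:
    """Map the server's part ID onto an output bin.

    A null part_id (the "default null" case) or an ID outside the sort
    parameters falls through to the reject bin.
    """
    result = json.loads(response_json)
    return part_to_bin.get(result.get("part_id"), reject_bin)
```

For example, with `part_to_bin = {"3001": 2}` a response of `{"part_id": "3001"}` selects bin 2, while `{"part_id": null}` selects the reject bin.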
Stage 4: Sort Output
Here I’m thinking a stepper motor to control an output chute, rotating it through 180 degrees over a series of output bins. A final break beam sensor in the output chute signals Stage 2 to output another part into the system.
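For the stepper, the bin positions fall out of a little arithmetic: spread the bins evenly across the 180-degree sweep and convert degrees to steps. A minimal sketch, assuming a common 1.8-degree (200 steps/rev) stepper and a driver microstepping factor (both placeholder values, not measured):

```python
def steps_to_bin(bin_index: int, n_bins: int,
                 steps_per_rev: int = 200, microsteps: int = 16) -> int:
    """Steps from the home position (bin 0) to centre the chute over bin_index.

    The chute sweeps 180 degrees, so with n_bins bins the spacing is
    180/(n_bins - 1) degrees. steps_per_rev=200 assumes a typical
    1.8-degree stepper; microsteps is the driver's microstepping factor.
    """
    if not 0 <= bin_index < n_bins:
        raise ValueError("bin_index out of range")
    degrees_per_bin = 180.0 / (n_bins - 1)
    steps_per_degree = steps_per_rev * microsteps / 360.0
    return round(bin_index * degrees_per_bin * steps_per_degree)
```

With 7 bins the chute moves in 30-degree increments; the far bin sits exactly half a revolution (1600 microsteps at 16x) from home.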
From the Raspberry Pi out, here’s what I think I need:
- Motor Control Hat (or multiple, stackable) capable of controlling all the motors, ideally with software libraries for Python/Linux
- Vibrating motor x 2 (perhaps 4, if pairing them proves more effective)
- Conveyor motor (standard servo or DC motor)
- Break beam sensors x 4, or an alternative. Software libraries in the same language as the motor libraries would be ideal, to keep things as simple as possible
- Raspberry Pi CSI-connected camera/lens
Based on Daniel’s project, I plan on using a TensorFlow convolutional neural network, with MQTT brokering between the AI engine and the Raspberry Pi.
Any thoughts or advice on this, especially around the Pi controlled hardware, is appreciated.