Hello everyone, I hope your day is going well. My name is Philip and this is my first time using the forums.
I’m a final-year university student working on a project idea: a hand gesture-to-voice communication system for mute people, using computer vision on a Raspberry Pi. I was wondering if I could get some tips from the professionals here.
Hey @Philip283188, welcome to the forums!
That sounds really cool! Would you be able to tell us more about this project?
What made you choose Raspberry Pi for this and have you found or made any progress recognising gestures?
If you haven’t seen it already, our guide on hand recognition using OpenCV may be useful to you!
Hope this helps!
Hey Mr @Samuel, good morning. One of the main issues mute individuals face is communicating with other people. Normally they would use sign language, but what if the other person doesn’t understand sign language? That’s where my project comes in. I want it to help facilitate communication between mute individuals and people who don’t sign. It will do this by recognising the gestures made (depending on how I train the model) using computer vision and then producing a voice output. (I’m also thinking of adding an LCD display that will show the recognised gesture as text.)
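To make the output stage concrete, here’s a minimal sketch of the last step you describe: once a gesture has been recognised, map its label to a phrase that can be both spoken and shown on the LCD. The gesture names and phrases are placeholders I made up, not a real vocabulary, and the `espeak` call is just one assumed way to get speech out on a Pi.

```python
# Sketch of the output stage: a recognised gesture label becomes a phrase,
# which would then be spoken aloud and shown on the LCD.
# Gesture names and phrases below are placeholders, not a real dataset.

GESTURE_PHRASES = {
    "thumbs_up": "Yes",
    "open_palm": "Hello",
    "fist": "Stop",
}

def gesture_to_phrase(label: str) -> str:
    """Return the phrase for a recognised gesture, or a fallback message."""
    return GESTURE_PHRASES.get(label, "Unknown gesture")

if __name__ == "__main__":
    phrase = gesture_to_phrase("open_palm")
    print(phrase)  # the same string would feed both the speaker and the LCD
    # On a Pi you might speak it with e.g. espeak (an assumption on my part):
    # import subprocess
    # subprocess.run(["espeak", phrase])
```

Keeping the label-to-phrase mapping in one dictionary also makes it easy to grow the vocabulary as you train the model on more gestures.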
Sorry if this was long.
I’ve been doing a lot of research, and those research papers have helped a lot. To make my project more unique, I was thinking of using both OpenCV and YOLO.
Hey Philip,
Sounds like you have a good starting point for the project!
The guide Sam sent is a great place to get started with OpenCV and gesture recognition. I imagine you could train a machine learning algorithm to categorise gestures/convert them to language, and the overall structure of the project should be relatively simple.
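Just to illustrate the "categorise gestures" idea, here’s a toy nearest-neighbour classifier over hand feature vectors. The three-number features and labels are entirely made up for the example; a real system would use richer features (for instance, the 21 hand landmarks that libraries like MediaPipe produce, which is an assumption about your eventual toolchain).

```python
# Toy nearest-neighbour classifier over hand feature vectors.
# Features and labels are made up for illustration; a real system
# would use proper hand-landmark features extracted per frame.
import math

TRAINING = [
    ([0.9, 0.1, 0.1], "fist"),
    ([0.1, 0.9, 0.9], "open_palm"),
    ([0.5, 0.9, 0.1], "thumbs_up"),
]

def classify(features):
    """Label a feature vector by its nearest training example (Euclidean)."""
    return min(TRAINING, key=lambda t: math.dist(t[0], features))[1]

print(classify([0.85, 0.15, 0.2]))  # fist
```

A nearest-neighbour baseline like this is worth having even if you later train a neural network, since it gives you something to compare accuracy against.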
Do you have specific questions, or options you’re unsure of?
P.S. Don’t worry about the posts being “long”, the more info the better!