Pose Estimation and Face Landmark Tracking with Raspberry Pi and OpenCV

Hey all, just tied the bow on Pose Estimation and Face Landmark Tracking with Raspberry Pi and OpenCV.

Furthering my quest for complete knowledge of artificial intelligence on the Raspberry Pi, the natural next step was to investigate Pose Estimation (Human Keypoint Detection) and Face Masking (Facial Landmark Recognition) with the formidable Raspberry Pi single-board computer. Machine and deep learning have never been more accessible.

Face Masking is a computer vision method that precisely identifies and maps the geometry of your face, which can then be represented by dots and segments across all your features. This means the system knows exactly where your eyes are in relation to your eyebrows, or your nose in relation to your lips. Using very similar geometry-mapping principles, Pose Estimation expands on this by identifying the location of every key part of your body. I demonstrate how to set these systems up and how to edit the code so you can pull location data from them.
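As a taste of the kind of location data you can pull out, here is a minimal sketch using the MediaPipe Pose solution (this is not the full guide script; it assumes a standard webcam at index 0):

import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

cap = cv2.VideoCapture(0)  # assumes a webcam at index 0
with mp_pose.Pose(min_detection_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe wants RGB frames; OpenCV captures BGR
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # Landmark 0 is the nose; coordinates come back normalised to [0, 1]
            nose = results.pose_landmarks.landmark[0]
            h, w, _ = frame.shape
            print("Nose at pixel ({}, {})".format(int(nose.x * w), int(nose.y * h)))
        cv2.imshow('Pose', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
cap.release()
cv2.destroyAllWindows()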

Read more

2 Likes

Hey all - I've looked through all the forums and posts and am just going around in circles now. I have been able to get other things working, such as facial recognition using OpenCV.

I went through the tutorial (Pose / hand tracking - OpenCV, MediaPipe).

But no matter what I do, I always get the following when trying to run the scripts (which are listed in the link above):

Traceback (most recent call last):
  File "/home/pi/posetrack.py", line 3, in <module>
    import mediapipe as mp
  File "/usr/local/lib/python3.9/dist-packages/mediapipe/__init__.py", line 16, in <module>
    from mediapipe.python import *
  File "/usr/local/lib/python3.9/dist-packages/mediapipe/python/__init__.py", line 17, in <module>
    from mediapipe.python._framework_bindings import resource_util
ModuleNotFoundError: No module named 'mediapipe.python._framework_bindings'

I've found other people who have had this error and tried every suggestion. I've reinstalled ALL the things.

Any ideas please?

3 Likes

Hey mate,

Double-check for me that you have typed and entered the below lines into a fresh terminal. I reckon once you do that it'll be good to go.

sudo pip3 install mediapipe-rpi3
sudo pip3 install mediapipe-rpi4
sudo pip3 install gtts
sudo apt install mpg321

Also, for good measure, run this one too.

sudo apt-get install python-opencv python3-opencv opencv-data

3 Likes

Thank you aaaaaand nope. I'd say I am trying to do too much with one board.

In the meantime, I have been able to get examples of TensorFlow Lite pose estimation going, even if I can't trigger them via Node-RED, but I really think it's more a case of dividing what I'm trying to do between Pi boards. I'll try a fresh install on a clean one so as not to have any chance of conflicting dependencies.

Thank you again for the post! A shame it didn't work, but I think it's more of a 'me jamming too much in there' situation than an actual 'Pi not being able to do it' situation...

1 Like

Hi Alison,

I had the same issue regarding the mediapipe import error. I was able to work around it by uninstalling mediapipe-rpi3 and mediapipe-rpi4, then using pip to just install mediapipe generally, like this:
pip install mediapipe
I am sure this will come back to get me eventually, but for these examples it worked fine!
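For anyone following along, the full sequence was roughly this (assuming both packages were originally installed with sudo pip3, as in the guide):

sudo pip3 uninstall mediapipe-rpi3
sudo pip3 uninstall mediapipe-rpi4
pip install mediapipe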

2 Likes

Hi Andrew199192,

I tried your workaround with no success. I got the following (after completing the first stage: uninstalling both mediapipe-rpi3 and mediapipe-rpi4):
pip install mediapipe
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
ERROR: Could not find a version that satisfies the requirement mediapipe
ERROR: No matching distribution found for mediapipe

So I'm wondering how you managed to make these examples work (on the RPi)? They do work on my PC (Linux Mint 20.3), but I wish to make a small project with them on the RPi...

My RPi hardware is a Pi Model 3B V1.2, and I'm running Raspbian GNU/Linux 11 (Bullseye). UPDATE: I've tried to run this on the same machine with a different OS version (Raspbian GNU/Linux 10 (Buster)) with SUCCESS (it's slow, but OK) - as was mentioned at the beginning of the article. But Buster is getting old - does anyone know how to make it run on Bullseye?

Best, Krzysiek

3 Likes

Hey mate,
Glad to hear you have had success with Buster OS. Teams of people are no doubt working furiously on getting machine-learnt systems to work and be stable on Bullseye OS, and I'm sure they are getting very close. The Buster OS Pose Estimation and Face Landmark Tracking versions that I have shown above are quite stable. I have more machine learning guides on the way which will ease the transition between Buster and Bullseye OS.
Kind regards,
Tim

3 Likes

Thank you. I find the OpenCV installation process quite time-consuming and difficult. Maybe someday I'll try your AI-based tutorials.

1 Like

Hey Tim, brilliant code~
Do you know how to extract ROIs (regions of interest) like eye ROIs, a nose ROI, cheek ROIs, and a mouth ROI (normalised rectangular rendered images) using OpenCV & MediaPipe? Just a tip, not the entire code.

3 Likes

Heyya mate,

A lot can be learned directly from the Google MediaPipe website in regards to their machine-learnt systems. The Face Mesh script identifies 468 3D face landmarks, so you can definitely unpack that and turn a group of close-knit landmark points into ROIs.

If you are interested in pinpointing only certain regions and nothing else (for instance the eyes), check out the eye-focused variant of the system on the same site.
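Just as a tip-sized sketch of the idea (not the guide code, and assuming a mediapipe build that exposes the FACEMESH_LEFT_EYE index pairs): flatten those landmark indices, convert the normalised coordinates to pixels, and take a padded bounding rectangle as your ROI.

import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh

# FACEMESH_LEFT_EYE is a set of (start, end) landmark index pairs;
# flattening it gives every landmark index belonging to the left eye
LEFT_EYE_IDX = {i for pair in mp_face_mesh.FACEMESH_LEFT_EYE for i in pair}

def left_eye_roi(frame, face_landmarks, pad=10):
    # Convert the normalised landmark coordinates to pixel positions
    h, w, _ = frame.shape
    xs = [int(face_landmarks.landmark[i].x * w) for i in LEFT_EYE_IDX]
    ys = [int(face_landmarks.landmark[i].y * h) for i in LEFT_EYE_IDX]
    # Take a padded bounding rectangle around the eye landmarks
    x1, y1 = max(min(xs) - pad, 0), max(min(ys) - pad, 0)
    x2, y2 = min(max(xs) + pad, w), min(max(ys) + pad, h)
    return frame[y1:y2, x1:x2]

cap = cv2.VideoCapture(0)
with mp_face_mesh.FaceMesh(max_num_faces=1) as face_mesh:
    ok, frame = cap.read()
    if ok:
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            roi = left_eye_roi(frame, results.multi_face_landmarks[0])
            cv2.imshow('Left eye ROI', roi)
            cv2.waitKey(0)
cap.release()
cv2.destroyAllWindows()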

Hope the above helps :blush:,
Tim

4 Likes


Glad you responded~ I've actually been searching among the MediaPipe pages these days, and although I don't entirely understand what each line of code means, I gradually figured out that they use certain anchor points to define the ROIs, and I successfully drew my face segmentation ROIs with the same pipeline.

Since I'm doing a performance art-walk with a wearable device (yet to be invented), my next step is to distribute these ROIs to their corresponding display screens, and what kind of Python code to do that with is what I'm pondering now. So, emmm... if you have any tips to help me, thank you!

Again, your tutorial has lent tremendous inspiration to my work. Thank you!

2 Likes

Hey mate,

I love this idea; you could really get some wild, Dutch-angle, close-up videos of your target face output to the monitors. Turning a target ROI into a new preview window is definitely the next challenging hoop to jump through.

The closest code I've seen that does something similar is this script for an OAK-D Lite and Raspberry Pi system - Lossless Zoom by Luxonis. That Python script evaluates the footage from a 4K camera, identifies a human target as the ROI, and then outputs the ROI to a preview window. There are a lot of translatable ideas between your project and that system; however, you're going to have to dive in and figure out how to modify it appropriately.
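In bare OpenCV terms, the core trick is just cropping the frame and resizing the crop into its own named window. Here is a much simpler sketch than the Luxonis script, with a hard-coded stand-in box where your landmark-driven ROI would go:

import cv2

cap = cv2.VideoCapture(0)
# Hypothetical fixed box (x, y, w, h) standing in for a landmark-driven ROI
ROI_BOX = (100, 100, 160, 120)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    x, y, w, h = ROI_BOX
    # Crop the ROI out of the frame and blow it up into its own window
    roi = cv2.resize(frame[y:y + h, x:x + w], (640, 480))
    cv2.imshow('Full frame', frame)
    cv2.imshow('ROI preview', roi)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()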

I also talk a fair bit about OAK-D Lite camera modules; if you are interested in that, hit up this link - Integrated Computer Vision Package - OAK-D Lite With Raspberry Pi Set Up

Hopefully, that will help you along the way :blush: I'd love to see the finalised project end up on our Projects page, so definitely pop back again if you have any other questions.

Best of luck, and thanks for your kind words,
Tim

4 Likes

Hey Dou,

I'd also check out Raven Kwok - an amazing artist.

Here's one of his works that uses OpenCV to move windows around (and a sneak peek of his code).

I recall another project of his that had different ROIs mapped to windows, but I can't seem to find it anymore.
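For what it's worth, the window shuffling itself is mostly cv2.namedWindow plus cv2.moveWindow. A tiny sketch of the idea, using dummy images in place of real ROI crops and assuming the screens form one extended desktop:

import cv2
import numpy as np

# Three dummy grey images standing in for three ROI crops
rois = [np.full((240, 320, 3), shade, dtype=np.uint8) for shade in (64, 128, 192)]

for i, roi in enumerate(rois):
    name = 'ROI {}'.format(i)
    cv2.namedWindow(name)
    # Position each window on the desktop; on an extended desktop spanning
    # several screens, x-offsets past one screen's width land on the next screen
    cv2.moveWindow(name, i * 400, 100)
    cv2.imshow(name, roi)

cv2.waitKey(0)
cv2.destroyAllWindows()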

Liam

4 Likes

Hey Tim, sorry for the late response. I've been digesting the nutrition of your last note recently. :laughing: Apparently I'm just a beginner in the Python and computer vision field, so I definitely need some time to catch up with the new revelations here.
Let me elaborate on my ongoing work for you. In short, I intend to make an 'amorphous' mask to disguise myself from the surveillance systems around us, and I thought: why don't I use pedestrians' faces as my transformation material? I can switch my facial features entirely or partially to others', disguising myself from CCTVs by confusing them. So I need a camera to film while I'm walking, code to capture passers-by's faces as my material, and output the amorphous features to separate screens - like a cyber protestor.
After comparing the OAK-D Lite camera to the 6 mm & 16 mm cameras, I may choose the 16 mm. The OAK-D absolutely has the largest resolution and least distortion, while the 6 mm is just the opposite, but appearance also matters (I don't want the style to be too modern; I want to keep it in a junky style), thus the 16 mm 'single-eye look' camera seems to fit. The code from that Luxonis dude, and what the artist Liam recommended, seem to solve the transmitting part. And for screens, I have a question: how many micro-monitors can function simultaneously when connected to one Pi 4B?
I'll be very glad to share it with you when I finish. Thank you for your warm help all along.

Hey Liam,
I have an impression of him; his work is interesting. He uses Processing to make these animations. What you are talking about should be the one he often uses as his avatar, but unfortunately I didn't find it either.
Thank you for your accurate recommendation~

1 Like

Follow-up:
It looks like one Pi 4B can connect to 3 screens at most, so if I want to output 6 different feeds on 6 screens, I need at least a 2-camera, 2-Pi, 6-screen arrangement as a set. I don't know...

Hi Dou,

Great idea! I'm personally quite the fan of the interesting tech that is being created to combat privacy concerns.

I'm sure you've seen it before, but check out this low-tech solution:

-James

These approaches are very COOL~ I like them; very inspiring and encouraging! Though I want to be more adversarial in appearance - you know, strange enough for a natural face but seemingly fine to CCTVs. I want to use the faces of passers-by directly as the material, but I want to add some machine learning algorithms to keep the faces in constant, slow deformation. I imagine that the protection devices of the future must be light and convenient. They may be lasers carrying anti-algorithms, or perhaps a subcutaneous camouflage tissue. I hope my work can arouse enough alertness, and I hope to use the existing techniques, within their limitations, to inspire future works. Thanks!

1 Like

I have the Buster OS and I have installed all the things. I followed the directions to a tee, but it still doesn't work. I am getting an error that says:
ModuleNotFoundError: No module named 'cv2'

2 Likes