I’m starting to put together a photogrammetry scanning rig. The basic idea is to have a dome of 80 to 100 cameras shooting inward at a person standing in the volume.
The base hardware idea is:
80 x Raspberry Pi Zero W
80 x Pi High Quality Camera
80 x 16mm C-mount lens
Eventually I will try a cross-polarised lighting setup, which will need circular polarising filters mounted on each lens and polarising film on all the lights.
The idea is to trigger all cameras and lights and send the images back to a base station computer through WiFi.
There will be bandwidth issues with that volume of WiFi being used that I will need to experiment with.
First I plan to test with 6 cameras and get the software on its feet to handle all the data transfer, then scale up from there.
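For the trigger side, a minimal sketch of one possible approach: the base station broadcasts a single UDP packet and every Pi listens for it on a known port. The port number and message are assumptions for illustration, not part of any existing protocol.

```python
import socket

TRIGGER_PORT = 5005           # assumed port each Pi listens on
TRIGGER_MESSAGE = b"CAPTURE"  # assumed trigger message

def send_trigger(broadcast_addr: str = "255.255.255.255") -> None:
    """Broadcast a single capture trigger to every Pi on the subnet."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(TRIGGER_MESSAGE, (broadcast_addr, TRIGGER_PORT))
```

One UDP broadcast avoids opening 80 separate TCP connections just to say "go", though WiFi broadcast delivery isn't guaranteed, so a real rig would want an acknowledgement or retry on top.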
Hi, welcome to the forum!
That sounds like a cool rig. Is there a limitation that requires you to use such a large number of cameras?
I’ve seen similar rigs that find it easier to move the camera and just reshoot the image from multiple angles that way.
For example: DIY 3D Scanner - Fully 3d printed photogrammetry rig - YouTube
Hey, Welcome to the Forum!
This seems like a super cool project! Keep in mind that it might be a good idea to have a single camera that moves, as WiFi gets progressively slower the more devices use it. (And you’ll save a lot of money on cameras).
To transfer a full-res PNG image from all the Pis to the base station would move about
37MB * 80 = 2960MB (~3GB!!) of data. At the ~35Mbps maximum of the Pi Zero WiFi that is roughly 8-9 seconds per image per camera, and over 11 minutes for the whole set if all the Pis share one access point.
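The back-of-envelope numbers above, worked through (the 37 MB per image and 35 Mbps link figures are the estimates from this thread, not measured values):

```python
# Rough transfer-time estimate for the proposed rig.
IMAGE_MB = 37    # assumed size of one full-res PNG
CAMERAS = 80
LINK_MBPS = 35   # megabits per second, approx. Pi Zero W WiFi maximum

total_mb = IMAGE_MB * CAMERAS             # 2960 MB, ~3 GB
per_image_s = IMAGE_MB * 8 / LINK_MBPS    # ~8.5 s if one Pi has the link to itself
total_s = total_mb * 8 / LINK_MBPS        # ~677 s (~11 min) if all share one AP

print(f"{total_mb} MB total, {per_image_s:.1f} s per image, {total_s/60:.1f} min for all")
```

In practice contention between 80 clients would make the shared-AP case even worse than the simple division suggests, which is why capturing locally and trickling the data back afterwards looks attractive.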
Just a thought to consider before you throw thousands at cameras. Cool project idea though!
Yep, a moving camera and fewer of them would be way cheaper for sure, but I need the person being scanned to hold a pose with no movement, so the only real way is to have a heap of cameras all triggered together.
This is aimed at the film industry to take full body scans in costumes.
Other rigs I have seen on film sets use this sort of thing for character expression capture, and I scanned a horse on my last film this way. Unfortunately it’s just going to be expensive.
It’s a lot of data for sure.
I have a fella looking into the coding and the router config for me now.
We will most likely need to spread it across multiple routers and feed that info back to possibly more than one computer, dunno yet, but you’re right on that bandwidth issue for sure.
Some enterprise access points from Unifi or similar are rated for hundreds of users, so that might be a good option to keep it simple with a single unit.
Thanks mate, I will take a look at them
Just curious which photogrammetry software you are using for your project. I have been using 3DF Zephyr Pro, Agisoft Metashape Professional and ContextCapture, with the majority of my reconstructions being of inanimate objects using just one camera with a fixed focal length and 70% overlap between frames. The software also has a video feature that splits footage into separate frames for processing / reconstruction.
Maybe a cheaper starting option might be to video the subject as you walk around them ( VERY SLOWLY ); this may help with getting the frames you require, and the subject only has to remain motionless for a minute or two. Might be worth experimenting with?
I have been using Metashape (back when it was called PhotoScan) for a long time now and know it inside out.
I have been experimenting with RealityCapture a bit now, as it has a cool feature where you can mix lidar and photogrammetry, getting better colour for environment scanning. Throwing in some drone footage is also good for those hard-to-reach areas like the tops of buildings.
Video is just not an option for me. I cannot get the subject to hold still enough for the capture.
I have been in the VFX film industry for almost 20 years now and have seen many of these rigs but I’m really trying to make this one a little more portable to take out on location and shoot the actors after their takes.
Heading more down this path right now (pics from the net).
You’re miles ahead of me experience-wise.
Just a thought, but could you make a cylindrical cage with X amount of cameras that rotates around the character to minimise the number of cameras required? Say, for example, a rotation with 4 to 8 positions; the character would only need to be motionless for less than a minute. It all depends what budget you have, as I am used to working with minimal resources.
The Pi HQ Cameras have an external sync for stereo videography - I imagine it’ll also be useful for this, though the timing isn’t as critical for single-shot photogrammetry.
Software-wise you should be fine with WiFi - off the top of my head, maybe you could do a local clock sync with a preset capture time a few seconds later to work around signal propagation delays.
Then capture and store the image locally before downloading all the images to a central machine for processing.
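The sync-then-schedule idea above could look something like this on each Pi: the base station sends an agreed capture timestamp a few seconds in the future, and every Pi just sleeps until that moment before firing. `capture_fn` here stands in for whatever camera call you end up using (picamera, libcamera, etc.) - it's a hypothetical placeholder, not a real API.

```python
import time

def schedule_capture(capture_at: float, capture_fn) -> float:
    """Sleep until the agreed wall-clock time, then fire the (placeholder)
    capture function. Returns the actual fire time so skew can be logged."""
    delay = capture_at - time.time()
    if delay > 0:
        time.sleep(delay)
    fired_at = time.time()
    capture_fn()
    return fired_at

# Base station would broadcast e.g. time.time() + 3.0 to every Pi,
# and each Pi calls schedule_capture() with that shared timestamp.
```

With clocks synced over NTP first, the remaining error is sleep jitter rather than WiFi propagation delay, which should be plenty for a single still frame.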
The hardest part will be focusing the cameras - the Pi HQs are manual focus - I’m not too sure how best to go about that.
Back again - sorry FuryFX,
Would using video cameras work in a rotating cage design, similar to your first post’s picture of the cylindrical enclosure? Minimal character exposure time with plenty of overlapping images.
Just throwing different ideas out there.
I personally haven’t had a great deal of success using video footage to reconstruct objects, but that is most likely due to poor video equipment and minimal experience with video. My results have generally been bad enough that I revert to single-camera use: fixed focal length, manually rotating the object, and adjusting distance to maintain clear focus on each shot. I am currently trying to build a mini rotating platform with servo-driven rotation at 10 degrees per step, and another servo to drive the object forward / backward on the platform using an old RC controller joystick.
My limited photogrammetry skills are self-taught ( a lot of “oops, shouldn’t have clicked that” ). I mainly use them to convert lure makers’ finished prototypes from a timber lure to an STL format and make whatever adjustments the client requires; they can then use that file in an automatic lure-turning lathe - load the file, hit go, load a 6 m length of wood onto the machine’s feed rollers and let it do its thing.
Thanks Oliver, yep there will be a bit of setup getting this thing tuned and focused.
Determining which cameras need a focus tweak will be a bit of a software design challenge. I’m not that far into it yet but will hit that wall soon enough for sure.
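One common way to flag out-of-focus cameras in software is a variance-of-Laplacian sharpness score: sharp images have strong local intensity changes, so the Laplacian response has high variance. A minimal sketch (pure NumPy rather than OpenCV, just to keep it self-contained - the thresholds and workflow around it are up to you):

```python
import numpy as np

def focus_score(gray: np.ndarray) -> float:
    """Variance of a 4-neighbour Laplacian over a 2-D grayscale image.
    Sharper (better focused) images return higher scores."""
    g = gray.astype(np.float64)
    # Laplacian at each interior pixel: sum of 4 neighbours minus 4x centre.
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] +
           g[1:-1, :-2] + g[1:-1, 2:] - 4.0 * g[1:-1, 1:-1])
    return float(lap.var())
```

Each Pi could compute this on a centre crop of a preview frame and report the number back; the base station then flags any camera whose score sits well below its neighbours as needing a focus tweak. Absolute scores vary with scene content, so comparing cameras looking at the same subject works better than a fixed threshold.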
Yep there are scanners that spin video cameras around the subject. Like this…
They totally work but are still too slow for facial expressions to be held while the capture takes place.
With all the cameras synced you can do more of this sort of thing.
This is super important when you are making a digital puppet of a person for animation.
I have had some success in the past using video footage to scan with but it has been mostly drone footage for terrain reconstruction.
It’s usually too low in resolution (mostly only HD) and has heaps of artefacts in the image from compression settings, plus grain from the higher ISO needed to capture video.
And thanks for taking the time to put a detailed response together.