How to take photos simultaneously from two widely separated cameras?

This idea might not go further than a thought project.

Have you ever watched a large flock of starlings before dusk as they whir around making fascinating shapes? Murmurations they’re called.

Looking from one position you see only a two-dimensional projection, with no depth. I thought it would be interesting to get two views of the flock at the same instant, say within 0.1 sec.

The cameras would be hundreds of metres apart, making maybe a 90 degree angle with the flock. Probably one camera would be unattended and the operator stationed at the other. The images would not need to be transmitted, although that could be useful.

Any thoughts on what cameras to use and how to trigger them simultaneously?


Oh, I’m going to leave camera suggestions to folks with more background in that, but this reminded me of these time-lapse pictures of murmurations… the idea is poetic :}


Ah cute I love it.

Your Google search term here is “Remote Release”.
I would also prepend the brand or model of your camera, like
“Canon EOS MkII Remote Release wireless”

If you end up pursuing this, I wanna see the photos you take.
Pix :heavy_heart_exclamation:


As with so many clever ideas, there’s an XKCD for that


If you use a GPS module like this one to trigger the shutter…

Then you can get 1-second pulses (1 PPS) across multiple devices, no matter how far apart, with accuracy in the sub-10 ns region. This makes it very easy to coordinate geographically separated systems.



Hi John,

It seems that some solutions are forming here. Depending on how much experience you have with electronics, I’d start with Pix’s idea. It will be as simple as the push of a button.

Mark’s idea is also brilliant, you won’t be limited by distance here and you could almost certainly expand upon it to add more cameras.

Looking forward to seeing how you go.

Thanks Mark. I use these GPS modules. Do you know if every module will issue the PPS pulse at the same precise instant, or would each be at a differing instant? This is discussed in forums but I don’t see an explicit answer. The following image from a scope is interesting. It shows the PPS pulse and the NMEA sentences coming immediately after:


The PPS pulse is accurate, well within the 0.1 sec you mention. The NMEA sentence timing cannot be used for timing purposes - it can fluctuate wildly, both for any particular device depending on its configuration, and between different devices.

Still not clear to me if the PPS occurs at the exact same instant across all u-blox devices.

The PPS “should” be on the exact second mark for all GPS units regardless of make/brand (some may do more than one pulse per second, so just read the data sheet for that GPS).

The key reason for the difference between the PPS pulse and the reported time is the time it takes for the GPS unit to transmit all configured NMEA packets to the listening device. The packet you want may be the first one, the last one, or somewhere in between. On reception from the satellites, the GPS does all its internal maths (e.g. position, clock, etc.), THEN formats the needed NMEA packets and transmits them out the UART (or whatever interface). (Some very rough maths as an example - all values made up/estimated.)

PPS = 0 ms offset
GPS internal processing = say 0.1 ms (made up for the example)
Clock out all defined packets at 9600 bps = 176.5 ms
uC time to decode and extract the time = 0.4 ms
Estimated difference between PPS and time packet = 0.1 + 176.5 + 0.4 = 177 ms

You can improve this by:
a) only sending the one NMEA packet with the time (most GPS units let you select which packets are sent);
b) using faster UART bit rates.
The 176.5 ms above was for 6 NMEA packets, so with only the time packet it’s roughly 30 ms, though it varies with the actual number of bytes.
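As a sanity check on those numbers: UART transmit time is just bytes × bits-per-byte ÷ baud rate. A quick sketch (Python, with made-up byte counts, since the real figure depends on which NMEA sentences your module has enabled):

```python
def uart_tx_ms(num_bytes, baud=9600, bits_per_byte=10):
    """Milliseconds to clock num_bytes out a UART (8N1 framing = 10 bits per byte)."""
    return num_bytes * bits_per_byte / baud * 1000

# ~170 bytes for a typical set of 6 NMEA sentences (rough estimate)
print(uart_tx_ms(170))   # ~177 ms
# ~30 bytes if only a single time sentence is enabled (rough estimate)
print(uart_tx_ms(30))    # ~31 ms
```

So at 9600 bps every extra byte in the NMEA stream costs about a millisecond, which is why trimming the sentence list and raising the baud rate both help.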

You can get close to the true time if you do something like:
IRQ on the PPS signal.
Capture the local clock in ms (or smaller units if you can) - call it PPSTime.
Wait for the time packet to arrive, decode it, and apply any other adjustments you may need.
Add {now (in the same units) - PPSTime} to the decoded time (i.e. the elapsed time between the PPS and now); you should then be very close to the actual time.
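The arithmetic of that last step can be sketched like this (Python, names invented for illustration - on an Arduino you would capture millis() inside the PPS interrupt handler):

```python
def corrected_time_ms(nmea_time_ms, pps_tick_ms, now_tick_ms):
    """The NMEA sentence labels the preceding PPS edge, so the true time 'now'
    is the decoded time plus however much local time has elapsed since that edge."""
    return nmea_time_ms + (now_tick_ms - pps_tick_ms)

# PPS edge seen at local tick 1000 ms; the decoded time (12:00:00.000, i.e.
# 43,200,000 ms into the day) finished parsing at local tick 1177 ms,
# so it is actually 12:00:00.177 now.
print(corrected_time_ms(43_200_000, 1000, 1177))  # 43200177
```

The accuracy then depends only on the local tick resolution and interrupt latency, not on the NMEA transmission delay.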

But for this project I was thinking about using the PPS slightly differently as it could be simpler.

  1. Have the master and all slaves on the same time (normal NMEA packet time). As above, this should be within 1 second, but can vary.
  2. The master sends a command to take the photo at the next PPS after hh:mm:ss (this time needs to be in the future, with enough allowance for the packet to be sent and received).
  3. All units trigger capture at the next PPS after hh:mm:ss.

In this way the trigger occurs at exactly the requested time (which is hh:mm:ss plus up to 1 second).
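A minimal simulation of that scheme (Python, hypothetical names): each node simply counts PPS edges and fires on the first edge at or after the commanded second, so two nodes with different radio and decode latencies still fire on the same edge.

```python
def fire_edge(scheduled_s, pps_edges):
    """Return the PPS-edge second at which a node fires: the first edge whose
    GPS second (as seconds-of-day) is at or after the commanded hh:mm:ss."""
    for s in pps_edges:
        if s >= scheduled_s:
            return s
    return None  # command arrived too late / window passed

# Master commands "fire at second 43210"; both slaves, whatever their
# individual decode delays, fire on the same PPS edge.
print(fire_edge(43210, range(43200, 43220)))  # 43210
```

Because the actual trigger is the PPS edge itself, the radio only has to deliver the command some time before the scheduled second, not with any precision.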

Hi All
All this seems well and good as long as the GPS device knows exactly when these starlings are flying around.

Or at the other end of things get the starlings to fly and do their thing when it is convenient for the GPS to send a time signal.

I believe what John wants to do is trigger the shutter of 2 cameras at exactly the same time.
I think what he is getting at is he needs:

1 single button to trigger both cameras.
The interface between THAT button and each camera will need to be exactly the same. That is, if he needs some sort of radio link to one camera he will need exactly the same link to the other. The interface and the method of triggering the cameras need to be EXACTLY the same. In other words, two identical systems.
I think John will have to do the above to prevent latency problems. You will have a delay in any radio link, or indeed any electronics; while individually small, these delays can add up. By using two identical systems any differences will be minimised.

And don’t forget to get the starlings to fly within view of both cameras. One would be easy as it can be pointed but you will need a second person to point his (or her) camera at the same group of birds.

I suggest this as I am a great believer in the KISS principle and thought this thread was getting far too complicated.
Cheers Bob

The PPS is accurate. It is derived from the same source that is used to calculate the receiver position from the satellites. So if the position is being reported the PPS is providing the exact time.

Not necessarily. To avoid delays in the radio link (which might depend on intermediate devices), the shutter could be triggered by the button transmitting a GPS time a few seconds in the future. If the shutters are then fired by the PPS corresponding to the GPS time sent from the transmitter, the timing will be accurate to within a few nanoseconds. So, for instance, one camera might be fired from the MCU that has the button and the transmitter, and the other from the MCU that has the receiver. As long as the nominated GPS time was far enough into the future to cover the radio transmission delay, the cameras will be synchronized.

Hi Jeff
Yes these clocks in the satellites are pretty accurate. I think about a second in a couple of thousand years.
Then these pulses are going to be retrieved and sent via some hobby electronics that can have all sorts of variable delays. For instance, have a look at the humble 555. Depending on what flavour of 555, the delay between trigger and output pulse can be 100 ms or more. Every component will have a delay of varying proportions, and quite possibly a substantial difference between different devices of the same type.

I think this has gone overboard a bit and the solution I described above might be a bit more realistic.
Cheers Bob

Depending on the setup there might be a delay between the detecting of the PPS and the triggering of the camera, but it will be the same for both devices, so the cameras will still be synchronized. The aim of the arrangement is to eliminate what is different between the two triggering mechanisms, not to somehow remove any variability in what is identical.


Hi Jeff

Yes that would work and keep the 2 cameras in sync.
But I am looking at the practical side. Those “few seconds” could mean losing any spectacular effect that may be there. Those birds fly very fast, and these spectacular clouds move across the sky very quickly and could be gone from the field of view in those seconds.

I think those who do this professionally or advanced hobby situation can wait for days for the right shot. Then rapidly take many frames in the hope of that one great result.
The link posted a few days ago was, I think, time-lapse photography with each frame superimposed to get those results, but I am not sure of this.

Whatever the method the results are pretty good.
I have a video produced by the ABC called “Australia, Land of Parrots” where the film crew waited over 3 weeks for Budgerigars to do their fly past. The results were well worth it. Great footage. The people involved knew this display would come but just not when.

I do hope that John gets some sort of result with this and would be very interested in how it is achieved.
Cheers Bob

I think there are a few things that need to be defined before a working solution can be fully recommended to the OP.

a) What is the skill level/requirement?
e.g. Are they happy to build something from scratch; to build by connecting some modules and writing some code; or do they just want a solution off the shelf?
I will assume they are happy to build something, as they are here.

b) The max time between shutter 1 and shutter 2 being triggered.
From post 1, “…say within 0.1 sec…” that would be <= 100ms

c) The max time between wanting to trigger and the shutter actually being triggered.

d) How far apart each camera (max) could be.
“…The cameras would be hundreds of metres apart…”
This is a big one, as licence-free radio bands have low power limits, which makes long distances harder. But if everything has line of sight, that will help.

Based on Bob’s comments, I like the idea of 3 nodes.
Node 1: Trigger Device
Node 2: Camera 1 trigger
Node 3: Camera 2 trigger

So that should make everything on Nodes 2 and 3 (reasonably) in sync, as both nodes have to receive the trigger “command” via the same hardware. And assuming both cameras are the same, the process to trigger the shutter should take the same amount of time on each node. That should keep things well within the 100 ms.

This now leaves

  • what radio to use and its overhead.
  • how to trigger the shutter

Given what the target is (and as per Bob’s comments), getting the best photo may or may not be as simple as a single trigger. So we need to consider some more things at the camera level:
a) Will it always just be a single shot
b) Does it need to support a timed exposure
c) Does it need to support rapid shots (camera dependent to some extent; eg burst mode)
d) Does it need to support taking X photos with Y time between shots. e.g. 10 photos 1 second apart
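Option (d) is easy to express as a schedule of PPS-aligned trigger seconds (a hypothetical sketch; names are my own):

```python
def shot_schedule(start_s, count, interval_s):
    """GPS seconds-of-day at which to fire: `count` shots, `interval_s` apart,
    starting at `start_s`. Each node fires on the matching PPS edges."""
    return [start_s + i * interval_s for i in range(count)]

# e.g. 4 photos, 2 seconds apart, starting at second 43210 of the day
print(shot_schedule(43210, 4, 2))  # [43210, 43212, 43214, 43216]
```

A single radio packet carrying (start, count, interval) would then cover the whole burst, with the PPS doing the per-shot timing.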

If this is just a single shot on each trigger, then the radio only needs to support something as simple as a carrier and detection of said carrier.
No decoding, so fairly fast.

But if there are a few things that need to be passed, then a structured packet will be needed:
e.g. command,option1,…,optionX

And the last thing in the back of my mind is interference rejection. By this I mean we are using shared radio space; as such, there could be something else transmitting that could cause a false trigger. This is easily addressed by using a checksum and/or encryption, at the expense of some processing time at each end; the more remote (away from humans) the site is, the less risk of this.
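One cheap way to get that rejection is the same XOR checksum NMEA itself uses: a false trigger then needs both a matching frame format and a matching checksum. A sketch (Python; the packet format is invented for illustration):

```python
def xor_checksum(payload: str) -> str:
    """XOR of every byte in the payload, as two hex digits (NMEA-style)."""
    c = 0
    for ch in payload:
        c ^= ord(ch)
    return f"{c:02X}"

def make_packet(command, *options):
    """Frame a trigger command as $command,opt1,...,optN*CS."""
    payload = ",".join([command, *map(str, options)])
    return f"${payload}*{xor_checksum(payload)}"

def parse_packet(packet):
    """Return the payload fields, or None if the frame or checksum is bad."""
    if not (packet.startswith("$") and "*" in packet):
        return None
    payload, cs = packet[1:].rsplit("*", 1)
    if xor_checksum(payload) != cs:
        return None
    return payload.split(",")

print(make_packet("FIRE", 1))        # $FIRE,1*05
print(parse_packet("$FIRE,1*05"))    # ['FIRE', '1']
print(parse_packet("$FIRE,1*FF"))    # None (corrupted or foreign packet)
```

A checksum only catches corruption and accidental collisions; if deliberate interference were a worry you would add a shared-secret check or encryption on top.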

Just surfing around and found this setup, claims 500m, not sure if it can trigger 2 cameras though; have a read and see what you think.


The latency only needs to be enough to ensure the communication is sent and received, and it doesn’t have to be whole seconds. It will depend on the communication method selected, and that will depend on what delay the OP can accept and the technologies available. Unfortunately, a couple of hundred metres is probably a little too far for a line-of-sight laser.


Hi Michael

What a useful device (if you are well into photography).

I just had a quick look, and it seems as if it will, provided everything is in range and on the same channel. I have not read through the documentation or watched the whole video though. I will have to leave that to John and he can make up his own mind.
To implement my idea of a single command and a slave on each camera he would need 3 of these I think. But I reckon this is worth a look. Up to John.
Cheers Bob


Thanks everyone. Thrilled to get so much interest!

I’ll respond to Michael’s queries to help narrow down the use case.

Skill level: happy to build from scratch, but nothing wrong with an off the shelf solution or part solution. I can program Arduinos and work with GPS. Haven’t worked with cameras at this level.

Time between camera shots: As Bob said, they move fast - perhaps 10 m in one second. ±100 ms seems a good spec.

Max time b/w command and actual shot: don’t think it matters much, but I’d want <1second.

Distance b/w cameras: Say a max of 200m; they would be in line of sight of each other apart from vegetation. I think they need to be close to the flock, max of 200m say – have to liaise with the birds on that!

I like the idea of commanding both from one location. They could be set up to both start taking shots at the same preset time. I’m confident the birds flock at the same place and time each day.

I’d like a sequence of multiple shots rather than just one.

Images don’t need to be transmitted. That would be great, but it would take things to another level of complexity.

Could two way radios be used to trigger just by pressing send or paging? I can see the risk of spurious triggering from other CB users.

This could get really complicated. I think it should start off as simple as possible and then build on it.

I can’t promise I’ll build this, but you’ve all satisfied me it could be done.