The Raspberry Pi has a camera module attachment. I understand that this is a 5-megapixel camera and can take single images or video.
I need to know if the frames can be synchronized to a high degree and, if so, how to do it.
Let me explain. I will have several such camera-Pi units which are required to be synchronized. I wish to issue a command, say from a hardware interrupt wired to all systems, and receive an image from every system taken at exactly the same time, so that when I take a picture of a moving object the exposure instant is the same for all cameras. I will want to be able to take such exposures at rates of perhaps up to 50 Hz.
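For the hardware-interrupt side, here is a minimal sketch in C of how the trigger edge could be caught, assuming libgpiod, a trigger wired to GPIO 17, and a hypothetical trigger_capture() standing in for the camera work (none of these specifics come from the camera documentation, and whether the exposure then starts promptly enough is exactly the open question):

```c
/* Hedged sketch: wait for a rising edge on a GPIO pin and call a
 * hypothetical trigger_capture() each time.  libgpiod v1 and GPIO 17
 * are assumptions, not taken from the camera documentation. */
#include <gpiod.h>
#include <stdio.h>
#include <time.h>

#define TRIGGER_GPIO 17              /* assumed trigger pin */

static void trigger_capture(void)
{
    /* placeholder: start the camera exposure / frame grab here */
    puts("trigger received - start capture");
}

int main(void)
{
    struct gpiod_chip *chip = gpiod_chip_open_by_name("gpiochip0");
    if (!chip) { perror("gpiod_chip_open_by_name"); return 1; }

    struct gpiod_line *line = gpiod_chip_get_line(chip, TRIGGER_GPIO);
    if (!line || gpiod_line_request_rising_edge_events(line, "cam-sync") < 0) {
        perror("gpiod request");
        gpiod_chip_close(chip);
        return 1;
    }

    struct timespec timeout = { 5, 0 };   /* re-poll every 5 s */
    for (;;) {
        int ret = gpiod_line_event_wait(line, &timeout);
        if (ret < 0) { perror("gpiod_line_event_wait"); break; }
        if (ret == 0)
            continue;                     /* timeout, keep waiting */

        struct gpiod_line_event ev;
        if (gpiod_line_event_read(line, &ev) == 0)
            trigger_capture();
    }

    gpiod_chip_close(chip);
    return 0;
}
```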
Is it possible to set the camera up so that only a portion of the image is loaded to the Pi, so that the image loads more quickly? I understand that the download speed of the camera for a full-resolution image is about 15 Hz. I do not want to scale the full image into a smaller pixel format; I want to define an image area of the full sensor image and download only that. Is this possible? I would want to write the program in a compiled language like C so that it runs fast.
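On the windowing idea, if the camera shows up as a V4L2 device (e.g. /dev/video0 via the bcm2835 driver), one way to ask for a sub-window is the V4L2 selection API. This is only a sketch under that assumption; whether the driver supports it, and whether the crop is applied at the sensor (faster readout) or only later in software, is exactly what needs confirming:

```c
/* Hedged sketch: request a sub-window of the sensor image via V4L2.
 * Assumes the camera is exposed as /dev/video0 and that the driver
 * implements the selection API; the window geometry is an example only. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) { perror("open /dev/video0"); return 1; }

    struct v4l2_selection sel;
    memset(&sel, 0, sizeof(sel));
    sel.type     = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    sel.target   = V4L2_SEL_TGT_CROP;
    sel.r.left   = 640;      /* example window: 1296x972 at offset (640,480) */
    sel.r.top    = 480;
    sel.r.width  = 1296;
    sel.r.height = 972;

    if (ioctl(fd, VIDIOC_S_SELECTION, &sel) < 0)
        perror("VIDIOC_S_SELECTION (driver may not support sensor cropping)");
    else
        printf("crop set to %ux%u at (%d,%d)\n",
               sel.r.width, sel.r.height, sel.r.left, sel.r.top);

    close(fd);
    return 0;
}
```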
I intend sending the received image to another processing system using the high speed 1G Ethernet port.
Also, is it possible to trigger a video so that the exposure time of the first frame is exactly controllable, for example by a hardware interrupt? I do not want the camera free-running at some number of frames per second such that, when I issue a capture, it simply starts the video with the next frame. If that were the case, several cameras could not be synchronized so that their exposure windows correspond in time.
While some wild ideas come to mind, I don't think that's going to be possible. The only way to improve the sharpness is to increase the resolution (more megapixels) while also keeping/improving the quality of the sensor data (i.e., use a different camera with better features).
You may find that using a DSLR with your Raspberry Pi would be easier (there’s no escaping the speed-hit of large files though).
I’ve found a few modifications that other users have made to these cameras to change the exposure-time and FPS settings for applications like yours. As Graham said, that will be a difficult process to implement on the Pi, and it may not be possible. All the best with it!
I should add some more information:
There are several issues here; the principal one is camera exposure synchronization. I will explain further:
Imagine I have several Pi 3B+ units connected to an Ethernet switch, where each Pi has a camera connected to it. I would want to send a UDP broadcast to all Pis to capture one frame from their cameras. Alternatively, I could have a hardware interrupt going to all Pis so that the interrupt triggers the image capture; this would only be done if the Ethernet solution had too much random latency.
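For reference, the UDP trigger itself is simple. A minimal sketch of the sender side, assuming an arbitrary port 5005 and the limited broadcast address (both are assumptions, not fixed choices):

```c
/* Hedged sketch: send a one-byte "capture" trigger as a UDP broadcast.
 * Port 5005 and the limited broadcast address are assumptions. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    int yes = 1;
    if (setsockopt(s, SOL_SOCKET, SO_BROADCAST, &yes, sizeof(yes)) < 0) {
        perror("SO_BROADCAST");
        return 1;
    }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family      = AF_INET;
    dst.sin_port        = htons(5005);                 /* assumed port */
    dst.sin_addr.s_addr = htonl(INADDR_BROADCAST);

    const char trigger = 'C';                          /* "capture" command */
    if (sendto(s, &trigger, 1, 0, (struct sockaddr *)&dst, sizeof(dst)) < 0)
        perror("sendto");

    close(s);
    return 0;
}
```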
When the sync packet (capture image) is received by the Pi, the camera must BEGIN the exposure instantly. That is, it must not be free-running in the background and simply deliver the last exposed image; rather, it must begin a new exposure the instant it receives the command. My question is: does it do this, or does it just return the last image it exposed, which could be any fraction of its idling frame period old? If I can be certain that, no matter what its previous state, the camera will begin a new exposure immediately on the capture command, with no delay or with a very deterministic delay (the same for all cameras), and then send the image data to the Pi, then, provided all cameras are set up identically, I can expect this to work. Note that it is essential that all cameras can be induced to take an exposure on command within about a millisecond of each other, AND within a millisecond (or a constant, deterministic delay of that order) of the capture command. This is the first major question.
I understand that the image can be loaded into the Pi's RAM rather than having to be displayed on the screen, yes?
Second, I note that the camera can be set up with an X,Y offset into the raw camera image so that only a portion of the image is selected. I understand that this offset and window are sent to the camera, so when a capture is requested, only the relevant portion of the image is downloaded to the Pi instead of the whole frame. If that is the case, the time it takes to download the sub-image can be significantly less than downloading the whole image, and it should be possible to effectively get a higher frame rate. That is to say, instead of being limited to about 15 frames per second for the full image, with only one quarter of the image being downloaded I should be able to achieve roughly 60 frames per second. Is this the case, or does the camera download the whole image with the program on the Pi doing the windowing? I would expect that the camera would only download the window portion, but I do not know, and this is essential to know.
I am expecting to write some software in C to run on all Pis with cameras, so that each receives a UDP capture instruction and sends back a packet of the captured image over the Ethernet port. The latency between the UDP capture packet and the exposure window of each camera must be deterministic to within about a millisecond; microseconds would be better. All Pis will be running on the same Ethernet switch and the capture command would be a broadcast packet.
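A rough skeleton of that per-Pi program, assuming the same arbitrary port 5005 as above and a hypothetical capture_frame() standing in for whatever camera call turns out to give a deterministic exposure start (a real image would also need to be split across several datagrams, which is not shown):

```c
/* Hedged sketch: wait for the broadcast trigger, capture a frame via a
 * hypothetical capture_frame(), and send the data back to the sender.
 * Port 5005, capture_frame() and the buffer size are assumptions. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define TRIGGER_PORT 5005
#define MAX_IMAGE    (32 * 1024)   /* placeholder; one datagram only */

/* placeholder for the camera capture; returns number of bytes written */
static size_t capture_frame(unsigned char *buf, size_t max)
{
    memset(buf, 0, max);           /* real code would grab a frame here */
    return max;
}

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(TRIGGER_PORT);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    static unsigned char image[MAX_IMAGE];

    for (;;) {
        struct sockaddr_in src;
        socklen_t srclen = sizeof(src);
        char cmd;

        /* block until the broadcast trigger arrives */
        if (recvfrom(s, &cmd, 1, 0, (struct sockaddr *)&src, &srclen) < 0)
            continue;

        size_t n = capture_frame(image, sizeof(image));

        /* reply to whoever sent the trigger */
        if (sendto(s, image, n, 0, (struct sockaddr *)&src, srclen) < 0)
            perror("sendto");
    }
}
```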
What you want to do is technically possible, but practically very difficult. 'Exact timing' is impossible; you need to decide how much error is acceptable to still achieve your goal: 1 ms, 10 µs, 5 ns? The tighter the timing needs to be, the harder it will be to achieve; the difficulty and expense grow asymptotically as you approach perfect precision.
The other issue you’ll run into is that camera units are not designed to be used this way. Each one is designed to be a completely independent unit, and it sounds like you’re going to need to throw away the existing control hardware for the light sensor and build your own for this very niche application. You’re going to have to dive a lot deeper than just coding in C to achieve your goal.
The other way around this is to run each camera at a frame rate so high that there will always be exposures sufficiently close together for your purpose. This is almost certainly the more technically achievable approach, as there are plenty of high-speed cameras on the market, particularly since your target rate is only 50 fps, depending on your budget for this project. This is the approach used for those cool shots in the now-famous Matrix bullet-time sequences.
I have to use these low-cost cameras as I do not have the budget for a more expensive solution.
I still do not have an answer to this question and I need an answer:
When I set up the camera to initialize it, does it take an exposure from that instant? Yes or no?
I cannot find this information in the documentation; that is why I am asking.
A related question is:
If all cameras are initialized at the same time, regardless of their previous state, will the cameras then be synchronized in exposure epoch, yes or no? And if not, why not?
Regards
Clem