After working through this guide you will be able to determine the speed of objects of almost any size (be it a vehicle, a person, or a matchbox car) from a live feed. You will even be able to use the object’s speed to control GPIO directly: too fast and you can set off the alarms. A camera combined with artificial intelligence is arguably the most powerful sensor you can put on a Raspberry Pi, and it has never been easier to try it out yourself.
As an object travels in front of the camera, machine learning takes over to identify the moving object and determine its speed (in km/h or mph). The open-source software compares two photos and, using some trigonometry and known distances, infers the object’s speed. It then takes the second of the two compared images, stamps it with a timestamp, the file location, and the object’s speed, saves it, and uploads it to a local-network website. These photos can then be accessed from any machine on your local network. This guide takes a fully-fledged speed camera built for monitoring cars and retrofits it for all kinds of speed-monitoring tasks.
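The trigonometry behind that comparison can be sketched roughly like this. It is a simplified illustration with made-up names (`px_travelled`, `cam_to_road_m`, and so on), not the project’s actual code: the known camera-to-road distance and lens field of view give a metres-per-pixel scale, and dividing the distance travelled by the time between the two frames gives the speed.

```python
import math

def speed_kmh(px_travelled, frame_width_px, cam_to_road_m,
              horizontal_fov_deg, seconds_between_frames):
    """Rough speed estimate from two compared frames."""
    # Width of road visible across the frame, from distance and field of view
    scene_width_m = 2 * cam_to_road_m * math.tan(math.radians(horizontal_fov_deg) / 2)
    metres_per_pixel = scene_width_m / frame_width_px
    metres_travelled = px_travelled * metres_per_pixel
    return metres_travelled / seconds_between_frames * 3.6  # m/s -> km/h

# An object crossing 100 px of a 1000 px frame in one second,
# 5 m away through a 90-degree lens, is moving at walking pace:
print(round(speed_kmh(100, 1000, 5, 90, 1.0), 1))  # 3.6 km/h
```

The real script is more involved (it tracks contours across many frames and averages), but the scale-then-divide idea is the core of it.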
Hi there! This is an awesome project and tutorial.
Do you think this could also work with a USB webcam connected to the Raspberry Pi? Or even an external video stream, like one from a remote IP cam? Or would the lag/variation make it break?
So long as you feed the data frame by frame into the system you are good to go. It will take some Python tinkering to alter the code so it runs with a USB webcam, but it is definitely doable. It will take even more to set it up to receive external video streams while running the AI (I’d be concerned about overheating and maxing out the Raspberry Pi’s CPU).
I’m a real rookie/newbie at all of this, hoping to build a speed tester for a low-traffic street with only one lane. I’ll see if I can get help from anyone who knows how to fiddle with Python to add those lines you mention and make it work with a USB webcam!
In case I get somewhere, do you have the code on GitHub so I can fork it and/or open a PR?
thanks again!
trying to use your code to enforce safe street suburbs!
Sounds like a really cool project and a perfect application for this script. You can find the original GitHub repository here. It’s very cool watching it evolve as the community works together on it along with Claude.
Good luck with your safe streets, mate. Definitely not easy.
Hello there. I have my USB Logitech C920 running as my speed cam, which is currently giving me sporadic results for a two-way road ~100 meters from my house. I have the 720.py plugin running, which gives amazing clarity to the images; however, my speed calibration is showing some cars moving by at the 40 km/h speed limit and others at 177 km/h! I was wondering if L2R and R2L need to be added to the 720.py file. This leads me to another question: when the 720.py plugin has been selected, does it override all identical settings within the speed-cam.py file? Confused.
Upon accessing the speed camera menu, I press enter on option B (webserver.py in background). However, this does not start the webserver. How can I keep the webserver on?
In this guide I used the ‘Buster’ Raspberry Pi OS, and I’d recommend you do the same. I haven’t checked recently whether this set-up works with the newer ‘Bullseye’ Raspberry Pi OS; when it first came out I tried and it wasn’t successful. ‘Buster’, on the other hand, is all good to go.
Just saw your message above too. Speed calibration does take a little adjustment, but I’m pretty sure the L2R and R2L calibration works no matter what resolution you decide on. A couple of precise 40 km/h drives up and down past the camera will definitely make it easy to tune. Also, make sure the rough distance between the camera lens and the moving cars is pretty accurate. That is a very important dimension to get correct for the speed calculations; a small variance in it can cause large changes in the speed results.
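To see why that distance is so sensitive: the width of road each pixel covers grows linearly with the camera-to-road distance, so any error in the configured distance passes straight through to the speed reading. A toy model of that scaling (my simplification for illustration, not the exact formula in speed-cam.py):

```python
def reported_speed(true_speed_kmh, true_distance_m, configured_distance_m):
    """Speed the camera would report when the configured distance is off."""
    # Metres-per-pixel scales linearly with distance, and so does the speed.
    return true_speed_kmh * configured_distance_m / true_distance_m

# A car really doing 40 km/h at 20 m, with the distance entered as 25 m:
print(reported_speed(40, 20, 25))  # 50.0 -- reads 25% high
```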
So in that Speed Camera Menu, the options operate like a toggle switch. If everything has been installed correctly, then with Option B toggled on you should be able to access the webserver in a browser (even after you have closed the Speed Camera Menu). Hopefully that helps; if you need anything else to get it all running properly, come pop me another message.
What is the advantage of using Docker? When I installed it previously, the speed-camera webserver would grab the Docker IP (a completely different subnet [172.17.0.0/24]), which would not allow any access to the webserver from any PC on my subnet (10.x.x.0/24). I don’t want to use VNC to view the webserver, and I would prefer not adding 172.17.0.0/24 to my router. I would like to access the webserver from all my PCs within my home network. Any advice would be much appreciated.
Docker will let you run multiple things on a single Raspberry Pi, along with a whole bunch more. For this system, because it is particularly CPU intensive, I would recommend not using it. But those cleverer than me could probably get the Raspberry Pi to multi-task and still identify/record accurate speed data.
VNC is definitely an okay option for getting at the data. It would be hard to get the information off the Raspberry Pi without connecting it to the local network in some manner; as soon as the Pi is on your local network it will have an IP address that allows you to access it.
Yeah, I decided against downloading Docker. Thank you for your response. The Docker IP (on my former Raspi incarnation) kept being pulled by the webserver, which made things impossible to manage. I now have a new incarnation of Raspi (Buster) w/o Docker and all is good in the world.
I now have this issue. I am using an Arducam lens for the Raspberry Pi HQ Camera (wide-angle CS-mount, 6 mm focal length, with manual focus and adjustable aperture) with Buster. I cannot zoom with this lens; however, the red hatched lines and box sit way too high when I center the road (see attachment). Do the lines need to be over the road to capture the moving vehicles? I would much rather have the road in the center of the frame. Also, in the attached image no cars are being captured; however, the bird feeder (left side of screen) is being measured. Any ideas would be greatly appreciated.
Heyya mate, you can adjust the height and location of the red hatched lines inside the Python scripts. Pretty sure it’s the | config.py | file that you can do this in. It’s definitely advantageous to have the lines sit over the street you want to measure, as only the area inside the red lines will be speed-measured. Once that is all set up and calibrated correctly you can then remove the red lines from the preview. Check here for the wiki for this project.
Were you able to find success in adjusting where the red hatched lines were, as well as changing the box location? I can see in the config file how to change the size of the box, but not the location. Cheers!
Inside your downloaded files for the speed-camera there will be a Python file named | config.py |. If you scroll down to the section named | # Motion Tracking Window Crop Area Settings |, you’ll find the exact settings you’re looking for.
If you adjust the values for | x_left |, | x_right |, | y_upper |, or | y_lower |, you can move and resize the tracking box to any size or location you desire. Hopefully that helps! See below for an image of where these settings are in the Python script.
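For reference, that section of | config.py | is just plain pixel values along these lines (the numbers here are made up; tune them until the box sits over your road):

```python
# Motion Tracking Window Crop Area Settings
# (illustrative values only -- adjust for your own camera view)
x_left  = 150   # left edge of the tracking box, in pixels
x_right = 490   # right edge
y_upper = 140   # top edge (y increases downwards in image coordinates)
y_lower = 340   # bottom edge
```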