In this first article in the series, we'll discuss why a traffic speed detector built around commodity hardware would be a useful alternative to high-cost proprietary RADAR and LIDAR speed cameras.
Traffic speed detection is big business. Municipalities around the world use it to deter speeders and generate revenue via speeding tickets. But conventional speed detectors, typically based on RADAR or LIDAR, are very expensive.
This article series will show you how to build a reasonably accurate traffic speed detector using nothing but Deep Learning, and run it on an edge device like a Raspberry Pi.
You are welcome to download code for this series from the TrafficCV Git repository. TrafficCV is a cross-platform Python program that lets you run different computer vision models on pre-recorded videos, or on live traffic camera feeds, and overlay the traffic information computed in real-time.
We assume you're familiar with Python, but that's all you need, since we'll go through the details of installing and using the other libraries TrafficCV relies on.
How Bad Is Speeding Anyway?
Road traffic injuries are the leading cause of death by injury worldwide, with nearly 1.35 million people dying every year due to road crashes and collisions. In February 2020, the WHO published some interesting statistics on the global impact of road traffic crashes:
- Road traffic crashes cost most countries 3% of their gross domestic product
- More than half of all road traffic deaths are among vulnerable road users: pedestrians, cyclists, and motorcyclists
- 93% of the world's fatalities on the roads occur in low- and middle-income countries, even though these countries have approximately 60% of the world's vehicles
- Road traffic injuries are the leading cause of death for children and young adults aged 5-29 years
Speeding is a key factor in road traffic crashes. In the U.S., the NHTSA reported almost 9,400 speeding-related deaths in 2018. Between 25% and 33% of all motor vehicle fatalities in the U.S. over the past two decades involved speeding, and speed-related crashes were responsible for an estimated $52 billion in economic losses nationwide.
How Is Speeding Detected?
Automated Speed Enforcement (ASE) refers to the use of cameras and speed detectors in an autonomous system that monitors and records vehicle speeds and positions in particular zones or at red lights. This is done to enforce speed limit laws and to alter driver behavior, reducing speeds and increasing road safety. In an ASE system, when a vehicle's speed is detected to be excessive, a camera photographs the vehicle and its license plate, recording the time, date, location, speed, and – if the relevant laws require it – the driver.
Modern ASE systems use either Radio Detection and Ranging (RADAR) or Light Detection and Ranging (LIDAR) to detect the speed of a vehicle as it crosses a monitored zone, and one or more cameras that record an image of the vehicle including the required details. LIDAR systems have a much lower beam divergence and can target a specific vehicle without triggering detectors in other cars, while RADAR systems can simultaneously monitor several vehicles spread over a large area.
ASE solutions are available from multiple vendors in a multitude of form-factors, from mobile roadside single-car speed detectors with LED displays to larger column-mounted units capable of detecting several cars at once.
The safety effects of ASEs on speeding have been studied for many years:
All 13 reviewed studies on automated speed enforcement reported statistically significant reductions in crashes following the introduction of automated speed enforcement. The most robust evaluations reviewed were of fixed, conspicuous camera enforcement programs used to treat specific high crash spots or lengths of roadway, ranging in length from 0.31 mi (500 m) to 3.2 mi (5.2 km).
How Much Does the Detection Cost?
In a nutshell: a lot. The costs of ASEs are usually broken down into a fixed initial capital cost and a recurring operating cost. The initial cost of the 2014-16 NYC DOT program for 140 cameras was $26.5 million, or about $189,000 per camera. The operating costs for the two years were almost $43 million. The operating costs are dominated by the number of full-time employees responsible for operating and maintaining the system, as well as the costs of generating and mailing the citations for excessive speed to vehicle operators.
Handheld and dash-mounted speed guns used by LEOs are relatively inexpensive. However, a speed camera that combines a RADAR or LIDAR detector with an autonomous camera unit and requires a full-time employee to administer and maintain it can have a total initial cost of $50,000 – $90,000.
An inexpensive alternative to specialized, proprietary RADAR- or LIDAR-based systems would be one built from off-the-shelf hardware and components, running an open-source operating system and application software.
In this series of articles we’ll examine building traffic speed detectors using open-source image processing and neural network models on commodity edge devices and components. We will use the Raspberry Pi 4 single-board computer (SBC) along with the ArduCam variable-focus 5MP camera and an M12 lens mount, and the Coral USB hardware accelerator for accelerating neural network inference on TensorFlow Lite models.
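To make the approach concrete, here is a hedged sketch of the basic arithmetic behind vision-based speed estimation: a detected vehicle's pixel displacement between frames is converted to road distance using a calibration constant, and divided by the elapsed time. The function name and calibration values are illustrative, not TrafficCV's; real calibration and tracking come later in the series.

```python
def estimate_speed_kmh(x1_px, x2_px, meters_per_pixel, frames_elapsed, fps):
    """Estimate vehicle speed from pixel displacement between two frames.

    meters_per_pixel is a calibration constant relating image distance to
    road distance; frames_elapsed / fps gives the elapsed time in seconds.
    Illustrative only: a real system must calibrate for camera angle and
    perspective, and track each vehicle reliably across frames.
    """
    distance_m = abs(x2_px - x1_px) * meters_per_pixel  # road distance covered
    seconds = frames_elapsed / fps                      # elapsed time
    return (distance_m / seconds) * 3.6                 # m/s -> km/h
```

For example, with a (hypothetical) calibration of 0.05 m per pixel, a car that moves 300 pixels over 15 frames at 30 fps has covered 15 m in 0.5 s: 30 m/s, or 108 km/h.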
Our operating system will be the latest version of Raspberry Pi OS, and our major application frameworks will be OpenCV and TensorFlow. We'll develop the cross-platform Python software on a Windows 10 machine using Visual Studio Code (Linux and macOS work too), which connects to the Pi remotely and runs an X server to display the Pi's graphical output. This lets us do most of the development on a traditional desktop machine and use the Pi for testing and deployment.
In the next article, we'll go through installing the operating system on the Pi, securing it, and configuring it for remote access over WiFi. Stay tuned!