
Feb 1, 2018

An Australian developer thought his local police force was spending way too much money on their new license plate scanning system. So he decided to build one himself. Here's how he did it, and how he ended up catching a criminal.

Written and read by Tait Brown: https://twitter.com/taitems

Original article: https://fcc.im/2iJWWuE

Learn to code for free at: https://www.freecodecamp.org

Intro music by Vangough: https://fcc.im/2APOG02

Transcript:

The Victoria Police are the primary law enforcement agency of Victoria, Australia. With over 16,000 vehicles stolen in Victoria this past year — at a cost of about $170 million — the police department is experimenting with a variety of technology-driven solutions to crack down on car theft. One of these systems is called BlueNet.

To help prevent fraudulent sales of stolen vehicles, there is already a VicRoads web-based service for checking the status of vehicle registrations. The department has also invested in a stationary license plate scanner — a fixed tripod camera which scans passing traffic to automatically identify stolen vehicles.

Don’t ask me why, but one afternoon I had the desire to prototype a vehicle-mounted license plate scanner that would automatically notify you if a vehicle had been stolen or was unregistered. Understanding that these individual components existed, I wondered how difficult it would be to wire them together.

After a bit of googling, I discovered that the Victoria Police had recently run a trial of a similar device, with an estimated rollout cost somewhere in the vicinity of $86,000,000. One astute commenter pointed out that $86M to fit out 220 vehicles comes in at a rather thirsty $390,909 per vehicle.

Surely we can do a bit better than that.

The Success Criteria

Before getting started, I outlined a few key requirements for product design.

Requirement #1: The image processing must be performed locally

Streaming live video to a central processing warehouse seemed the least efficient approach to solving this problem. Besides the whopping bill for data traffic, you’re also introducing network latency into a process which may already be quite slow.

Although a centralized machine learning algorithm is only going to get more accurate over time, I wanted to find out whether a local, on-device implementation would be “good enough”.

Requirement #2: It must work with low quality images

Since I don’t have a Raspberry Pi camera or USB webcam, I’ll be using dashcam footage — it’s readily available and an ideal source of sample data. As an added bonus, dashcam video is representative of the overall quality of footage you’d expect from vehicle-mounted cameras.

Requirement #3: It needs to be built using open source technology

Relying upon proprietary software means you’ll get stung every time you request a change or enhancement — and the stinging will continue for every request made thereafter. Using open source technology is a no-brainer.

My solution

At a high level, my solution takes an image from a dashcam video, pumps it through an open source license plate recognition system installed locally on the device, queries the registration check service, and then returns the results for display.

The data returned to the device installed in the law enforcement vehicle includes the vehicle’s make and model (which it uses only to verify whether the plates have been stolen), the registration status, and any notifications of the vehicle being reported stolen.

If that sounds rather simple, it’s because it really is. For example, the image processing can all be handled by the openalpr library.

This is really all that’s involved to recognize the characters on a license plate:
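A minimal sketch, assuming the Start / IdentifyLicense / Stop interface described in the node-openalpr readme (exact names and signatures may vary between package versions):

```js
const openalpr = require("node-openalpr");

// Assumed node-openalpr API: Start() spins up the recognizer,
// IdentifyLicense() processes a single image, Stop() tears it down.
openalpr.Start();

openalpr.IdentifyLicense("dashcam-frame.jpg", (error, output) => {
  if (error) throw error;

  // Each result carries the plate text and a confidence score
  output.results.forEach((result) => {
    console.log(result.plate + " (" + result.confidence.toFixed(2) + "% confidence)");
  });

  openalpr.Stop();
});
```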

A Minor Caveat

Public access to the VicRoads APIs is not available, so license plate checks occur via web scraping for this prototype. While scraping is generally frowned upon, this is a proof of concept, and I’m not slamming anyone’s servers.
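Purely as an illustration of that scraping step, a sketch might look like the following. The endpoint, query parameter, and CSS selector are placeholders, not the real VicRoads service:

```js
const axios = require("axios");
const cheerio = require("cheerio");

// Placeholder endpoint and selector: NOT the actual VicRoads service
async function checkRegistration(plate) {
  const response = await axios.get(
    "https://rego-check.example.gov.au/lookup",
    { params: { plate: plate } }
  );

  // Pull the status text out of the returned HTML
  const $ = cheerio.load(response.data);
  return $(".registration-status").text().trim();
}

checkRegistration("ABC123").then((status) => console.log(status));
```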

Results

I must say I was pleasantly surprised.

I expected the open source license plate recognition to be pretty rubbish. Additionally, the image recognition algorithms are probably not optimised for Australian license plates.

The solution was able to recognise license plates in a wide field of view.

Annotations added for effect. Number plate identified despite reflections and lens distortion.

The solution would, however, occasionally have issues with particular letters.

A few frames later, the M is correctly identified and at a higher confidence rating.

As you can see in the above two images, processing the image a couple of frames later jumped from a confidence rating of 87% to a hair over 91%.

I’m confident, pardon the pun, that the accuracy could be improved by increasing the sample rate, and then sorting by the highest confidence rating. Alternatively a threshold could be set that only accepts a confidence of greater than 90% before going on to validate the registration number.

Those are very straightforward, code-first fixes, and they don’t preclude training the license plate recognition software with a local data set.
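As a sketch of that thresholding idea, assuming results shaped like node-openalpr’s output (a plate string and a numeric confidence):

```js
// Keep only reads above a confidence threshold, best first
const CONFIDENCE_THRESHOLD = 90;

function bestPlates(results) {
  return results
    .filter((r) => r.confidence > CONFIDENCE_THRESHOLD)
    .sort((a, b) => b.confidence - a.confidence);
}

// Only high-confidence reads go on to the registration check
console.log(bestPlates([
  { plate: "A8C123", confidence: 87.0 }, // misread a few frames earlier
  { plate: "ABC123", confidence: 91.2 },
]));
// => [ { plate: "ABC123", confidence: 91.2 } ]
```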

The $86,000,000 Question

To be fair, I have absolutely no clue what the $86M figure includes — nor can I speak to the accuracy of my open source tool with no localized training vs. the pilot BlueNet system.

I would expect part of that budget includes the replacement of several legacy databases and software applications to support the high-frequency, low-latency querying of license plates several times per second, per vehicle.

On the other hand, the cost of ~$391k per vehicle seems pretty rich — especially if BlueNet isn’t particularly accurate and there are no large-scale IT projects to decommission or dependent systems to upgrade.

Future Applications

While it’s easy to get caught up in the Orwellian nature of an “always on” network of license plate snitchers, there are many positive applications of this technology. Imagine a passive system that scans fellow motorists for an abductor’s car and automatically alerts authorities and family members to its current location and direction.

Tesla vehicles are already brimming with cameras and sensors and can receive OTA updates — imagine turning them into a fleet of virtual good Samaritans. Uber and Lyft drivers could also be outfitted with these devices to dramatically increase the coverage area.

Using open source technology and existing components, it seems possible to offer a solution that provides a much higher rate of return — for an investment much less than $86M.

Remember the $86 million license plate scanner I replicated? I caught someone with it.

A few weeks ago, I published what I thought at the time was a fairly innocuous article: How I replicated an $86 million project in 57 lines of code.

I’ll admit — it was a rather click-bait claim. I was essentially saying that I’d reproduced the same license plate scanning and validating technology that the police in Victoria, Australia had just paid $86 million for.

Since then, the reactions have been overwhelming. My article received over 100,000 hits in the first day, and at last glance sits somewhere around 450,000. I’ve been invited to speak on local radio talk shows and at a conference in California. I think someone may have misread Victoria, AU as Victoria, BC.

Although I politely declined these offers, I have met for coffee with various local developers and big name firms alike. It’s been incredibly exciting.

Most readers saw it for what it was: a proof of concept to spark discussion about the use of open source technology, government spending, and one man’s desire to build cool stuff from his couch.

Pedants have pointed out the lack of training, support, and the usual enterprise IT cost padders, but it’s not worth anyone’s time exploring these. I’d rather spend this post looking at my results and how others can go about shoring up their own accuracy.

Before we get too deep into the results, I’d like to go over one thing that I feel was lost in the original post. The concept for this project started completely separately from the $86 million BlueNet project. It was by no means an attempt to knock it off.

It started with the nagging thought that since OpenCV exists and the VicRoads website has license plate checks, there must be a way to combine the two or use something better.

It was only when I began my write-up that I stumbled upon BlueNet. Discovering it and its price tag gave me a great editorial angle, but with the code already written, there were bound to be some inconsistencies between the projects.

I also believe part of the reason this blew up was the convenient timing of a report on wasteful government IT spending in Australia. The Federal Government’s IT bill has shot up from $5.9 billion to $10 billion, delivering dubious value for that blowout. Media researchers who contacted me were quick to link the two, but this is not something I am quick to encourage.

A Disclaimer

In the spirit of transparency, I must declare something that was also missing from the original post. My previous employer delivered smaller (less than $1 million) IT projects for Victoria Police and other state bodies. As a result, I’ve undergone police checks and completed the forms required to become a VicPol contractor.

This may imply that I have an axe to grind or some specific insider knowledge, but on the contrary, I am proud of the projects we delivered. They were both on time and on budget.

Visualizing the Results

The following is a video representation of my results, composited in After Effects for a bit of fun. I recorded various test footage, and this was the most successful clip.

I will go into detail about ideal camera setups, detection regions, and more after the video. It will help you better understand what made this iPhone video, shot through the windscreen, a better source than a Contour HD angled out the side window.

An Ethical Dilemma

If you saw the hero graphic of this article or watched the video above, you may have noticed a very interesting development: I caught someone.

Specifically, I caught someone driving a vehicle with a canceled registration from 2016. This could have happened for many reasons, the most innocent of which is a dodgy resale practice.

Occasionally, when the private sale of a vehicle is not done by the book, the buyer and seller may not complete an official transfer of registration. This saves the buyer hundreds of dollars, but the vehicle is still registered to the seller. It’s not unheard of for a seller to then cancel the registration and receive an ad hoc refund of remaining months, also worth hundreds of dollars.

Alternatively, the driver of the vehicle could well be the criminal we suspect that they are.

So, although I jokingly named the project plate-snitch when I set it up on my computer, I’m now faced with the conundrum of whether to report what I saw.

Ultimately, the driver was detected using a prototype of a police-only device. But driving on a 2016 registration (canceled, not expired) is a very deliberate move. Hmm.

Back to the Results

Of the many reactions to my article, a significant number were quite literal and dubious. Since I said I had replicated the software, they asserted that I must have a support center, warranties, and training manuals. One reader even attempted to replicate my results and hit the inevitable roadblocks of image quality and source material.

Because of this, some implied that I cherry-picked my source images. To that I can only say, “Well, duh.”

When I built my initial proof of concept (again, focusing on validating an idea, not replicating BlueNet), I used a small sample set of fewer than ten images. Since camera setup is one of the most important factors in ALPR, if not the most important, I selected them for ideal characteristics that enhance recognition.

At the end of the day, it is very simple to take a fragile proof of concept and break it. The true innovation and challenge comes from taking a proof of concept, and making it work. Throughout my professional career, many senior developers have told me that things can’t be done or at least can’t be done in a timely manner. Sometimes they were right. Often, they were just risk averse.

“Nothing is impossible until it is proven to be.”

Many people bastardize this quote, and you may have seen or heard one of its incarnations before. To me, it neatly summarizes a healthy development mindset, in which spiking and validating ideas is almost mandatory to understanding them.

Optimal ALPR Camera Setups

This project is so exciting and different for me because it has a clear success metric — whether the software recognizes the plate. This can only happen with a combination of hardware, software, and networking solutions. After posting my original article, people who sell ALPR cameras quickly offered advice.

Optical Zoom

The most obvious solution in hindsight is the use of an optical zoom. Though I explore other important factors below, none lead to such a sheer increase in recognition as this. In general, professional ALPR solutions are offset at an angle, trained on where the license plate will be, and zoomed into the area to maximize clarity.

More zoom means more pixels to play with.

All the cameras I had at my disposal had fixed lenses. They included:

A Contour HD action camera. These came out in 2009, and I use mine to record my cycling commute and to replay each week’s near-death experience.

The featured test run was recorded on my phone. My only method of replicating an optical zoom was using an app to record at 3K instead of 1080p, and then digitally zooming and cropping. Again, more pixels to play with.

Angle & Positioning

A viewing angle of 30° is often referenced as the standard for ideal plate recognition. This is incredibly important when you learn that BlueNet uses an array of cameras. It also makes sense when you consider what a front-facing camera would generally see — not very much.

What a front-facing ALPR camera sees — not much.

If I had to guess, I’d say a mostly forward-facing array would be the ideal setup. It would consist of a single camera pointed dead center as above, two off-center at 30° on each side, and a single rear-facing camera. The value in having most of the cameras pointed forward comes from the increased reaction time if a vehicle is traveling in the opposite direction. This would allow a quicker scan, process, and U-turn than if the rear-facing cameras picked up a suspect vehicle already ten meters past the police vehicle.

A four camera array would need to be angled similar to this. Icons from Freepik.

A Gimbal

When compositing the video, I thought about stabilizing the footage. Instead I opted to show the bumpy ride for what it was. What you saw was me holding my phone near the windscreen while my wife drove. Check out that rigorous scientific method.

Any production-ready version of a vehicle-mounted ALPR needs some form of stabilisation. Not a hand.

Frame Rate

Both the attempt to replicate my project and my own recordings since then explored the same misconception: that a higher ALPR sampling frame rate is linked to success. In my experience, raising it did nothing but waste cycles. What is incredibly important instead is a shutter speed that creates clean, crisp footage, which feeds well into the algorithm.

But I was also testing fairly low-speed footage. At most, two vehicles passing each other in a 60km/h zone created a 120km/h differential. BlueNet, on the other hand, can work up to an alleged 200km/h.

As a way of solving this, a colleague suggested object detection and out-of-band processing. Identify a vehicle and draw a bounding box. Wait for it to come into the ideal recognition angle and zoom. Then shoot a burst of photos for asynchronous processing.

I looked into using OpenCV (node-opencv) for object recognition, but even something simpler, like face detection, took anywhere from 600–800ms. That is not only less than ideal for my use, but pretty poor in general.

Hype-train TensorFlow comes to the rescue. It can run on-device, and there are examples of projects identifying multiple vehicles per frame at an astounding 27.7fps. One such project could even expose speed estimations: legally worthless, but perhaps useful in everyday policing (there is no fps benchmark in its readme).

To better explain how high-performance vehicle recognition could couple with slower ALPR techniques, I created another video in After Effects. I imagine that the two working hand-in-hand would look something like this:

Idea: how vehicle object detection could remove ALPR frame limits by processing asynchronously.
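In code, that coupling might be nothing more exotic than a producer/consumer queue: detection pushes cropped candidates in at frame rate, and ALPR drains them as fast as it can manage. Here detectVehicles, crop, and recognisePlate are hypothetical stand-ins for the TensorFlow detector and OpenALPR, stubbed out so the shape of the pipeline stands on its own:

```js
// Hypothetical stubs standing in for the real detector and ALPR
const detectVehicles = (frame) => [{ x: 420, y: 310, w: 180, h: 60 }];
const crop = (frame, box) => ({ frame, box });
const recognisePlate = async (candidate) => "ABC123";

const pending = [];

// Fast path: runs on every frame, only queues cropped candidates
function onFrame(frame) {
  for (const box of detectVehicles(frame)) {
    pending.push(crop(frame, box));
  }
}

// Slow path: drains the queue out of band, at whatever rate
// ALPR can sustain (loops until the process is killed)
async function alprWorker() {
  while (true) {
    const candidate = pending.shift();
    if (candidate) {
      console.log("Read:", await recognisePlate(candidate));
    } else {
      await new Promise((resolve) => setTimeout(resolve, 10));
    }
  }
}

onFrame("frame-0001");
alprWorker();
```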

Frame Rate vs Shutter Speed

Frame rate is closely entangled with shutter speed, and more specifically with the rolling shutter issues that plague early or low-end digital movie recorders. The following is a snapshot from some Contour HD footage. You can see that at only 60km/h, the rolling shutter makes the footage more or less unusable from an ALPR point of view.

Adjusting the frame rate on both the Contour HD and my iPhone did not result in noticeably less distortion. In theory, a higher shutter speed should produce clearer, crisper images. This would become increasingly important if you were to chase the 200km/h BlueNet benchmark: less blur and less rolling shutter distortion should lead to a better read.

Open ALPR Version

One of the more interesting discoveries was that the node-openalpr version I was using is both out of date and not nearly as powerful as OpenALPR’s proprietary solution. While the open source requirement was certainly a factor, it was amazing how the commercial cloud version could successfully read frames on which I couldn’t even identify a plate.

ALPR Country Training Data

I also found that the main node-openalpr package defaults to US plate processing with no way of overriding it. You have to pull down someone else’s fork, which then allows you to provide an extra country parameter, as sketched below.
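For illustration only, since the exact signature varies by fork, treat the extra country argument here as an assumption rather than a published API:

```js
const openalpr = require("node-openalpr");

openalpr.Start();

// Assumed fork behaviour: an extra country argument ("us", "au",
// "auwide"); the stock package hard-codes US processing
openalpr.IdentifyLicense("dashcam-frame.jpg", "au", (error, output) => {
  if (error) throw error;
  output.results.forEach((r) => console.log(r.plate, r.confidence));
  openalpr.Stop();
});
```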

Slimline Australian plates need their own country detection, separate from regular Australian plates?

But this doesn’t always help. Using the default US algorithm, I was able to produce the most results. Specifying the Australian data set actually halved the number of successful plate reads, and it only managed to find one or two that the US algorithm couldn’t. Providing the separate “Australian Wide Plate” set halved the count again and introduced a single extra plate.

There is clearly a lot to be desired when it comes to Australian-based data sets for ALPR, and I think that the sheer number of plate styles available in Victoria is a contributing factor.

Good luck with that.

Planar Warps

Open ALPR comes with one particular tool to reduce the impact of distortion from both the camera angle and rolling shutter issues. Planar warp refers to a method in which coordinates are passed to the library to skew, translate, and rotate an image until it closely resembles a straight-on plate.

In my limited testing experience, I wasn’t able to find a planar warp that worked at all speeds. When you consider rolling shutter, it makes sense that the distortion grows relative to vehicle speed. I would imagine feeding accelerometer or GPS speed data as a coefficient might work. Or, you know, get a camera that isn’t completely rubbish.
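For reference, the warp is driven by a prewarp entry in openalpr.conf, and the OpenALPR repository ships a calibration utility (openalpr-utils-calibrate) that generates the string interactively. The values below are illustrative placeholders only, and the parameter ordering is worth checking against your OpenALPR version:

```
# openalpr.conf: illustrative placeholder values only; generate a
# real string with openalpr-utils-calibrate for your camera setup
prewarp = planar,1280.000000,720.000000,0.000850,0.000750,0.080000,0.950000,1.103000,0.000000,0.000000
```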

What others are doing in the industry

Numerous readers reached out after the last post to share their own experiences and ideas. Perhaps one of the more interesting solutions shared with me was by Auror in New Zealand.

They employ fixed ALPR cameras in petrol stations to report on people stealing petrol. This in itself is not particularly new and revolutionary. But when coupled with their network, they can automatically raise an alert when known offenders have returned, or are targeting petrol stations in the area.

Independent developers in Israel, South Africa, and Argentina have shown interest in building their own hacked-together versions of BlueNet. Some will probably fare better than others, as places like Israel use seven-digit license plates with no alphabetic characters.

Key Takeaways

There is simply too much that I’ve learned in the last few weeks of dabbling to fit into one post. While there have been plenty of detractors, I really do appreciate the support and knowledge that has been sent my way.

There are a lot of challenges you will face in trying to build your own ALPR solution, but thankfully a lot of them are solved problems.

To put things in perspective, I’m a designer and front-end developer. I’ve spent about ten hours on footage and code, another eight on video production, and at least another ten on the write-ups alone. I’ve achieved what I have by standing on the shoulders of giants: installing libraries built by intelligent people and leveraging advice from people who sell these cameras for a living.

The $86 million question still remains — if you can build a half-arsed solution that does an okay job by standing on the shoulders of giants, how much more money should you pour in to do a really really good job?

My solution is not even in the same solar system as the 99.999% accurate scanner that some internet commenters seem to expect. But then again, BlueNet only has to meet a 95% accuracy target.

So if $1 million gets you to 80% accuracy, and maybe $10 million gets you to 90% accuracy — when do you stop spending? Furthermore, considering that the technology has proven commercial applications here in Oceania, how much more taxpayer money should be poured into a proprietary, closed-source solution when local startups could benefit? Australia is supposed to be an “innovation nation”, after all.