This article, the first in the series, will briefly introduce you to the technology and the tools we’ll use to achieve our goal. It will also explain how to acquire datasets and get all the components working together. Finally, it will discuss the outcome you should expect from this project.
The Internet is full of tutorials about detecting objects in images. But can you apply the same concepts to a live video stream?
Yes, you can! In this article series, we’ll demonstrate how to use AI to determine what’s going on in a live video stream by building a lightning detector that runs in real time on an Android device.
The title of the series – "Live Lightning Detection ..." – sounds emotionally charged. Poetry? Cinema? Or a marketing trick? Let’s translate the "emotion" into technical terms. Live = in a set of consecutive frames that constitute a video; Lightning Detection = nothing but object detection, using Deep Learning (DL) and TensorFlow (TF), implemented in an app running on the Android OS. Does that sound more up your alley? Then let’s start.
The term "Artificial Intelligence" essentially stands for natural (human) intelligence implemented in a machine. How can we make a machine think like a human? We can train our machines to learn and adapt, much as the human brain does, using a neural network as part of Machine Learning (ML). ML can learn patterns on its own, even from unsupervised, unstructured, unlabeled data. DL is a subset of ML. It works in much the same way, but uses artificial neural networks with many layers to learn a deep hierarchy of data representations.
"There is nothing like Artificial Intelligence, it’s just a way of showing, how far, a computer can be the smartest." – Stephen Gary (Steve) Wozniak (Woz).
To reach our goal – detecting live lightning in a video – we have to make a DL model "think" like a human brain: learn, adapt, execute, and repeat. This, in turn, requires that we train our model on good data – in this case, clear, non-blurry images of lightning. Let’s start by collecting the images required to train our model. I did a simple Google search to gather some 300 images to start with. You can create your own dataset this way. Alternatively, you can reference an available dataset from any open source or paid platform – such as this one, for example.
We’ll train our model on the assembled dataset using Teachable Machine. Have a look at the tutorial to see how the training is done. There is also a video that might help you along.
If you are comfortable with the TF framework, you are welcome to create your own model using a neural network with Python.
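If you go that route, a small convolutional classifier is a reasonable starting point. The sketch below is a minimal example in TensorFlow/Keras; the layer sizes, the 224×224 input shape, and the two-class "lightning" / "no lightning" setup are assumptions for illustration, not values prescribed by this series.

```python
import tensorflow as tf

def build_model(input_shape=(224, 224, 3)):
    """A small CNN that classifies an image as lightning / no lightning."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        # Two output classes: "lightning" and "no lightning"
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
```

You would then train this model on your assembled dataset with `model.fit(...)`; Teachable Machine simply automates steps like these behind a GUI.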
In this project, we are targeting the Android OS to showcase our DL-trained TF model converted to TFLite. We’ll use Android Studio to develop our app, which will host or reference all the necessary libraries.
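The TF-to-TFLite conversion step can be done with the converter that ships with TensorFlow. Below is a minimal sketch: the tiny placeholder model and the `lightning.tflite` file name are illustrative assumptions, not artifacts from this series.

```python
import tensorflow as tf

# Placeholder Keras model standing in for the trained lightning classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Convert the in-memory Keras model to the TFLite flatbuffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/latency optimization
tflite_model = converter.convert()

# Write the model out; the Android app bundles this file as an asset.
with open("lightning.tflite", "wb") as f:
    f.write(tflite_model)
```

On the Android side, the app loads the `.tflite` file through the TensorFlow Lite runtime library referenced in the project's Gradle dependencies.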
Download Android Studio. If you run into errors while setting up the IDE, there are plenty of online resources you can turn to for guidance, such as discussion forums and the Android Dev subreddit.
As a result of this project, we’ll have a model capable of detecting lightning in the live stream from a camera. You will see how DL works on images in practice, in near real time, using a neural network. You will also learn how to develop an Android app that showcases the TF model in operation.
In the next article, we’ll go over the steps required to train and export our model. Stay tuned!