So far in the series, we learned how to detect people and find bounding boxes that indicate their locations. In general, you can estimate distances using the closest vertices of the bounding boxes, but to keep things simple, I use the centers of the bounding boxes, as shown in the figure below. I then calculate the distance between them using the Euclidean distance formula in the plane.
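For reference, the Euclidean distance between two such center points can be sketched as follows (the `euclidean_distance` helper name is my own, not part of the series' code):

```python
import math

def euclidean_distance(point_a, point_b):
    # Distance between two (x, y) points in the plane:
    # sqrt((x2 - x1)^2 + (y2 - y1)^2)
    return math.sqrt((point_b[0] - point_a[0]) ** 2
                     + (point_b[1] - point_a[1]) ** 2)

print(euclidean_distance((0, 0), (3, 4)))  # → 5.0
```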
Calculating the Center of a Rectangle
Remember that our application returns a list of detected objects. Each element in that list provides the label, rectangle (bounding box), and recognition score. Here, we use the rectangle. It is represented by two points: the top-left and bottom-right corners, each given by an x and y coordinate in the image plane.
To calculate the rectangle's center, we calculate its width and height and then divide each by 2. I implemented this functionality within the get_rectangle_center method of the DistanceAnalyzer class (see the distance_analyzer.py file in the Part_06 folder):
top_left_corner = rectangle[0]
bottom_right_corner = rectangle[1]

width = bottom_right_corner[0] - top_left_corner[0]
height = bottom_right_corner[1] - top_left_corner[1]

center = (int(width / 2 + top_left_corner[0]), int(height / 2 + top_left_corner[1]))

return center
As explained above, the function recovers the top-left and bottom-right corners and then performs the calculations. Given the get_rectangle_center function, I added another one, get_rectangle_centers, that iterates over the list of detection results:
rectangle_centers = []

for i in range(len(detection_results)):
    rectangle = detection_results[i]['rectangle']

    center = DistanceAnalyzer.get_rectangle_center(rectangle)
    rectangle_centers.append(center)

return rectangle_centers
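As a quick sanity check, the two helpers can be exercised with a mock detection list. The dictionary layout with a 'rectangle' key mirrors the description above; the standalone functions here are my own sketch of the class methods, not the series' exact code:

```python
def get_rectangle_center(rectangle):
    # rectangle = (top-left (x, y), bottom-right (x, y))
    (x1, y1), (x2, y2) = rectangle
    return (int((x2 - x1) / 2 + x1), int((y2 - y1) / 2 + y1))

def get_rectangle_centers(detection_results):
    # Collect the center of every detected object's bounding box
    return [get_rectangle_center(d['rectangle']) for d in detection_results]

detections = [
    {'label': 'person', 'score': 0.9, 'rectangle': ((10, 20), (110, 220))},
    {'label': 'person', 'score': 0.8, 'rectangle': ((200, 50), (300, 350))},
]
print(get_rectangle_centers(detections))  # → [(60, 120), (250, 200)]
```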
To ensure that centers are calculated correctly, I will use OpenCV to draw those locations on the video sequence frame.
Displaying the Centers of the Bounding Boxes
Given the rectangle centers list, I can draw them on the image using the circle function from OpenCV. This function works similarly to rectangle in that it accepts as parameters the input image, the circle center, radius, color, thickness, and so on.
Here is an example of using the function to draw a yellow circle with a radius of 15 pixels (see common.py from Part_03). I set the thickness to -1 to fill the circle:
def draw_rectangle_centers(image, rectangle_centers):
    for i in range(len(rectangle_centers)):
        # Draw a filled (thickness = -1) yellow circle of radius 15
        # at each center; yellow is (0, 255, 255) in BGR order
        cv2.circle(image, rectangle_centers[i], 15, (0, 255, 255), -1)
The above function is implemented as a static method within the image_helper module (see Part_03/image_helper.py).
Putting Things Together
We are now ready to put everything together. We implement the main.py file as follows:
from inference import Inference as model
from image_helper import ImageHelper as imgHelper
from video_reader import VideoReader as videoReader
from distance_analyzer import DistanceAnalyzer as analyzer
if __name__ == "__main__":
    # Load the AI model
    model_file_path = '../Models/01_model.tflite'
    labels_file_path = '../Models/02_labels.txt'
    ai_model = model(model_file_path, labels_file_path)

    # Initialize the video reader
    video_file_path = '../Videos/01.mp4'
    video_reader = videoReader(video_file_path)

    # Detection and display settings
    score_threshold = 0.4
    delay_between_frames = 5

    # Process the video frame by frame
    while(True):
        frame = video_reader.read_next_frame()
        if(frame is None):
            break

        results = ai_model.detect_people(frame, score_threshold)
        rectangle_centers = analyzer.get_rectangle_centers(results)

        imgHelper.draw_rectangle_centers(frame, rectangle_centers)
        imgHelper.display_image_with_detected_objects(frame, results, delay_between_frames)
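The shape of the per-frame loop, reading until the reader returns None, can be illustrated with a stub reader (StubVideoReader below is my own stand-in for the series' VideoReader class):

```python
class StubVideoReader:
    # Yields a fixed number of fake "frames", then None, mimicking
    # read_next_frame() reaching the end of a video file
    def __init__(self, frame_count):
        self.remaining = frame_count

    def read_next_frame(self):
        if self.remaining == 0:
            return None
        self.remaining -= 1
        return 'frame'

reader = StubVideoReader(3)
processed = 0
while True:
    frame = reader.read_next_frame()
    if frame is None:
        break
    # A real pipeline would run detection and drawing here
    processed += 1

print(processed)  # → 3
```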
After configuring the input paths for the modules developed earlier, we initialize the AI model and perform inference to detect people. Then, the resulting detections are passed to the get_rectangle_centers method of the DistanceAnalyzer class. Given the list of centers, we draw them on the frame from the video file (draw_rectangle_centers) along with the bounding boxes and labels (display_image_with_detected_objects). After running main.py, you will get the results shown above.
In this article, we learned how to calculate the center locations of the people detected in a video sequence. In the next article, we will use those centers to estimate distances between people and indicate people that are too close.
Dawid Borycki is a software engineer and biomedical researcher with extensive experience in Microsoft technologies. He has completed a broad range of challenging projects involving the development of software for device prototypes (mostly medical equipment), embedded device interfacing, and desktop and mobile programming. Borycki is the author of two Microsoft Press books: “Programming for Mixed Reality” (2018) and “Programming for the Internet of Things” (2017).