Monocular Navigation: Part II





This is about local map building.
Introduction
In this part, I want to talk about some important algorithms in monocular navigation. They’re divided into four large categories: obstacle detection (OD), local map building (LMB), motion planning (MP), and additional functions (AF). In the first part, I wrote a little bit about obstacle detection algorithms. Here you will find the theory of local map building.
Local Map Building
After detecting obstacles, we have a black-and-white picture in which black pixels mark obstacles and white pixels mark the ground.
Here, I used the H-method for obstacle detection, because the ground is textured. Now, how do we build a local map? As you can see, the first black pixel from the bottom of each image column is certainly an obstacle. Let's call this line of first black pixels the FBP. If we know some camera parameters, we can find the distance to each of these points. To compute the distance to such a point from the captured picture, we need:
- Camera height (above the ground)
- FOVy – vertical field of view of the camera (in degrees)
- FOVx – horizontal field of view of the camera (usually FOVy = FOVx)
- Camera angle (in degrees, how far the camera is tilted down)
Here is the procedure for transforming screen coordinates into real-world coordinates:
// Converts the screen coordinates (ScrX, ScrY) of a ground/obstacle contact
// point into real-world coordinates (X, Y) relative to the robot.
// FOVx, FOVy and CamAngle are given in degrees, so they are converted to
// radians before being passed to tan().
procedure tNavigator.GetDistance(ScrX, ScrY: integer; var X, Y: integer);
var
  fx, fy, v, omega, cam, d, u: real;
begin
  // Focal lengths in pixels, derived from the image size and the field of view
  fy := ((maxHeight + 1) / 2) / tan((RobotParams.FOVy * PI / 180) / 2);
  fx := ((maxWidth + 1) / 2) / tan((RobotParams.FOVx * PI / 180) / 2);
  cam := RobotParams.CamAngle * PI / 180;

  if ((maxHeight + 1) / 2) > ScrY then
  begin
    // Screen row above the image midline
    v := (maxHeight + 1) / 2 - ScrY;
    omega := arctan(v / fy);
    d := RobotParams.CamHeight / tan(omega + cam);
    Y := round(d);
    u := ScrX - ((maxWidth + 1) / 2);
    X := round((u / fx) * d);
  end
  else if (((maxHeight + 1) / 2) = ScrY) and (RobotParams.CamAngle <> 0) then
  begin
    // Screen row exactly on the image midline: the ray follows the optical axis
    d := RobotParams.CamHeight * tan(PI / 2 - cam);
    Y := round(d);
    u := ScrX - ((maxWidth + 1) / 2);
    X := round((u / fx) * d);
  end
  else
  begin
    // Screen row below the image midline
    v := (maxHeight + 1) / 2 - ScrY;
    omega := arctan(v / fy);
    if (omega + cam) = 0 then
      Exit; // the ray is parallel to the ground and never reaches it
    d := RobotParams.CamHeight / tan(omega + cam);
    Y := round(d);
    u := ScrX - ((maxWidth + 1) / 2);
    X := round((u / fx) * d);
  end;
end;
After transforming each pixel of the FBP into real-world coordinates, we can store them in a local map array and draw the map on the screen.
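As an illustration, here is a minimal sketch of that scan. It assumes an ObstacleImage array holding the detection result (True = obstacle pixel), a square LocalMap occupancy array of MAP_SIZE x MAP_SIZE cells centered on the robot, and FbpX/FbpY arrays that remember the projected point of each image column for drawing later; all of these names, the one-cell-per-distance-unit scale, and the top-down row order are my assumptions, not part of the original program.

// Sketch of the FBP scan and map filling. ObstacleImage, LocalMap, FbpX,
// FbpY and MAP_SIZE are assumed names; one map cell corresponds to one unit
// of RobotParams.CamHeight, and image row 0 is the top of the picture.
procedure tNavigator.BuildLocalMap;
var
  ScrX, ScrY, X, Y: integer;
begin
  // Clear the previous map
  for X := 0 to MAP_SIZE - 1 do
    for Y := 0 to MAP_SIZE - 1 do
      LocalMap[X, Y] := False;

  for ScrX := 0 to maxWidth do
  begin
    FbpX[ScrX] := -1;
    FbpY[ScrX] := -1;
    // Walk the column from the bottom up; the first black pixel is the FBP
    for ScrY := maxHeight downto 0 do
      if ObstacleImage[ScrX, ScrY] then
      begin
        Y := -1; // GetDistance leaves X, Y unchanged for points on the horizon
        GetDistance(ScrX, ScrY, X, Y);
        X := X + MAP_SIZE div 2; // put the robot in the middle of the map
        if (X >= 0) and (X < MAP_SIZE) and (Y >= 0) and (Y < MAP_SIZE) then
        begin
          LocalMap[X, Y] := True;
          FbpX[ScrX] := X;
          FbpY[ScrX] := Y;
        end;
        Break; // only the first obstacle in each column is needed
      end;
  end;
end;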
When drawing such a map, we should check whether the pixels from the captured image are connected to each other. If they are, we should draw a line between them on the local map. Two neighboring pixels in the captured image usually belong to one object in the real world, but the GetDistance procedure may place them far apart on the map; connecting them with a line preserves the continuity of the real-world scene. It is also necessary to erase noise from the picture. For example, there are usually many isolated one- or two-pixel fragments that aren't connected to anything else. Of course, they can be real obstacles, but they are too small to prevent the robot from moving, so we can erase all isolated 1-, 2-, 3-, 4- or even 5-pixel fragments.
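The drawing step could look like the sketch below. It assumes the projected FBP points were collected per image column (the hypothetical FbpX/FbpY arrays from the BuildLocalMap sketch above) and that the map is rendered onto a TBitmap called MapBitmap with one pixel per map cell; these names are my assumptions, not part of the original program.

// Sketch of drawing the local map, assuming FbpX[c], FbpY[c] hold the
// projected FBP point of image column c (-1 where the column has no
// obstacle) and MapBitmap is a TBitmap with one pixel per map cell.
procedure tNavigator.DrawLocalMap;
var
  ScrX: integer;
begin
  MapBitmap.Canvas.Pen.Color := clBlack;
  for ScrX := 0 to maxWidth do
    if FbpY[ScrX] >= 0 then
    begin
      if (ScrX > 0) and (FbpY[ScrX - 1] >= 0) then
      begin
        // Neighboring image columns usually belong to one real object,
        // so connect their projections instead of leaving isolated dots
        MapBitmap.Canvas.MoveTo(FbpX[ScrX - 1], FbpY[ScrX - 1]);
        MapBitmap.Canvas.LineTo(FbpX[ScrX], FbpY[ScrX]);
      end
      else
        // A point whose left neighbor is missing starts a new segment
        MapBitmap.Canvas.Pixels[FbpX[ScrX], FbpY[ScrX]] := clBlack;
    end;
end;

The noise erasing itself is best applied to the obstacle image before the FBP scan, for example with a connected-component pass that drops isolated regions of five pixels or fewer.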
Conclusion
Now you know how to build a local map from a captured picture. In the next part, I will describe the construction of my robot "MTR-1" and the results of experiments with it.