BoofCV: Real Time Computer Vision in Java





Introduction to and an example of how to use BoofCV
Introduction
BoofCV is a new real-time computer vision library written in Java. Written from scratch for ease of use and high performance, it provides functionality ranging from low-level image processing and wavelet denoising up to high-level 3D geometric vision. It is released under an Apache license for both academic and commercial use. BoofCV's speed has been demonstrated in several comparative studies against other popular computer vision libraries.
To demonstrate BoofCV's API, an example of how to associate point features between two images is shown below. Image association is a vital component in creating image mosaics, image stabilization, visual odometry, 3D structure estimation, and many other applications.
BoofCV's website contains numerous examples and several tutorials. You can run a multitude of Java applets in your web browser to see its features before installing. BoofCV also has a YouTube channel explaining different concepts and examples.
Website: http://boofcv.org
Version: Alpha v0.19
Date: September 21, 2015
Author: Peter Abeles
[Image gallery: Binary Image Processing | Image Registration and Model Fitting | Interest Point Detection | Camera Calibration | Stereo Vision | Superpixels | Dense Optical Flow | Object Tracking | Visual Odometry | Color Histogram Image Retrieval | Scene Classification | Background Modeling/Motion Detection | Black Polygon Detector]
Video Tutorials and Demonstrations
Image Registration Example
BoofCV provides several different ways to register images. Most of them fall under the category of interest points. In this context, an interest point is a feature inside an image which can be easily and repeatably recognized across multiple images of the same scene taken from different points of view. If Java is set up in your browser, you can see feature association in action by taking a look at this applet:
Feature Association: http://boofcv.org/index.php?title=Applet_Associate_Points
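To make the idea of an interest point concrete, here is a minimal, self-contained sketch (plain Java, not BoofCV code) of one ingredient every interest point detector shares: non-maximum suppression, which keeps only pixels whose detector response is a strict local maximum above a threshold. The grid values and threshold are made up for illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class ToyNonMaxSuppression {

    /** Returns {x,y} coordinates of strict local maxima above 'threshold'. */
    static List<int[]> localMaxima(float[][] response, float threshold) {
        List<int[]> peaks = new ArrayList<>();
        for (int y = 1; y < response.length - 1; y++) {
            for (int x = 1; x < response[0].length - 1; x++) {
                float v = response[y][x];
                if (v <= threshold) continue;
                boolean isMax = true;
                // compare against the 8-connected neighborhood
                for (int dy = -1; dy <= 1 && isMax; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        if ((dx != 0 || dy != 0) && response[y + dy][x + dx] >= v) {
                            isMax = false;
                            break;
                        }
                if (isMax) peaks.add(new int[]{x, y});
            }
        }
        return peaks;
    }

    public static void main(String[] args) {
        // a tiny synthetic response map with two peaks
        float[][] response = {
            {0, 0, 0, 0, 0},
            {0, 9, 0, 0, 0},
            {0, 0, 0, 7, 0},
            {0, 0, 0, 0, 0},
        };
        for (int[] p : localMaxima(response, 1.0f))
            System.out.println(p[0] + "," + p[1]);  // prints "1,1" then "3,2"
    }
}
```

Real detectors such as BoofCV's Fast Hessian compute a much more sophisticated response (and refine locations to sub-pixel accuracy), but the "keep only strong local maxima" step is the same in spirit.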
In the example below, the two images are registered to each other in several steps:
- Detect interest points
- Describe interest points
- Associate image features
In the block of code below, the class is defined and several objects are passed in. These classes are abstract interfaces which allow different algorithms to be swapped in for each other, and new ones can easily be added in the future. While not shown in this example, the non-abstracted code is also easy to work with when high performance matters more than ease of development.
public class ExampleAssociatePoints<T extends ImageSingleBand, TD extends TupleDesc> {

    // algorithm used to detect and describe interest points
    DetectDescribePoint<T, TD> detDesc;
    // associates descriptions together by minimizing an error metric
    AssociateDescription<TD> associate;

    // location of interest points
    public List<Point2D_F64> pointsA;
    public List<Point2D_F64> pointsB;

    Class<T> imageType;

    public ExampleAssociatePoints(DetectDescribePoint<T, TD> detDesc,
                                  AssociateDescription<TD> associate,
                                  Class<T> imageType) {
        this.detDesc = detDesc;
        this.associate = associate;
        this.imageType = imageType;
    }
Below is the meat of the code. Here two images are passed in, then:
- they are converted into image types that BoofCV can process,
- interest points are detected,
- descriptors are extracted,
- features are associated, and
- the results are displayed.
All with a few lines of code. Note that T is a generic type, see the example code.
    /**
     * Detect and associate point features in the two images. Display the results.
     */
    public void associate( BufferedImage imageA , BufferedImage imageB )
    {
        T inputA = ConvertBufferedImage.convertFromSingle(imageA, null, imageType);
        T inputB = ConvertBufferedImage.convertFromSingle(imageB, null, imageType);

        // stores the location of detected interest points
        pointsA = new ArrayList<Point2D_F64>();
        pointsB = new ArrayList<Point2D_F64>();

        // stores the description of detected interest points
        FastQueue<TD> descA = UtilFeature.createQueue(detDesc,100);
        FastQueue<TD> descB = UtilFeature.createQueue(detDesc,100);

        // describe each image using interest points
        describeImage(inputA,pointsA,descA);
        describeImage(inputB,pointsB,descB);

        // associate features between the two images
        associate.setSource(descA);
        associate.setDestination(descB);
        associate.associate();

        // display the results
        AssociationPanel panel = new AssociationPanel(20);
        panel.setAssociation(pointsA,pointsB,associate.getMatches());
        panel.setImages(imageA,imageB);

        ShowImages.showWindow(panel,"Associated Features");
    }
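The greedy association used above pairs each source descriptor with the destination descriptor that minimizes an error metric. A minimal, self-contained sketch of that idea (plain Java stand-in, not BoofCV's AssociateDescription, using squared Euclidean distance as the metric):

```java
import java.util.ArrayList;
import java.util.List;

public class ToyGreedyAssociate {

    /** For each source descriptor, returns the index of the closest destination descriptor. */
    static int[] associate(List<float[]> src, List<float[]> dst) {
        int[] matches = new int[src.size()];
        for (int i = 0; i < src.size(); i++) {
            double best = Double.MAX_VALUE;
            int bestIdx = -1;
            for (int j = 0; j < dst.size(); j++) {
                double d = distanceSq(src.get(i), dst.get(j));
                if (d < best) { best = d; bestIdx = j; }
            }
            matches[i] = bestIdx;
        }
        return matches;
    }

    /** Squared Euclidean distance -- the error metric being minimized. */
    static double distanceSq(float[] a, float[] b) {
        double sum = 0;
        for (int k = 0; k < a.length; k++) {
            double diff = a[k] - b[k];
            sum += diff * diff;
        }
        return sum;
    }

    public static void main(String[] args) {
        List<float[]> src = new ArrayList<>();
        src.add(new float[]{1, 0});
        src.add(new float[]{0, 1});
        List<float[]> dst = new ArrayList<>();
        dst.add(new float[]{0, 0.9f});  // closest to src[1]
        dst.add(new float[]{1.1f, 0});  // closest to src[0]
        int[] m = associate(src, dst);
        System.out.println(m[0] + " " + m[1]);  // prints "1 0"
    }
}
```

BoofCV's FactoryAssociation.greedy() builds on the same brute-force idea, but adds options such as a maximum error threshold and backwards validation (requiring the match to be mutual), which is what the `true` flag in the main function below enables.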
Both images are described using a set of feature descriptors. For each detected interest point a feature descriptor is extracted.
    /**
     * Detects features inside the two images and computes descriptions at those points.
     */
    private void describeImage(T input, List<Point2D_F64> points, FastQueue<TD> descs )
    {
        detDesc.detect(input);

        for( int i = 0; i < detDesc.getNumberOfFeatures(); i++ ) {
            points.add( detDesc.getLocation(i).copy() );
            descs.grow().setTo(detDesc.getDescription(i));
        }
    }
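To get a feel for what a descriptor is, here is a toy stand-in (plain Java, nothing like SURF's actual construction): flatten the 3x3 pixel patch around an interest point into a vector and normalize it, so that comparing descriptors is insensitive to uniform brightness scaling. The patch size and normalization are illustrative choices only.

```java
public class ToyPatchDescriptor {

    /** Extracts a normalized 3x3 patch around (cx,cy) as a 9-element descriptor. */
    static float[] describe(float[][] image, int cx, int cy) {
        float[] desc = new float[9];
        int i = 0;
        double norm = 0;
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++) {
                desc[i] = image[cy + dy][cx + dx];
                norm += desc[i] * desc[i];
                i++;
            }
        norm = Math.sqrt(norm);
        // normalize to unit length so uniform brightness changes cancel out
        if (norm > 0)
            for (int k = 0; k < 9; k++) desc[k] /= norm;
        return desc;
    }

    public static void main(String[] args) {
        float[][] image = {
            {1, 2, 3},
            {4, 5, 6},
            {7, 8, 9},
        };
        float[] d = describe(image, 1, 1);
        double sum = 0;
        for (float v : d) sum += v * v;
        System.out.println(Math.abs(sum - 1.0) < 1e-6 ? "unit length" : "not normalized");  // prints "unit length"
    }
}
```

SURF descriptors, by contrast, summarize gradient responses over a much larger oriented region, which makes them robust to rotation and scale changes as well, but the output is the same kind of object: a fixed-length vector that can be compared with a distance metric.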
Below is the main function that invokes everything. It specifies the image to process, the image format, and which algorithms to use.
    public static void main( String args[] ) {
        Class imageType = ImageFloat32.class;

        // select which algorithms to use
        DetectDescribePoint detDesc = FactoryDetectDescribe.surfStable(
                new ConfigFastHessian(1, 2, 200, 1, 9, 4, 4), null,null, imageType);
        ScoreAssociation scorer = FactoryAssociation.defaultScore(detDesc.getDescriptionType());
        AssociateDescription associate = FactoryAssociation.greedy(scorer, Double.MAX_VALUE, true);

        // load and match images
        ExampleAssociatePoints app = new ExampleAssociatePoints(detDesc,associate,imageType);

        BufferedImage imageA = UtilImageIO.loadImage("../data/evaluation/stitch/kayak_01.jpg");
        BufferedImage imageB = UtilImageIO.loadImage("../data/evaluation/stitch/kayak_03.jpg");

        app.associate(imageA,imageB);
    }
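In practice, not every association is trustworthy, and applications often prune weak matches before using them for mosaicing or motion estimation. A toy sketch of score-based pruning (plain Java; the Match record here is hypothetical, standing in for the source index, destination index, and fit score that an association step produces):

```java
import java.util.ArrayList;
import java.util.List;

public class ToyMatchPruning {

    /** Hypothetical match record: source index, destination index, fit error. */
    record Match(int src, int dst, double fitScore) {}

    /** Keeps matches whose error is within 'factor' times the best (smallest) error. */
    static List<Match> prune(List<Match> matches, double factor) {
        double best = Double.MAX_VALUE;
        for (Match m : matches) best = Math.min(best, m.fitScore);
        List<Match> kept = new ArrayList<>();
        for (Match m : matches)
            if (m.fitScore <= best * factor) kept.add(m);
        return kept;
    }

    public static void main(String[] args) {
        List<Match> matches = List.of(
            new Match(0, 1, 0.10),
            new Match(1, 0, 0.12),
            new Match(2, 2, 2.50));  // large error, likely a false match
        System.out.println(prune(matches, 3.0).size());  // prints "2"
    }
}
```

More robust approaches fit a geometric model (such as a homography) to the matches and reject outliers, which BoofCV's model fitting tools also support.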
The image above shows pairs of detected and associated interest points inside two images taken at different orientations. That's it for now!