BoofCV: Real Time Computer Vision in Java

23 Jun 2014
An introduction to BoofCV and an example of how to use it

Introduction 

BoofCV is a new real-time computer vision library written in Java.  Written from scratch for ease of use and high performance, it provides a range of functionality from low-level image processing and wavelet denoising to higher-level 3D geometric vision.  It is released under an Apache license for both academic and commercial use.  BoofCV's speed has been demonstrated in a couple of comparative studies against other popular computer vision libraries (link).
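
To give a taste of the low-level API before diving into feature association, the sketch below loads an image, converts it into BoofCV's single-band floating point format, and applies a Gaussian blur.  This is a minimal sketch rather than code from the article's example; it assumes BlurImageOps.gaussian follows the alpha API's (sigma, radius) parameter convention, and it reuses the kayak image that appears in the association example later on.

	// load an image from disk (same test image used in the association example below)
	BufferedImage buffered = UtilImageIO.loadImage("../data/evaluation/stitch/kayak_01.jpg");

	// convert it into BoofCV's single-band 32-bit floating point image type
	ImageFloat32 gray = ConvertBufferedImage.convertFromSingle(buffered, null, ImageFloat32.class);

	// smooth the image with a Gaussian kernel (sigma = 2, radius = 5);
	// the last argument is optional work space storage
	ImageFloat32 blurred = new ImageFloat32(gray.width, gray.height);
	BlurImageOps.gaussian(gray, blurred, 2, 5, null);

	// 'blurred' now holds the result and could be converted back to a
	// BufferedImage for display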

To demonstrate BoofCV's API, an example of how to associate point features between two images is shown below.  Image association is a vital component in creating image mosaics, image stabilization, visual odometry, 3D structure estimation, and many other applications. 

BoofCV's website contains numerous examples and several tutorials.  You can run a multitude of Java applets in your web browser to see its features before installing.  BoofCV also has a YouTube channel explaining different concepts and examples.

Website: http://boofcv.org
Version: Alpha v0.17
Date: June 19, 2014
Author: Peter Abeles   

BoofCV's example gallery covers, among other topics: Binary Image Processing, Image Registration and Model Fitting, Interest Point Detection, Camera Calibration, Stereo Vision, Superpixels, Dense Optical Flow, Object Tracking, and Visual Odometry.  There are also video tutorials and demonstrations.

Image Registration Example  

BoofCV provides several different ways to register images.  Most of them fall under the category of interest points.  In this context, an interest point is a feature inside the image which can be easily and repeatedly recognized across multiple images of the same scene taken from different points of view.  If Java is set up in your browser, you can see feature association in action by taking a look at this applet:

 Feature Association:  http://boofcv.org/index.php?title=Applet_Associate_Points

In the example below, the two images are registered to each other in several steps:

  1. Detect interest points
  2. Describe interest points 
  3. Associate image features  

In the block of code below, the class is defined and several objects are passed into it.  These classes are abstract interfaces which allow different algorithms to be swapped in for each other, and new ones can easily be added in the future.  While not shown in this example, the non-abstracted code is also easy to work with when high performance is required over ease of development.

public class ExampleAssociatePoints<T extends ImageSingleBand, TD extends TupleDesc> {

	// algorithm used to detect and describe interest points
	DetectDescribePoint<T, TD> detDesc;
	// associates descriptions together by minimizing an error metric
	AssociateDescription<TD> associate;

	// location of interest points
	public List<Point2D_F64> pointsA;
	public List<Point2D_F64> pointsB;

	Class<T> imageType;

	public ExampleAssociatePoints(DetectDescribePoint<T, TD> detDesc,
				AssociateDescription<TD> associate,
				Class<T> imageType) {
		this.detDesc = detDesc;
		this.associate = associate;
		this.imageType = imageType;
	}

Below is the meat of the code.  Two images are passed in and then:

  1. the images are converted into types that BoofCV can process,
  2. interest points are detected,
  3. descriptors are extracted,
  4. features are associated, and
  5. the results are displayed.

All in just a few lines of code.  Note that T is a generic type; see the example code.

	/**
	 * Detect and associate point features in the two images.  Display the results.
	 */
	public void associate( BufferedImage imageA , BufferedImage imageB )
	{
		T inputA = ConvertBufferedImage.convertFromSingle(imageA, null, imageType);
		T inputB = ConvertBufferedImage.convertFromSingle(imageB, null, imageType);

		// stores the location of detected interest points
		pointsA = new ArrayList<Point2D_F64>();
		pointsB = new ArrayList<Point2D_F64>();

		// stores the description of detected interest points
		FastQueue<TD> descA = UtilFeature.createQueue(detDesc,100);
		FastQueue<TD> descB = UtilFeature.createQueue(detDesc,100);

		// describe each image using interest points
		describeImage(inputA,pointsA,descA);
		describeImage(inputB,pointsB,descB);

		// Associate features between the two images
		associate.setSource(descA);
		associate.setDestination(descB);
		associate.associate();

		// display the results
		AssociationPanel panel = new AssociationPanel(20);
		panel.setAssociation(pointsA,pointsB,associate.getMatches());
		panel.setImages(imageA,imageB);

		ShowImages.showWindow(panel,"Associated Features");
	}
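
The display panel above consumes associate.getMatches() directly, but the matches can also be read back for further processing, for example to feed a model-fitting step when building a mosaic.  Below is a minimal sketch, assuming the standard AssociatedIndex fields src, dst, and fitScore; each match indexes into pointsA and pointsB.

	// walk through the list of matches and print the corresponding point pairs
	FastQueue<AssociatedIndex> matches = associate.getMatches();

	for( int i = 0; i < matches.size; i++ ) {
		AssociatedIndex m = matches.get(i);
		Point2D_F64 a = pointsA.get(m.src);
		Point2D_F64 b = pointsB.get(m.dst);
		System.out.println(a + " <--> " + b + "  fit score = " + m.fitScore);
	}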

Both images are described using a set of feature descriptors.  For each detected interest point, a feature descriptor is extracted.

	/**
	 * Detects features inside the two images and computes descriptions at those points.
	 */
	private void describeImage(T input, List<Point2D_F64> points, FastQueue<TD> descs )
	{
		detDesc.detect(input);

		for( int i = 0; i < detDesc.getNumberOfFeatures(); i++ ) {
			points.add( detDesc.getLocation(i).copy() );
			descs.grow().setTo(detDesc.getDescription(i));
		}
	}

Below is the main function that invokes everything.  It specifies the images to process, the image format, and which algorithms to use.

	public static void main( String args[] ) {

		Class imageType = ImageFloat32.class;

		// select which algorithms to use
		DetectDescribePoint detDesc = FactoryDetectDescribe.surfStable(
				new ConfigFastHessian(1, 2, 200, 1, 9, 4, 4), null,null, imageType);

		ScoreAssociation scorer = FactoryAssociation.defaultScore(detDesc.getDescriptionType());
		AssociateDescription associate = FactoryAssociation.greedy(scorer, Double.MAX_VALUE, true);

		// load and match images
		ExampleAssociatePoints app = new ExampleAssociatePoints(detDesc,associate,imageType);

		BufferedImage imageA = UtilImageIO.loadImage("../data/evaluation/stitch/kayak_01.jpg");
		BufferedImage imageB = UtilImageIO.loadImage("../data/evaluation/stitch/kayak_03.jpg");

		app.associate(imageA,imageB);
	}
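
Because DetectDescribePoint and AssociateDescription are abstract interfaces, a different detector/descriptor can be dropped into main() without touching the rest of the class.  The sketch below is a hypothetical variation, assuming FactoryDetectDescribe.surfFast accepts the same style of arguments as surfStable, with nulls selecting default configurations.

	// swap the stable SURF implementation for the faster, less robust one
	DetectDescribePoint detDesc = FactoryDetectDescribe.surfFast(
			new ConfigFastHessian(1, 2, 200, 1, 9, 4, 4), null, null, imageType);

	// the scorer and association strategy are selected exactly as before
	ScoreAssociation scorer = FactoryAssociation.defaultScore(detDesc.getDescriptionType());
	AssociateDescription associate = FactoryAssociation.greedy(scorer, Double.MAX_VALUE, true);

	ExampleAssociatePoints app = new ExampleAssociatePoints(detDesc, associate, imageType);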

(Image: example_matching_mountain.jpg)

The image above shows pairs of detected and associated interest points in two images taken at different orientations.  That's it for now!

License

This article, along with any associated source code and files, is licensed under The Apache License, Version 2.0

About the Author

lessthanoptimal

United States
Peter Abeles is a researcher in robotics and computer vision. In addition he is the author of several open source projects which include BoofCV, EJML, and JMatBench. His neglected blog can be found at http://peterabeles.com/blog
