I am in the process of developing a game for my Android device. Needless to say, since I wanted to reach the broadest possible audience, I omitted hardware acceleration; as a result, I needed to write the fastest possible code that would run on most devices. I started with a simple SurfaceView
and used a worker thread to perform all operations on the view. I then ran into a snag during testing - my collision detection algorithm was reporting false positives when image borders overlapped, even though the images' opaque pixels were not overlapping.
In this article, I assume that you already know what collision detection entails and that you are comfortable writing Java code. I also assume a basic knowledge of threads and other UI elements. You can get away with knowing only
the basics, but a definite requirement is a comfortable understanding of how images are laid out in memory, i.e., what a bitmap looks like to a computer.
Using the code
A seasoned Java developer could codify the core concepts presented in this article within 15 minutes, assuming the framework (the game and collision detection mechanics) is already in place. Most other developers, from beginner to intermediate,
should be able to codify the core elements of this article within 1.5 hours - not bad when you consider the break from popular approaches.
What algorithms exist already?
Before I present the algorithm, which is fairly simple, I would like to take a moment to go over the most popular methods for testing collisions using only software (no hardware acceleration).
- Plain Old Collision Detection - bitmap bounding rectangles are tested iteratively; when they intersect, a collision is reported.
- Border Optimized Collision Detection - bitmaps are given a "border" to shrink the rectangles that are tested for intersection; when the shrunken regions intersect, a collision is reported.
- Per-pixel Collision Detection - bitmap boundaries are tested for intersection; when an intersection occurs, each pixel of the two bitmaps is tested to see whether opaque pixels overlap - if they do, a collision is reported.
As you can see, the average developer has a decent set of choices when determining which algorithm to implement, but each algorithm has its drawbacks. Let's see what potential problems could arise.
- Plain Old Collision Detection - Let's face it, this method only works reliably when your bitmaps are rectangular and have no transparent regions at the edges of the image. It is the fastest method to implement (and the easiest to understand), but the results are horrible.
- Border Optimized Collision Detection - This method is slightly more reliable when regular geometric shapes are used, but extra code is needed to differentiate between the bitmap's dimensions and the shrunken image it contains. It is easy to implement, but the results are not spectacular - expect false positives and missed collisions.
- Per-pixel Collision Detection - This method is the de facto standard for most modern games, but it requires hardware acceleration to avoid slowing the game to a crawl. There is a steep learning curve associated with coding close to any hardware, although libraries such as OpenGL have made this easier. No false positives; requires intermediate to advanced skill.
As you can see, there are pros and cons to each approach, but any developer who has reservations about getting close to the hardware, or who simply does not want to climb the learning curve of a hardware library, can rest assured that there is a way to test individual pixels while preserving a decent frame rate.
Presenting the algorithm
I call my method the n-Distributed Per-pixel Collision Detection algorithm. Other names exist for it (Dithered Collision Detection, n-Sampled Collision Detection, etc.),
but they are either poorly documented or adapted to very specific scientific applications that most novices would not want to wade through. My method is very simple to implement.
We start by writing a very simple Plain Old Collision Detection pass (it could be swapped out for Border Optimized Collision Detection, but that makes the code harder
to read and does not provide much speed benefit for general applications). Then, instead of reporting a collision when we find an intersecting region, we report
an "intersect-rect" - a rectangular region which describes the dimensions of the overlapping region of both bitmaps. Clever coders would simply return
two rect structures (one for each bitmap), each describing a sub-region of the respective original bitmap.
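As a sketch, here is one possible shape for that intersect-rect computation. The simple Rect class and its field names are my own assumptions for illustration, not the article's actual framework:

```java
// A minimal Rect with one possible intersect_rect: fills 'out' with the
// overlapping region of rectangles a and b, and reports whether the two
// rectangles intersect at all.
class Rect {
    int left, top, right, bottom;

    Rect(int l, int t, int r, int b) { left = l; top = t; right = r; bottom = b; }

    int width()  { return right - left; }
    int height() { return bottom - top; }

    static boolean intersect_rect(Rect a, Rect b, Rect out) {
        // The overlap is bounded by the innermost edges of both rectangles.
        out.left   = Math.max(a.left,   b.left);
        out.top    = Math.max(a.top,    b.top);
        out.right  = Math.min(a.right,  b.right);
        out.bottom = Math.min(a.bottom, b.bottom);
        // A non-positive width or height means the rectangles do not overlap.
        return out.width() > 0 && out.height() > 0;
    }
}
```

For two 200 * 200 sprites whose horizontal edges overlap by 50 pixels, this yields a 50 * 200 overlap region, matching the example below.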
Once we have the intersecting region, we simply divide each dimension by n - fairly intuitive when rasterising in software - and cut
down the number of pixels we will compare by a factor of n in each dimension. Let's look at an example:
Take two images - let's call them sprites (technical-speak for any movable bitmap which is drawn on screen; for our purposes, we will use the term to describe any interactive bitmap which is drawn) - of dimensions 200 by 200 pixels.
Suppose these sprites intersect in the horizontal plane with an overlap of 50 pixels, meaning that the left edge of the second sprite overlaps the right edge of the first sprite by 50 pixels.
Normally we would call this a collision and be done with it, but our algorithm adds a final step: walk through the overlapping pixels, skipping n pixels at each iteration (in each image dimension), and only report a hit if any opaque pixels overlap.
This algorithm, although still limited by the CPU, is faster by a factor of n squared (n in each dimension). Let's do the math:
For the Per-pixel Collision Detection algorithm, we would compare each pixel of the two source images until overlap is found in the opaque regions of the image - in the worst case, we might end up comparing almost every pixel of both images.
That gives us 200 * 200 comparisons (if the sprites were sitting squarely on top of each other) - a grand total of 40,000 pixel comparison operations. For the adapted algorithm, we can use any small arbitrary number for n; I like to use n = 2 for great results.
When n = 2, we compare (200/n) * (200/n) pixels, which gives us 100 * 100 comparisons for a grand total of 10,000. We have reduced the number of comparisons by three quarters!
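The arithmetic above is easy to sanity-check by simply counting the positions an n-stepped sweep actually visits. This throwaway snippet assumes nothing beyond the numbers already given:

```java
// Count how many pixel comparisons an n-sampled sweep of a w*h overlap
// region performs: one comparison per visited (x, y) position.
class SampleCount {
    static int comparisons(int w, int h, int n) {
        int count = 0;
        for (int y = 0; y < h; y += n)
            for (int x = 0; x < w; x += n)
                count++;  // one pixel pair compared per sampled position
        return count;
    }
}
```

With w = h = 200, stepping by n = 1 visits 40,000 positions while n = 2 visits 10,000 - the three-quarters reduction described above.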
In the real world, we will mostly be using larger images, and we will mostly be comparing subsections of images, so we almost always end up comparing fewer than (width/n) * (height/n) pixels - this can give a very dramatic speedup in most situations. I like to experiment with larger values of n where extreme accuracy is not a limiting factor; this reduces the number of comparisons and
gives us more CPU to dedicate to other game logic subsystems - like physics.
Let's see some sample code
I make no claims as to the correctness of this code, and as such accept no liability for any crashes or mayhem which may ensue after the implementation of the ideas presented here:
boolean detectCollisions(Sprite sprite) {
    int n = 2;
    Rect overlap = new Rect(0, 0, 0, 0);
    for (Sprite _object : _sprites) {
        if (intersect_rect(sprite.rect, _object.rect, overlap)
                && overlap.width() > 0 && overlap.height() > 0
                && intersect_pixels(overlap, n, sprite, _object)) {
            return true;
        }
    }
    return false;
}

boolean intersect_pixels(Rect _overlapRegion, int n, Sprite left, Sprite right) {
    // Walk the overlap region, sampling every n-th pixel in each dimension.
    for (int y = 0; y < _overlapRegion.height(); y += n) {
        for (int x = 0; x < _overlapRegion.width(); x += n) {
            int left_image_color = left.image.getPixel(_overlapRegion.left + x, _overlapRegion.top + y);
            // Note: the overlap must be translated into each sprite's local
            // coordinate space (hence the two rects mentioned earlier).
            int right_image_color = right.image.getPixel(x, y);
            // Report a collision only when both sampled pixels are opaque.
            if (Color.alpha(left_image_color) == 255 && Color.alpha(right_image_color) == 255) {
                return true;
            }
        }
    }
    return false;
}
Remember that you still have to write the
intersect_rect function - I will leave this as a nice
exercise to do at home. The bulk of the code presented here is functional
and extracted from my game; I have intentionally omitted the overlap region testing function as an
exercise for the reader.
Points of interest
I originally wrote all my collision detection functions to work strictly with rectangles, which very quickly went south,
since I needed extra variables to keep track of loop indices and required callbacks to notify that an intersecting region had been found. I have since learned that
it is much simpler to encapsulate the rectangle and the image in a class called Sprite, so that all sprite-related functionality is accessible in one convenient object.
Interestingly enough, for small images it is not critical to optimize the per-pixel detection, as the loop iterations generally do not drag down
performance too much. But when image width and height exceed 100 pixels, it is time to perform smart pixel testing: with just 10 sprites
on screen (sitting squarely on top of each other, 100 * 100 pixels each), testing every pair of sprites means 45 pairs at up to 100 * 100 comparisons each - 450,000 pixel tests per frame.
Modern graphics cards can easily test this many pixels in a few milliseconds, but performing the same operation in software can drag down the most awesome of processors.
I have generally found that the value of n is directly related to the size of the image, and tests have shown that images exceeding 200 pixels in any dimension benefit
greatly from larger values of n - note that using values of n which exceed any dimension of an image is cautioned against (i.e., if your image dimensions are 10 * 5, then n should be < 5).
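This rule of thumb can be captured in a small helper. The thresholds below are illustrative guesses, not measured values, and the final clamp enforces the caution that n must stay below the smallest image dimension:

```java
// A hypothetical helper for picking the sampling step n from a sprite's
// dimensions; the thresholds should be tuned against real frame-rate tests.
class SamplingStep {
    static int chooseN(int width, int height) {
        int smallest = Math.min(width, height);
        int n;
        if (smallest > 200)      n = 4;  // large images tolerate sparse sampling
        else if (smallest > 100) n = 2;  // the default used throughout this article
        else                     n = 1;  // small images: just test every pixel
        // Keep n below the smallest dimension, or entire rows/columns of the
        // overlap region would be skipped.
        return Math.min(n, Math.max(1, smallest - 1));
    }
}
```

For a 10 * 5 image this returns 1, respecting the n < 5 constraint above; a 300 * 300 sprite gets the sparser step.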
11 January - Updated article source to correct syntax errors.
For now, there are no improvements as the algorithm fulfilled the speed requirements of my implementation.