Posted 16 Jan 2015



Point cloud alignment: ICP methods compared

Updated 15 Aug 2016 · GPLv3 · 11 min read
Different methods to align (aka stitch, register) point clouds via the ICP (Iterative Closest Point) method


This article shows the implementation of 4 variants of the "Iterative Closest Point" algorithm.

What this article covers:
-a brief comparison of the ICP variants regarding their mathematical background
-discussion and samples of the scaling transformation, especially inhomogeneous scaling
-samples and a comparison of translation, rotation and scaling transformations for all discussed ICP variants
-point cloud samples where the ICP algorithm fails

My goal is also to show which convergence problems the ICP algorithm may still have at a basic level - mainly in finding the right correspondence of points between source and target clouds. This is work in progress: perhaps some readers will invest a bit of time in comments. From my point of view, these convergence and principal problems should be solved before approaching the nonlinear problems in ICP, such as outliers, which are highly investigated in the literature.

There is a lot of literature in this field, and also some source code, but I did not find a systematic comparison of the different methods with source code included, suitable for later use in one's own implementations.


The ICP algorithm is used to align (stitch, register) point clouds taken from different angles to a single 3D point cloud.

What does it mean to "align" (register, stitch) point clouds?

It means to match one 2D or 3D point cloud (the source cloud) onto another (the target cloud).
In 2D this problem is well known and solved: lots of programs can "stitch" photos taken from different angles together to form a panorama photo.


Image 1

Brief description of the Iterative Closest Point method

Problem statement: match one point cloud (the source) onto another one (the target):

  1. For each point in the source point cloud, find the closest point in the target point cloud. This is done efficiently with a “k-d tree” search algorithm.

  2. Calculate a transformation (a combination of rotation, translation and scaling).
    This is done by minimizing a mean squared error cost function, as described in references [1] - [5]. It involves e.g. solving an eigenvalue system or using quaternion maths, which is at the level of higher college math.

  3. Transform the source points using the obtained transformation.

  4. Check if the clouds match within a desired threshold. If not, go to step 1 and iterate again.
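The four steps above can be sketched in code. The following is a deliberately minimal 2D illustration in Python (my own sketch, not the article's C# implementation): it uses a brute-force nearest-neighbour search instead of a k-d tree, and in step 2 it estimates a pure translation, which for that restricted case is simply the mean difference of the matched points.

```python
import math

def closest_point(p, cloud):
    # Step 1: brute-force nearest neighbour (a k-d tree would be faster)
    return min(cloud, key=lambda q: (p[0]-q[0])**2 + (p[1]-q[1])**2)

def icp_translation(source, target, max_iter=50, tol=1e-9):
    """Minimal ICP loop estimating a pure 2D translation."""
    src = [list(p) for p in source]
    for _ in range(max_iter):
        pairs = [(p, closest_point(p, target)) for p in src]   # step 1
        # Step 2: for a translation-only model, the least-squares optimum
        # is the mean difference between matched points
        tx = sum(q[0] - p[0] for p, q in pairs) / len(pairs)
        ty = sum(q[1] - p[1] for p, q in pairs) / len(pairs)
        for p in src:                                          # step 3
            p[0] += tx
            p[1] += ty
        if math.hypot(tx, ty) < tol:                           # step 4
            break
    return src
```

For the full rigid or similarity case, step 2 would be replaced by one of the closed-form solutions of references [1] - [8].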


Mathematical background of the different ICP methods discussed

There are lots of different ICP methods implemented in hundreds of publications. The differences between them can be categorized along the 4 steps of the ICP method (see the 4 points in "Brief description of the Iterative Closest Point method").

1. Different algorithms for finding the point correspondence between source and target.
If one assumes that there are N points in the source and N points in the target cloud, a "brute force" method requires N*N operations, which is very slow. A k-d tree does this work in O(N log N) operations. There are also octree and other variants, with basically the same speed.
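For illustration, a minimal 2D k-d tree nearest-neighbour search looks like this (an instructive pure-Python sketch of my own, not the tree used in the article's C# code; a production tree would avoid the repeated sorting and deep recursion):

```python
def build_kdtree(points, depth=0):
    # Recursively split on alternating axes; O(N log N) construction
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {'point': points[mid], 'axis': axis,
            'left':  build_kdtree(points[:mid], depth + 1),
            'right': build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, query, best=None):
    # Depth-first search with pruning; O(log N) per query on average
    if node is None:
        return best
    d2 = (node['point'][0]-query[0])**2 + (node['point'][1]-query[1])**2
    if best is None or d2 < best[1]:
        best = (node['point'], d2)
    axis = node['axis']
    diff = query[axis] - node['point'][axis]
    near, far = ((node['left'], node['right']) if diff < 0
                 else (node['right'], node['left']))
    best = nearest(near, query, best)
    if diff * diff < best[1]:   # search sphere crosses the splitting plane
        best = nearest(far, query, best)
    return best
```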

2. Different way of calculating the transformation between two point clouds

For rigid (translation, rotation) and affine (scaling) transformations, most methods use the mathematically derived solution, since the affine transformation has a closed-form solution. Some, but very few, use other, e.g. probabilistic, methods (like Monte Carlo).

For the rotation and translation, the original ICP article of Horn [1] proposes a quaternion method. This has been proven (in the article of reference [6]) to be basically equivalent to other methods such as:
-an SVD (Singular Value Decomposition) method, first proposed in [7] and used also in [3], [4] and [5]
-a method using orthonormal matrices, see [2]
-the dual quaternion method [8]
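The closed-form character of step 2 is easiest to see in 2D: with known correspondences and a cloud rotated about the origin (or already centred), the least-squares rotation angle comes out of a single atan2. This is an illustrative 2D reduction of my own, not one of the cited 3D methods:

```python
import math

def best_rotation_2d(source, target):
    """Least-squares rotation angle between matched 2D clouds.

    Assumes the clouds are centred (or the rotation is about the origin):
    theta = atan2(sum of cross products, sum of dot products).
    """
    cross = sum(x[0]*y[1] - x[1]*y[0] for x, y in zip(source, target))
    dot   = sum(x[0]*y[0] + x[1]*y[1] for x, y in zip(source, target))
    return math.atan2(cross, dot)
```

In 3D the same least-squares problem no longer reduces to one angle, which is why the quaternion, SVD and orthonormal-matrix machinery of [1], [2] and [7] is needed.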

There is however a difference regarding scale transformations. The original ICP method from 1987 did not include scale transformations, but Horn added this in his publication from 1988 [2].
In the real world the scale transformation is important: just imagine that you take a point cloud from one position, then move towards the target to take the second point cloud. This is basically a scale transformation.
A somewhat comprehensible discussion of scale transformations is given in [3] and [4].
In the code I implemented, there are four different methods to include scaling:
-the "Horn" method of [2] works on the orthonormal matrix level
-the "Umeyama" [3] method deduces the scale factor on the eigenvalue level (his mathematical proof is nice :) )
-the "Zinsser" [4] method deduces the scale factor as a partial derivative
-the "Du" [5] method is similar to [4], but deduces three scale factors (one per geometry axis) as partial derivatives
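To give a feeling for the difference between a uniform and a per-axis scale factor, here is a small illustration (my own Python sketch; it assumes known correspondences, clouds centred at their centroids and the rotation already resolved - it is not a reproduction of any of the cited derivations):

```python
def centre(cloud):
    # Subtract the centroid so only rotation/scale remain
    n = len(cloud)
    c = [sum(p[k] for p in cloud) / n for k in range(3)]
    return [[p[k] - c[k] for k in range(3)] for p in cloud]

def uniform_scale(source, target):
    # One scale for all axes: s = sum(<y_i, x_i>) / sum(|x_i|^2)
    src, tgt = centre(source), centre(target)
    num = sum(x[k]*y[k] for x, y in zip(src, tgt) for k in range(3))
    den = sum(x[k]*x[k] for x in src for k in range(3))
    return num / den

def per_axis_scales(source, target):
    # Independent scale per axis, in the spirit of Du et al. [5]:
    # s_k = sum(y_ik * x_ik) / sum(x_ik^2) for each axis k
    src, tgt = centre(source), centre(target)
    return [sum(y[k]*x[k] for x, y in zip(src, tgt)) /
            sum(x[k]*x[k] for x in src) for k in range(3)]
```

On a cube stretched by the factors (2, 3, 4), the per-axis estimate recovers all three factors, while the uniform estimate can only return a compromise value.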


3. Other differences between ICP variants concern how you treat small measurement errors, outliers and missing points.
Measurement errors occur quite often, due to limited scan precision: the same point in space is represented by slightly different coordinates in the source and target cloud. To some extent this is covered by the base algorithm.
Outliers are erroneously measured points, which should be ignored.
Missing points are points of one cloud with no correspondence in the other cloud. This is a wanted effect: by scanning the target from another angle, you will of course capture points not present in the first scan.

As these three effects are highly non-linear, a lot of different implementations have been published, with weight factors etc. The methods cannot really be compared, since the implementation details are incomplete in many cases, and of course mostly working samples of point clouds are published. It would be nice to have non-working examples published as well; this would be good for a method comparison.
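One common and simple robustness measure, often called "trimmed" ICP, fits into step 1: sort the matches by distance and drop the worst fraction before estimating the transformation, so that outliers and missing points do not pull the fit. A hedged sketch of that idea (not taken from the article's code):

```python
def trimmed_pairs(pairs, keep=0.8):
    """Keep only the best fraction of (source, target) matches.

    pairs: list of (p, q) matched points. Outliers and points with no
    true counterpart produce large distances and are dropped before the
    transformation is estimated.
    """
    def dist2(pq):
        p, q = pq
        return sum((a - b) ** 2 for a, b in zip(p, q))
    pairs = sorted(pairs, key=dist2)
    return pairs[:max(1, int(keep * len(pairs)))]
```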

There are also some methods which find some subset of identical points in source and target (e.g. by comparing color, or light intensity). 

Comparison of the ICP methods of [2] - [5]

-The Umeyama [3] and Zinsser [4] methods behave the same, although the derivations differ. They give the same results for translations, rotations and uniform scaling. (Uniform scaling means that the scale factor is the same in all 3 spatial directions.)

-The Horn [2] method behaves a bit differently from [3] and [4] at scaling: the convergence behaviour is different (sometimes faster, sometimes much slower), and a single iteration is slower, at least in the implementation I used, which is ported from the VTK library. I found some cases where [3] and [4] run into false local minima but the Horn [2] method does OK, and others where it is the other way around; see also the samples below.

-The Du [5] method is the only one which works for inhomogeneous scaling transformations. This is clear, as the others by definition use the same scale factor for all 3 directions. This should basically be better for real objects, where you have to take non-linear deformations of the scanned object into account.
The convergence of the Du method is slower than that of the other methods.


Using the code

To run the samples, first install NUnit and then configure the path of the NUnit exe in the Visual Studio project.

Hint: I did not manage to use NUnit 2.6, which uses .NET 2.0, but compiled NUnit against .NET 4.5. It is possible that NUnit 3.0 (alpha version) works as well - I did not try that.

An overview of the testcases:

Image 2


The testcases discussed below are in the folders "UI" and "ExpectedError".

The other ones are work in progress, except for the "Automated" ones: a small set of testcases which run quite fast and have to work after each program change.

1. Cube code gallery (testcases in class: ICP_Test5_Cube)

Image 3Image 4Image 5Image 6Image 7

From left to right: translation, rotation, uniform scaling, inhomogeneous scaling, translation + rotation + scaling

Color coding: 
-the source point cloud is white
-the target is green
-red is used for the transformed point cloud. 

If there is nothing red on the screen, then the target (green) overlaps the transformed point cloud; the ICP result is a "success", the match is 100%.

1.a Cube translation sample

The code of this testcase, with a short explanation:

 public class ICPTest5_Cube : ICPTestBase
 {
        [Test]
        public void Cube_Translate()
        {
            IterativeClosestPointTransform.ICPVersion = ICP_VersionUsed.Horn;
            IterativeClosestPointTransform.FixedTestPoints = true;
            IterativeClosestPointTransform.ResetVertexToOrigin = false;

            ICPTestData.Test5_CubeTranslation(ref verticesTarget, ref verticesSource,
                 ref verticesResult);
            Assert.IsTrue(ICPTestData.CheckResult(verticesTarget, verticesResult, 1e-10));
        }
 }

Code explained:

-IterativeClosestPointTransform.ICPVersion = ICP_VersionUsed.Horn;
     The ICP method used is the "Horn" method (see [2]).
-IterativeClosestPointTransform.FixedTestPoints = true;
     For the cube it is assumed that the correspondence between source and target points is known; this is configured with the parameter FixedTestPoints.
-IterativeClosestPointTransform.ResetVertexToOrigin = false;
    There is the possibility to translate all point clouds to the origin of the coordinate system. This is basically a translation (see e.g. ref. [2]), which in this case would mean that no further ICP has to be done. Therefore this parameter is set to false, so that the effect of the translation step of ICP can be seen.
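The effect of ResetVertexToOrigin can be illustrated independently of the C# code: moving each cloud so that its centroid lies at the origin already removes the whole translation part of the alignment. A small Python illustration (my own sketch, not the article's implementation):

```python
def centroid(cloud):
    # Mean position of all points in the cloud
    n = len(cloud)
    return tuple(sum(p[k] for p in cloud) / n for k in range(3))

def move_to_origin(cloud):
    # After this step, two clouds that differ only by a translation coincide
    c = centroid(cloud)
    return [tuple(p[k] - c[k] for k in range(3)) for p in cloud]
```

This is why the translation testcase sets the parameter to false: with it set to true, there would be nothing left for the ICP iteration to do.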

If you run the unit test, you will see something like this:

Image 8

1.b Cube rotation

Image 9
1.c Cube - uniform scaling (equal scale factor on all 3 axes)

Image 10

1.d Cube - inhomogeneous scaling (different scale factor on each of the 3 axes)


Image 11

1.e Translate, rotate and uniform scaling


Image 12

2. Bunny, transformed


Image 13

3. A face - scanned by the Microsoft Kinect, transformed



The face is rotated by 30 degrees, translated by a small vector and scaled by a factor of 0.8.

This is done in the method ICPTestData.Test7_Face_KnownTransformation:

VertexUtils.ScaleByFactor(myVerticesSource, 0.8);
Matrix3d R = new Matrix3d();
R = R.Rotation30Degrees();
VertexUtils.RotateVertices(myVerticesSource, R);
VertexUtils.TranslateVertices(myVerticesSource,  -0.15, 0.05, 0.02);

The convergence behaviour differs between the 4 methods investigated. It converges best with the Umeyama [3] and Zinsser [4] methods: after 35 iterations the overlap between target and result is 100%. Using the Horn or Du method, after 50 iterations the average mean distance between result and target points is still larger than 1 mm.

Image 14

ICP samples that do not work

If ICP does not converge, the transformed (red) cloud will not overlay the green one (target); therefore you explicitly see 3 objects.

1. Inhomogeneous scale transformation with the Horn method
Expected error: this will not converge using the first three methods (Horn [2], Umeyama [3], Zinsser [4]), which is obvious. It will converge only with the Du [5] method; see the explanation in the Background part.


Image 15


2. The Bunny object, rotated by 65 degrees
This will also not converge to the target. I assume this is an issue of finding the right correspondence of points in the k-d tree algorithm.



Image 16

3. Face - two scans from different angles
Not converging


Image 17



Some principal differences between the 4 main ICP methods have been shown; especially the behaviour on scaled point clouds is important.
In my implementation, even for some simple transformations of models, the ICP method does not converge, and more such cases can easily be constructed. The cause is mainly finding the right correspondence between source and target points. Mechanisms which assume a subset of point correspondences to be known (e.g. by color or light intensity) help.

I am eager to get input from readers on how to improve the code and possibly what I have done wrong. I am grateful for any article link or hint!


  1. “Horn” ICP method
    Berthold K. P. Horn (1987), “Closed-form solution of absolute orientation using unit quaternions”, Journal of the Optical Society of America A, vol. 4, pages 629-642. See article link
  2. Horn BKP, Hilden HM, Negahdaripour S (1988), “Closed-form solution of absolute orientation using orthonormal matrices”, J Opt Soc Am A, vol. 5, pages 1127-1135
    The original Python implementation of this method was done by David G. Gobbi, later ported to C++ in the VTK library; that port was in turn ported to C# by me.
  3. “Umeyama” ICP method
    Umeyama (1991), “Least-squares estimation of transformation parameters between two point patterns”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 4, pages 376-380
  4. “Zinsser” ICP method
    Timo Zinßer, Jochen Schmidt, Heinrich Niemann (2003), “A refined ICP algorithm for robust 3-D correspondence estimation”, IEEE International Conference on Image Processing, September 2003, Barcelona, Spain
  5. “Du” ICP method
    Shaoyi Du, Nanning Zheng, Shihui Ying, Qubo You, Yang Wu (2007), “An extension of the ICP algorithm considering scale factor”, IEEE International Conference on Image Processing (ICIP 2007), vol. 5, see link
  6. Comparison of ICP methods
    D.W. Eggert, A. Lorusso, R.B. Fisher (1997), “Estimating 3-D rigid body transformations: a comparison of four major algorithms”, Machine Vision and Applications, vol. 9, pages 272-290, Springer-Verlag
  7. K. S. Arun, T. S. Huang, S. D. Blostein (1987), “Least-squares fitting of two 3-D point sets”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 9, no. 5, pages 698-700
  8. M. W. Walker, L. Shao, R. A. Volz (1991), “Estimating 3-D location parameters using dual number quaternions”, CVGIP: Image Understanding, vol. 54, pages 358-367

Other Code used

  1. VTK library - parts of the code were ported, basically for the Horn [2] method
  2. OpenTK control from my OpenTK Code Project article and the list of code used there:
    Image 18

Points of Interest

Further work:

-investigate and improve convergence
-apply the method to scanned data, investigate further convergence problems
-inclusion of inhomogeneous transformations, e.g. by extending the "Du" method to other transformations
-speed up iteration convergence
-handling of outliers, precision improvement

See also

Point cloud viewer article


Version submitted on 1/16/2015



This article, along with any associated source code and files, is licensed under The GNU General Public License (GPLv3)


About the Author

Edgar Maass
Germany Germany
Other interests: music playing, photography, travelling
