Posted 30 Nov 2011

Augmented Reality Part 1: Getting Orientation Data



A few days ago, I started developing a program for controlling my computerized telescope with my phone. Before I knew it, I found myself in topics that are all part of augmented reality. I thought it might be helpful to others if I collected my notes together to share. I'm also trying something new with this post: in addition to the blog post and the code, I've made this information available in video form at the links above. The video and this post cover the same information.

Before getting started with augmented reality, you will want to make sure that you have a device with the supported features. A lot of the concepts that I share in this series could be applied to other phones, but I will be concentrating on Windows Phone and taking advantage of the features and functionality that it provides. If you want to make use of the information in these posts on other devices, you may have to find or write implementations of some high-level functionality if your device and development environment do not already provide it.

To get started, you will need a computer with the Windows Phone Developer Tools installed and a Windows Phone. Not all Windows Phones will work, though. The phone will need to have GPS hardware and a compass/magnetometer. All Windows Phones have GPS hardware, but not all have the magnetometer. There are also some devices that have a magnetometer but lack the driver needed to make it available to third-party developers. At the time I am writing this, I have two phones with Mango installed. My HD7 has a compass, but it does not have the necessary driver. My Samsung Focus does have the necessary driver. Before we go any further, let's ensure that your device supports the magnetometer. If it doesn't, you'll need to find a device that does before you can proceed any further with development.

Does My Phone Support the Needed APIs?

To test your device, create a new Windows Phone project and add a reference to Microsoft.Devices.Sensors. You only need one line of code to check whether or not your device has a magnetometer.

bool compassFound = Motion.IsSupported;

Set a breakpoint after that line and run the program on your phone (at present, it will always return "false" on the emulator). Hopefully it will return "true" for you.
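In a real application, you might want to bail out with a message rather than a breakpoint when the check fails. A minimal sketch (the wording and placement are my own, not from the article):

```csharp
using System.Windows;
using Microsoft.Devices.Sensors;

// Hypothetical guard, e.g. in the page's Loaded handler.
// Motion.IsSupported is false when the phone lacks the magnetometer
// (or its driver), and always false in the emulator at the time of writing.
if (!Motion.IsSupported)
{
    MessageBox.Show("This phone does not expose the compass/motion " +
                    "sensors needed by this application.");
    return;
}
```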

Getting the Device's Orientation

Once you have a device that supports the compass, let's get the device's orientation. At the very least, your device has a magnetometer and an accelerometer in it. If it is a more recent device, it may also have a gyroscope. You could get the readings from each of these sensors individually, but we'll rely on the Motion API to collect the information from whichever sensors are present and let it perform the math needed to get the data into an easy-to-consume form. The general pattern you will use when interacting with the sensor APIs is: create an object to represent a sensor, subscribe to an event to get notifications when the reading changes, and then call the Start() method to turn on the sensor and begin receiving data from it.

The compass (and thus the Motion API) requires calibration from time to time. In addition to an event that is called when there is sensor data, there is also an event for notification that the device requires calibration. If this event is fired, you'll need to tell the user to move his or her phone in a figure 8. Once the phone is calibrated, your application will start to receive readings and you can remove the notification.
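The calibration handler itself can be small. A sketch, assuming an IsCalibrated property that the UI binds to in order to show or hide the figure-8 prompt (the reading handler sets it back to true once good data arrives):

```csharp
// Assumed: IsCalibrated is a bound property that toggles a calibration
// prompt in the UI. Sensor events arrive on a background thread, so the
// property is set through the Dispatcher.
void motion_Calibrate(object sender, CalibrationEventArgs e)
{
    this.Dispatcher.BeginInvoke(() => IsCalibrated = false);
}
```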

private Motion _motion;
public bool IsMotionAvailable { get; set; }

IsMotionAvailable = Motion.IsSupported;
if (IsMotionAvailable)
{
    _motion = new Motion();
    _motion.Calibrate += new EventHandler<CalibrationEventArgs>(motion_Calibrate);
    _motion.CurrentValueChanged += new EventHandler<SensorReadingEventArgs<MotionReading>>(motion_CurrentValueChanged);
    _motion.Start();
}

The information that comes back from the Motion API describes both the device's current orientation and its movement/acceleration. For now, we are only concerned with the device's orientation and will ignore the other data that is available. The fields of interest are the Pitch, Yaw, Roll, and Quaternion. The first three terms are also used when describing the motion of an aircraft. If an aircraft is changing its pitch, the nose of the plane is tilting up or down. If the airplane's wings remain level but it starts turning to the left or right, its yaw is changing. And if the plane starts to tilt to the left or right, we would say the plane is rolling. These terms apply to the phone in a similar way. If you have the device lying face up on a level table with the top of the device facing north, then its pitch, yaw, and roll are all zero (I will call this the "zero position"). As you change the device's orientation, these fields change accordingly.

The Motion API returns rotational measurements in radians. This makes sense given that the math functions in the .NET Framework also work with radians. But when displaying values on the screen, it is easier to work with degrees. So, for display only, I have a radian-to-degree converter.

public class RadToDegreeConverter : IValueConverter
{
    public object Convert(object value, Type targetType,
                          object parameter,
                          System.Globalization.CultureInfo culture)
    {
        double v;
        if (value is float)
            v = (float)value;
        else if (value is double)
            v = (double)value;
        else if (value is decimal)
            v = (double)(decimal)value;
        else
            return String.Empty;
        v = v * 180d / Math.PI;
        return v.ToString("000.0");
    }

    public object ConvertBack(object value,
                              Type targetType,
                              object parameter,
                              System.Globalization.CultureInfo culture)
    {
        throw new NotImplementedException();
    }
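A quick sanity check of the conversion math, runnable on its own (using the invariant culture so the formatted string is predictable):

```csharp
using System;
using System.Globalization;

// Pi radians should display as 180.0 degrees under the "000.0" format.
double radians = Math.PI;
double degrees = radians * 180d / Math.PI;
string display = degrees.ToString("000.0", CultureInfo.InvariantCulture);
Console.WriteLine(display); // prints "180.0"
```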

The Quaternion field that comes back also contains rotational information and is consumable by the XNA vector classes. I use the Yaw, Pitch, and Roll for display purposes only, but use the Quaternion field in the actual algorithms. Your augmented reality application will want to know the direction that the camera on the phone is facing. Assuming that you are using the rear-facing camera, that means you will want to know the direction that the rear of the device is facing. To get this information, start by creating a Vector3 that represents the direction that the rear of the device faces when it is at the zero position. This vector will get rotated by the data that comes back from the Motion API, and the resulting vector tells us which way the device is facing.

Vector3 DeviceFaceVector = new Vector3(0, 0, -10);
Vector3 CurrentDirectionVector { get; set; }

void motion_CurrentValueChanged(object sender, SensorReadingEventArgs<MotionReading> e)
{
    var attitude = e.SensorReading.Attitude;
    this.Dispatcher.BeginInvoke(() =>
    {
        IsCalibrated = true;
        Pitch = attitude.Pitch;
        Yaw = attitude.Yaw;
        Roll = attitude.Roll;
        CurrentDirectionVector = Vector3.Transform(DeviceFaceVector, attitude.Quaternion);
    });
}
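Vector3.Transform is doing a quaternion rotation for us. For readers without XNA at hand, the same rotation can be sketched with plain doubles; this helper is my own illustration, not part of the article's code:

```csharp
using System;

// Rotate vector v by unit quaternion q = {w, x, y, z}:
// v' = v + w*t + (q.xyz × t), where t = 2*(q.xyz × v)
double[] Rotate(double[] q, double[] v)
{
    double w = q[0], x = q[1], y = q[2], z = q[3];
    // t = 2 * cross(q.xyz, v)
    double tx = 2 * (y * v[2] - z * v[1]);
    double ty = 2 * (z * v[0] - x * v[2]);
    double tz = 2 * (x * v[1] - y * v[0]);
    // v' = v + w*t + cross(q.xyz, t)
    return new[]
    {
        v[0] + w * tx + (y * tz - z * ty),
        v[1] + w * ty + (z * tx - x * tz),
        v[2] + w * tz + (x * ty - y * tx)
    };
}

// Example: a 90-degree yaw (rotation about the Y axis) applied to the
// zero-position face vector (0, 0, -10).
double half = Math.PI / 4; // half the rotation angle, in radians
double[] q90 = { Math.Cos(half), 0, Math.Sin(half), 0 };
double[] rotated = Rotate(q90, new double[] { 0, 0, -10 });
// rotated is approximately (-10, 0, 0): the rear of the device now
// points along the negative X axis.
```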

The CurrentDirectionVector tells us what direction the back of the device is facing. Let's convert it from a Cartesian (x, y, z) coordinate to polar coordinates so that we can display the direction the phone is facing (measured as degrees from north) and the angle toward the sky or ground at which the phone is tilted. Only a few function calls are needed to do this conversion.

void CalculateAltAzimuth()
{
    double x = CurrentDirectionVector.X;
    double y = CurrentDirectionVector.Y;
    double z = CurrentDirectionVector.Z;

    ViewingAzimuth = Math.Atan2(x, y);
    ViewingAltitude = Math.Atan2(z, Math.Sqrt(x * x + y * y));
}
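To see the conversion in action, here is a self-contained example using a hypothetical direction vector for a device that is level and pointed due east (the +X direction under the zero-position convention above):

```csharp
using System;

// Hypothetical direction vector: device level, facing due east (+X).
double x = 10, y = 0, z = 0;

double azimuth = Math.Atan2(x, y);                          // pi/2 rad, i.e. 90 degrees from north
double altitude = Math.Atan2(z, Math.Sqrt(x * x + y * y));  // 0 rad: level with the horizon

Console.WriteLine(azimuth * 180 / Math.PI);   // ~90
Console.WriteLine(altitude * 180 / Math.PI);  // 0
```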

Displaying the Values on the Screen

To display these values on the screen, I will make use of data binding. But I also want to have the camera's point of view used as the program background. Eventually, we will be overlaying images on this background. To have the image as a background, create a framework element (grid, rectangle, canvas, or some other element) that is stretched over the portion of the screen in which you want the background to display. I'll have it stretched over the entire screen.

<Rectangle Width="800" Height="480" x:Name="PhotoBackground" />
<Canvas Width="800" Height="480" x:Name="RealityOverlay" />

In the code-behind, I need to make a VideoBrush that will be used to paint this surface. The video brush will have the camera set as its source.

_photoCamera = new PhotoCamera(CameraType.Primary);
VideoBrush vb = new VideoBrush();
vb.SetSource(_photoCamera);
PhotoBackground.Fill = vb;

This gets us as far as knowing how to tell the device's position. In the next post, I'll show how to get the distance and direction to points of interest. Then I'll show how to project those points of interest onto the screen so that they appear to be part of the environment.

Video Explaining the Above

3 Minute Code Walkthrough


This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


About the Author

Joel Ivory Johnson
Software Developer Razorfish
United States United States
I attended Southern Polytechnic State University and earned a Bachelor of Science in Computer Science, and later returned to earn a Master of Science in Software Engineering.

For the past few years I've been providing solutions to clients using Microsoft technologies for web and Windows applications.

While most of my articles are centered around Windows Phone, it is only one of the areas in which I work and one of my interests. I am also interested in mobile development on Android and iPhone. Professionally, I work with several Microsoft technologies including SQL Server, Silverlight/WPF, ASP.NET, and others. My recreational development interests are centered around artificial intelligence, especially in the area of machine vision.


