
Introduction to HoloLens Development with UWP

16 Jun 2016, CPOL
In this article I look at setting up a system for HoloLens development, examine the compatibility of the resulting applications with other UWP platforms, and introduce Unity for making a 3D application.

Introduction

Download HololensIntroduction.zip

If you are like me then you have not yet had the opportunity to purchase your own HoloLens. At the time of this writing I'm in Wave 5 of the invites to purchase one. While I don't own a headset myself, the Emerging Experiences team at Razorfish has given me the benefit of some quality time with one of their units. This writeup is going to be light with respect to code, but it is an introduction nonetheless. The HoloLens is a new experience, not quite like the other Windows 10 devices, and that new experience deserves some attention. Within this post I'm only trying to get a couple of Hello World type applications deployed to the HoloLens and, where possible, deploy the same program to other devices.

What is It?

One of the descriptions that Microsoft gives for the HoloLens is "Windows 10 is the first platform to support holographic computing with APIs that enable gaze, gesture, voice, and environmental understanding on an untethered device." Another is "Microsoft HoloLens is the first fully untethered, holographic computer, enabling you to interact with high definition holograms in your world." I feel compelled to comment on word usage here. Views on word usage are sometimes divided between the prescriptive view and the descriptive view. The prescriptive view is that words must be used in a specific way and usages that don't conform are wrong. The descriptive view is that words mean whatever people intend them to mean; as long as the speaker and the audience agree on the intended usage, all is right in the universe. I don't cleanly fall into one camp or the other, as I might switch depending on context. I'm inclined to take the prescriptive view with the word "hologram". What one sees through the HoloLens are not holograms in the sense of the word as used in holography; they are not (yet) images made by light field reconstruction. While the HoloLens doesn't have anything to do with holography, there are arguments that can be made justifying the use of the term here, and I've mentioned those justifications in something I wrote recently. With that said, I won't raise any further argument against the use of the word "hologram" and will take the descriptive view for the sake of being able to communicate with others about the HoloLens. For those that take a prescriptive view of language, check out the book "Bad English" by Ammon Shea. It's one of many books on how word usages once found objectionable are now part of accepted everyday language.

stereoscopic imaging history

In more general terms the HoloLens is a mixed reality or augmented reality platform. It is able to overlay application windows or 3D objects on one's view of the real world. This differs from Virtual Reality (VR) in that VR blocks one's view of the real world. The HoloLens is also a Windows 10 based platform. Applications that target the Universal Windows Platform (UWP) for Windows 10 will generally work not only on the HoloLens but also on Windows 10 desktops/laptops, Windows 10 Mobile, the Xbox One, and supported IoT platforms such as the Raspberry Pi. As of yet that type of interoperability doesn't exist without an asterisk or two attached; some features are not yet available on some of these platforms. For example, the Xbox One developer preview supports HTTP/WebSocket based network calls but doesn't support TCP/UDP calls. Calls related to SMS won't work on platforms that don't have SMS capabilities. These devices also tend to have different primary modes of interaction: voice, game controller, touch keyboard, physical keyboard, or gestures and gaze. I can't make a meaningful chart of which devices have which forms of interaction since many are optional on these device types; a computer might or might not have a keyboard attached. The HoloLens supports keyboards and mice over Bluetooth, but I wouldn't generally expect one to be attached to a device. Microsoft has already done some work in abstracting away some of the differences between these device types, but they are differences of which to be aware.

About the Interactions

Being a Windows 10 device, the HoloLens reminds me more of the experiences on mobile Windows devices such as the Windows RT tablets or the phones. At the time of release only one application was allowed to be active at a time, though with the May 2016 update up to three applications may be active. It's also a single-user device; only one Live account can be on it at a time, and changing the Live account that the device uses is done by wiping its memory. User input on the HoloLens will generally be done with gestures, by gazing at elements, and through voice input. My first time using the HoloLens I didn't make full use of voice input. Text can be entered by looking at individual keys on a virtual keyboard and selecting them, a form of hunting and pecking that can be a bit fatiguing for large amounts of text entry. But it's not necessary to enter text that way; dictation works much better. Selection of an element is done by turning one's head to center the gaze on it and making a tapping gesture in the air. There's also an accessory that comes with the HoloLens with a button that can be clicked instead of using the air tap gesture. On elements that allow scrolling, a tap-and-hold gesture followed by moving the hand up or down scrolls the content.

Setting up a Development Environment

For developing on a HoloLens you'll need a computer running Windows 10 and Visual Studio 2015 Update 2. You'll also want to install one of the HoloLens-enabled versions of Unity (betas at the time of this writing) and the HoloLens SDK. The memory requirements for a computer doing HoloLens development depend on whether or not you have the physical hardware. With the physical hardware in hand I got by just fine on a computer that had 8 gigs of memory. When trying to use the emulator, 8 gigs isn't enough; 12 gigs worked out okay, and I would suggest at least 16. If you are using the emulator you'll also need a Professional or Enterprise edition of Windows 10; otherwise you won't be able to run it. When developing a 2D XAML-based application you can test it out on your local machine with no emulation needed. If you want to make an immersive experience (which Microsoft calls a "holographic application" with a "volumetric view") you'll need either the emulator or a HoloLens.

Departure from the Usual

If you've already developed for Windows platforms but haven't done much development of Windows Store Apps (WinRT or Windows 10 Universal), there are some ways of performing tasks outside of WSA that won't quite be appropriate within WSA. Prior to WSA, potentially long running calls (any call that, if allowed to block the UI thread, would make the UI appear less responsive) could still be made on the UI thread even if that was not the best of ideas. There were asynchronous calls available for developers that wanted their applications to be more responsive; usually the long running calls dealt with file or network access. Prior to WSA, handling these calls asynchronously was optional. WSA forces a more responsive implementation. Also, with the exception of writing to device files, there was not much in place by default that would prevent pre-WSA applications from writing to arbitrary places on a person's file system. File permissions on the system would prevent reads or writes to some locations from succeeding, but there was nothing at the API level preventing the attempt. This changed with WSA. Applications have their own isolated storage (a concept that existed before, but a developer had the option of not using it) and don't have access to each other's application storage. There are also common areas to which data can be written, providing a way to export information from an application to a place where it can be accessed by the user or shared with other applications. When this type of storage is used the application doesn't have access to the actual absolute path at which the data is saved. If you've not already been using WSA or isolated storage, this way of handling files will take some getting used to.

To illustrate a difference the following two code blocks are from a desktop .Net application that is writing "Hello World" to a file and from a UWP application that is writing the same text to a file. For both code blocks I've tried to make the least number of calls possible to get the job done.

//writing to a file in a .NET desktop application
using(StreamWriter sw = new StreamWriter("readme.txt"))
{
	sw.WriteLine("Hello World");
}
//writing to application storage in a UWP application
StorageFolder storageFolder = ApplicationData.Current.LocalFolder;
StorageFile storageFile = await storageFolder.CreateFileAsync("readme.txt", CreationCollisionOption.ReplaceExisting);
using (Stream stream = await storageFile.OpenStreamForWriteAsync())
using (StreamWriter writer = new StreamWriter(stream))
{
    writer.Write("Hello World");
}

While there are more calls in the second code block for writing to a file, it has advantages with respect to security and the responsiveness of the application. On security, access to the full path at which a file is saved can indirectly disclose personally identifying information to an application. With such a small amount of data being written, the difference in impact on responsiveness may not be discernible. As the amount of data to be written or the number of operations to be performed becomes more substantial, the gap in user experience between these two methods of writing files widens.
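To get a feel for how the asynchronous approach pays off as the payload grows, here is a small sketch in plain .NET (so it can run outside of UWP; the file name and payload size are arbitrary choices of mine) that writes a larger file without blocking the calling thread.

```csharp
using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;

public static class AsyncWriteDemo
{
    // Writes a one-megabyte payload asynchronously. While the awaited write is
    // in flight the calling thread (e.g. a UI thread) is free to do other work.
    public static async Task WriteLargeFileAsync(string path)
    {
        byte[] payload = Encoding.UTF8.GetBytes(new string('x', 1000000));
        using (var fs = new FileStream(path, FileMode.Create, FileAccess.Write,
            FileShare.None, bufferSize: 4096, useAsync: true))
        {
            await fs.WriteAsync(payload, 0, payload.Length);
        }
    }
}
```

The same await-based shape carries over to the UWP StorageFile calls shown above; only the way the stream is obtained differs.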

Types of Application Views

Applications run on the HoloLens can be categorized as having either 2-dimensional views or holographic views. Applications that use 2D views can generally be run on other UWP implementations; they are what one might call regular applications. When run on a HoloLens, 2D views can be placed in space and will retain their position. If a text entry field is given focus, a virtual keyboard will display on the HoloLens. While the HoloLens supports a mouse, a person generally will not be using one; item selection is instead performed by gazing at the element to be selected and making the selection gesture. Holographic apps take control of the entire view of the HoloLens; no other application will be visible while one is running.

Building our First UWP Application

This first application will be of trivial complexity: we're going to build a text editor. While this is a simple application, if you've not done any UWP development it may introduce you to something not yet familiar. As an optional exercise you can try to deploy this application to other types of devices in the UWP family. Start Visual Studio 2015 Update 2 and create a new project. Select "Blank App (Universal Windows)" and name the project TextEditor. You'll be presented with a dialog that lets you select the maximum and minimum supported build numbers for the application. For the sake of compatibility across devices (as your devices might have different builds) keep the minimum supported value at the lowest value available. Create a new folder in the project named ViewModels and add a class to it named MainViewModel. For now a couple of properties are being added to the class for the file name and the contents of the document. These properties will be exposed with data binding, and INotifyPropertyChanged is implemented for this. The implementation of the view model looks like the following.

using System;
using System.ComponentModel;
using System.Linq.Expressions;

namespace TextEditor.ViewModels
{
    public class MainViewModel : INotifyPropertyChanged
    {

        string _fileName;
        public string FileName
        {
            get { return _fileName;  }
            set
            {
                if(_fileName!=value)
                {
                    _fileName = value;
                    OnPropertyChanged(() => FileName);
                }
            }
        }

        string _text;
        public string Text
        {
            get { return _text;  }
            set
            {
                if(_text != value )
                {
                    _text = value;
                    OnPropertyChanged(() => Text);
                }
            }
        }

        protected void OnPropertyChanged(string propertyName)
        {
            if(PropertyChanged != null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
            }
        }

        protected void OnPropertyChanged<T>(Expression<Func<T>> expression)
        {
            OnPropertyChanged(((MemberExpression)expression.Body).Member.Name);
        }

        public event PropertyChangedEventHandler PropertyChanged;
    }
}
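The change notification behavior can be checked outside of a XAML project. The sketch below uses a trimmed copy of the view model (just the FileName property) together with a small counting helper of my own; it confirms that setting a new value raises PropertyChanged once and that assigning the same value again raises nothing.

```csharp
using System;
using System.ComponentModel;
using System.Linq.Expressions;

// Trimmed copy of MainViewModel, enough to exercise the notification pattern.
public class FileNameViewModel : INotifyPropertyChanged
{
    string _fileName;
    public string FileName
    {
        get { return _fileName; }
        set
        {
            if (_fileName != value)
            {
                _fileName = value;
                OnPropertyChanged(() => FileName);
            }
        }
    }

    protected void OnPropertyChanged(string propertyName) =>
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));

    protected void OnPropertyChanged<T>(Expression<Func<T>> expression) =>
        OnPropertyChanged(((MemberExpression)expression.Body).Member.Name);

    public event PropertyChangedEventHandler PropertyChanged;
}

public static class NotificationDemo
{
    // Returns the number of PropertyChanged events raised for FileName.
    public static int CountFileNameEvents()
    {
        var vm = new FileNameViewModel();
        int raised = 0;
        vm.PropertyChanged += (s, e) => { if (e.PropertyName == "FileName") raised++; };
        vm.FileName = "notes.txt"; // new value: raises the event
        vm.FileName = "notes.txt"; // same value: guarded by the setter, no event
        return raised;
    }
}
```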

Also make a new folder in the project named Views. The editor view will contain a text box for editing and a save button. Within the Views folder create a new UserControl named EditorView.xaml.

<UserControl
    x:Class="TextEditor.Views.EditorView"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="using:TextEditor.Views"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d"
    d:DesignHeight="300"
    d:DesignWidth="400">

    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto" />
            <RowDefinition Height="*" />
            <RowDefinition Height="Auto" />
        </Grid.RowDefinitions>

        <TextBlock Text="{Binding FileName}" />
        <TextBox Grid.Row="1" Text="{Binding Text, Mode=TwoWay}" AcceptsReturn="True" />
    </Grid>
</UserControl>

If you press the debug button the application will immediately run on your machine. Before we try to run it on the HoloLens let's add the save and load functionality. The FileOpenPicker and the FileSavePicker must be used to get a reference to the file. The only information on the file that the program will have access to is the file's name, not its path. The file pickers need a list of the extensions of the file types that the application wants to handle. When called, the file picker will display, the user selects a file, and a StorageFile is returned to our program.

public Dictionary<string, IList<string>> FileTypeList { get; private set; }

public MainViewModel()
{
    FileTypeList = new Dictionary<string, IList<string>>();
    FileTypeList.Add("Text Document", new List<string>() { ".txt", ".text" });
    FileTypeList.Add("HTML Document", new List<string>() { ".htm", ".html" });
}

async void  SaveFile()
{            
    FileSavePicker fileSavePicker = new FileSavePicker();
    foreach(string key in FileTypeList.Keys)
    {
        fileSavePicker.FileTypeChoices.Add(key, FileTypeList[key]);
    }
    StorageFile file = await fileSavePicker.PickSaveFileAsync();
    if(file != null)
    {
        CachedFileManager.DeferUpdates(file);
        await FileIO.WriteTextAsync(file, Text);
        FileUpdateStatus status = await CachedFileManager.CompleteUpdatesAsync(file);
        FileName = file.Name;
    }
}

async void OpenFile()
{
    FileOpenPicker fileOpenPicker = new FileOpenPicker();
    foreach (string key in FileTypeList.Keys)
    {
        foreach(string extension in FileTypeList[key])
        {
            fileOpenPicker.FileTypeFilter.Add(extension);
        }
    }
    StorageFile file = await fileOpenPicker.PickSingleFileAsync();
    if (file != null)
    {               
        Text = await FileIO.ReadTextAsync(file);
        FileName = file.Name;
    }
}

It's also necessary to bind the buttons in the interface to commands so that the view model's functionality can be invoked from the user interface. The XAML for EditorView.xaml will need to be edited to add the bindings.

 <Button Content="Save" HorizontalAlignment="Stretch" Command="{Binding SaveFileCommand}" />
<Button Content="Load" Grid.Column="1" HorizontalAlignment="Stretch" Command="{Binding OpenFileCommand}" />
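The command objects behind SaveFileCommand and OpenFileCommand aren't shown above, so here is one common way to implement them: a minimal ICommand wrapper (the RelayCommand name and shape are my own, not taken from the article's download).

```csharp
using System;
using System.Windows.Input;

// Minimal ICommand wrapper. The view model exposes instances of this class
// (e.g. SaveFileCommand = new RelayCommand(SaveFile)) for the XAML bindings.
public class RelayCommand : ICommand
{
    readonly Action _execute;
    readonly Func<bool> _canExecute;

    public RelayCommand(Action execute, Func<bool> canExecute = null)
    {
        if (execute == null) throw new ArgumentNullException(nameof(execute));
        _execute = execute;
        _canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter) =>
        _canExecute == null || _canExecute();

    public void Execute(object parameter) => _execute();
}
```

In MainViewModel the properties would then look something like `public ICommand SaveFileCommand { get; private set; }`, initialized in the constructor.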

Run the program on your local machine, add a few lines, and save the file. If you save the file to your OneDrive account it will be available from your other devices. While all of these devices will allow the application to run, as we will see, some of them might not already have the file picking services installed. The "Write Once Run Everywhere" dream isn't yet realized, but it is a lot closer. The differences between machines will come down to capabilities and the presence or absence of services, and it will be possible to build applications across devices that share a lot more source and logic.

Deployment Across Devices

Most of what follows in this section is how to deploy to the various device types that support UWP applications. If you are not interested in these other devices you may want to skip down to the section on HoloLens deployment. You might be deploying to the device on which you are programming, to a device attached to your development machine with a USB cable, or to a device over a network connection. There are also different processor architectures that you could be targeting, and both the target device and the architecture need to be specified. There is a panel in your toolbar that lets you select a Debug or Release build and whether you are targeting an x86, x64, or ARM based processor. There is also a drop down on the Run button that you can use to select your intended deployment target.

Processor selection drop down and Run button

 

  • Local Machine - Run the program on the machine on which you are developing
  • Simulator - Run the program on the local machine inside a simulated environment, useful for testing input methods (such as touch) that your development machine may lack
  • Remote Machine - A machine connected to the same network. Its IP address or name will be needed
  • Device - A device connected to your local machine with a USB cable. Could be a Windows 10 Mobile device or a HoloLens
  • HoloLens Emulator - A software emulator for the HoloLens
  • Mobile Emulator xxxx - An emulator for a Windows 10 Mobile device. A number follows the name identifying the Windows build supported by the emulator

 

If you select Remote Machine as the target you will need to set the IP address or name of the target device. To do this, right-click on your project in the Solution Explorer and select Properties. On the Debug tab you can set the remote machine's address or name. This field will only be enabled when Remote Machine is selected as the deployment target.

Windows 10 Mobile Deployment

If you have a Windows 10 Mobile device you can also deploy the application to the phone with a few changes in settings. In the deployment settings dropdowns, change the processor architecture from x86 to ARM. If you've never deployed to your Windows 10 Mobile device before you'll need to change a setting on the device: navigate to Settings, "Update & Security", then "For developers", and under "Use developer features" select "Developer mode". Connect the phone to the computer using a USB cable and run the application. It will deploy to the phone and run.

Windows 10 Mobile  Deployment Settings

Xbox One Deployment

In the Xbox One app store there is an application called "Dev Mode Activation". Download and run the application; it will prepare your Xbox One for development deployments. You will need to have 30 gigs of drive space free for this application. When you enable development mode through the application it will reboot the Xbox One. It's best to view this as though another instance of the operating system is running from a different partition: the contents of your Xbox One that are available in Dev Mode are isolated from those on the main partition. Once the Xbox reboots, select "Dev Mode Home". (Note: from here on I had a USB keyboard connected to the Xbox, since it was easier to edit some of the text settings with a real keyboard instead of the on-screen keyboard.) Take note of the IP address of the Xbox; it will be displayed on this screen. In Visual Studio right-click on your project and select "Properties". In the "Debug" tab ensure that "Remote Machine" is selected and enter the IP address of your Xbox here. Set the "Platform" setting to x86. When you attempt to run the program you'll receive a prompt informing you that you need a code to pair the device with Visual Studio. Select "Pair with Visual Studio" from the Dev Home console to get a code and enter the code that displays on your Xbox's screen. The application will deploy and run. However, you'll notice that the FileOpenPicker and FileSavePicker don't respond; the Xbox One doesn't yet support the file pickers. If it's included in the platforms that you plan to support you'll want to take a look at the APIs that are not yet supported (see here) and have your application gracefully disable or substitute the affected functionality.
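One way to structure that graceful fallback is to hide the picker behind a small service interface and choose an implementation based on a capability probe. On UWP the probe could be ApiInformation.IsTypePresent("Windows.Storage.Pickers.FileSavePicker"); the sketch below keeps the probe injectable so the selection logic itself is plain, portable C# (all of the type names here are illustrative, not from any Microsoft API).

```csharp
using System;

// Abstraction over "how the user picks a save location".
public interface IFileSaveService
{
    string Describe();
}

// Used where the OS provides a file picker (desktop, mobile, ...).
public class SystemPickerService : IFileSaveService
{
    public string Describe() => "system file picker";
}

// Used on platforms such as the Xbox One where the picker isn't available yet.
public class InAppPickerService : IFileSaveService
{
    public string Describe() => "in-app fallback picker";
}

public static class FileSaveServiceFactory
{
    // pickerAvailable is the capability probe, e.g. a wrapper around
    // ApiInformation.IsTypePresent on UWP.
    public static IFileSaveService Create(Func<bool> pickerAvailable) =>
        pickerAvailable() ? (IFileSaveService)new SystemPickerService()
                          : new InAppPickerService();
}
```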

Windows IoT Core Deployment

At the time of this writing there are several embedded devices that can run Windows IoT Core, including the Raspberry Pi 2 and 3, the MinnowBoard MAX, and some other boards. The setup instructions for these boards usually involve inserting the memory card into your computer and running the Windows IoT Dashboard. It has an option to set up a new device: select your device type and the memory card, and select "Download and Install".

Windows IOT Dashboard

After the memory card has been written, insert it into your device, connect a network cable, and allow it to boot up. Note that the first time the device boots may take a long time. After the device boots it should display, among other things, its IP address. This information will be needed for deployment. You can also get the device's IP address from the Windows IoT Dashboard.

Windows IOT Device List

Deploying the application will be much like it was for the Xbox One. Right-click on the project, select Properties, and select the Debug tab. Change the "Target device" to "Remote Machine", set the "Remote Machine" setting to the IP address of your Windows IoT device, and set the "Authentication Mode" setting to "Universal (Unencrypted Protocol)". Run the application and it will deploy to and run on the Windows IoT device. You'll be able to successfully edit text, but as on the Xbox One the FileSavePicker and FileOpenPicker don't appear to work. A solution would involve first detecting whether the machine on which the program is running has access to the file pickers and displaying our own if none are offered by the operating system. I will talk about how to do this in a future post.

Windows IOT

My deployment settings for targeting a Raspberry Pi

Deploying to the HoloLens

Deployment to the HoloLens will be very much like deployment to the other devices. You'll need to get the IP address of your HoloLens: open the Settings application, select "Network & Internet", and the IP address will be displayed when you select "Advanced Options". Update the IP address in the project's properties. Navigate back to the main page of the Settings app and select "Update & Security", then select the "For developers" tab and turn "Developer mode" on. You can now run the program; Visual Studio will deploy to the device and the application will begin to run. Initially you'll notice that the FileOpenPicker and the FileSavePicker don't work, but you are prompted with a dialog informing you that with the installation of an additional program they will be available. Open the device's app store, then search for and install OneDrive. After it's installed, if you return to the application the pickers will respond.

 

The Editor running in the Emulator

Building an Application with a Holographic View

The first application I demonstrated didn't take advantage of the HoloLens's ability to overlay rendered objects on the real world. This next application will do just that. Visual Studio 2015 Update 2 will still be needed, but we are going to start with Unity. At the time of this writing the build of Unity needed for HoloLens development is still a beta. If you haven't installed it already you'll want to get the latest build, which can be found at Unity's site. You'll need to install the Unity editor and the UWP runtime from this site. Once they are both installed, start Unity. If you already have a Unity account you'll need to log in; if not, creating an account is easy and free. Once logged in, select the option to create a new project.

New Unity Project Dialog

Ensure that the 3D option is enabled and select "Create Project".

In the next sections I will be referring to areas of the Unity editor by name. The image below will help in identifying where the areas are. Note that this placement assumes that you have the default layout and haven't changed the position of the UI elements.

  • Hierarchy - Shows the objects that make up a scene. You might think of a scene as being like a level in a game.
  • Project - Shows the folders that make up the project on which you are working. Selecting a folder will show which assets are in it
  • Assets - Individual resources such as scripts, graphics, and scenes that are part of your project
  • Inspector - Shows properties for a selected object. An object can be selected in any of the other panels to see more details and settings about it in this window.
  • Scene - A layout of either the current scene being manipulated or a prefabricated object.

The names of the panels in Unity

Once the project is created there are some settings in Unity that need to be changed for the HoloLens. Open the "Edit" menu, select "Project Settings" and then "Player". The player settings will appear in the Inspector. Ensure that the "Virtual Reality Supported" checkbox is checked and that "Windows Holographic" displays as a supported headset; if you don't see "Windows Holographic", click on the plus button below it to add it. Settings on the camera also need to be updated. In the Hierarchy click on Main Camera; the camera properties will appear in the Inspector. Under "Transform" set the X, Y, and Z position settings all to zero. Under "Camera" set the "Clear Flags" to "Solid Color" and set the "Background Color" to black. This will cause the HoloLens to render black where there are no objects, and on the HoloLens black is the same as transparent.

If we were to run the application now there would be nothing to see, so let's put some objects in the world space. For now we'll use basic geometry objects that are built into Unity. In the Hierarchy, right-click in a blank area and select "3D Object" and then "Cube". Click on the Cube in the scene hierarchy to edit its properties and set its Position to (0, 0.5, 4). This will place the cube in front of the camera at a position that will appear to be 4 meters away. To make sure that it is positioned properly, press the Play button to preview the scene within Unity.

Preview within Unity

If all looks well, the next step is to export the project from Unity to Visual Studio. Open the File menu and select Build Settings. Select Windows Store from the list of platforms, set the SDK version to "Universal 10", and select "Build". If you've worked with Unity projects before, this is the point at which Unity would usually make a build of the application that is ready to run. For this project it will instead cause a Visual Studio project to be created; it's from Visual Studio that the project will be deployed. When you select "Build" you'll be prompted for a folder in which to write the project. I usually create a folder named App as a child of the project folder and select it. Unity will build the project and, when it's done, open the file explorer to show the folder that contains the new project. Open the folder to find the project's solution file and open it. You can deploy it to the HoloLens using the same steps that were used to deploy to the HoloLens previously. When the application begins running you should see the cube floating in front of you. It's possible to walk around the cube and look over it; the HoloLens will take care of tracking your movements and adjusting the camera accordingly. This works without us having written any code. Let's try something a little different.

Unity, GameObjects, and Code

Objects within Unity are made up of a number of components. The components that make up an object are shown in the Inspector when the object is selected. We are going to make a project that is more interactive: it will be able to recognize the tap gesture and a verbal command, and gaze will play a role in the interaction. We'll make a simple game in which planes are flying about and we can aim and shoot at them. In Unity open the File menu and select New Project, then enable VR support for UWP applications as you did before.

We need to make two visuals to represent the missile and the plane. Ideally these would be made from external assets imported into Unity; I'm going to use the geometry objects built into Unity to create placeholders that we can replace later. Let's make the missile first. In the Hierarchy panel, right-click in a blank area and select "Create Empty". This empty object will serve as the common parent for all of the objects that make up the missile. With the new game object selected, use the Inspector to rename it to Missle. We will want to take advantage of the physics engine built into Unity, so in the Inspector panel click on "Add Component", select "Physics", and then "Rigidbody". Ensure that the object's position is at (0,0,0), that the rotation angles are zeroed out, and that the scale for the X-axis, Y-axis, and Z-axis are all set to 1. Right-click on the newly renamed empty game object and select "3D Object" and then "Cylinder"; this is going to form the body of the missile. Ensure the cylinder's position is (0,0,0), rotate only the X-axis by 90 degrees, and set the scale for the Y-axis to 2. Right-click on the Missle, create a new sphere, and set its position to (0,0,2); this will be the head of the missile. Create a cube within the missile, set its Z-position to -1.5, set its X-scale to 1.5, its Y-scale to 0.1, and leave the Z-scale at 1. Create another cube and set its Z-position to -1.5; its X-scale needs to be set to 0.1 and its Y-scale to 1.5. The end result of this construction is pictured below.

 

Repeat the same process for making the plane; create an empty object and add 3D objects to it to make an approximation of a plane. I've listed the child objects that will compose the plane and their settings in the chart below.

Shape      Position (X, Y, Z)   Rotation (X, Y, Z)   Scale (X, Y, Z)
Cylinder   0, 0, 0              90, 0, 0             1, 2.5, 1
Cube       0, 0, 0              0, 0, 0              5, 0.1, 1
Cube       0, 0, -2             0, 0, 0              2.5, 0.1, 1
Cube       0, 0.75, -1.8        0, 0, 0              0.1, 0.7, 1
Sphere     0, 0, -2.5           0, 0, 0              1, 1, 1
Sphere     0, 0, 2.5            90, 0, 0             1, 1, 1

If you have created the objects with the above modifications you should end up with something that looks like the following.

The constructed plane model

Adding the Code

In the "Project" panel at the bottom of the screen click on the "Assets" folder. In the pane to the right of the folder list, right-click in a blank area and select "Create" then "C# Script". Name this new script "PlaneBehaviour" and double-click on it to open it in the editor. Most classes that you create within Unity will inherit from the base class MonoBehaviour. While on other platforms you might use method overriding when implementing a derived class, within Unity the same task is done by simply declaring a method with the expected name and parameters; it doesn't need to use the override keyword. Any initialization that you need to do within your class is done in the Start() method, and a method named Update() is called once per frame; most of our game logic will go there. An empty Start() and Update() will already be present in the new class, and there are other methods that could be added.

The PlaneBehaviour class is going to keep track of whether the plane is flying or crashing and how long it has been doing so, and will remove the plane once a certain amount of time passes.

using UnityEngine;
using System;
using System.Collections;

public class PlaneBehaviour : MonoBehaviour {

	public enum PlaneState
	{
		Flying,
		Crashing
	}

	public PlaneState State; 
	public double CrashTime = 5.5d;
	public float TimeToLive = 30;

	// Use this for initialization
	void Start () {
	}
	
	// Update is called once per frame
	void Update () {
		if (State == PlaneState.Crashing) {
			CrashTime -= Time.deltaTime;	
			if (CrashTime < 0)
				Destroy (this.gameObject);	
		} else {
			TimeToLive -= Time.deltaTime;
			if (TimeToLive <= 0)
				Destroy (this.gameObject);
		}
	}
}
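The Update() countdown above (subtracting the frame's elapsed time each update until a timer crosses zero) can be exercised outside of Unity with an ordinary loop. The sketch below is my own stand-in for the pattern, with a fixed step substituting for Unity's Time.deltaTime; CountdownDemo and UpdatesUntilExpiry are names of my own invention, not Unity APIs.

```csharp
using System;

class CountdownDemo
{
    // Mirror of the plane's time-to-live logic: subtract the elapsed frame
    // time on each update and count how many updates pass before expiry.
    public static int UpdatesUntilExpiry(float timeToLive, float deltaTime)
    {
        int updates = 0;
        while (timeToLive > 0)
        {
            timeToLive -= deltaTime; // what Update() does with Time.deltaTime
            updates++;
        }
        return updates;
    }

    static void Main()
    {
        // At roughly 60 updates per second, a 30-second lifetime
        // survives on the order of 1800 frames.
        Console.WriteLine(UpdatesUntilExpiry(30f, 1f / 60f));
    }
}
```

In the real component the per-frame step varies, which is exactly why the code accumulates Time.deltaTime instead of counting frames.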

For our missile we'll have similar needs. Make another C# script named MissleBehavour (in Unity the file name must match the class name in the listing below). If a missile comes in contact with a plane it is going to give in to the influence of gravity, and if by chance it happens to hit another plane it will cause that one to crash too. Unity will take care of detecting collisions for us; we only need to decide how to react to them. In addition to the Start() and Update() methods we will also have an OnCollisionEnter(Collision col) method. The argument this method receives is information on the object that collided with our missile. We will check to see whether the other object is a plane. If it is, then gravity will be allowed to affect the missile's movements. To do this we will get a reference to the GameObject's Rigidbody component and enable gravity on it.

using UnityEngine;
using System.Collections;

public class MissleBehavour : MonoBehaviour {
	public float Lifetime = 5;

	// Use this for initialization
	void Start () {
	}
	
	// Update is called once per frame
	void Update () {
		Lifetime -= Time.deltaTime;
		if (Lifetime < 0)
			Destroy (this.gameObject);	
	}

	void OnCollisionEnter(Collision col)
	{
		var plane = col.gameObject.GetComponent<PlaneBehaviour> ();
		if (plane != null) {
			var rigidBody = GetComponent<Rigidbody> ();
			rigidBody.useGravity = true;
		}
	}
}

We want the plane to go into a crashing state when it collides with a missile. Go back to the PlaneBehaviour class and add an OnCollisionEnter method to it. If the plane is already crashing the method should do nothing and immediately return. Otherwise we'll check to see what it collided with. If the other colliding body is a plane, do nothing (the physics engine might cause the plane to start tumbling, but that's fine). If the other body is a missile, flag the plane's current state as crashing and allow it to give way to gravity. I'm going to add a sound effect of engines to the plane too, but if the plane starts to crash I want that sound turned off.

void OnCollisionEnter(Collision col)
{
	if (State == PlaneState.Crashing)
		return;
	var missle = col.gameObject.GetComponent<MissleBehavour> ();
	if (missle != null) {
		State = PlaneState.Crashing;
		GetComponent<Rigidbody> ().useGravity = true;
		AudioSource source = gameObject.GetComponent<AudioSource> ();
		if (source != null)
			source.enabled = false;
	}
}

To add a sound effect to the plane I need to find an appropriate sound file on my computer (Unity supports a number of different audio file formats) and drag it from the file explorer into the Assets folder. Selecting the Plane from the Hierarchy, I can click Add Component in the Inspector, select Audio, and then AudioSource. To assign the AudioSource a sound, click-and-drag the sound file from the Assets panel to the AudioClip setting in the Inspector.

Generating Planes and Missiles

Now that we've defined our plane and missile we need to save them in a form that is reusable. In their current form, if we were to run the program there would be a single plane and a single missile hanging in the air not doing anything. I want to be able to instantiate copies of either and assign them different velocities, and each instance needs to maintain its own state. To do this we need to convert the Plane and the Missle to prefabs. Click-and-drag the game object that serves as the root of the plane from the Hierarchy panel to the Assets panel. Do the same for the Missle. After verifying that they both appear in Assets you can delete them from the scene.

Create a new C# script named PlaneGeneratorBehaviour. This behaviour is going to have two settings that can be changed: the amount of time to wait between instantiating new planes, and the GameObject prefab that this class is going to instantiate. The class will also need to track the amount of time left until the next plane can be generated, and we will need a random number generator for positioning new planes. The random number generator is initialized within the class's Start() method.

public float ProductionCooldownTime  = 1;
public GameObject PlanePrefab;
private float _cooldownTimeRemaining ;
System.Random _random;

// Use this for initialization
void Start () {
	_random = new System.Random ();
}

For each Update() cycle the class will decrement the _cooldownTimeRemaining field by the amount of time that has passed since the last time that Update() ran (found in Time.deltaTime). If _cooldownTimeRemaining is above zero there is nothing to do and the method immediately returns. If it becomes less than or equal to zero then it's time to generate a new plane. The field's value is reset to begin the next count down and a copy of the plane is made with Unity's GameObject Instantiate(GameObject source) method.

The plane's starting position is defined in a Vector3 instance named newPosition. It is instantiated with random values that fall within a range. The plane is then rotated about its vertical axis by a random number of degrees to choose its direction. Using Unity's built-in physics engine, we grab a reference to the plane's Rigidbody component and apply a force in the same direction that the plane is facing to give it momentum.

using UnityEngine;
using System.Collections;

public class PlaneGeneratorBehaviour : MonoBehaviour {

	public float ProductionCooldownTime  = 1;
	public GameObject PlanePrefab;
	private float _cooldownTimeRemaining ;
	System.Random _random;

	// Use this for initialization
	void Start () {
		_random = new System.Random ();
	}
	
	// Update is called once per frame
	void Update () {
	
		_cooldownTimeRemaining -= Time.deltaTime;
		if (_cooldownTimeRemaining > 0)
			return;
		_cooldownTimeRemaining = ProductionCooldownTime;

		Vector3 newPosition = new Vector3 ();
		newPosition.x = (float) _random.NextDouble () * 20 - 10;
		newPosition.y = (float) _random.NextDouble () * 10 - 5;
		newPosition.z = (float) _random.NextDouble () * 20 - 10;

		var newPlane = Instantiate (PlanePrefab);
		newPlane.transform.position = newPosition;
		Vector3 newAngle = new Vector3 (0, (float)_random.NextDouble () * 180, 0);
		newPlane.transform.Rotate (newAngle);

		Rigidbody _body = newPlane.GetComponent<Rigidbody> ();
		_body.AddForce (Quaternion.Euler(newAngle) * new Vector3(0,0,90));
	}
}
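The Quaternion.Euler(newAngle) * new Vector3(0,0,90) product above is simply a rotation of a forward-pointing force vector about the vertical (Y) axis. The same math can be sketched in plain C# outside of Unity; YawDemo and RotateYaw below are names of my own invention, not Unity APIs, and the formula follows Unity's left-handed, clockwise-positive yaw convention.

```csharp
using System;

static class YawDemo
{
    // Rotate (x, y, z) about the Y-axis by 'degrees', matching Unity's
    // left-handed convention where a positive yaw turns forward toward +X.
    public static (double x, double y, double z) RotateYaw(
        (double x, double y, double z) v, double degrees)
    {
        double r = degrees * Math.PI / 180.0;
        double cos = Math.Cos(r), sin = Math.Sin(r);
        return (v.x * cos + v.z * sin, v.y, -v.x * sin + v.z * cos);
    }

    static void Main()
    {
        // A plane spawned with a 90-degree yaw gets its push along +X.
        var f = RotateYaw((0, 0, 90), 90);
        Console.WriteLine($"{f.x:F1}, {f.y:F1}, {f.z:F1}");
    }
}
```

Letting the physics engine apply this rotated force means the push always lines up with whatever random heading the generator picked.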

Save the script. Create an empty game object in the Hierarchy and drag PlaneGeneratorBehaviour from the Assets panel onto the empty game object that you just created. If you click on the Play button above the scene you will see a preview of the scene in action.

Preview of the scene

We need one more script for user interaction. From the earlier sample you saw that the camera automatically follows the movements of the user. We will look at the position and orientation of the camera so that when the user fires, a missile appears with the same orientation and moves forward. This script will make use of the Missle prefab. Two methods of firing will be provided: the user can either make the AirTap gesture or say the word "fire."

The HoloLens supports two modes of voice recognition. One mode is to take dictation and convert whatever the user is saying into text. The other mode has the HoloLens listen for a list of pre-defined phrases and fire an event when it hears one of them. We will be using the latter of the two modes. To use this mode we instantiate a KeywordRecognizer. It expects an array of strings containing the phrases it will recognize; for our program this array will contain one element, the string "fire". Recognition of this word will result in a flag being set indicating that the user has requested that a missile be fired.

For recognition of the air tap gesture we will use the InteractionManager. This class will give information on the movements of the user's hands. The only information of concern to us is that the user has just made the air tap gesture. When detected the flag will be set indicating that the user has requested the firing of the missle.

// KeywordRecognizer lives in the UnityEngine.Windows.Speech namespace and
// InteractionManager in UnityEngine.VR.WSA.Input; both need using directives.
public float CoolOffTime = 2;
public GameObject MisslePrefab;
private float CoolOffTimeRemaining = 0;
private bool IsFirePressed = false;

KeywordRecognizer _keywordRecognizer;

void Start () {
	InteractionManager.SourcePressed += (e) => {
        if(e.pressed)
		    IsFirePressed = true;
	};

	_keywordRecognizer = new KeywordRecognizer (new string[] { "fire" });
	_keywordRecognizer.OnPhraseRecognized += (o) => {
		if(o.text=="fire")
		{
			IsFirePressed = true;
		}
	};
	_keywordRecognizer.Start ();
}

A request to fire doesn't necessarily result in an actual firing; I only want to allow missiles to be fired once every few moments. If we are still in a cool-off period the Update() method will immediately return. If the cool-off period is over we check to see if the firing flag is set. If it is, the flag is cleared, the cool-off time is reset, and a missile is instantiated. We look at the Camera object, get its position and rotation, and apply them to the missile. Finally the missile's Rigidbody component is given a push forward.

void Update () {
	CoolOffTimeRemaining -= Time.deltaTime;
	if (CoolOffTimeRemaining > 0)
		return;
	if ((IsFirePressed)) {
		IsFirePressed = false;
		CoolOffTimeRemaining = CoolOffTime;
		GameObject missleObject = Instantiate (MisslePrefab);

		Camera camera = GetComponent<Camera> ();
		missleObject.transform.position = camera.transform.position + camera.transform.forward * 2.0f;
		missleObject.transform.rotation = camera.transform.rotation;
		missleObject.gameObject.GetComponent<Rigidbody> ().AddForce (camera.transform.forward * 250);
	}
}

This script must be dragged onto the MainCamera object. You will also need to drag the missile prefab to the MisslePrefab property that shows up on the component. Once you've done this you can create another Visual Studio build. Before deploying the build, in Visual Studio double-click on the file named Package.appxmanifest. The program will need permission to use the device's microphone, and we request permission using the manifest. When the manifest opens select the Capabilities tab and scroll down to the Microphone capability. Ensure that it is checked.
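Checking the box in the manifest designer edits the underlying XML of Package.appxmanifest. For reference, a sketch of what the relevant fragment looks like with the microphone enabled (surrounding elements omitted):

```xml
<Capabilities>
  <!-- Required for KeywordRecognizer / voice commands -->
  <DeviceCapability Name="microphone" />
</Capabilities>
```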

Importing Assets

3D modelling is a speciality of its own, and I won't cover how to use 3D modelling tools here. There do exist sites that let modellers sell their creations with different usage licenses attached. To demonstrate using external assets I went to TurboSquid and looked for models of missiles priced between 0.00 USD and 1.00 USD.

Browsing models on Turbosquid

I selected and downloaded a missile model. Models are available in a variety of formats; I chose the OBJ file and dragged it into my Unity Assets. Once it was part of the assets I dragged it into the scene. Some adjustments to the scale were made until it was a size that made me happy. The MissleBehavour script is added to it by clicking and dragging. Using the Add Component button in the Inspector I added a Rigidbody component (under Physics) and set its mass to 0.24. Also from Physics I added a Capsule Collider, and adjusted its size and position until it just enveloped the missile. When I was happy with how everything looked I renamed the instance SmartMissle and dragged it back to Assets to create a new prefab. Finally I replaced the missile in the PlayerBehaviour script (on the camera object) with the new SmartMissle. Running the program again I see the higher-detailed missile in place of the previous one. If you don't have a HoloLens and would like to see what this program looks like, I've uploaded a recording to YouTube that can be accessed here.

Closing Remarks

There is a lot more to cover on developing for the HoloLens, including concepts and techniques that are part of Unity and portable to the other platforms that Unity supports, and many things that are specific to the HoloLens. If you are new to HoloLens development I would suggest three areas into which a deep dive will be helpful. A deeper look into developing UWP applications, while not HoloLens specific, will teach you concepts, code, and APIs that are applicable to all of the UWP devices including the HoloLens. Digging more into Unity will be helpful in learning how interactions might be built for the HoloLens. And Microsoft also has an area of their site known as the Holographic Academy that has a lot of information on HoloLens-specific Unity functionality, such as scanning an environment to know about the objects within it.

I plan to write a lot more about UWP development including the HoloLens. If you have interest in specific areas of UWP and would like for me to write about them in the future be sure to leave a comment below.

I would also like to give thanks to K.A.P. whose encouragement motivated me to publish this and a number of other articles that will be published soon.

History

  • 2016 June 16 - Initial Publication

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

About the Author

Joel Ivory Johnson
Software Developer Razorfish
United States United States
I attended Southern Polytechnic State University and earned a Bachelors of Science in Computer Science and later returned to earn a Masters of Science in Software Engineering.

For the past few years I've been providing solutions to clients using Microsoft technologies for web and Windows applications.

While most of my CodeProject.com articles are centered around Windows Phone, it is only one of the areas in which I work and one of my interests. I also have an interest in mobile development on Android and iPhone. Professionally I work with several Microsoft technologies including SQL Server, Silverlight/WPF, ASP.NET, and others. My recreational development interests are centered around Artificial Intelligence, especially in the area of machine vision.



Twitter:@J2iNet

