Posted 22 Mar 2009
Perceptor: An artificially intelligent guided navigation system for WPF

22 Mar 2009 · LGPLv3 · 12 min read
Knowledge acquired by a neural network is used to predict the element to which a user may intend to navigate.

Perceptor logo



Perceptor is an artificially intelligent guided navigation system for WPF. Perceptor tracks a user's behaviour while he or she interacts with the user interface. Changes to the DataContext of a host control indicate user navigation behaviour, and induce the training of a neural network. Knowledge acquired by the neural network is used to predict the IInputElements to which a user may intend to navigate. This accelerates interface interaction, improves user efficiency, and allows for dynamic and evolving business rule creation.


Last year (2008), I was asked to implement some business rules for an ASP.NET application. Part of this application was designed to allow high volume data entry, and used a tabbed interface with which staff would navigate, manually validate, and amend information. The rules I implemented were designed to streamline this process. At the time it struck me that hardwiring the behaviour of a user interface, based on business procedures, was too rigid. The way people work changes, and the way an application is used varies from user to user. Moreover, refinement of such rules over time leads to increased maintenance costs, and to the retraining of staff to handle new, improved application behaviour.

I envisioned a system where we could let users define the behaviour simply by using it; a system that could learn how to respond by itself. To this end, this article and the accompanying code are provided as a proof of concept.

A Neural Network Driven Interface

Even though we have at our disposal terrific technologies such as WPF to build dynamic and highly reactive interfaces, most interfaces, albeit rich, are in themselves not smart; they employ not even a modicum of AI when responding to user interaction. Perhaps one may liken intelligent interfaces to the flying car: both are much easier to achieve in sci-fi, both represent the next step in the evolution of their technology, and both take a lot of refinement to get right.

I want the interface to know what I want, and to learn about me. But I also want it to do this in a way that doesn't bother me by making poor assumptions, and that is probably one of the biggest challenges. If running out of petrol requires a crash landing, then I'd prefer to remain land bound.

We've all seen how artificial neural networks (ANNs) can be used to do things such as facial and optical character recognition. Indeed, they work well at pattern recognition where well defined training data exists, and it appears that we are able to leverage the same technology, albeit in a different manner, to recognize user behaviour as well. There are, however, a number of challenges, such as dealing with temporally based, progressive training: training data is not predefined; the network is trained as we go. An advantage of using an ANN is that we are able to provide predictions for situations that haven't been seen yet.

Perceptor uses a three-layer neural network, which becomes associated with a host ContainerControl and a DataContext type. In this article we will not be looking at neural networks themselves, as there are already some very good articles here on CodeProject; I recommend Sacha Barber's series on the topic if you are new to neural networks. I will mention, though, that during experimentation it was realised that a future enhancement might be a Long Short-Term Memory (LSTM) implementation. In this prototype we retrain the neural network repeatedly with all inputs in order to learn progressively.
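The retrain-with-all-inputs strategy can be sketched in isolation. The following is a minimal illustration only; `ProgressiveTrainer` and its `trainOne` delegate are hypothetical names standing in for Perceptor's actual network and training routine:

```csharp
using System;
using System.Collections.Generic;

public class ProgressiveTrainer
{
	readonly List<KeyValuePair<double[], double[]>> history
		= new List<KeyValuePair<double[], double[]>>();

	/* trainOne stands in for a single training pass of the network.
	 * Each time the user navigates, we record the observation and
	 * retrain on the entire history, because there is no predefined
	 * training set and earlier patterns must not be forgotten. */
	public void Observe(Action<double[], double[]> trainOne,
		double[] input, double[] expectedOutput)
	{
		history.Add(new KeyValuePair<double[], double[]>(input, expectedOutput));

		foreach (var pair in history)
		{
			trainOne(pair.Key, pair.Value);
		}
	}

	public int ObservationCount
	{
		get { return history.Count; }
	}
}
```

The obvious cost is that training time grows with the history, which is one reason a recurrent approach such as LSTM is listed as a future enhancement.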

Building a flying car with WPF

Perceptor trains a neural network using the state of the DataContext of a host control as input, and a list of IInputElement ids as output. Prediction data and the serialized neural network are saved locally when offline, or remotely on the server when online.

Perceptor overview

Figure: Perceptor system overview.

Perceptor monitors the DataContext of a host control for changes. By doing this, rather than tracking only the state of the controls, we are able to gather more information about how the user is affecting the state of the system. We are able to make inferences based on not only user behaviour but also system behaviour, as the system is capable of modifying the DataContext as a result of an internal or external event. Put another way, if we were to merely track the controls, we would not be able to associate properties that didn't have a visual representation in the interface. By tracking the DataContext we can analyse the structure more deeply, and we can even enhance how we generate the input for our neural network. We can, in effect, drill down into the DataContext to improve the granularity and the quality of Perceptor's predictions.
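To illustrate the kind of change notification involved: a DataContext that implements INotifyPropertyChanged announces property-level changes that an observer can subscribe to. The Employee class below is a hypothetical stand-in for the sample's DataContext type, not Perceptor code:

```csharp
using System.ComponentModel;

/* Hypothetical DataContext type; Perceptor works with whatever
 * type the host control's DataContext happens to be. */
public class Employee : INotifyPropertyChanged
{
	public event PropertyChangedEventHandler PropertyChanged;

	string name;
	public string Name
	{
		get { return name; }
		set
		{
			if (name == value)
			{
				return; /* No change, no notification. */
			}
			name = value;
			var handler = PropertyChanged;
			if (handler != null)
			{
				handler(this, new PropertyChangedEventArgs("Name"));
			}
		}
	}
}
```

An observer subscribed to PropertyChanged sees each genuine change to the model, whether it was caused by the user or by the system itself.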

Input for our neural network is generated by the NeuralInputGenerator. This takes an object exposed by the DataContext property of a control, and converts it into a double[], which can then be used to train or pulse our neural network.

/// <summary>
/// Generates the input for a neural network.
/// </summary>
/// <param name="instance">The object instance that is analysed
/// in order to produce the result.</param>
/// <param name="newInstance">if <c>true</c> 
/// then this is the first time the neural network 
/// has been trained in this session.</param>
/// <returns>The input stimulus for a neural network.</returns>
public double[] GenerateInput(object instance, bool newInstance)
{
	ArgumentValidator.AssertNotNull(instance, "instance");
	var clientType = instance.GetType();

	if (lastKnownType == null || lastKnownType != clientType)
	{
		/* Elided: rebuild the cached property metadata for the new type. */
	}

	var resultSize = propertyCount + 1;
	var doubles = new double[resultSize];

	/* The first index is reserved as an indicator 
	 * for whether this is a new instance. */
	doubles[0] = newInstance ? trueLevel : falseLevel;

	for (int i = 1; i < resultSize; i++)
	{
		var info = propertyInfos.Values[i - 1];
		if (info.PropertyType == typeof(string))
		{
			var propertyValue = info.GetValue(instance, null);
			doubles[i] = propertyValue != null ? trueLevel : falseLevel;
		}
		else if (info.PropertyType == typeof(bool))
		{
			var propertyValue = info.GetValue(instance, null);
			doubles[i] = (bool)propertyValue ? trueLevel : falseLevel;
		}
		else if (!typeof(ValueType).IsAssignableFrom(info.PropertyType))
		{	/* Not a value type. */
			var propertyValue = info.GetValue(instance, null);
			doubles[i] = propertyValue != null ? trueLevel : falseLevel;
		}
	}

	return doubles;
}

Here we examine the provided instance's properties and, using some simple rules, such as whether a property is populated, fill the double[].

The input generated by this method provides us with a fingerprint of our DataContext, and indeed a discrete representation of the interface model. There is an opportunity to refine the NeuralInputGenerator, to increase its recognition of known field types, and even to add child object analysis.
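As one example of such a refinement, recognising numeric field types might involve normalising a value into the activation range used for the boolean levels. This is a speculative sketch, not part of Perceptor; the trueLevel/falseLevel constants and the expected-range parameters are assumptions introduced for illustration:

```csharp
using System;

public static class InputLevels
{
	/* Assumed activation levels; Perceptor's trueLevel and
	 * falseLevel fields may use different values. */
	public const double FalseLevel = 0.0;
	public const double TrueLevel = 1.0;

	/* Maps a numeric property into [FalseLevel, TrueLevel] given an
	 * expected range, clamping out-of-range values. */
	public static double Normalize(double value, double min, double max)
	{
		if (max <= min)
		{
			throw new ArgumentException("max must exceed min");
		}

		var scaled = (value - min) / (max - min);
		if (scaled < 0) scaled = 0;
		if (scaled > 1) scaled = 1;

		return FalseLevel + scaled * (TrueLevel - FalseLevel);
	}
}
```

A graded input like this carries more information than the populated/empty distinction used for strings and reference types.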


The ADO.NET Entity Framework is used to access a table of prediction data associated with a user and a control id. When Perceptor is attached to a host control, it will attempt to retrieve existing prediction data for the user and the particular host control id. It does this by first checking whether the host control has been assigned the Perceptor.PersistenceProvider attached property. If so, Perceptor will attempt to use the provider for persistence. This extensibility point for persisting prediction data can be utilised by implementing the IPersistPredictionData interface.

When the window of the host control is closing, Perceptor will attempt to save its prediction data. In the sample application we associate the prediction data with a user id. The following excerpt from the sample demonstrates how this can be done.

public void SavePredictionData(LearningData predictionData)
{
	log.Debug("Attempting to save prediction data. " + predictionData);

	if (Testing)
	{
		return; /* Don't call the WCF service while black-box testing. */
	}

	var learningUIService = ChannelManagerSingleton.Instance.GetChannel<ILearningUIService>();
	Debug.Assert(learningUIService != null);
	learningUIService.SavePredictionData(testUserId, predictionData);
}

Sample overview

The download includes a sample application, which demonstrates how Perceptor can be used to guide the user to input elements. It displays a list of employee names; when selected, each populates the Employee Details tab and the Boss panel of the application.

Perceptor demo screen shot showing Employee Selection tab
Figure: Opening screen shot of sample application.

Each time a field is modified, causing a modification to the DataContext, the ANN is pulsed, and a candidate input prediction is taken. If the prediction's confidence level is above a predefined threshold, the user is presented with the option to navigate directly to the predicted input control.
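Selecting a candidate prediction from the output layer amounts to taking the strongest activation and comparing it to the threshold. The following is a simplified sketch; the class and method names are illustrative, not Perceptor's actual API:

```csharp
public static class PredictionSelector
{
	/* Returns the index of the output neuron with the highest
	 * activation, or -1 when even the winner does not reach the
	 * confidence threshold, in which case no guidance is shown. */
	public static int SelectConfident(double[] outputs, double threshold)
	{
		int best = -1;
		double bestValue = double.MinValue;

		for (int i = 0; i < outputs.Length; i++)
		{
			if (outputs[i] > bestValue)
			{
				bestValue = outputs[i];
				best = i;
			}
		}

		return bestValue >= threshold ? best : -1;
	}
}
```

The winning index maps back to an IInputElement via the element-to-neuron index table built during initialization.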

An overview of Perceptor's learning process is illustrated below.

Learning Phase

Figure: Learning Phase

Once Perceptor has acquired enough knowledge to make confident predictions, it can be used to navigate to predicted elements.

Predictive Phase

Figure: Predictive Phase

A feature of Perceptor is automatic expansion when the predicted element happens to reside in an Expander. This expansion occurs as soon as a confident prediction is detected.

In the sample application we can see how a confident prediction of an element is highlighted.

Perceptor demo screen shot showing Employee Details tab
Figure: Perceptor guides the user to the next predicted element.

Shifting Control Focus

Deterministic focus shifting in WPF can be tricky. When we call Focus() on a UIElement there is no guarantee that the element will gain focus. That is why this method returns true if it succeeds. In Perceptor we use the FocusForcer class to move focus within the user interface. UIElement.Focus() returns false if either IsEnabled, IsVisible or Focusable are false, and true if focus is shifted. Yet when performed on the same thread that is handling e.g. PreviewLostKeyboardFocus of the currently focused element ϑ, the call will return false as ϑ won't be ready to relinquish focus. Thus we use our FocusForcer and an extension method to perform the change of focus in the background if required. The following excerpt shows how FocusForcer attempts to focus the specified element.

static void FocusControl(UIElement element)
{
	ArgumentValidator.AssertNotNull(element, "element");

	var focusResult = element.Focus();

	if (!focusResult)
	{
		/* Retry at Background priority, once the current 
		 * input event has been processed. */
		element.Dispatcher.Invoke(DispatcherPriority.Background, (Action)delegate
		{
			focusResult = element.Focus();

			if (!focusResult)
			{
				focusResult = element.Focus();
			}

			if (!focusResult)
			{
				log.Warn(string.Format("Unable to focus UIElement {0} " 
					+ "IsVisible: {1}, Focusable: {2}, Enabled: {3}",
					element, element.IsVisible, element.Focusable, 
					element.IsEnabled));
			}
		});
	}
}

When we initialize Perceptor we create an output neuron in the neural network for each IInputElement of the container control.

/// <summary>
/// Initializes Perceptor from a container element. 
/// It is the <code>DataContext</code> of this element
/// that is monitored for changes.
/// </summary>
/// <param name="host">The parent element.</param>
void InitializeFromHost(FrameworkElement host)
{
	ArgumentValidator.AssertNotNull(host, "host");
	this.host = host;

	host.DataContextChanged += OnHostDataContextChanged;

	host.CommandBindings.Add(new CommandBinding(
		NavigateForward, OnNavigateForward, OnCanNavigateForward));
	host.CommandBindings.Add(new CommandBinding(
		NavigateBackward, OnNavigateBackward, OnCanNavigateBackward));
	host.CommandBindings.Add(new CommandBinding(
		ResetLearning, OnResetLearning, OnCanResetLearning));

	outputNeuronCount = 0;

	/* Each IInputElement in the user interface 
	 * gets an output neuron in the neural network. */
	var inputElements = host.GetChildrenOfType<IInputElement>();
	foreach (var inputElement in inputElements)
	{
		inputElementIndexes.Add(inputElement, outputNeuronCount++);
		inputElement.PreviewLostKeyboardFocus -= OnInputElementPreviewLostKeyboardFocus;
		inputElement.PreviewLostKeyboardFocus += OnInputElementPreviewLostKeyboardFocus;
	}

	var window = host.GetWindow();
	if (window != null)
	{	/* We shall save the network when the window closes. */
		window.Closing += window_Closing;
	}
}

Consuming Perceptor

In order to have Perceptor monitor any container control, we use attached properties as shown in the following example.

<TabControl Name="tabControl_Main" Grid.Row="2" VerticalAlignment="Stretch" SelectedIndex="0" 
		LearningUI:Perceptor.PersistenceProvider="{Binding ElementName=rootElement}" />

The PersistenceProvider property is not required, but it exists so that we can customize how the user's prediction data is saved between sessions. In the example download we use the window to transport the prediction data to and from the ILearningUIService WCF service. As this is a hybrid smart client, Perceptor allows the user to work offline if the service is unavailable, and will fall back on persisting the prediction data to the user's local file system if the PersistenceProvider is unavailable or raises an exception. The following excerpt shows the IPersistPredictionData interface.

/// <summary>
/// Provides persistence services for Perceptor.
/// </summary>
public interface IPersistPredictionData
{
	/// <summary>
	/// Saves the prediction data so that it may be loaded 
	/// via <see cref="LoadPredictionData"/>.
	/// </summary>
	/// <param name="predictionData">The prediction data.</param>
	void SavePredictionData(PerceptorData predictionData);

	/// <summary>
	/// Loads the prediction data that has been persisted 
	/// via <see cref="SavePredictionData"/>.
	/// </summary>
	/// <param name="id">The unique id of the prediction data.</param>
	/// <returns>The PerceptorData with the matching id.</returns>
	PerceptorData LoadPredictionData(string id);
}
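A file-based provider might look like the following sketch. This is an illustration only: the real PerceptorData class in the download differs, so a simplified stand-in is declared here, and XML serialization is chosen merely for demonstration:

```csharp
using System.IO;
using System.Xml.Serialization;

/* Simplified stand-in for Perceptor's data class, which carries
 * the serialized network and element indexes. */
public class PerceptorData
{
	public string Id { get; set; }
	public double[] Weights { get; set; }
}

/* A minimal file-system persistence provider, mirroring the
 * IPersistPredictionData shape shown above. */
public class FilePersistenceProvider
{
	readonly string directory;

	public FilePersistenceProvider(string directory)
	{
		this.directory = directory;
		Directory.CreateDirectory(directory);
	}

	string GetPath(string id)
	{
		return Path.Combine(directory, id + ".xml");
	}

	public void SavePredictionData(PerceptorData predictionData)
	{
		var serializer = new XmlSerializer(typeof(PerceptorData));
		using (var stream = File.Create(GetPath(predictionData.Id)))
		{
			serializer.Serialize(stream, predictionData);
		}
	}

	public PerceptorData LoadPredictionData(string id)
	{
		var path = GetPath(id);
		if (!File.Exists(path))
		{
			return null; /* No data yet for this id. */
		}

		var serializer = new XmlSerializer(typeof(PerceptorData));
		using (var stream = File.OpenRead(path))
		{
			return (PerceptorData)serializer.Deserialize(stream);
		}
	}
}
```

Returning null for a missing id lets the caller fall back to a freshly initialized network.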

Perceptor exposes three routed commands:

  • NavigateForward
    Is used to change focus to the next predicted UIElement.
  • NavigateBackward
    Is used to return to the UIElement that previously had focus. When NavigateForward is performed, the current element with focus is placed on a stack.
  • ResetLearning
    Is used to recreate the neural network, so that previous learning is forgotten.
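These commands can be invoked from XAML like any routed command. The following is a sketch that assumes the commands are exposed as static members of Perceptor and that the LearningUI namespace mapping from the earlier example is in scope; the button names are illustrative:

```xml
<Button Name="Button_Back" Content="Back"
        Command="{x:Static LearningUI:Perceptor.NavigateBackward}"
        CommandTarget="{Binding ElementName=tabControl_Main}" />
<Button Name="Button_Forward" Content="Forward"
        Command="{x:Static LearningUI:Perceptor.NavigateForward}"
        CommandTarget="{Binding ElementName=tabControl_Main}" />
```

The CommandTarget directs the routed command at the monitored host control, so the command bindings registered in InitializeFromHost receive it.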

Service Channel Management

In order to manage channels efficiently I have implemented a class called ChannelManagerSingleton. In a previous article I wrote a little about the Silverlight incarnation, so I won't restate things here. I will, however, mention that since then I have produced a WPF version (included in the download) with support for duplex services. Duplex services are cached using the callback instance and service type combination as a unique key. In this way, we are still able to have centralised management of services, even though a callback instance is involved. The following excerpt shows the GetDuplexChannel method in full, and how duplex channels are created and cached.

public TChannel GetDuplexChannel<TChannel>(object callbackInstance)
{
	if (callbackInstance == null)
	{
		throw new ArgumentNullException("callbackInstance");
	}

	Type serviceType = typeof(TChannel);
	object service;
	var key = new DuplexChannelKey { ServiceType = serviceType, CallBackInstance = callbackInstance };

	if (!duplexChannels.TryGetValue(key, out service))
	{	/* Value not in cache, therefore we create it. */
		var context = new InstanceContext(callbackInstance);
		/* We don't cache the factory as it contains a list of channels 
		 * that aren't removed if a fault occurs. */
		var channelFactory = new DuplexChannelFactory<TChannel>(context, "*");

		service = channelFactory.CreateChannel();
		var communicationObject = (ICommunicationObject)service;
		communicationObject.Faulted += OnDuplexChannelFaulted;
		duplexChannels.Add(key, service);
		ConnectIfClientService(service, serviceType);
	}

	return (TChannel)service;
}
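For the dictionary cache above to behave correctly, DuplexChannelKey must provide value equality over the service type and callback instance combination. The following sketch shows one plausible shape; the actual implementation in the download may differ:

```csharp
using System;
using System.Runtime.CompilerServices;

public struct DuplexChannelKey : IEquatable<DuplexChannelKey>
{
	public Type ServiceType { get; set; }
	public object CallBackInstance { get; set; }

	public bool Equals(DuplexChannelKey other)
	{
		/* Reference equality for the callback: each distinct
		 * callback instance gets its own channel. */
		return ServiceType == other.ServiceType
			&& ReferenceEquals(CallBackInstance, other.CallBackInstance);
	}

	public override bool Equals(object obj)
	{
		return obj is DuplexChannelKey && Equals((DuplexChannelKey)obj);
	}

	public override int GetHashCode()
	{
		int hash = ServiceType != null ? ServiceType.GetHashCode() : 0;
		if (CallBackInstance != null)
		{
			/* Identity-based hash, matching the reference equality above. */
			hash = (hash * 397) ^ RuntimeHelpers.GetHashCode(CallBackInstance);
		}
		return hash;
	}
}
```

Implementing IEquatable<T> on the struct also avoids boxing during dictionary lookups.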

Unit Testing WPF with White

Black-box testing can complement your existing unit tests. One advantage of black-box testing that I quite like is that we are testing functionality within a real running environment, and interdependencies are also tested. Another advantage is that tests remain independent of any implementation. For example, during the development of Perceptor I changed much of the implementation, yet I was able to leave my black-box tests alone. In the past I have used NUnitForms for black-box testing. This was my first foray into black-box testing in WPF, and I needed to find another tool because NUnitForms doesn't support WPF. So I decided to give the White project a try. White uses UIAutomation, so it can be used with both Windows Forms and WPF applications.

Getting started with White merely involves referencing the White assemblies and starting an instance of our application in a unit test, as the following excerpt shows.

public void TestInitialize()
{
	var startInfo = new ProcessStartInfo("DanielVaughan.LearningUI.Wpf.exe", 
		/* … */);

	application = Core.Application.Launch(startInfo);
	window = application.GetWindow(DanielVaughan.LearningUI.Window_Main.WindowTitle, 
		/* … */);
}

In order to have Perceptor not attempt to use the WCF during the test, we use an argument to let it know that it is being black-box tested. Once we start the application we use White to get a testable representation of the application.

The test method uses the window instance to locate and manipulate UIElements. Among other things, we are able to set textbox values, click buttons, and switch tabs. It appears that some elements, such as the Expander control, are not yet supported. I was using the rather old release version, and others may be better off acquiring and building the source via a Subversion client.

public void WindowShouldLearnFromNavigation()
{
	var textBox_ApplicationSearch = window.Get<TextBox>("textBox_Search");
	var resetButton = window.Get<Button>("Button_ResetLearning");
	var tabPageSelection = window.Get<TabPage>("TabItem_SelectEmployee");
	var forwardButton = window.Get<Button>("Button_Forward");

	/* … navigation steps elided … */

	Assert.IsTrue(phoneTextBox.IsFocussed, "phoneTextBox should be focused.");
}

Another nicety of black-box testing is that we don't need to worry about creating mocks. There are, of course, disadvantages to black-box testing compared to traditional white-box testing. But there's no reason why we can't use both!

Test results for unit tests
Figure: Test results for unit tests.

Possible Applications

A version of Perceptor could be used in Visual Studio to present the appropriate tool window when a particular designer, with a particular state, is selected. Perceptor could prove especially useful in areas such as mobile phone interfaces, where the user's ability to interact with the interface is inhibited by limited physical input controls. Likewise, people with certain disabilities, who have a limited capacity to manipulate the user interface may also benefit.

Perhaps this kind of predictive UI technology could be classified as a fifth-generation user interface technology (5GUI). This suggestion is based on the way in which programming language classification, in particular 5GL, is defined. The following is an excerpt from the Wikipedia entry.

While fourth-generation programming languages are designed to build specific programs, fifth-generation languages are designed to make the computer solve a given problem without the programmer. This way, the programmer only needs to worry about what problems need to be solved and what conditions need to be met, without worrying about how to implement a routine or algorithm to solve them.

Over time, Perceptor learns how the user interface should behave, removing the need for programmer intervention. Thus the classification 5GUI.


Conclusion

In this article we have seen how Perceptor tracks a user's behaviour while he or she interacts with the user interface, and induces the training of a neural network. We also saw how Perceptor is able to save its prediction data, either locally or remotely. Knowledge acquired by the neural network is used to predict the user's navigation behaviour. This allows for a dynamic and evolving interface not encumbered by rigid, predefined business rules.

Through the application of AI to user interfaces we have a tremendous opportunity to increase the usability of our software. The burden of hardwiring behaviour directly into our user interfaces can be reduced, and rules can be made dynamic and refined over time. By combining predictive intelligence with the visual appeal and richness afforded to us by technologies such as WPF, we are able to move beyond merely reactive UIs and provide a new level of user experience.

I hope you find this project useful. If so, then I'd appreciate it if you would rate it and/or leave feedback below. This will help me to make my next article better.

Future Enhancements

  • Modify the neural network to use Long Short Term Memory or an alternative progressive recurrent learning strategy.


History

March 2009

  • Initial release.


This article, along with any associated source code and files, is licensed under The GNU Lesser General Public License (LGPLv3)


About the Author

Daniel Vaughan
Daniel Vaughan is a Senior Software Engineer at Microsoft.

Previously Daniel was a nine-time Microsoft MVP and co-founder of Outcoder, a Swiss software and consulting company.

Daniel is the author of Windows Phone 8 Unleashed and Windows Phone 7.5 Unleashed, both published by SAMS.

Daniel is the developer behind several acclaimed mobile apps including Surfy Browser for Android and Windows Phone. Daniel is the creator of a number of popular open-source projects, most notably Codon.
