Consuming an ML.NET model in UWP

In this article we'll show, step by step, how to consume a Machine Learning model from ML.NET in a UWP application. In a small sample app we simulate a feedback form with a rating indicator and a text box. The app uses an existing Sentiment Analysis model to validate the assigned number of stars against the assumed (predicted) sentiment of the comment. Here's what it looks like:

[Screenshot: four stars assigned, but a negative comment detected]

A typical Machine Learning scenario involves reading and manipulating training data, building and evaluating the model, and consuming the model. ML.NET supports all of these steps, as we described in our recent articles. Not all of these steps need to be hosted in the same app: you may have a central (or even external) set of applications that create the models, and another set of apps that consume them.

This article describes the latter type of apps, where the model is a ZIP file and nothing but a ZIP file.

[Diagram: the serialized Machine Learning model as a ZIP file]

We started the project by downloading this ZIP file from the ML.NET samples on GitHub. The file contains a serialized version of the Sentiment Analysis model that was demonstrated in this Build 2019 session:

[Screenshot: the Sentiment Analysis sample from the Build 2019 session]

It takes an English text as input, and predicts its sentiment (positive or negative) as a probability. Data-science-wise this is a 'binary classification by regression' solution, but as a model consumer you don't need to know all of this (although it definitely helps to have some basic knowledge).

Configuring the project

Here’s the solution setup. We added the ZIP file as Content in the Assets folder, and added the Microsoft.ML NuGet Package:

[Screenshot: the solution setup, with the model ZIP file as Content in the Assets folder]

This NuGet package contains most but not all of the algorithms. You may need to add more packages to bring your particular model to life (e.g. when it uses Fast Tree or Matrix Factorization).

[Screenshot: the Microsoft.ML NuGet packages]

The model’s learning algorithm determines the output schema and the required NuGet package. The Sentiment Analysis model was built around a SdcaLogisticRegressionBinaryTrainer. Its documentation has all the necessary links:

[Screenshot: the SdcaLogisticRegressionBinaryTrainer documentation]

Deserializing the model

In the code, we first create an MLContext as context for the other calls, and we retrieve the physical path of the model file. Then we call Model.Load() to inflate the model:

// Create the context for all further ML.NET calls.
var mlContext = new MLContext(seed: null);

// Retrieve the physical path of the model file in the app package.
var appInstalledFolder = Windows.ApplicationModel.Package.Current.InstalledLocation;
var assets = await appInstalledFolder.GetFolderAsync("Assets");
var file = await assets.GetFileAsync("sentiment_model.zip");
var filePath = file.Path;

// Deserialize (inflate) the model.
var model = mlContext.Model.Load(
    filePath: filePath,
    inputSchema: out _);

The original model file was saved without schema information, so we used the C# 7 discard ('_') to avoid declaring an unused local variable for the inputSchema parameter.
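If the model had been saved together with its schema, we could have captured that schema instead of discarding it; a minimal sketch:

// Capture the persisted input schema in a local variable.
var model = mlContext.Model.Load(
    filePath: filePath,
    inputSchema: out DataViewSchema inputSchema);

The inputSchema then tells you exactly which columns, with which names and types, the model expects.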

Defining the schema

If the schema is not persisted with the model, then you need to dive into the model's original source code or documentation. Here are the original input and output classes for the Sentiment Analysis model:

public class SentimentData
{
    public string SentimentText;

    // Maps the 'Sentiment' field to the 'Label' column.
    [ColumnName("Label")]
    public bool Sentiment;
}

public class SentimentPrediction : SentimentData
{
    // The predicted sentiment: true for positive, false for negative.
    [ColumnName("PredictedLabel")]
    public bool Prediction { get; set; }
    public float Probability { get; set; }
    public float Score { get; set; }
}

Since our app will not be used to train or evaluate the model, we can get away with a simplified version of this schema. There’s no need for the TextLoader attributes or a Label column:

public class SentimentData
{
    public string SentimentText;
}

public class SentimentPrediction
{
    public bool PredictedLabel { get; set; }

    public float Probability { get; set; }

    public float Score { get; set; }

    public string SentimentAsText => PredictedLabel ? "positive" : "negative";
}
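As an aside: for this type of model the three output columns are related. The Score is the model's raw output, the Probability maps that score into the [0, 1] range, and the PredictedLabel thresholds the probability. Here's a rough sketch of that relationship, assuming the standard sigmoid calibration that logistic regression trainers typically use:

// Rough sketch (our assumption: sigmoid calibration) of how
// Score, Probability, and PredictedLabel relate to each other.
double score = 2.5;                               // raw, uncalibrated model output
double probability = 1 / (1 + Math.Exp(-score));  // sigmoid of the score, here roughly 0.92
bool predictedLabel = probability > 0.5;          // true corresponds to 'positive'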

We recommend that model builders be developer friendly and save the input schema together with the model; it's a parameter of the Model.Save() method. This allows the consumer to inspect the schema when loading the model. Here's what that looks like (screenshots from another sample app):

[Screenshot: inspecting the input schema while loading the model]
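On the producing side this is a one-liner: pass the schema of the training data to Model.Save(). A minimal sketch, where trainingData stands for the IDataView the pipeline was trained on (a name we made up for the example):

// Persist the model together with its input schema.
mlContext.Model.Save(model, trainingData.Schema, "sentiment_model.zip");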

When you have the input schema, you can discover or verify the output schema by creating a strongly typed IDataView and passing its Schema to GetOutputSchema():

// Double check the output schema.
var dataView = mlContext.Data.LoadFromEnumerable<SentimentData>(new List<SentimentData>());
var outputSchema = model.GetOutputSchema(dataView.Schema);

Here's what that looks like in the debugger:

[Screenshot: the output schema in the debugger]
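If you prefer code over breakpoints, you can also enumerate the schema programmatically; a small sketch that dumps every output column to the debug output (it assumes a using System.Diagnostics; directive):

// Each column exposes its name and its data type.
foreach (var column in outputSchema)
{
    Debug.WriteLine($"{column.Name}: {column.Type}");
}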

Inference time

Once the input and output classes are defined, we can turn the deserialized model into a strongly typed prediction engine with a call to CreatePredictionEngine():

private PredictionEngine<SentimentData, SentimentPrediction> _engine;

// Wrap the model in a strongly typed prediction engine.
_engine = mlContext.Model.CreatePredictionEngine<SentimentData, SentimentPrediction>(model);
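One caveat that's easy to overlook: a PredictionEngine is not thread-safe. If your app may predict from more than one thread, serialize the calls; a minimal sketch with a simple lock (the PredictSentiment wrapper is our own, not part of the sample app):

private readonly object _engineLock = new object();

private SentimentPrediction PredictSentiment(string text)
{
    // The engine keeps internal state between calls, so only
    // one thread at a time is allowed to use it.
    lock (_engineLock)
    {
        return _engine.Predict(new SentimentData { SentimentText = text });
    }
}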

The Predict() call takes the input (the piece of text) and runs the entire pipeline (which is a black box to the consumer) to return the result:

var result = _engine.Predict(new SentimentData { SentimentText = RatingText.Text });
ResultText.Text = $"With a score of {result.Score} " +
                  $"we are {result.Probability * 100}% " +
                  $"sure that the tone of your comment is {result.SentimentAsText}.";

The sample app compares the sentiment (positive or negative) of the text in the text box to the number of stars in the rating indicator. If these correspond, then we accept the feedback form:

[Screenshot: five stars and a positive comment - the feedback is accepted]

When the sentiment does not correspond to the number of stars, we treat the feedback as suspicious:

[Screenshot: one star but a positive comment - the feedback is flagged as suspicious]
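The comparison itself takes just a couple of lines. A hedged sketch, assuming a hypothetical RatingControl with a Value property and treating three stars or more as a positive rating:

// Hypothetical validation: high ratings are expected to come with
// positive comments, low ratings with negative ones.
var expectsPositive = RatingControl.Value >= 3;
var isSuspicious = expectsPositive != result.PredictedLabel;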

A ZIP file, an input and an output class, and five lines of code: that's all you need to consume an existing ML.NET model in your app. So far, so good.

A Word of Warning

We had two reasons to place this code in a separate app instead of in our larger UWP-ML.NET-MVVM sample app. The first reason is to show how easy it is to embed and consume an ML.NET Machine Learning model in your app.

The second reason is that we needed a simple sentinel to keep an eye on the evolution of the whole “UWP-ML.NET-.NET Core-.NET Native” ecosystem. The sample app runs fine in debug mode, but in release mode it crashes. Both the creation of the MLContext and the deserialization of the model seem successful, but the creation of the prediction engine fails:

[Screenshot: the release build crashing on the creation of the prediction engine]

ML.NET does not fully support UWP (yet)

The UWP platform targets several devices, each with very different hardware capabilities. The fast startup of .NET applications, their small memory footprint, and their independence from other apps and configurations are achieved by relying on .NET Native. A release build in Visual Studio creates a native app by compiling it together with its dependencies ahead of time and stripping off all the unused code. The runtime does not come with a just-in-time compiler, so you have to be careful (i.e. you have to provide the proper runtime directives) with things like reflection and deserialization (which happen to be two popular techniques within ML.NET). On top of that, there's the sandbox that prohibits calls to some of the Win32 APIs from within a UWP app.

When compiling the small sample app with the .NET Native tool chain, you'll see that it issues warnings for some of the internal utility projects:

[Screenshot: .NET Native compiler warnings for some of the internal utility projects]

In practice, these warnings indicate that 'some things may break at runtime', and that is not something we can work around as developers.

The good news is that these problems are known and have been given a pretty high priority:

[Screenshot: the GitHub issue giving UWP compatibility a high priority]

The bad news is that it's a multi-team effort, involving UWP, ML.NET, .NET Core, and .NET Native. The teams have come a long way already; a few months ago nothing even worked in debug mode (that's against the CoreCLR). But it's extremely hard to set a deadline or a target date for full compatibility.

In one of our previous articles, we mentioned WinML as an alternative for consuming Machine Learning models. It still is, except that WinML requires the models to be persisted in the ONNX format and … export to ONNX from ML.NET is currently locked down.

In the meantime the UWP platform itself is heading toward a new future, with ahead-of-time compilation in .NET Core and a less restrictive sandbox. So eventually all the puzzle pieces will fall into place; we just don't know how and when. Anyway, each time one of the components of the ecosystem is upgraded, we'll upgrade the sample app and see what happens…

The Code

All the code for embedding and consuming an ML.NET model in a UWP app is in the code snippets in this article. If you want to take the sample app for a spin: it lives here on GitHub.

Enjoy!
