Using Microsoft Cognitive Services with Xamarin Forms


A few days back I saw this awesome video on Channel 9 by James Montemagno and learned about Microsoft Cognitive Services. The services are pretty awesome, and another great thing is that they are currently free 🙂 (up to a certain number of calls). The sample application James showed in the video is built with Xamarin native, so I thought I would build the same application in Xamarin.Forms.

In this blog we will create a Xamarin.Forms mobile application which uses the following Microsoft Cognitive Services APIs:

  • Emotion API
  • Face API
  • Bing Image Search API

James's sample app uses two of these services; I added one more as a little value add 🙂

Before starting to code, get your Microsoft Cognitive Services API keys (these steps may change if Microsoft updates its site):

  • Visit the Microsoft Cognitive Services site
  • Click on the 'Get Started for Free' button
  • Click on the 'Let's Go' button on the next page which appears
  • Sign in using your Microsoft account (if you have one) or create a new one
  • Once signed up you will be presented with a screen containing all your Cognitive Services API keys. These keys are used to access the services, and Microsoft uses them to track your number of service calls.

Now, as usual, create a new Xamarin.Forms project and then follow these steps:

As the code of the application is a bit long, I will only share the main code snippets, so you won't be able to execute the application just by following the steps and copying the code from this blog. Rather, I would suggest you read this blog as an explanation of the associated code saved on my GitHub.

  • The best part of using Microsoft Cognitive Services is that they have NuGet packages specially targeted at consuming them. So the first step is to install the required packages, which are:
    • Newtonsoft.Json
    • Microsoft.Net.Http
    • Microsoft.ProjectOxford.Common
    • Microsoft.ProjectOxford.Emotion
    • Microsoft.ProjectOxford.Face

    To add these, right-click on the solution file and select the 'Manage NuGet Packages for Solution' option from the popup context menu.

  • Since I am reusing the code of James's sample, the project structure will be the same, so like his project this application will also have an MVVM architecture.
  • Create a folder named 'Model', then add a class file named 'ImageResult.cs'. This class is used to show the images returned by the Bing Image Search. The reason we keep this model class in a separate folder is that it is an application-specific model, not a service-specific one. A minimal sketch of it is shown below.
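    Its shape can be inferred from how the view model populates it later in this post; a minimal sketch (with the namespace assumed to follow the folder name) could look like this:

    namespace MSCogServiceEx.Model
    {
        // Minimal sketch of the application-specific model bound to the search results list.
        // The property names match how ImageSearchViewModel populates it later in this post.
        public class ImageResult
        {
            public string Title { get; set; }          // display name of the image
            public string ThumbnailLink { get; set; }  // small preview shown in the list
            public string ImageLink { get; set; }      // full-size image URL
            public string ContextLink { get; set; }    // page hosting the image
            public string FileFormat { get; set; }     // e.g. jpeg, png
        }
    }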
  • Create a folder named 'Services'; this folder will be used to store the code related to calling the web services.
  • Create another folder inside the 'Services' folder named 'BingSearch'. As the application invokes the Bing Image Search API directly, this folder will store the class/model files used to interact with it and manage its results (a trimmed sketch of these response classes follows this list). The classes inside this folder are:
    • Image
    • SearchResult
    • Suggestion
    • Thumbnail
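    These classes simply mirror the parts of the Bing Image Search JSON response that the application consumes. Below is a trimmed sketch of 'SearchResult' and 'Image', assuming the response's "value" array is mapped to an 'Images' collection; 'Suggestion' and 'Thumbnail' cover the remaining parts of the response in the same way, and the actual classes on GitHub may differ slightly:

    using System.Collections.Generic;
    using Newtonsoft.Json;

    namespace MSCogServiceEx.Services.BingSearch
    {
        // Trimmed sketch of the Bing Image Search response; only the fields
        // consumed by ImageSearchViewModel are shown here.
        public class SearchResult
        {
            [JsonProperty("value")]                    // Bing returns the results in a "value" array
            public List<Image> Images { get; set; }
        }

        public class Image
        {
            public string Name { get; set; }           // image title
            public string ContentUrl { get; set; }     // full-size image URL
            public string HostPageUrl { get; set; }    // page the image was found on
            public string ThumbnailUrl { get; set; }   // thumbnail URL
            public string EncodingFormat { get; set; } // e.g. jpeg, png
        }
    }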
  • Create a class inside the 'Services' folder named 'EmotionService'; this will contain the following code, which is used to invoke the Microsoft Emotion Service API:
    using Microsoft.ProjectOxford.Emotion;
    using Microsoft.ProjectOxford.Emotion.Contract;
    using System;
    using System.IO;
    using System.Linq;
    using System.Threading.Tasks;
    
    namespace MSCogServiceEx.Services
    {
        public class EmotionService
        {
            private static async Task<Emotion[]> GetHappinessAsync(Stream stream)
            {
                var emotionClient = new EmotionServiceClient(CognitiveServicesKeys.Emotion);
    
                var emotionResults = await emotionClient.RecognizeAsync(stream);
    
                if (emotionResults == null || emotionResults.Count() == 0)
                {
                    throw new Exception("Can't detect face");
                }
    
                return emotionResults;
            }
    
            //Average happiness calculation in case of multiple people
            public static async Task<float> GetAverageHappinessScoreAsync(Stream stream)
            {
                Emotion[] emotionResults = await GetHappinessAsync(stream);
    
                float score = 0;
                foreach (var emotionResult in emotionResults)
                {
                    score = score + emotionResult.Scores.Happiness;
                }
    
                return score / emotionResults.Count();
            }
    
            public static string GetHappinessMessage(float score)
            {
                score = score * 100;
                double result = Math.Round(score, 2);
    
                if (score >= 50)
                    return result + " % :-)";
                else
                    return result + "% :-(";
            }
        }
    }

    As you can see from the above code, we use the 'GetHappinessAsync' method to get the emotion results for the image stream we pass to the Emotion service, and then use the other two methods to show the emotions of the people present in the image.

  • Create a class named 'FaceService' inside the 'Services' folder to contain the following code for interacting with the Microsoft Face API:

    using Microsoft.ProjectOxford.Face;
    using Microsoft.ProjectOxford.Face.Contract;
    using System;
    using System.IO;
    using System.Linq;
    using System.Threading.Tasks;
    
    namespace MSCogServiceEx.Services
    {
        public class FaceService
        {
            public static async Task<FaceRectangle[]> UploadAndDetectFaces(Stream stream)
            {
                var fClient = new FaceServiceClient(CognitiveServicesKeys.FaceKey);
    
                var faces = await fClient.DetectAsync(stream);
                var faceRects = faces.Select(face => face.FaceRectangle);
    
                if (faceRects == null || faceRects.Count() == 0)
                {
                    throw new Exception("Can't detect the faces");
                }
                return faceRects.ToArray(); 
            }
        }
    }

    In the above code we use the Face service to return an array of FaceRectangle objects (a class containing the rectangular coordinates of the faces present in the picture). We will show the number of faces present in the image using the length of this array.
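    For instance, a view model could consume it roughly like this (this snippet is a hypothetical usage of mine: 'stream' is the photo stream, and 'Message' is the bindable property displayed in the Face view later in this post):

    // Hypothetical usage inside a view model.
    var faceRects = await FaceService.UploadAndDetectFaces(stream);
    Message = $"Number of faces detected: {faceRects.Length}";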

  • Create another class named 'CognitiveServicesKeys' inside the 'Services' folder. This class stores the keys passed along with each service call (Microsoft uses the key to identify your API account and count the number of calls). As the number of calls is limited, I have not shared my keys; you can get your own by following the steps mentioned at the beginning of the blog. A sketch of the class is shown below.
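    The member names below match how the keys are consumed by EmotionService, FaceService and ImageSearchViewModel in this post; the rest (class layout, placeholder values) is a minimal sketch, so adjust it to match the code on GitHub:

    namespace MSCogServiceEx.Services
    {
        // Minimal sketch of the key container used by the services in this post.
        // Replace the placeholder strings with the keys from your own
        // Microsoft Cognitive Services account.
        public static class CognitiveServicesKeys
        {
            public const string Emotion = "YOUR_EMOTION_API_KEY";        // Emotion API key
            public const string FaceKey = "YOUR_FACE_API_KEY";           // Face API key
            public const string BingSearch = "YOUR_BING_SEARCH_API_KEY"; // Bing Image Search key
        }
    }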
  • Create a new folder named 'ViewModel'. As we are following the MVVM design, this folder will store the view models which interact with the UI.
  • Create a class named 'ImageSearchViewModel' implementing the 'INotifyPropertyChanged' interface inside the 'ViewModel' folder. The main method of this class, which invokes the Microsoft Bing API, is as follows (a trimmed sketch of the rest of the class follows the snippet):
    public async Task SearchForImages()
    {
        if (IsBusy)
            return;

        IsBusy = true;
        //Bing Image API
        var url = $"https://api.cognitive.microsoft.com/bing/v5.0/images/" +
                  $"search?q={searchString}" +
                  $"&count=20&offset=0&mkt=en-us&safeSearch=Strict";

        try
        {
            using (var client = new HttpClient())
            {
                client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", CognitiveServicesKeys.BingSearch);

                var json = await client.GetStringAsync(url);

                var result = JsonConvert.DeserializeObject<SearchResult>(json);

                Images = result.Images.Select(i => new ImageResult
                {
                    ContextLink = i.HostPageUrl,
                    FileFormat = i.EncodingFormat,
                    ImageLink = i.ContentUrl,
                    ThumbnailLink = i.ThumbnailUrl,
                    Title = i.Name
                }).ToList();
            }
        }
        catch (Exception ex)
        {
            //  ("Unable to query images: " + ex.Message);
        }
        finally
        {
            IsBusy = false;
        }
    }
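    The rest of 'ImageSearchViewModel' is standard 'INotifyPropertyChanged' plumbing. Here is a trimmed sketch of the bindable members the 'ImageSearch' view (shown in a later step) binds to; this is my reconstruction under the assumptions above (namespaces following folder names, a List-based 'Images' collection), so the code on GitHub may organize it differently:

    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Threading.Tasks;
    using System.Windows.Input;
    using Xamarin.Forms;
    using MSCogServiceEx.Model;

    namespace MSCogServiceEx.ViewModel
    {
        // Trimmed sketch of ImageSearchViewModel: only the members the XAML binds to.
        public class ImageSearchViewModel : INotifyPropertyChanged
        {
            string searchString;
            public string SearchString
            {
                get { return searchString; }
                set { searchString = value; OnPropertyChanged(nameof(SearchString)); }
            }

            bool isBusy;
            public bool IsBusy
            {
                get { return isBusy; }
                set { isBusy = value; OnPropertyChanged(nameof(IsBusy)); }
            }

            List<ImageResult> images = new List<ImageResult>();
            public List<ImageResult> Images
            {
                get { return images; }
                set { images = value; OnPropertyChanged(nameof(Images)); }
            }

            // Command bound to the Search button; it simply delegates to SearchForImages().
            public ICommand GetImagesCommand { get; }

            public ImageSearchViewModel()
            {
                GetImagesCommand = new Command(async () => await SearchForImages());
            }

            // The SearchForImages() method listed above also belongs to this class.
            public async Task SearchForImages() { /* see the snippet above */ }

            public event PropertyChangedEventHandler PropertyChanged;
            void OnPropertyChanged(string name) =>
                PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
        }
    }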
  • Create another class named 'EmotionViewModel' implementing the 'INotifyPropertyChanged' interface inside the 'ViewModel' folder for managing the Emotion service UI view.
  • Create another class named 'FaceViewModel' implementing the 'INotifyPropertyChanged' interface inside the 'ViewModel' folder for managing the Face service UI view. Both view models take or pick a photo and pass its stream to the corresponding service; a sketch of that flow follows.
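    Both of these view models expose 'TakePhotoCommand', 'PickPhotoCommand', 'Message', 'SelectedImage' and 'IsBusy', which the XAML below binds to. For the camera/gallery part I am assuming the Media plugin (Xam.Plugin.Media), as in James's sample; the helper name 'PickPhotoAndAnalyzeAsync' is mine, so treat this as a sketch of the flow rather than the exact GitHub code:

    // Sketch of the pick-photo flow in EmotionViewModel (requires 'using Plugin.Media;'
    // and 'using Xamarin.Forms;'). PickPhotoCommand simply wraps this method in a Command.
    // FaceViewModel follows the same pattern, but calls FaceService.UploadAndDetectFaces()
    // and reports the length of the returned array.
    async Task PickPhotoAndAnalyzeAsync()
    {
        if (!CrossMedia.Current.IsPickPhotoSupported)
        {
            Message = "Picking a photo is not supported on this device.";
            return;
        }

        var file = await CrossMedia.Current.PickPhotoAsync();
        if (file == null)
            return;

        IsBusy = true;
        try
        {
            // Show the chosen photo and run it through the Emotion service.
            SelectedImage = ImageSource.FromStream(() => file.GetStream());
            var score = await EmotionService.GetAverageHappinessScoreAsync(file.GetStream());
            Message = EmotionService.GetHappinessMessage(score);
        }
        catch (Exception ex)
        {
            Message = ex.Message; // e.g. "Can't detect face"
        }
        finally
        {
            IsBusy = false;
        }
    }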
  • Create a folder named 'View' to store the UI views of the application.
  • Create a file named 'ImageSearch' for the Bing Image Search; its XAML code is:
    <?xml version="1.0" encoding="utf-8" ?>
    <ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
                 xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
                 x:Class="MSCogServiceEx.View.ImageSearch">
      <StackLayout Padding="5" Spacing="5">
        <StackLayout Orientation="Horizontal">
          <Entry Text="{Binding SearchString}"/>
          <Button Text="Search" Command="{Binding GetImagesCommand}"/>
        </StackLayout>
        <ActivityIndicator IsRunning="{Binding IsBusy}" IsVisible="{Binding IsBusy}"/>
        <ListView ItemsSource="{Binding Images}" CachingStrategy="RecycleElement">
          <ListView.SeparatorColor>
            <OnPlatform x:TypeArguments="Color" iOS="Transparent"/>
          </ListView.SeparatorColor>
          <ListView.ItemTemplate>
            <DataTemplate>
              <ViewCell>
                <StackLayout Orientation="Horizontal" Padding="10,0,0,0">
                  <Image HeightRequest="50" WidthRequest="50"
                         Source="{Binding ThumbnailLink}"/>
                  <StackLayout Padding="10" Spacing="5">
                    <Label Text="{Binding Title}"
                           TextColor="#3498db"
                           Style="{DynamicResource ListItemTextStyle}"/>
                    <Label Text="{Binding FileFormat}"
                           Style="{DynamicResource ListItemDetailTextStyle}"/>
                  </StackLayout>
                </StackLayout>
              </ViewCell>
            </DataTemplate>
          </ListView.ItemTemplate>
        </ListView>
      </StackLayout>
    </ContentPage>
  • Create a file named 'EmotionEx' for the Emotion API; its XAML code is:
    <?xml version="1.0" encoding="utf-8" ?>
    <ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
                 xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
                 x:Class="MSCogServiceEx.View.EmotionEx">
      <ContentPage.Resources>
        <ResourceDictionary>
          <Style TargetType="Button">
            <Setter Property="BorderRadius" Value="10" />
            <Setter Property="BorderWidth" Value="2" />
            <Setter Property="WidthRequest" Value="350" />
            <Setter Property="HeightRequest" Value="50" />
            <Setter Property="HorizontalOptions"  Value="Center" />
            <Setter Property="VerticalOptions"    Value="Center" />
            <Setter Property="FontSize" Value="Medium" />
            <Setter Property="BackgroundColor" Value="Blue" />
            <Setter Property="TextColor" Value="White" />
          </Style>    
        </ResourceDictionary>
      </ContentPage.Resources>
      <ScrollView>
        <StackLayout Spacing="10" Padding="10" HorizontalOptions="Center" >
          <Label Text="Emotion Service Example" FontSize="Large" FontAttributes="Bold" />
          <StackLayout Spacing="10">
            <Button Text="Take Photo to Check Emotion " Command="{Binding TakePhotoCommand}"/>
            <Button Text="Pick Photo to Check Emotion " Command="{Binding PickPhotoCommand}"/>
          </StackLayout>
          <ActivityIndicator IsRunning="{Binding IsBusy}" IsVisible="{Binding IsBusy}"/>
          <Label Text="{Binding Message}" FontSize="Large" FontAttributes="Bold" />
          <Image Source="{Binding SelectedImage}" />
        </StackLayout>
      </ScrollView>
    </ContentPage>
  • Create a file named 'FaceEx' for the Face API; its XAML code is:
    <?xml version="1.0" encoding="utf-8" ?>
    <ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
                 xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
                 x:Class="MSCogServiceEx.View.FaceEx">
      <ContentPage.Resources>
        <ResourceDictionary>
          <Style TargetType="Button">
            <Setter Property="BorderRadius" Value="10" />
            <Setter Property="BorderWidth" Value="2" />
            <Setter Property="WidthRequest" Value="350" />
            <Setter Property="HeightRequest" Value="50" />
            <Setter Property="HorizontalOptions"  Value="Center" />
            <Setter Property="VerticalOptions"    Value="Center" />
            <Setter Property="FontSize" Value="Medium" />
            <Setter Property="BackgroundColor" Value="Blue" />
            <Setter Property="TextColor" Value="White" />
          </Style>
        </ResourceDictionary>
      </ContentPage.Resources>
      <ScrollView>
        <StackLayout Spacing="10" Padding="10" HorizontalOptions="Center" >
          <Label Text="Faces Service Example" FontSize="Large" FontAttributes="Bold" />
          <StackLayout Spacing="10">
            <Button Text="Take Photo to Check Faces " Command="{Binding TakePhotoCommand}"/>
            <Button Text="Pick Photo to Check Faces " Command="{Binding PickPhotoCommand}"/>
          </StackLayout>
          <ActivityIndicator IsRunning="{Binding IsBusy}" IsVisible="{Binding IsBusy}"/>
          <Label Text="{Binding Message}" FontSize="Large" FontAttributes="Bold" />
          <Image Source="{Binding SelectedImage}" />
        </StackLayout>
      </ScrollView>
    </ContentPage>
  • Inside the App class constructor, create the objects of the views, bind them to their view models, and add them to a TabbedPage, as in the following code:
    public App()
    {
        var tabs = new TabbedPage
        {
            Title = "MS Cognitive Services",
            Children =
            {
                new ImageSearch() { Title = "Search Image", BindingContext = new ImageSearchViewModel() },
                new EmotionEx() { Title = "Emotion Ex.", BindingContext = new EmotionViewModel() },
                new FaceEx() { Title = "Face Ex.", BindingContext = new FaceViewModel() }
            }
        };

        MainPage = new NavigationPage(tabs)
        {
            BarBackgroundColor = Color.FromHex("3498db"),
            BarTextColor = Color.White
        };
    }
    

This is how the application looks on the iPhone Simulator:
So this is how we can use Microsoft Cognitive Services with Xamarin.Forms. Let me know if I have missed anything.

:):) Happy Xamarin Coding :):)

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


