Several of my previous posts have documented the challenges I've encountered, and the solutions I've proposed, whilst migrating our ASP.NET Web API services over to Azure. Here is a summary of the entire journey so far.
As part of the work involved in delivering the new version of a mobile app to the market, we decided that we wanted to migrate the underlying infrastructure to the Azure platform as it would provide newer, faster infrastructure. This involved two primary objectives.
- hosting the ASP.NET Web API services on Azure
- implementing a service bus architecture with Azure Service Bus
Hosting the ASP.NET Web API services on Azure involved creating an Azure Web App to host them. I then made the necessary changes to our Team Foundation Server 2015 deployments by creating a new release / deployment for Azure. Each time a build is triggered, we deploy the new version to our Azure endpoint. Hosting our services on Azure gives us high levels of availability and scalability, so we can meet not just current demand but future demand too.
Configuring Application Insights to monitor our services is simple. Our services don't need to be hosted on Azure to use Application Insights, but it is easier if they are. We now have constant monitoring of our services, giving us regular metrics on their health and diagnostics.
Implementing a service bus architecture using Azure Service Bus proved challenging, as it requires a mental shift in how you think about services. In a traditional service architecture, one service synchronously invokes another. In a service bus architecture, services do not communicate directly with each other. Instead, all requests for services are added to a service bus queue, where they are picked up and processed by a separate out-of-band service.
All data submitted from mobile devices through the app would be routed to an ASP.NET Web API service that would simply add the request onto the Azure Service Bus queue. This ensured all service requests would be highly responsive, as the service was doing nothing more than adding a message to a queue and was available to service further requests almost immediately. The actual processing of the request was fulfilled by a separate service.
Due to the disconnected nature of service bus architectures, whereby the client and recipient applications communicate via a service bus rather than directly with each other, it is impossible for the recipient application to know when a client application has submitted a request. What is needed is a mechanism that continuously polls the Azure Service Bus for new messages. I achieved this using an Azure Function bound to the Azure Service Bus queue, listening for incoming messages.
Upon receiving a new message from the Azure Service Bus, a routing service routes the request to the appropriate ASP.NET Web API service. A querystring parameter tells the routing service which destination service the request needs to be routed to. This ensures the routing service forwards requests onto the appropriate service endpoints, where the necessary business logic is implemented. The routing service also adds a high degree of flexibility for future development.
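To make the idea concrete, the heart of such a routing service is a lookup from the destination key carried on the querystring to a service endpoint. The sketch below is illustrative only; the class and endpoint names are invented, not taken from the actual implementation.

```csharp
using System;
using System.Collections.Generic;

public class MessageRouter
{
    // Maps the destination key (taken from the querystring parameter)
    // to the Web API endpoint implementing the business logic.
    private readonly Dictionary<string, Uri> _routes =
        new Dictionary<string, Uri>(StringComparer.OrdinalIgnoreCase);

    public void Register(string destination, Uri endpoint)
    {
        _routes[destination] = endpoint;
    }

    public Uri Resolve(string destination)
    {
        Uri endpoint;
        if (!_routes.TryGetValue(destination, out endpoint))
            throw new ArgumentException($"No route registered for '{destination}'");
        return endpoint;
    }
}
```

Because the routes are data rather than code, new destination services can be registered without touching the dispatch logic, which is where the flexibility for future development comes from.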
Using a combination of Azure's web hosting, Service Bus and Functions I have successfully delivered an end-to-end solution for processing requests from a mobile app which adds significant levels of scalability, resilience and responsiveness. As the chief architect on the project, and deeply involved in the majority of the implementation, as well as all aspects that involved Azure, I feel proud and excited by the results of the project. It has forced me to learn new ideas and technologies and has been a lot of fun.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
When investigating Azure Service Bus for our messaging architecture, I wanted some mechanism by which I could receive, and therefore process, the messages added to the service bus queue. At first I wasn't sure how I would achieve this. As the various components in a service bus architecture have no direct means of communicating with each other, there is no way for the receiving application to know when a message has been added to the queue by a client application. What is needed is a mechanism that continuously polls the service bus queue for new messages.
I didn't want to rely on local infrastructure, but wasn't sure what Azure had to offer. After some investigation, the two most likely candidates seemed to be Azure WebJobs and Azure Functions. I decided on an Azure Function as it seemed the best fit for what I was looking for. Azure Functions are cloud-hosted functions with in-built support for listening to Azure Service Bus queues, and they also support a wide range of other Azure events and processes.
If your needs are fairly straightforward, and you just want some mechanism for processing your Azure Service Bus messages, then Azure Functions are a good fit. If your requirements are more complex, then you may want to investigate Azure WebJobs. This article[^] proved instructive whilst setting up my Azure Function. I have two Azure Functions configured: one for testing and one for production. My unit tests post messages to the test Azure Service Bus queue, which is polled by the test Azure Function; the production Azure Function likewise listens to the production queue.
If your function needs to reference a third-party assembly, or one of your own assemblies, then you will need to upload the assembly to the Azure Function. This article[^] explains the process. If you are referencing your own assembly, you may want to investigate the various ways you can amend your continuous integration pipeline so that the assembly is kept up-to-date. I have added a new step to our Team Foundation Server 2015 build which invokes a batch file that FTPs the required assembly to our Azure Functions. Alternatively, Azure Functions support continuous integration from GitHub, Dropbox and Bitbucket, to name a few, so keeping the assemblies used by your Azure Function in sync with your source code is simple. In fact, you can configure your Azure Function to consume not just assemblies, but the code itself, from your continuous integration pipeline.
You will probably want to amend the default function code that is created when you first configure your Azure Function. Here is an example of a very simple Azure Function.
using System;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

public static void Run(BrokeredMessage message, TraceWriter log)
{
    log.Info($"C# ServiceBus queue trigger function processed message: {message.MessageId}");
    try
    {
        if (message != null)
        {
            log.Info("Completing message.");
            message.Complete();
        }
    }
    catch (Exception ex)
    {
        log.Info($"Exception occurred: {ex.Message}");
        message.Abandon();
    }
}
By definition an Azure Function should not be monolithic, but should instead be highly cohesive (just as when writing any function). So keep your Azure Functions short and focused.
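For reference, the binding that connects a function like this to a Service Bus queue is declared in the function's function.json file. The following is a minimal sketch; the queue name and connection setting are placeholders that you would replace with your own values.

```json
{
  "bindings": [
    {
      "name": "message",
      "type": "serviceBusTrigger",
      "direction": "in",
      "queueName": "my-queue",
      "connection": "MyServiceBusConnection",
      "accessRights": "listen"
    }
  ],
  "disabled": false
}
```

The "name" property must match the parameter name of the Run method, which is how the runtime knows where to deliver the incoming BrokeredMessage.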
Whilst testing the performance of processing messages with the Azure Function, I found it to be significantly faster than our local infrastructure. They are blazingly fast.
Azure Functions are very flexible and can be used in conjunction with many processes and tasks within the Azure ecosystem, so they are well worth checking out if you are looking at hosting your own processes.
There are obviously many different ways you could design the messages intended to be sent to / from your Azure Service Bus queue. So after some consideration I came up with the following design for a service bus message.
[DataContract]
[KnownType(typeof(MessageObjectEntity))]
public class MessageObjectEntity
{
    [DataMember]
    public string MessageType { get; set; }

    [DataMember]
    public object MessageContent { get; set; }
}
The MessageType property defines the type of the object contained within the message. MessageContent holds the actual object itself. This will be a serialised instance of the class, which can then be deserialised by the receiving application. To allow instances of a class to be wrapped in the message, you need to declare that class as a KnownType().
For example, to add instances of MyNewClass to your message, you will need to add the following to your class declaration.
[KnownType(typeof(MyNewClass))]
Here's a function that uses the MessageObjectEntity class to wrap a message ready for sending to the service bus.
private static MessageObjectEntity CreateMessageForServiceBus<T>(T message)
{
    return new MessageObjectEntity
    {
        MessageType = message.GetType().AssemblyQualifiedName,
        MessageContent = message
    };
}
The method uses generics, enabling it to work with any object type, and therefore allowing our code to send messages of any type to the service bus. We have basically wrapped our message inside another class, and it is this class that we send to / receive from the service bus. So as far as our service bus code is concerned, the messages are always of the same type, i.e. MessageObjectEntity. It's down to the receiving application to know what class is wrapped inside MessageObjectEntity so that it can deserialise it, and it knows what that type is because it is defined by the MessageType property from earlier.
And finally here's the calling code that invokes our method and adds our message to the service bus.
using Microsoft.ServiceBus.Messaging;

MessageObjectEntity messageToSend = CreateMessageForServiceBus(message);
BrokeredMessage brokeredMessage = new BrokeredMessage(messageToSend);
await this._client.SendAsync(brokeredMessage);
N.B. you will need to ensure you have downloaded the NuGet package for Azure Service Bus messaging before you can instantiate the BrokeredMessage class.
The instance property _client is an instance of the QueueClient class, which is the agent that communicates with your service bus queue.
private QueueClient GetServiceBusClient(string connection, string queuename)
{
    return this._client ??
        (this._client = QueueClient.CreateFromConnectionString(connection, queuename, ReceiveMode.PeekLock));
}
Sending messages to your service bus is straightforward. Of course, you can implement something completely different to what I am proposing here. This is simply the design I came up with that meets our requirements and fits into our existing architecture.
I'll discuss how I process messages on the Azure Service Bus in a future article.
Over the last couple of weeks I've been looking at service bus architectures, specifically with regard to Azure Service Bus. Since deploying our ASP.NET Web API services into the Azure cloud, I wanted to ensure that they were resilient and scalable, so I spent some time looking into service bus architectures: both the concepts and theory, and the practice.
The biggest mind-shift is from direct service-to-service communication to completely decoupled services. Whereas previously all services communicated directly with other services, in a service bus architecture no such direct communication exists. This takes a little getting used to. When you need to update another service, you add the request to the service bus queue and get on with the next task.
This architecture has many benefits. Firstly, it provides consistency. No matter what service you need to communicate with, it always involves sending / receiving messages to / from the service bus. The only endpoint you need to be interested in is that of the service bus. Whilst the messages will be different, the endpoint and architecture will be the same.
Secondly, from a client application perspective, the service request will appear highly responsive. This is because the service endpoint has simply dropped your request onto the service bus queue and is immediately available to service another request. The actual processing of the request will be undertaken later when the request is picked up from the service bus queue by a separate process. When this happens is down to how the service bus has been configured, but suffice to say that it will be processed in a time-frame acceptable to the business.
Scaling up the number of requests you can process becomes almost trivial, and importantly it is an infrastructure concern, no longer a problem the software developer needs to solve. Yes, the developer needs to write code capable of sending and receiving messages from the service bus queue, but how quickly those messages are processed, and how many can be processed within a specified time-frame, is largely an infrastructure matter.
Ensuring that requests are processed in the event of a failure is also an infrastructure problem. Instead of implementing retry patterns in your code, simply configure the retry mechanism in your service bus. Service bus architectures allow messages to be placed back on the queue in the event of a failure, where they can be retried at a later time. So if the database failed to update due to a deadlock or other lock contention, then fail the request, add it back onto the service bus queue, and try it again later.
A service bus architecture turns what was previously difficult to implement in software, into a mere infrastructure configuration.
I've been working with Azure Service Bus and have developed a simple proof of concept and associated unit tests. It has been surprisingly easy to work with. As you would expect from Microsoft, all the tooling needed to work with Azure Service Bus is available within the .NET ecosystem. Suffice to say, that I will be using Azure Service Bus from now on, including in my current project.
I will leave the details of how I have developed the applications that send and receive messages from the service bus for a future article.
When I was developing data-driven apps for the Android platform a few years ago, there would sometimes be intermittent connection issues which would cause the app to fail. Rather than fail the entire request and log the exception, I introduced a retry pattern. Each database method would be wrapped in a try / catch block, and in the catch block would be a call to the same method. To prevent the code going into an infinite loop and throwing a StackOverflowException, the retry pattern tries a configurable number of times before giving up and throwing an exception. By implementing a retry pattern in all database methods, the prevalence of exceptions in that part of the code all but disappeared.
I have implemented the same retry pattern in my ASP.NET Web API services. Rather than throw an exception in the event of a transient communication problem, the retry pattern enables the method to be retried from the catch block, up to a configurable number of retries. I have implemented this pattern for all database methods and for controllers that fetch data from third-party services.
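As an illustration of the pattern, here is a minimal sketch of a reusable retry wrapper. The names, retry count and back-off are purely illustrative assumptions, not lifted from our code base, which retries from within each method's catch block as described above.

```csharp
using System;
using System.Threading;

public static class Retry
{
    // Invoke an operation, retrying up to maxAttempts times on failure.
    // The final failure is allowed to propagate to the caller.
    public static T Execute<T>(Func<T> operation, int maxAttempts = 3, int delayMs = 100)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (Exception)
            {
                if (attempt >= maxAttempts)
                    throw; // give up after the configured number of retries
                Thread.Sleep(delayMs); // back off before the next attempt
            }
        }
    }
}
```

A database call could then be wrapped as Retry.Execute(() => FetchCustomers(), maxAttempts: 5), keeping the retry policy in one place rather than duplicated across every catch block.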
A retry pattern adds a layer of resilience to applications that are data-driven and / or consume third-party services, or indeed where there is any level of external communication between the various moving parts of an application.
When I first began looking into how to authenticate calls made to our ASP.NET Web API services, I began by looking at what Azure could offer in the first instance as that is where the services are hosted. Azure offers many different authentication providers including Azure Active Directory, Microsoft accounts and social integrations such as Facebook, Twitter and Google accounts.
I wanted an authentication provider that was programming-language agnostic, as we would initially be invoking the services from C# and JavaScript client applications. It also needed to be possible for external partners to consume our services if necessary, in which case we would have no control over the client application whatsoever.
I decided on using JSON Web Token[^] (JWT) as it fits with these requirements very well. You have a JSON structure which contains your claims (username, email and so on) which is then encoded into a string. This encoded string is then passed from the client application to the ASP.NET Web API services for authentication. The service then decodes the string and asserts the claims contained within. The JWT can be passed as a querystring parameter, as POST data or as an HTTP request header parameter. I decided that passing the JWT as an Authorization HTTP request header would be the ideal choice for our requirements as it is a standard HTTP header parameter.
We have an Azure SQL table that contains a list of clients. Each client has a private key, which is in fact a GUID. This private key is used to encode / decode the JSON Web Token. Rather than pass the private key with each HTTP request, it is more secure to look it up instead: each request contains the client name, from which we perform a lookup of the private key, and we then use this key to decode the token.
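To make the encode / decode mechanics concrete, here is a minimal sketch of HS256 token signing and verification using only the base class library. In practice you would use a ready-made JWT library; the class and method names here are my own, and the sketch omits the header / payload validation a real implementation needs.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class JwtSketch
{
    // JWT uses base64url encoding: standard base64 with padding stripped
    // and the '+' / '/' characters replaced.
    static string Base64Url(byte[] bytes) =>
        Convert.ToBase64String(bytes).TrimEnd('=').Replace('+', '-').Replace('/', '_');

    // Build a signed (HS256) token from a JSON claims payload and the client's private key.
    public static string Encode(string claimsJson, string privateKey)
    {
        string header = Base64Url(Encoding.UTF8.GetBytes("{\"alg\":\"HS256\",\"typ\":\"JWT\"}"));
        string payload = Base64Url(Encoding.UTF8.GetBytes(claimsJson));
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(privateKey)))
        {
            string signature = Base64Url(hmac.ComputeHash(Encoding.UTF8.GetBytes(header + "." + payload)));
            return header + "." + payload + "." + signature;
        }
    }

    // Verify the signature using the key looked up for the named client.
    public static bool IsValid(string token, string privateKey)
    {
        string[] parts = token.Split('.');
        if (parts.Length != 3) return false;
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(privateKey)))
        {
            string expected = Base64Url(hmac.ComputeHash(Encoding.UTF8.GetBytes(parts[0] + "." + parts[1])));
            return expected == parts[2];
        }
    }
}
```

The receiving service would look up the client's private key from the Azure SQL table and call IsValid before trusting any claims in the payload.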
Each call to one of our ASP.NET Web API services must contain an Authorization HTTP request header, composed of the client name and the client's JSON Web Token string. I have written code that extracts this information from the request and authenticates it. The authentication code is part of our base controller class so that it can easily be re-used by all our services. If authentication passes, the Web API service request is processed as normal. If authentication fails, an appropriate HTTP response is returned, and diagnostic information is logged so we can later determine why authentication failed.
To make testing authentication easier, I have implemented an authentication controller that enables client applications to test authentication in isolation, without having to make any actual requests to our services.
JSON Web Token is a very lightweight, simple and flexible authentication mechanism that is supported in many different programming languages. If you are implementing external-facing services where you have no control over the client application, then it's a perfect choice.
As part of the process of introducing Continuous Integration (CI) / Continuous Delivery (CD) across the business, I had already configured a build and two releases for our ASP.NET Web API services. These releases were both to on-premise servers. One being a staging server and the other being a production server. As part of the migration to Azure, I had to create a new release that would deploy our services to an Azure endpoint. No problem I thought, as TFS 2015 has an in-built task for this. I duly created the new release, but came across a couple of differences when I queued a release for testing.
The first difference was that the build was failing with an error message stating that the build required a certificate. The certificate authenticates your on-premise TFS 2015 account with your Azure account, thus only allowing authenticated users to deploy into your Azure portal. The necessary details for creating the certificate can all be found in a downloadable XML file from your Azure portal. So I created the certificate and specified its name as part of the Azure Web App Deployment task.
The deployment kept failing, however, with an error stating that it could not find the necessary build artifacts. For deployments to our on-premise servers, you specify the build folder (containing the build artifacts created during your latest build) as the folder from which the deployment should look for the files it requires. For deployments to Azure, you need to specify the location and name of the zip file that the build has created (the deployment package). It took a little investigation to find this out. I couldn't understand why two of our builds deployed without a problem, yet the Azure one kept failing, despite using exactly the same source folder.
I now have a fully working deployment of our ASP.NET Web API services to Azure running under TFS 2015. We still deploy to our on-premise staging and production servers for testing (the term 'production' probably needs to be renamed, as Azure is the production endpoint now).
So whilst the process for deploying to Azure is broadly the same, there are some key differences to be wary of that will catch you out.
Looking to the future, I would like to make use of deployment slots and remove our on-premise servers completely. I would therefore need to create development, staging and production slots which would mimic our current deployment process. When you deploy to Azure, you can specify which slot you want to use, so this would be a nice way of deploying to different environments for testing before a deployment to production.
I've posted a couple of my latest posts on Dominic Burford – Medium[^] so feel free to head over there and tell me what you think. At this stage I'm still experimenting with the platform so would be interested to see what others think of this platform vs the CodeProject blog platform.
Is it an improvement? Should I post here as well as on Medium? Should I move all my posts to Medium? I'd be keen to hear your thoughts.
I began using Application Insights from Microsoft earlier this week and have to admit that I'm pretty impressed with it. Early on during the development cycle when I was implementing our Web API services, I knew I wanted some sort of monitoring tool that would give me regular feedback on various metrics such as the number of requests, response times, availability, failures etc. I initially began looking at Fiddler as this can be automated to provide much of these sorts of metrics. However, these metrics were primarily concerned with HTTP traffic, whereas I also wanted data on availability / performance across varying thresholds.
So I began looking at Application Insights. This is an extensible Application Performance Management service for web applications. It can be used to monitor live web applications to provide diagnostics and performance metrics. Exactly what I was looking for.
Application Insights can be added to your Visual Studio project if you want to write your own customised metrics, or you can simply install the Status Monitor on your server to capture run-time data on your web application. The latter option doesn't require any re-compile or re-deployment to your web application. Simply install and go.
Although our web services are implemented using the ASP.NET stack, Application Insights also works with Node.js and J2EE stacks too. It can be used on-premise or in the cloud, and can be integrated with your devOps process. So it's highly flexible and configurable.
Thus far I've configured metrics for availability, request / response rates, ping tests, failure rates and usage. These can be configured to run as often as necessary. For example, I have my ping tests set to run daily at 5-minute intervals. If there is an interruption, I'll be notified immediately.
This provides real-time metrics and diagnostics on our endpoints, giving us timely and regular feedback on their health. Importantly, it also gives us feedback when we roll out new versions of our web services. Should there be a dip in response times for example, then we will get immediate feedback so we can diagnose the underlying issue, and rollback if necessary.
Application Insights provides exhaustive and extensible monitoring tools for your web applications, and I'm very impressed with how easy it was to configure them. All you need is an Azure account and you're good to go.
As with picking any battle, you need to decide if the effort is worth the reward. If there are only marginal gains to be made from making a substantial effort, then is it worth proceeding? Of course, determining the effort and the reward may be subjective.
What I mean by battles is where you become embroiled in a war of words or a bitter feud with your manager and / or a customer. This may escalate to the point where one or both of you decide to part company, which can often lead to ill feeling if either party feels their voice has not been heard.
For example, suppose you are asked to make a change on behalf of a customer, and after careful consideration you conclude that making the change may lead (directly or indirectly) to a negative impact on the product. This then upsets or angers the customer who tells your manager that they are a paying customer and you should make the change. You reply back that they are not the only paying customer, and you have to consider the many other customers who are also using the product and who are not affected by the issue.
This leads to a battle between yourself (the technical authority with the most intimate knowledge of the impact of the problem), the customer (who wants their issue fixed as they are a paying customer who can go elsewhere if you don't comply) and your manager (who is trying to find some compromise that will please everyone).
I think every developer has been pitched into such a battle at some point or other. Possibly several times. I remember working for one particular IT Director (who shall remain nameless) who overrode the entire development team, including the IT Manager, and sided squarely with the customer. This created an Us vs Them situation that created much hostility, and led to several of the team leaving as it was clear that the upper management didn't have the backs of any of the team.
Treading the fine line between making adjustments to the code to keep a customer happy, and trying not to introduce any bugs for all your other customers, is never easy.
Performance gains that could be made following a code review may be important to only some of your bigger customers, but then again it's precisely those bigger customers that keep the business afloat. This is where experience and good common business sense come into play. These are not trivial decisions to be taken lightly, and careful consideration and appreciation of both the technical and business trade-offs is important.
There are no rules of thumb for these situations. You certainly can't keep everyone happy, but at the very least you need to be transparent in your actions and communication to all those involved. You don't want a mutiny from the development team any more than you want a disgruntled customer.
Following on from my previous post[^] about completing the build pipeline for our ASP.NET Web API services build process, I have also completed the unit testing and integration testing too. For every class / method there is a corresponding unit test. For every layer in the architecture, there are suites of tests. From the Controllers, to the Models, to the data layer and services, there are tests that exercise each particular part of the code-base.
These are developed during the development cycle, and added to the build once the code is checked in so they can then be exercised during the build process.
Most recently I have created a suite of tests that get executed after deployment that run a battery of tests against the actual (non-production) deployed endpoint. This is the perfect complement to the unit tests that are executed during the build. This ensures that the RESTful services operate correctly from their deployment location. We can use the output from these tests to provide evidence of scalability, performance and other infrastructural requirements.
We can be certain after each code check-in (which triggers a build, which in turn triggers a deployment) that the services operate correctly.
At last I have managed to complete the build process for our ASP.NET Web API services. The build process uses Team Foundation Server (TFS) 2015 vNext and offers complete Continuous Integration (CI) and Continuous Delivery (CD) pipelines. Each check-in triggers a build which performs the following steps:
- Versions the assemblies (using a Powershell script)
- Performs a solution build
- Runs all unit tests
- Publishes the results from the unit tests (to the Team Foundation Server 2015 dashboard)
- Runs dotCover (from JetBrains) to give code coverage (currently at 88% coverage)
- Copies all unit tests and code coverage results to an IIS virtual directory so the results can be accessed anywhere from a browser
- Performs a build of the start-up project and creates a set of build artifacts ready for publishing
- Copies the published build artifacts to a known location on the TFS 2015 server ready for deployment (see next step)
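The first step above, assembly versioning, is done with a PowerShell script in our pipeline. As a rough illustration of the idea only, here is a hypothetical sketch in Python that stamps a build number into AssemblyInfo version attributes; the file format and versioning scheme shown are assumptions, not our actual script.

```python
# Hypothetical sketch of the assembly-versioning build step: rewrite the
# revision component of AssemblyVersion / AssemblyFileVersion attributes
# so it carries the current build number.
import re


def stamp_version(assembly_info: str, build_number: int) -> str:
    """Return AssemblyInfo text with the 4th version component replaced."""
    pattern = r'(Assembly(?:File)?Version\(")(\d+)\.(\d+)\.(\d+)\.\d+("\))'

    def repl(m: re.Match) -> str:
        return (f"{m.group(1)}{m.group(2)}.{m.group(3)}."
                f"{m.group(4)}.{build_number}{m.group(5)}")

    return re.sub(pattern, repl, assembly_info)
```

Stamping the build number into the assemblies means any deployed binary can be traced back to the exact build (and therefore check-in) that produced it.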
When the build is complete, it automatically triggers a deployment to the development (staging) server ready for testing.
- The published build artifacts are deployed to the development server
Deployment to the production server is an ad hoc manual task, for obvious reasons: we don't want to push untested code onto a production server.
From code check-in to deployment onto the development server, the entire process is automatic and seamless. All build output is pushed through our dedicated build-notification Slack channel.
I am very impressed with TFS 2015 and have found setting up and configuring a CI / CD pipeline to be quite straightforward. We are now looking into using the Agile features of TFS 2015 for managing our projects and workload, so the next task is to look into creating epics, user stories, sprints etc. with TFS 2015. That will be the subject of a future article.
After spending the last few months developing several mobile apps using cross-platform technology in the form of Apache Cordova, I have come to realise that the claim of having one code-base that runs on all the mobile platforms is not strictly true. In fact, it's probably a pipe dream in most cases.
This is not a criticism of any of the cross-platform tools. We used Apache Cordova in conjunction with Telerik Platform and found it to be excellent.
That said, we came across several issues in testing where the different mobile platforms looked and / or behaved slightly differently from each other. None of these were show-stoppers, more niggles, but they were still niggles that took time and effort to resolve.
In particular we had issues with Apple devices and, to a lesser extent, Windows Phone. Android worked like a charm.
For example, we came across scrolling issues on iPhone and iPad devices during testing. When trying to scroll down through a screen, it would sometimes appear "sticky" and take a few swipes to get it moving. It turns out this is a well-known issue on Apple devices and can be resolved by changing some of the Apache Cordova configuration settings.
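For reference, that kind of change lives in Cordova's config.xml. The post doesn't specify exactly which settings we changed, so the preferences below are illustrative of the sort of iOS-specific scrolling tweaks available, not a record of our fix:

```xml
<!-- config.xml fragment: illustrative iOS scrolling-related preferences -->
<platform name="ios">
    <!-- Disable the rubber-band overscroll effect -->
    <preference name="DisallowOverscroll" value="true" />
    <!-- Use the native deceleration speed for scrolling -->
    <preference name="UIWebViewDecelerationSpeed" value="normal" />
</platform>
```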
So whilst most of the apps all worked well across the various mobile platforms, we did come across several issues that were specific to a particular platform, and which took time to fix.
Having one code-base running on all mobile platforms is a great marketing claim, but the reality may be somewhat different.
Have you heard of or maybe used Xamarin, and if so, any thoughts?
I have used Xamarin extensively; check out my articles on here relating to that technology. And exactly the same argument can be made there too. Whilst it takes care of the common functionality nicely, you still need to create separate assemblies for each platform where you want something platform-specific.
After considerable effort and much re-work, re-design and re-architecting, the apps I have been working on have finally been released to the app stores (Windows, Android and Apple). The original apps were simply iFrames that linked to responsive web pages. Whilst this was sufficient in the short term to get a mobile presence and be able to tell customers "We have an app", as a long-term strategy it was not sufficient, especially if you wanted to harness any of the native functionality of the devices.
So the decision was taken to completely re-develop the apps in a technology of our choosing. We went for Telerik Platform as it gave us much more than just a development platform.
- a testing platform
- data access
- notifications
- extensive plugins
- business services platform
- analytics and crash reports
- feedback
- user-management
and many other benefits.
So I took the original apps and completely re-developed them using Telerik Platform, which involved using Kendo UI, JavaScript, the MVVM pattern and Apache Cordova. The direction we took was to go hybrid, so that we have one code-base for all the mobile platforms.
Last week we released the apps to the app stores and customers have been downloading and using them, which is a great feeling.
We still have many features we want to add to the apps, so they are still a work-in-progress. But for now, we're going to let people use them and wait for any feedback we get.
Following on from Five Truths about software development III[^]
1. When you've checked in all your code and feel all smug as you wait for the rest of the team to finish their work for the sprint, you realise that 6 new bugs have been raised needing your attention before the software can be released. Suddenly you're the bottleneck.
2. When the only estimate you can give to the project manager is "How long is a piece of string?"
3. The terror you feel when you have to upgrade your development environment in case it breaks something.
4. Triple-checking the question you're about to post on Stack Overflow as you just know there are going to be some smart asses who will pick your question apart or just plain downvote it.
5. The buzz you feel when you finally fix that bug that's been evading you for days.
Following on from my earlier post Could this function be unit tested without modifying it?[^], the simple answer is yes, it can. In this post I will explain how.
To summarise, the function that caused the problem used the current date, via DateTime.Now, to make a particular calculation. The unit tests I originally wrote all passed one day, then all failed the very next day, as the function returned a different result based on the current value of DateTime.Now.
My initial approach was to pass in a default value for the date: if no date was supplied then DateTime.Now would be used, otherwise the supplied DateTime argument would be used. But I wasn't keen on this approach. Supplying default arguments to functions just so they can be unit tested felt like a code smell.
I had a look at the Pex and Moles[^] framework, which supports unit testing by providing isolation via detours and stubs, allowing you to replace any .NET method with a delegate. This sounded pretty cool and I very nearly took this approach.
In the end however, I opted for a Dependency Injection approach. The benefit of this approach is that it forced me to refactor the code to make it less reliant on the environment, and that's a good thing.
In the code snippet below you will see I have defined an interface called IDateTime which contains one property called Now of type DateTime. The ReportLibrary class then contains a reference to this interface called _datetime. A private class called ActualDateTime implements IDateTime.
We then need to define a default constructor and an overloaded constructor (as we will pass the required DateTime into the latter). If we create an instance of the ReportLibrary class with no parameters (as our application will do) then the value for _datetime defaults to DateTime.Now. If we pass a value to the constructor (as our unit tests will do) then that value is assigned to _datetime instead.
namespace CoreLibrary
{
    using System;

    public interface IDateTime
    {
        DateTime Now { get; set; }
    }

    public class ReportLibrary
    {
        private readonly IDateTime _datetime;

        private class ActualDateTime : IDateTime
        {
            public DateTime Now { get; set; }
        }

        public ReportLibrary()
        {
            this._datetime = new ActualDateTime { Now = DateTime.Now };
        }

        public ReportLibrary(IDateTime datetime)
        {
            this._datetime = datetime;
        }

        public int MyFunction()
        {
            DateTime today = this._datetime.Now;

            // The actual calculation, based on 'today', is elided here
            // for brevity.
            int result = 0;
            return result;
        }
    }
}
So here is how we will invoke the function from the application.
ReportLibrary reportLibrary = new ReportLibrary();
int result = reportLibrary.MyFunction();
And here is how we invoke the function from the unit tests. Firstly we need to define a class that implements our IDateTime interface. Define this at the top of our unit test class.
[TestClass]
public class ReportLibraryTests
{
    private class MockDateTime : IDateTime
    {
        public DateTime Now { get; set; }
    }
}
Then in the test method we create an instance of this class and assign the Now property with the required value for DateTime.Now.
[TestMethod]
public void MyFunctionTests()
{
    IDateTime dateTime = new MockDateTime();
    DateTime today = new DateTime(2016, 8, 3);
    dateTime.Now = today;

    ReportLibrary reportLibrary = new ReportLibrary(dateTime);
    int result = reportLibrary.MyFunction();

    Assert.AreEqual(999, result, "Invalid result for 'MyFunction'");
}
And that's how I managed to isolate the function so that it works both in the application, without any modification to its signature, and in the unit tests, where different values for DateTime.Now can be supplied.
I have been writing a lot of unit tests recently, as we move the business logic from our reports into business objects that are then invoked by the reports, to ensure that the migrated business functionality works as expected.
I came across one particular function recently that had me scratching my head. The unit tests I wrote that invoked this function all worked yesterday, but then they all broke today. A quick scan through the code highlighted the obvious issue. The function returns the estimated mileage for today, given certain parameters. The function therefore determines what today is (the function is written in C# and so uses DateTime.Now). So the function will obviously return different results every day.
Without modifying the function's code, I was wondering if there was any way to unit test it. The solution I went for in the end was to pass in a default value for today. If no value for today is passed in as a parameter (which is how the function is invoked by the reports) then today is assumed to be... well... today. If a value is passed in (as is the case with my unit tests) then that value is assumed to be today.
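The default-argument approach described above can be sketched as follows. This is an illustrative rendering of the idea in Python rather than the actual C# function; the names and the mileage formula are hypothetical.

```python
# Hypothetical sketch of the default-argument approach: production
# callers omit 'today' and get the current date; unit tests supply a
# fixed date so results are repeatable. The mileage formula is made up.
from datetime import date
from typing import Optional


def estimated_mileage(rate_per_day: float, today: Optional[date] = None) -> float:
    """Return the estimated mileage for 'today' (defaults to the current date)."""
    if today is None:
        today = date.today()  # production path: the real current date
    # Illustrative calculation: mileage accumulated over the year so far.
    return round(rate_per_day * today.timetuple().tm_yday, 2)
```

A test can then pin the date, e.g. `estimated_mileage(10.0, date(2016, 1, 10))`, and always get the same answer, which is exactly what the original function could not guarantee.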
The concern for me was that I had to amend my function to accommodate unit testing. Surely I shouldn't have to amend a function just so it can be unit tested.
Which got me wondering. Is there a way around this particular problem? Are there other scenarios where a change needs to be made to a function to accommodate unit testing?
To answer my own question, the answer is yes... it is possible to fake (or mock) any .NET method with delegates using Moles, part of the Pex and Moles[^] framework.
Here's an example[^] of exactly what I need. Pretty cool.
I have spent many years working with various build tools, with a particular emphasis on creating continuous integration (CI) environments. I have worked with NAnt, MSBuild, CruiseControl.NET, TeamCity, Git and Subversion, to name a few. I have configured automated tests as part of the build process using NUnit, and automated the signing and packaging of an Android app, all using various build and CI tools.
So I have a pretty good understanding of the build process and the tools and environments that support it. For this reason I haven't used the Application Lifecycle Management (ALM) features found within Team Foundation Server (TFS), as I haven't found them to be best-of-breed. However, that was then, and this is now. I've recently been using the latest build features in TFS 2015 and have to admit that I'm very impressed with them.
They are very much improved from previous versions of TFS. One of my biggest criticisms was that TFS only really worked within the Microsoft build tools ecosystem, and didn't play particularly well with other build tools and systems. That's all changed in TFS 2015 though. The build tools in TFS 2015 now support practically any build tool or platform you care to mention: Git, Maven, NAnt, Android, iOS, scripting languages etc. The list of supported tools and platforms is extensive and impressive.
I managed to get a complete build setup and configured with ease. This included a full build, continuous integration triggering from TFS and a deployment to our development server. From code being checked-in to being deployed on the test server is all automatic and takes just a few minutes to run.
This is an ideal build solution for anyone looking to automate and simplify a build process spanning many different build tools and platforms. The Rolls-Royce build solution is still TeamCity, but this is an impressive build platform nonetheless and one that I am happy to keep using.
I've recently been using Telerik Platform for mobile development, so I thought it may be useful to give an overview of my experiences, especially as I've previously used Xamarin for mobile development.
For those that don't know, Telerik Platform is a complete mobile development ecosystem consisting of a suite of Telerik technologies rolled into a single unified platform. There are tools for development, testing as well as backend services such as data services, email / SMS services and business logic services, analytics and user management. All of these are accessed from your Telerik Platform account (depending on your subscription level of course).
The backend services may be implemented in your app using any combination of the following APIs and SDKs.
- JavaScript SDK
- .NET SDK
- iOS SDK
- Android SDK
- RESTful API
As can be seen, all mobile platform development environments are available to the developer.
All the tools you need across the entire lifecycle of your mobile app are accessed from a single location. Unlike other development platforms, Telerik Platform is more than just a development tool; it contains the full complement of tools to manage the entire lifecycle of your app.
There are two key options to choose from when deciding how to build your app. You can choose either a hybrid app or a native app. I won't go into the pros and cons of the different approaches as this is beyond the scope of this article. Suffice to say that if you opt for a hybrid app you are using Apache Cordova. If you opt for a native app you are using Telerik NativeScript. In my case I was building a hybrid app. The reasons were as follows:
- We wanted to target all mobile platforms (at the time of writing NativeScript does not support Windows Phone)
- The app was fairly straightforward, with limited access to the device's capabilities
- The learning curve was less steep, as a hybrid app employs web skills such as HTML, CSS and JavaScript, all of which are familiar to me. I was less familiar with Telerik's Kendo UI components and the underlying MVVM architecture, but thankfully there are numerous examples and plenty of documentation, and being a competent web developer I picked these up fairly quickly.
The backend services are really a powerful addition to the development experience. The option of using these from the cloud reduces the reliance on local infrastructure, such as data and email servers. You can build your entire app from their cloud portal. There is a Visual Studio plugin you can download, which is useful, but I predominantly used their cloud portal.
One of the things I enjoyed most about Telerik Platform is their AppBuilder technology. This allows you to test your app in a simulator with varying combinations of platform (Apple, Android, Windows) and screen resolutions. Best of all, you can download the app to your own device using the Telerik development apps. Building your app generates a QR code, which you then scan with the Telerik development app on your device, and the app is copied onto it. No pesky USB cables or installing and configuring emulators required. This is a genius piece of technology. Your code changes are reflected immediately in the simulator, and can be tested on the physical device by simply swiping within the Telerik development app. From coding to testing on a physical device has never been simpler.
All in all I have been very impressed with Telerik Platform, and would certainly recommend that it be included on your list of candidate development platforms if looking at going into mobile.
Whilst recently investigating various mobile platforms with a view to making a decision as to which of the many mobile technological ecosystems we should opt for, it became clear that what was really needed was some direction from the business. Without a full appreciation of where mobile fitted into the company's overall business strategy, it was impossible to gain any real traction on the problem.
For example, having definitive answers to questions such as these is important.
- What are the company's overall corporate objectives?
- How can a mobile initiative help the company in meeting these objectives?
- What does the company hope to achieve by using mobile technology?
Before making any decisions regarding mobile technology, it's important to understand as clearly as possible how the mobile strategy is aligned to the overall business strategy. If the two are not aligned, then do you really need a mobile offering in the first place? Assuming you do, it should be clear where it fits into the overall business strategy.
Before embarking on a mobile strategy, it may be useful to know how many of your existing customers are currently using the web version of your application (assuming you have one). This will give you hard numbers to help sell the idea to other areas of the business. It will also tell you what devices your customers are using, so you know which mobile platforms to target.
It may be the case that all you need is a responsive web site which can then be packaged for the required mobile platforms. This is a good first step onto the mobile landscape without incurring the costs of going full-blown mobile. From there, you can gauge how your app is received and decide what next steps to take (if any).
The key is to ensure you have a clear business strategy and that the mobile platform forms a cohesive part of that strategy. Going mobile just for the sake of it is not a strategy.
Going mobile needs to align with the overall business strategy to stand any chance of success.
Well, I'm currently working my notice with my current employer and start my new career next month. I'll be building responsive web sites and hybrid mobile apps using tools including Bootstrap, Telerik Platform and Cordova. These are technologies I am not familiar with, so I'll have to get learning. I am at least familiar with web technologies (HTML5, CSS3 and JavaScript), which gives me a head start on building web-enabled pages.
Having previously used Xamarin.Android to build cross-platform mobile apps, it will be interesting to see the differences in building, testing and deploying hybrid mobile apps.
Should be fun. Exciting times ahead!