The latest version of our app is nearly ready to drop into the app stores. It's been a challenging project with more than a few hurdles along the way. The initial project remit was to add the ability for our drivers (the users of the app) to track their journeys, allowing them to make mileage expenses claims. By logging their journeys, drivers would have evidence of their claimed mileage. At the outset this seemed like a really useful feature that should be straightforward. Tracking a user's movements with an app is certainly nothing new, so we didn't foresee the issues we would later encounter.
The current app is developed using Xamarin Forms. All functionality and business logic is supplied to the app by ASP.NET Web API RESTful services, all of which are hosted on Azure. The services are processed via an Azure Service Bus to allow us to scale the app and to add resiliency. The majority of the code within the current app is shared code (Xamarin Forms allows the code that is the same across the different platforms to be contained in one project, whilst the Android and iOS specific code is contained in separate projects respectively).
We began the project with the aim of keeping as much of the journey logging code in the shared project to keep the platform specific code to an absolute minimum. This ambition was quickly forgotten when we got down to the details of the project. We soon realised that it wasn't possible to fully realise the journey logging functionality without writing a lot of platform specific code as so much of it was tied to the specific hardware on the devices. Although the cross-platform geolocator service we used ran on both platforms (courtesy of James Montemagno), we wanted the ability to run the tracking service as a background process.
Current devices place very strict constraints on how they will execute long running processes (and quite rightly too). We needed to run the journey logging in the background, since a tracking service can't rely on the app staying in the foreground for the duration of a journey. Android and iOS handle long running processes differently, and their constraints and solutions are unsurprisingly different too.
We next wanted to add local push notifications to keep the driver informed that the tracking service was still recording in the background, especially if the user brought another app to the foreground, made a phone call, or in some other way forced our app to the background. Local push notifications are handled completely differently on each platform, leading to further platform specific code. Implementing the journey logging service as a background process and implementing local push notifications entailed writing vast swathes of platform specific code.
All of these platform specific deviations brought up brand new problems, and exposed the many discrepancies between Android and iOS. Although Xamarin Forms does a magnificent job of hiding as many of these deviations as possible, there were many times on this project when we were fully exposed to the inner workings of the platforms and needed a deep understanding of the native APIs. iOS in particular threw up many problems; it was incredibly difficult to submit large journeys from a background service. On Android, getting large uploads to run in the background was relatively straightforward. It was far more technically challenging on iOS due to its vastly more restrictive environment and permissions.
The services that support and provide all the functionality to the app are all ASP.NET Web API RESTful services. The services needed to support the journey logging functionality were initially thought to be straightforward. All we would need were services that would allow journeys to be created, updated and deleted from the device. During initial testing with the app we came across several issues when trying to submit journeys from the device to our cloud hosted Azure SQL DB. Initially we ran into issues when submitting journeys from our development environment. After much head scratching and investigation I eventually pinned this down to a rule on our firewall that truncated any outgoing traffic exceeding 1MB in size. After resolving this issue, we ran into a similar problem when we attempted to submit journeys from our staging (Azure) environment: we were getting SocketException errors. After further diagnosis we found that we could send smaller packets of data successfully; the error only appeared when attempting to send large journeys in one go. So I had to write a chunking algorithm to decompose larger journeys into multiple smaller submissions. This required making substantial changes to the underlying Web API service, as well as changes to the app code itself.
Another new feature of the app is a dynamically created main menu. Against each company we store a list of the menu options that will be available to their drivers when they open the app. This gives us the ability to turn app features on and off for a company without releasing a new build. The menu options are checked each time the app is launched and again at run-time, so we can update a driver's list of menu options without them even having to log off or restart the app. It's all done while the app is running.
We're now in the final stages of testing the app and are hoping to have it in the stores very soon. It has been a real trial-by-fire. We encountered many problems along the way, and with much grit and determination, have managed to overcome all of them. The project has been challenging to say the least, but ultimately successful thanks to the sheer determination of the development team.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
As part of the development of our new app feature, we are adding the ability to allow users to track their journeys. They can Start / Stop the journey tracking and allow the app to record their distance, time taken etc. This is primarily to be used to allow users to support mileage claims.
A journey takes the form of a Model containing properties for storing the user, start date, end date, mileage etc. The journey also contains a list of waypoints. These are the longitude / latitude points that are generated by the user's position. A waypoint is taken every 5 seconds and added to the list of waypoints for the journey. From these waypoints we can then generate a map of their journey and display this to them.
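As a sketch, such a model might look like the following (the class and property names here are illustrative, not our production code):

```csharp
using System;
using System.Collections.Generic;

public class Waypoint
{
    public double Latitude { get; set; }
    public double Longitude { get; set; }
    public DateTime Timestamp { get; set; }
}

public class Journey
{
    public string User { get; set; }
    public DateTime StartDate { get; set; }
    public DateTime EndDate { get; set; }
    public double Mileage { get; set; }

    // One waypoint is appended every 5 seconds while tracking is active.
    public List<Waypoint> Waypoints { get; set; } = new List<Waypoint>();
}
```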
During initial testing everything was fine as we tested the feature on smaller journeys containing a few hundred waypoints. As we began stress testing the feature, we noticed we were getting timeouts once we exceeded 800 or so waypoints. By 1,000 waypoints we were getting regular timeouts. We discovered the reason was the volume of waypoints we were posting back to our service. As the number of waypoints grew, the time taken to POST the data over our RESTful service grew too, and this was causing our timeout problem.
I investigated several options, but the cleanest and most simple was to chunk the waypoints into smaller discrete lists which we would POST. So instead of POSTing all of the waypoints in one large payload, we would instead send multiple smaller payloads.
So how do you chunk your list into a list of smaller lists? There are many ways of achieving this, and I'm sure those of you reading this article will be able to suggest your own versions of the algorithm I have used here. First, instead of writing a hard-coded version that only works with journey waypoint lists, I have implemented an extension method that works with any type of list. This allows me to chunk any type of list data going forward (I already have a few ideas of how I will reuse this extension method).
public static IEnumerable<IEnumerable<T>> GetChunk<T>(this IEnumerable<T> source, int chunksize)
{
if (source == null) throw new ArgumentNullException(nameof(source));
if (chunksize <= 0) throw new ArgumentOutOfRangeException(nameof(chunksize));
var pos = 0;
while (source.Skip(pos).Any())
{
yield return source.Skip(pos).Take(chunksize);
pos += chunksize;
}
}
So we can see that what is returned is a list of lists of type T. The extension method is applied to the source list (which is to be chunked into smaller lists). The parameter to the extension method is the number of items to appear in each chunked list. The implementation uses the LINQ methods Skip() and Take() to iterate over the list. The Skip() method ignores the first n items in the list, and the Take() method then takes the next n elements. Used in conjunction, these methods let us easily step over the list in chunks. The use of yield return means the list is iterated lazily: each chunk is produced on demand without having to process the entire list up front.
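For instance, chunking a list of 1,000 items into chunks of 300 produces three full chunks and a final chunk of 100. The extension method is reproduced below (with argument checks) so the sketch compiles standalone:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class ListExtensions
{
    // The chunking extension method from above, reproduced for completeness.
    public static IEnumerable<IEnumerable<T>> GetChunk<T>(this IEnumerable<T> source, int chunksize)
    {
        if (source == null) throw new ArgumentNullException(nameof(source));
        if (chunksize <= 0) throw new ArgumentOutOfRangeException(nameof(chunksize));
        var pos = 0;
        while (source.Skip(pos).Any())
        {
            yield return source.Skip(pos).Take(chunksize);
            pos += chunksize;
        }
    }
}

public static class ChunkDemo
{
    // 1,000 waypoint IDs chunked into lists of 300: the sizes come back
    // as 300, 300, 300 and 100.
    public static List<int> ChunkSizes() =>
        Enumerable.Range(1, 1000).GetChunk(300).Select(c => c.Count()).ToList();
}
```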
In our specific case for chunking our journey waypoints, we have set the chunking value to 500. Although the problem didn't appear until at least 800 items, I wanted to keep the value to a safe, low limit just to err on the side of caution.
Here is the code from one of the unit tests I've written that exercises the chunking extension method.
[TestMethod]
public void GetChunk1000Tests()
{
const int waypointcount = 1000;
var journey = ListExtensionsTests.GetTaskTrackedJourneyForUnitTest(waypointcount);
Assert.IsNotNull(journey);
Assert.IsNotNull(journey.Waypoints);
Assert.IsNotNull(journey.Waypoints.Waypoints);
Assert.IsTrue(journey.Waypoints.Waypoints.Any());
Console.WriteLine($"ChunkCount: {ListExtensionsTests.ChunkCount}");
Console.WriteLine($"Number of waypoints: {journey.Waypoints.Waypoints.Count}");
Assert.IsTrue(journey.Waypoints.Waypoints.Count == waypointcount);
var result = journey.Waypoints.Waypoints.GetChunk(500);
Assert.IsNotNull(result);
var enumerable = result.ToList();
Console.WriteLine($"Number of chunks: {enumerable.Count()}");
int incrementalwaypointcount = 0;
foreach (var item in enumerable)
{
Console.WriteLine($"Number of waypoints in chunk: {item.Count()}");
incrementalwaypointcount += item.Count();
}
Assert.AreEqual(waypointcount, incrementalwaypointcount);
}
So in summary, if you are dealing with large lists of items and need to break them down into smaller, more manageable lists, then chunking them is a simple and very effective solution. This works well in our mobile app, where we send large lists of data to a backend service from the resource-constrained environment of a smartphone, where memory and processing power are in short supply. Processing numerous smaller lists is more efficient (and less error prone) than trying to process one large list, and uses fewer resources (memory, CPU) to do so.
In logic, a predicate is an expression that evaluates to either true or false. If you have written any LINQ or SQL you have probably written these types of expressions already. A SQL query that contains a WHERE clause, for example, contains a predicate. If you've ever used LINQ to filter the contents of a list, this too is an example of a predicate.
Whether you realise it or not, you have probably already used predicates in your code. Whenever you have a need to filter the items in a dataset and / or list, then it is common to use predicates to do this. The notion of a predicate is widely used and understood, even if you weren't necessarily aware of them.
Within the .NET Framework the notion of a predicate is formally identified by Predicate<T>. This is a functional construct providing a convenient way of testing the truth or falsity of a given expression relating to an instance of type T. If you're familiar with delegates, then Predicate<T> is equivalent to Func<T, bool>.
For example, suppose we have a Car class that represents T. Each instance of Car contains the properties Colour (red, green, black etc.) and EngineSize (1000, 1200, 1600 cc etc.).
public class Car
{
public string Colour { get; set; }
public int EngineSize { get; set; }
}
Let's assume that we have a SQL query that returns a list of all the cars registered for a particular year.
var data = new DataService();
List<Car> cars = data.GetAllRegisteredCars(new DateTime(2019, 01, 01));
The above query will return all cars registered during 2019.
Suppose we want to filter that list of cars to just those that meet certain criteria e.g. those cars with an engine size of 1600cc or are blue in colour. To filter the data we would use predicates as follows.
var matches1 = cars.FindAll(p => p.EngineSize == 1600);
var matches2 = cars.FindAll(p => p.Colour == "Blue");
We could hardcode the predicates and leave them in the code as in the above examples. However, a benefit of using Predicate<T> in your code is that it gives you the ability to separate the data from the expressions used to filter it. Instead of hardcoding filters in your code, you can define these elsewhere and bring them into your code when needed.
Let's assume we have a completely separate class that defines our predicates called PredicateFilters.cs
public static class PredicateFilters
{
public static Predicate<Car> FindBlueCars = (Car p) => p.Colour == "Blue";
public static Predicate<Car> Find1600Cars = (Car p) => p.EngineSize == 1600;
}
In our data code we would now write the following code to filter the cars.
var matches1 = cars.FindAll(PredicateFilters.Find1600Cars);
var matches2 = cars.FindAll(PredicateFilters.FindBlueCars);
We can see even from this simple example that separating our queries from our code is straightforward. We no longer need to pollute our code with hardcoded filters. We also have the ability to reuse those filters elsewhere. For example, we may have more than one function that needs to know which cars are blue. We write the filter once and use it everywhere we need it. If in the future it turns out that it's red cars we need instead of blue, we can change the filter in one place without having to change any of our data code.
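Because Predicate<T> is just a delegate, filters defined this way can also be composed. A standalone sketch (the combined filter FindBlue1600Cars is my own name; the Car class and base filters are repeated here so the sketch compiles on its own):

```csharp
using System;
using System.Collections.Generic;

public class Car
{
    public string Colour { get; set; }
    public int EngineSize { get; set; }
}

public static class PredicateFilters
{
    public static Predicate<Car> FindBlueCars = (Car p) => p.Colour == "Blue";
    public static Predicate<Car> Find1600Cars = (Car p) => p.EngineSize == 1600;

    // Composed filter: a car matches only if it satisfies both base filters.
    public static Predicate<Car> FindBlue1600Cars =
        p => FindBlueCars(p) && Find1600Cars(p);
}
```

The composed filter is used exactly like the others, e.g. cars.FindAll(PredicateFilters.FindBlue1600Cars).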
Our filters may return a single item or may return a list of items. Alternatively, we may also want to know the number of items returned by our filter. We would probably want to do this for different types of data e.g. cars, drivers, orders etc. This is where we need to get a bit smarter with how we design our filters to allow them to work with different types of data.
Let's start by implementing an interface that defines the filters we want to execute on our data.
public interface IPredicateValue<T>
{
T GetValue(List<T> list, Predicate<T> filter);
List<T> GetValues(List<T> list, Predicate<T> filter);
int GetCount(List<T> list, Predicate<T> filter);
}
Here we have defined an interface that is generic over the type T. The functions provide the following functionality.
- T GetValue(List<T> list, Predicate<T> filter) - returns a single instance of T for the filter
- List<T> GetValues(List<T> list, Predicate<T> filter) - returns a list of T for the filter
- int GetCount(List<T> list, Predicate<T> filter) - returns the count of items of T that match the filter
For each type of data that we want to filter, we should implement this interface. This will provide a consistent set of methods that we can use to filter our data.
public class CarPredicate : IPredicateValue<Car>
{
public Car GetValue(List<Car> list, Predicate<Car> filter)
{
if (list == null || !list.Any() || filter == null) return null;
return list.Find(filter);
}
public List<Car> GetValues(List<Car> list, Predicate<Car> filter)
{
if (list == null || !list.Any() || filter == null) return null;
return list.FindAll(filter);
}
public int GetCount(List<Car> list, Predicate<Car> filter)
{
if (list == null || !list.Any() || filter == null) return 0;
return list.FindAll(filter).Count;
}
}
We can now filter our data as follows.
var data = new DataService();
List<Car> cars = data.GetAllRegisteredCars(new DateTime(2019, 01, 01));
var predicatevalue = new CarPredicate();
var blueCars = predicatevalue.GetValues(cars, PredicateFilters.FindBlueCars);
Keeping your code and your predicates separate gives you far more flexibility, as well as a single point of change should one of the expressions used to query your data need to change. You can implement filters for any / all types of data, with the added benefit that it allows you to filter your data in a consistent manner.
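Note that nothing in CarPredicate actually depends on Car itself, so under this design the interface can also be implemented once, generically, rather than once per data type. A sketch (the class name PredicateValue<T> is mine; the interface is repeated so the sketch compiles standalone):

```csharp
using System;
using System.Collections.Generic;

// Interface from above, repeated for completeness.
public interface IPredicateValue<T>
{
    T GetValue(List<T> list, Predicate<T> filter);
    List<T> GetValues(List<T> list, Predicate<T> filter);
    int GetCount(List<T> list, Predicate<T> filter);
}

// One generic implementation that serves every reference type.
public class PredicateValue<T> : IPredicateValue<T> where T : class
{
    public T GetValue(List<T> list, Predicate<T> filter) =>
        (list == null || list.Count == 0 || filter == null) ? null : list.Find(filter);

    public List<T> GetValues(List<T> list, Predicate<T> filter) =>
        (list == null || list.Count == 0 || filter == null) ? null : list.FindAll(filter);

    public int GetCount(List<T> list, Predicate<T> filter) =>
        (list == null || list.Count == 0 || filter == null) ? 0 : list.FindAll(filter).Count;
}
```

With this in place, new PredicateValue<Car>() can replace CarPredicate, and the same class filters drivers, orders or any other reference type.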
If you want to get serious about how you filter data, then give predicates a try.
I actually really enjoy writing stuff like this. I wrote an article about writing flexible RESTful services that was very similar to GraphQL (having since checked out GraphQL I can see the similarity). GraphQL was developed by the multi-billion dollar Facebook empire; my version was developed by little ol' me.
Following on from a previous article[^] I wrote as an introduction to writing asynchronous code with .NET, I want to describe a common problem I see when developers move beyond the basics: writing blocking asynchronous code. I've seen this problem on Stack Overflow and with developers I have worked with directly (both junior and senior).
Rather than try to explain the problem, I'll give some example code that should hopefully highlight the problem. Here's an ASP.NET Web API RESTful service being invoked from a client application.
The back-end service code is taken from one of our RESTful services that returns vehicle telemetry to a client application. For the purposes of clarity, I have omitted all logging, error checking and authentication code.
public async Task<string> Get(string subscriber, string trackertype)
{
var response = await this.GetData(subscriber, trackertype);
return response;
}
And here is a client that invokes the RESTful service. In this example the client is a unit test.
[TestMethod]
public async Task GetVehicleTests()
{
TrackerController controller = new TrackerController();
string subscriber = "testsubscriber";
string vehicle = "testvehicle";
var response = controller.Get(subscriber, vehicle);
Assert.IsNotNull(response);
Assert.IsNotNull(response.Result.ToString());
}
The above unit test code will deadlock. Remember that after you await a Task, when the method continues it continues in a context.
1. The unit test calls the Get() RESTful service (within the ASP.NET Web API context).
2. The Get() method in turn calls the GetData() method (still within that context).
3. The GetData() method returns an incomplete Task, indicating that it has not yet completed.
4. The Get() method awaits the Task returned by GetData() (the current context is captured so it can be reinstated later).
5. The unit test synchronously blocks on the Task returned by the Get() method, which in turn blocks the context thread.
6. Eventually the GetData() method completes. This in turn completes the Task that it returned to the Get() method.
7. The continuation for Get() is now ready to run, and it waits for the context to be available so it can execute in that context.
8. Deadlock. The unit test is blocking the context thread, waiting for the Get() method to complete, while the continuation of Get() is waiting for the context to become available so it can complete.
How can this situation be prevented? Simple. Don't block on Tasks.
1. Use async all the way down
2. Make (careful) use of ConfigureAwait(false)
For the first suggestion, awaitable code should always be executed asynchronously. So given the example code here, the unit test was not correctly awaiting the result from the RESTful service. The unit test code should be modified as follows.
[TestMethod]
public async Task GetVehicleTests()
{
TrackerController controller = new TrackerController();
string subscriber = "testsubscriber";
string vehicle = "testvehicle";
var response = await controller.Get(subscriber, vehicle);
Assert.IsNotNull(response);
Assert.IsNotNull(response.ToString());
}
Like a handshake, whenever you have an await at one end of a service call, you should have async at the other.
The use of ConfigureAwait(false) is slightly more complicated. When an incomplete Task is awaited, the current context is captured so the method can resume in that context when the task eventually completes, i.e. after the await keyword. The captured context is null if the await occurs on a thread-pool thread; otherwise it is the platform-specific context (e.g. the UI context in WinForms or Xamarin, or the request context in classic ASP.NET). It is this constant switching back onto the captured context that can cause performance issues. These issues may lead to a less responsive application, especially as the amount of async code grows (due to the increased volume of context switching). Yet responsiveness is exactly what we are trying to achieve by using asynchronous code in the first place.
There are a few rules to bear in mind when using ConfigureAwait(false)
- The UI should always be updated on the UI thread i.e. you should not use ConfigureAwait(false) when the code immediately after the await updates the UI
- Each async method has its own context which means that the calling methods are not affected by ConfigureAwait()
- Code after ConfigureAwait(false) can still continue on the original thread if the awaited task has already completed, because the await then completes synchronously without switching threads
A good rule of thumb would be to separate out the context-dependent code from the context-free code. The goal is to reduce the amount of context-dependent code (which can typically include event handlers).
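A minimal sketch of this separation (the names are illustrative, and Task.Delay stands in for real I/O):

```csharp
using System;
using System.Threading.Tasks;

public static class ReportService
{
    // Context-free helper: nothing after the await touches the UI,
    // so the await is configured with ConfigureAwait(false).
    public static async Task<string> BuildReportAsync(string subscriber)
    {
        await Task.Delay(10).ConfigureAwait(false); // simulate I/O
        return $"Report for {subscriber}";
    }
}

public static class ReportPage
{
    // Context-dependent caller (think of an event handler in a UI app):
    // the result must be displayed on the UI thread, so this await is
    // NOT configured and the captured context is restored.
    public static async Task ShowReportAsync(Action<string> display)
    {
        var report = await ReportService.BuildReportAsync("testsubscriber");
        display(report); // in a real app: this.reportLabel.Text = report;
    }
}
```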
We can modify the Get() RESTful service as follows.
public async Task<string> Get(string subscriber, string trackertype)
{
var response = await this.GetData(subscriber, trackertype).ConfigureAwait(false);
return response;
}
Deadlocks such as this arise from not fully understanding asynchronous code; the developer ends up with code that is partly synchronous and partly asynchronous.
By following the suggestions in this article, you should see performance gains in your own code, as well as better understanding how asynchronous code works under the hood.
Following on from a couple of my previous articles, I would like to both reinforce the ideas I laid out in them, as well as consolidate those ideas. In an article[^] from October 2017 I described a pattern I use for designing and implementing RESTful APIs, specifically with regards to implementing RESTful GET APIs. In an article[^] from July 2018 I described the principle of reducing the client surface area, and how this leads to cleaner, simpler and less complex code, particularly with regards to implementing RESTful APIs.
Where I currently work, we have a library of RESTful ASP.NET Web APIs that our web and mobile applications consume. These cover many different types of query as they are used in many different ways by the particular applications. For example the mobile app (which is aimed at fleet drivers) fetches data for the currently signed-in user, their latest mileage updates, their account manager, journeys they have made etc. The web application fetches data relating to users, roles, permissions, documents etc.
These are all GET methods that perform a variety of different queries against different data types. When designing the client API surface required for all these APIs I wanted to make them all consistent, irrespective of what data was being returned, or what query filters were being specified.
To clarify the problem a little further, I wanted to use the same client API for all data types e.g. mileage, user, company, journey etc. Further to this, I wanted the way in which the data was queried to be consistent. Example queries are listed below.
- Fetch me the mileage data for this user
- Fetch me the mileage data for this date range
- Fetch me the journey data for this date
- Fetch me the journey data for this user
- Fetch me the permissions for this user
- Fetch me the documents for this user
These are all queries that work on different data (mileage data, journey data, permissions data, documents data) and interrogate the data in different ways (by user, by date). Crucially, I wanted all of these queries to map onto a single GET API for consistency, and to reduce the complexity of the client (by reducing the client facing API to one API instead of multiple APIs). Reducing the client facing API is the principle of reducing the surface area of the client.
I finally came up with the following API design.
- I have a single controller with a GET method that accepts two parameters.
- The first parameter is a string that designates the type of query e.g. "getmileagebyuser", "getjourneybydate" etc
- The second parameter is a serialised query object that contains the values needed to query (or filter) the data e.g. the user ID, the date or whatever filters are required to satisfy the request.
- All queries must return their data as a serialised string (which the client can de-serialise back into an object).
For the purposes of clarity the code examples used here omit error checking, logging, authentication etc. to keep the code as simple as possible. In my own library of RESTful APIs I have separated out the requests made by the mobile app from those made by the web app. I therefore have two controllers, each with a single GET method that does all the heavy lifting of fulfilling the many different query requests. I created a different controller for each type of client to prevent the controllers from bloating. You can separate out the requests any way you want. If you don't have many queries in your application, then you could simply place them all in a single GET method in a single controller. That is a design decision only the developer can make.
The controllers are called MobileTasksController and WebTasksController. For the purposes of this article I will focus on the latter controller only, although they both employ the same design pattern that I am about to describe.
First let's define our basic controller structure.
public class WebTasksController : BaseController
{
public WebTasksController()
{
}
public string WebGetData(string queryname, string queryterms)
{
}
}
You will need to decorate the WebGetData() method for CORS to allow clients to make requests to your GET method.
[HttpGet]
[EnableCors(origins: "*", headers: "*", methods: "*")]
public string WebGetData(string queryname, string queryterms)
{
}
Enable CORS with the appropriate settings for your own particular application.
As we can see, the WebGetData() method has two parameters.
- queryname is a string that designates the type of query e.g. "getmileagebyuser", "getjourneybydate" etc
- queryterms is a serialised query object that contains the values needed to query (filter) the data e.g. the user ID, the date or whatever filters are required to satisfy the query request
Here's the class that I use for passing in the query filters.
[DataContract]
public class WebQueryTasks
{
[DataMember]
public Dictionary<string, object> QuerySearchTerms { get; set; }
public WebQueryTasks()
{
this.QuerySearchTerms = new Dictionary<string, object>();
}
}
At its core it comprises a dictionary of named objects. A dictionary of objects allows us to pass in filters for any type of data e.g. dates, ints, strings etc. We can also pass in as many filters as we need, e.g. fetch all the journeys for a specific user for a specific date. In that example, we pass in two filters.
- The user ID
- The date
For example, a query containing a single email filter serialises to the following string, which is then passed as the second parameter to the RESTful GET method.
{"QuerySearchTerms":{"email":"test@mycompany.co.uk"}}
Implementing our queries in this way makes for very flexible code that allows us to query our data in any way we want.
var user = GetUser(emailaddress);
WebQueryTasks query = new WebQueryTasks();
query.QuerySearchTerms.Add("userid", user.Id);
query.QuerySearchTerms.Add("journeydate", DateTime.Now);
string queryterms = ManagerHelper.SerializerManager().SerializeObject(query);
The WebGetData() method then needs to deserialise this object and extract the filters from within. Once we have extracted the filters we can then use them to fetch the data as required by the request.
WebQueryTasks query = ManagerHelper.SerializerManager().DeserializeObject<WebQueryTasks>(queryterms);
if (query == null || !query.QuerySearchTerms.Any())
{
throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.BadRequest, new HttpError("Unable to deserialise search terms.")));
}
The core of the WebGetData() method is a switch statement that takes the queryname as its input. Then, depending on the type of query, the method will extract the necessary filters from the WebQueryTasks parameter.
The names of the queries are stored as constants but could equally be implemented as an enum if preferred. We don't want to hard-code the names of our queries into the method, so any approach that separates these out is fine.
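As a sketch, the constants class might look like this (the values are illustrative; note that they need to be lower case, because the controller switches on queryname.ToLower()):

```csharp
// Query names used by the WebGetData() switch statement. The values must
// be lower case to match queryname.ToLower() in the controller.
public static class WebTasksTypeConstants
{
    public const string GetCompanyByName = "getcompanybyname";
    public const string GetCompanyById = "getcompanybyid";
}
```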
In the example below there are two queries. One returns company data for a specified user. The second returns company data for a specified company ID. In each case the code follows the same pattern.
- select the appropriate case statement in the switch
- extract the filters from the query
- invoke the appropriate backend service to fetch the data using the extracted filters (after first checking that the filter(s) are not empty)
- serialise the data and return it to the client
object temp;
string webResults;
switch (queryname.ToLower())
{
case WebTasksTypeConstants.GetCompanyByName:
webResults = this._userService.GetQuerySearchTerm("name", query);
temp = this._companiesService.Find(webResults);
break;
case WebTasksTypeConstants.GetCompanyById:
webResults = this._userService.GetQuerySearchTerm("companyid", query);
int companyId = Convert.ToInt32(webResults);
temp = this._companiesService.Find(companyId);
break;
default:
throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.BadRequest, new HttpError($"Unknown query type {queryname}.")));
}
We then need to serialise the results and return these to the client.
var result = ManagerHelper.SerializerManager().SerializeObject(temp);
return result;
In the production version of this controller, I have implemented many more queries in the switch statement, but for clarity I have only implemented two for the purposes of this article.
Here is the full code listing.
[HttpGet]
[EnableCors(origins: "*", headers: "*", methods: "*")]
public string WebGetData(string queryname, string queryterms)
{
WebQueryTasks query = ManagerHelper.SerializerManager().DeserializeObject<WebQueryTasks>(queryterms);
if (query == null || !query.QuerySearchTerms.Any())
{
throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.BadRequest, new HttpError("Unable to deserialise search terms.")));
}
object temp;
string webResults;
switch (queryname.ToLower())
{
case WebTasksTypeConstants.GetCompanyByName:
webResults = this._userService.GetQuerySearchTerm("name", query);
temp = this._companiesService.Find(webResults);
break;
case WebTasksTypeConstants.GetCompanyById:
webResults = this._userService.GetQuerySearchTerm("companyid", query);
int companyId = Convert.ToInt32(webResults);
temp = this._companiesService.Find(companyId);
break;
default:
throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.BadRequest, new HttpError($"Unknown query type {queryname}.")));
}
var result = ManagerHelper.SerializerManager().SerializeObject(temp);
return result;
}

Just to repeat: for the purposes of this article, the method above has had all error checking, logging, authentication etc. removed for the sake of clarity.
I have implemented this pattern in all my GET APIs to great success. It is very flexible and allows me to query the data in as many ways as necessary. It also makes the client code simpler by reducing the client surface area (the client only needs to interact with a single endpoint / controller), and it enforces consistency by ensuring that all queries look alike (they must all pass in two parameters - the first designating the query type, the second containing the query filters).
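To make the client side concrete, a call to this endpoint might look like the following sketch. This is an illustration only: the shape of WebQueryTasks (I assume a dictionary-like QuerySearchTerms), the URL and the ID value are my assumptions, not the production client code.

```csharp
// Hypothetical client-side call - type shapes, URL and values are illustrative only
var query = new WebQueryTasks();
query.QuerySearchTerms.Add("companyid", "42");

string queryTerms = ManagerHelper.SerializerManager().SerializeObject(query);

using (var client = new HttpClient())
{
    // Single endpoint: the query type and its filters are the only two parameters
    string url = "https://myapi.example.com/api/data/WebGetData" +
                 $"?queryname={WebTasksTypeConstants.GetCompanyById}" +
                 $"&queryterms={Uri.EscapeDataString(queryTerms)}";

    string json = await client.GetStringAsync(url);
    // Deserialise json back into the expected company object here
}
```

Whatever the exact client stack, the point is that every query goes through the same two-parameter call.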
This pattern of API design delivers all of the following benefits
- Simpler server side code by producing substantially less code due to the generic nature of the pattern
- Simpler client side code by only having a single endpoint to interact with
- High degree of flexibility by allowing the APIs to filter the data any way the application requires
- Consistency by ensuring that all requests to the RESTful API are the same
I have been using this pattern in my own RESTful APIs for several years, including several production mobile apps (available in the stores) and line-of-business web apps. With the pattern in place, I can quickly and easily add new RESTful APIs. This makes adding new services to the apps faster, and makes the process of adding value to the apps much simpler.
Feel free to take this idea and modify it as necessary in your own RESTful APIs.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
I've been writing asynchronous code with the .NET Framework for several years now, and find that the .NET Framework does a good job of hiding the underlying conceptual details. The concepts are pretty straightforward if you understand how asynchronicity works. As I've found over the years though, these concepts are not always well understood or applied by less experienced developers. By less experienced, I don't always mean junior developers. I've come across senior developers who have struggled with asynchronous code too.
I've helped several developers fix issues in their code caused by misunderstandings of asynchronicity, and code reviews have regularly highlighted the same basic misunderstandings of how to implement asynchronous code with the .NET Framework.
In this article I want to go through the basics of writing asynchronous code using the .NET Framework. I'll use C# to illustrate all the examples, but conceptually the code works the same when transposed to VB.NET or any other .NET language. I'll use examples from our ASP.NET Web API services code base, which makes extensive use of asynchronicity to give performant and responsive code. Our mobile apps all rely on these services for delivering functionality to the end user's device, so it is incumbent on our apps to be highly responsive and performant. I have therefore made all of these services asynchronous to meet these requirements. I may follow this article up in the future with more advanced scenarios, but for now I will stick to the basics.
What is Asynchronous Programming?
Let's start with a basic understanding of asynchronous programming. Most code gets executed in a sequential manner i.e.
- execute line 1
- execute line 2
- execute line 3
We have 3 lines of code that each execute some command, and they run one after the other: "execute line 1" runs first; when it has finished, "execute line 2" runs; when that has finished, "execute line 3" runs. These commands run sequentially, one after another. This is referred to as synchronous code. The next line of code can only be executed when the previous line has completed.
var myList = new List<string>();
myList.Add("item1");
myList.Add("item2");
myList.Add("item3");
myList.Remove("item1");

A trivial example is the code above. The first line creates a string list called myList. When this has completed, the next 3 lines add items to the list (item1, item2 and item3). Finally, we remove item1 from the list. These lines of code are executed one after the other in a sequential (synchronous) manner.
When code is executed sequentially like this, one command after the other, we say that it has been executed synchronously.
We need to write our code differently when we interact with any kind of I/O device, such as a file, a network or a database. The same applies when we execute CPU bound operations, such as rendering high-intensity graphics in a game. We cannot make any guarantees about how quickly the device or operation will respond to our request, so we need to factor in waiting time when making I/O or CPU intensive requests.
An analogy may be making a telephone call to book an appointment to have your car serviced. Immediately after making your booking you need to write down the date and time of the booking. You may get straight to the front of the telephone queue if you're lucky. Alternatively, you may find you are further down the queue and have to wait to get through to the garage. Either way, you cannot write down the date and time of the booking until you have got through to the garage.
In this scenario you don't know exactly when you can write down the date and time of the booking, as you may have to wait to get through to the garage.
And this is exactly how asynchronous code works.
When your code accesses I/O devices such as accessing a file, network or database (or makes a request to a CPU intensive operation) you cannot guarantee when your request will be serviced. For example, if you are accessing a database, there may be latency on the network, it may be hosted on legacy hardware, the record you are accessing may be locked and so on. Any one of these will affect the timeliness (or otherwise) of the response to your request.
If your network or database is busy and under heavy load, any request sent to it will be slower than requests made during quieter times. So executing a command that depends on the result of an I/O request immediately after submitting that request is likely to fail, as the response may not have arrived yet.
Example
- connect to database
- fetch records from database
- close database connection
If you were to execute the above steps synchronously, you could easily run into the situation where you are trying to fetch the records before you have fully connected to the database. This would fail, resulting in an exception being thrown. What you instead need to do is attempt to connect to the database, and ONLY when that has succeeded should you attempt to fetch the records. Once you have fetched the records, you can then close the database connection.
This is exactly how asynchronous code works. We can rewrite the above pseudo-code asynchronously.
- connect to the database
- wait for connection to database to be established
- once connected to the database fetch records from database
- close database connection
The two sets of pseudo-code look very similar, with the key difference being that the latter waits for the connection to the database to be established BEFORE making any attempts to fetch records from the database.
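In C#, the asynchronous pseudo-code above might be realised along the following lines. This is a sketch using ADO.NET's async APIs; the connection string, table and column names are hypothetical.

```csharp
using System.Collections.Generic;
using System.Data.SqlClient; // or Microsoft.Data.SqlClient in newer projects
using System.Threading.Tasks;

public class CustomerRepository
{
    public async Task<List<string>> GetCustomerNamesAsync(string connectionString)
    {
        var names = new List<string>();
        using (var connection = new SqlConnection(connectionString))
        {
            // 1. connect to the database and wait for the connection to be established
            await connection.OpenAsync();

            // 2. once connected, fetch the records
            using (var command = new SqlCommand("SELECT Name FROM Customers", connection))
            using (var reader = await command.ExecuteReaderAsync())
            {
                while (await reader.ReadAsync())
                {
                    names.Add(reader.GetString(0));
                }
            }
        } // 3. the connection is closed when the using block disposes it

        return names;
    }
}
```

Each await ensures the previous step has completed before the next one runs, which is exactly the ordering the pseudo-code describes.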
Hopefully by this point the goal of asynchronous programming is clear: to allow our code to wait for responses from I/O or CPU bound resources such as files, networks and databases, without blocking while it waits.
Asynchronous programming with C#
Now that we understand the principles and goals behind asynchronous programming, how do we write asynchronous code in C#?
Asynchronous programming is implemented in C# using Task and Task<T>. These model asynchronous operations, and are supported by the keywords async and await. Task and Task<T> are the return values from asynchronous operations and can be awaited.
Here's a function that POSTs data to a RESTful endpoint, and does so asynchronously. For the purposes of simplicity I have removed all authentication etc from the code samples I will use.
public async Task<HttpResponseMessage> PostData(string url, HttpContent content)
{
using (var client = new HttpClient())
{
return await client.PostAsync(new Uri(url), content);
}
}

Things to note.
- The method returns a Task of type HttpResponseMessage to the calling program i.e. when awaited, the method yields an instance of HttpResponseMessage (e.g. an HTTP 200 if the request was successful).
- The async keyword in the method signature is required because the method uses await in its body i.e. it needs to await the response from the RESTful API before the response can be handed back to the calling program.
To call this function we write the following code.
var response = await PostData(url, content);

The calling code (above) needs to await the response from the PostData() method and does so using the await keyword. Whenever you invoke an asynchronous method such as PostAsync(), you need to await the response. The two keywords go hand in hand. Asynchronous methods need to be awaited when they are invoked.
Here's another RESTful API method that fetches some data from a RESTful endpoint. The RESTful endpoint returns data in the form of a serialised JSON string (which the calling program will then de-serialise back into an object).
public async Task<string> GetData(string url)
{
using (var client = new HttpClient())
{
using (var r = await client.GetAsync(new Uri(url)))
{
string result = await r.Content.ReadAsStringAsync();
return result;
}
}
}

Things to note.
- The method returns a Task of type string to the calling program (the JSON serialised response from the RESTful endpoint).
- The async keyword in the method signature is required because the method awaits the GetAsync() and ReadAsStringAsync() calls in the method body.
To call this function we write the following code.
string response = await GetData(url);
if (!string.IsNullOrEmpty(response))
{
}

The calling code (above) needs to await the response from the GetData() method and does so using the await keyword.
Key takeaways
- Async code can be used for both I/O bound and CPU bound code
- Async code uses Task and Task<T>, which model asynchronous operations and are the return values from asynchronous methods (as we saw in the PostData() and GetData() methods)
- The async keyword turns a method into an async method which then allows you to use the await keyword in its method body (as we saw in the PostData() and GetData() methods).
- Awaiting an asynchronous method suspends the calling method and yields control back to its caller until the awaited task is complete
- The await keyword can only be used within an async method
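The takeaways above can be seen in a trivial, self-contained example, with Task.Delay standing in for a slow I/O or CPU bound operation. This is my own illustration, not code from our services.

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static async Task<int> SlowAdditionAsync(int a, int b)
    {
        // Task.Delay stands in for a slow I/O or CPU bound operation
        await Task.Delay(100);
        return a + b;
    }

    static async Task Main()
    {
        Console.WriteLine("Requesting the slow addition...");

        // The await suspends Main here and yields control to its caller
        // until the awaited task completes
        int result = await SlowAdditionAsync(2, 3);

        Console.WriteLine($"Result: {result}"); // prints "Result: 5"
    }
}
```

Note that async Main requires C# 7.1 or later.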
This is the first in what will hopefully be a series of articles on asynchronous programming. I will cover other areas in future articles (tips, advice, advanced scenarios and even JavaScript). Hopefully this has given you a taster of how to implement the basics of asynchronous programming with C#. Watch this space for further articles.
I recently came across some strange behaviour in our ASP.NET Core 2.2 web application. A colleague of mine, who was working on some new functionality, had checked in several JavaScript files. These were third-party JavaScript files adding support for drag & drop. The majority of the files for this third-party library were already minified, with the exception of one.
For some reason this one particular JavaScript file was not minified, so we added it to bundleconfig.json in Visual Studio so that our build process would minify it. The bundleconfig.json minifies several JavaScript files and outputs the aggregated file as site.min.js. Whilst I was testing the latest version of the app I was getting all sorts of errors in the browser as many of the JavaScript functions were not being found. This seemed strange, as everything had been working perfectly, and all we had done was check in a few JavaScript files.
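For reference, a bundleconfig.json of the kind described looks something like the sketch below. The file names here are hypothetical, not our actual project layout.

```json
[
  {
    "outputFileName": "wwwroot/js/site.min.js",
    "inputFiles": [
      "wwwroot/js/site.js",
      "wwwroot/js/dragdrop-plugin.js"
    ],
    "minify": {
      "enabled": true,
      "renameLocals": true
    }
  }
]
```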
Looking at the site.min.js file on the build and test servers, it became apparent that it contained only the contents of the un-minified third-party JavaScript file. All of the other files we were minifying had somehow been dropped from the resultant site.min.js file.
After much investigation I narrowed down the issue to the following command in our build pipeline.
dotnet publish -c release

This command was recreating the site.min.js file, but was including only the un-minified third-party JavaScript file and dropping all the other files specified in bundleconfig.json. I excluded this step from the build process to check, and sure enough, the culprit was definitely this build command.
I managed to solve the problem by manually minifying the culprit JavaScript file and adding it to the project in its minified form. I then excluded it from the bundleconfig.json minification process. This solved the problem, and everything works perfectly again.
So basically, if you're including third-party JavaScript files, make sure you add them to your Visual Studio project in minified form (unless you're using a CDN of course). Don't attempt to minify third-party files in your build process; only minify your own JavaScript files. It took me a few hours to diagnose and fix the problem, so hopefully by reading this I may save someone else the same pain I went through.
Whenever I'm mentoring a more junior member of the software development team, there are two primary traits that I encourage them to learn. These traits transcend programming language, methodology, technical stack or anything else relevant to their role. They are structure and diligence. Both should permeate everything they do in their everyday work. Being mindful of them will make them better software developers. I will explain why these traits are so important.
Structure
Approaching your work with a structured mindset allows you to demarcate and separate out the various elements of the problem you are solving. From grouping the different areas of the requirements specification, to grouping the components and classes in the class hierarchy, to grouping the related unit tests... having structure allows you to demarcate the boundaries between these different elements. Everything has a structure; the trick is to define it clearly and communicate it to the rest of the team. If you are documenting the requirements for a piece of functionality, structure the document to demarcate the different areas e.g. functional requirements, non-functional requirements, UI considerations etc. If developing a new component, your class structure should clearly demarcate the different behaviours and areas of responsibility and their interactions. Anyone reading through the code should be able to quickly determine what the different classes do and how they relate to each other from the structure you have implemented. Group related elements together and enforce this in your coding standards document. Everything you do should be structured, logical and consistent.
Diligence
Approaching your work with due care and diligence will help eliminate mistakes and make you a better developer. Be conscientious and mindful of what you are doing at all times. Before checking in that code, make sure you do a diff, run a full rebuild and execute all affected unit tests. This may take additional time, but it will always be quicker than the time it takes to fix a broken build. If writing a document such as a requirements specification, take the time to proofread it, checking it over for spelling and grammar as well as accuracy. Work smart, not fast. Reducing the number of mistakes you make by being more diligent will earn you a reputation as someone who is dependable, produces high-quality work and takes their role seriously. Don't be the person who is known for constantly making mistakes, breaking the build or submitting code that doesn't work because they didn't test it sufficiently.
Applying structure and diligence to everything you do will have positive benefits on your work. They can be applied irrespective of your particular role (developer, tester, designer) or the tools and technologies you use. I would prefer to work with a developer who takes these traits seriously than one who thinks producing more lines of code than the next developer makes them more productive. I would always pick quality over quantity. A customer is far more likely to forgive a slipped deadline if they eventually get something of high quality, rather than something delivered on time that contains bugs.
Be structured and diligent, apply both with rigour, and I can guarantee that the quality of the code produced by you and your team will increase.
At what point should you consider rewriting a software application? Just because an application is old doesn't mean it should be rewritten. So what factors ought to be considered when attempting to justify a rewrite? There are many things to consider, and rewriting any application should never be taken lightly. In fact, a cost-benefit analysis is probably a good starting point.
In today's fast-paced software development environment, today's latest fad can quickly become tomorrow's long-forgotten hype. What does it even mean to be legacy? According to Wikipedia:

Quote: a legacy system is an old method, technology, computer system, or application program "of, relating to, or being a previous or outdated computer system"

This article is not intended to be a detailed discussion of the considerations to take into account when looking to rewrite a legacy application. That would be a considerably lengthy article. Rather, it looks at some examples from my own experiences with legacy applications. Like most developers, I thrive on working with the latest shiny new tools, but there are also times when you need to work with that legacy application too. I have heard many developers berating these legacy applications, sometimes for good reason. But quite often, the legacy application has been working away for years, quietly, solidly, without causing a fuss.
I've worked with many legacy applications over the years. Some were surprisingly good, some just plain awful. Some of them, despite their age, were rock solid and were capable of running far into the future. Others spluttered and juddered their way along and needed a lot of man-handling to keep them running.
Just because an application is legacy is not reason enough to justify a rewrite. I remember working for one particular company where the business-critical back-end application was developed in COBOL. It was over twenty years old but rock solid. It rarely caused problems or generated errors. It just worked.
Another company I worked for many years ago also had a lot of legacy code (and according to sources at the company, much of the legacy code is still there to this day). The code was part of their core business logic and had been around for over a decade. This was accountancy and financials logic, and whilst the code had been updated with bug fixes over the years, it didn't require much man-handling to keep it up and running. In fact, when they decided to upgrade the application to use newer development environments and tooling, they kept much of the legacy code as they knew it worked. They didn't want to risk screwing up their core business logic by rewriting it.
Age alone is not a deciding factor when considering whether to rewrite an application. There are many legacy applications that run just fine with few problems. Alternatively, there are a great many applications developed with modern technology and tooling that are plain awful.
A few things to consider.
- Does the application code contain bugs and / or cause regular problems or errors?
- Does it require man-handling to keep it up and running?
- Does it meet its non-functional requirements i.e. is it secure, performant etc.?
- Is it easy to extend and add new features?
- Does it require legacy hardware that may be insecure?
- What are the running costs of the application (development costs for fixing bugs, server / hardware costs, third-party costs etc)?
- Does it interact with third-party applications that may have updated their APIs?
- Has it been developed using outdated environments or tools?
Not all of these considerations will be applicable to every scenario, so don't take the list in its entirety. They are merely intended to be conversation starters to elicit further discussion. Deciding whether or not to rewrite an application is not a decision that should ever be taken lightly; you need to take into account many different pieces of information and assess them in the context of the bigger picture. Age alone is not a compelling argument for a rewrite, but taken together with other factors, it may form part of one.
There's an approach that I have been using for several years now that has helped me improve and simplify my stored procedures. This is for stored procedures that return data i.e. SELECT stored procedures as opposed to INSERT or UPDATE stored procedures. This approach is particularly useful where a stored procedure needs to reference more than one table i.e. where there is a JOIN between one or more tables.
Firstly I create a VIEW of the data that I want to query. The VIEW contains all the tables, columns, JOINs etc as necessary. It is from this VIEW that the stored procedure will SELECT its data as necessary. All the stored procedure needs to do then is filter the data from the VIEW with a WHERE clause.
The advantage of this approach is that the VIEW hides the underlying details of all the JOINs. The stored procedures then become simple affairs, as they simply SELECT from the VIEW. This leads to simpler stored procedures, and allows a VIEW to be reused across multiple stored procedures, so you don't need to repeat the same complicated JOINs in each of them.
Example VIEW
CREATE VIEW [dbo].[v_CardDefinitions] AS
SELECT
CardDefinitions.*,
Cards.ID AS CardID,
Cards.ParentID,
Cards.[Index],
Cards.UserID,
Cards.CardDefinitionID,
Users.Email AS UserEmail,
Modules.Name AS ModuleName
FROM
CardDefinitions
LEFT JOIN
Cards ON CardDefinitions.ID = Cards.CardDefinitionID
JOIN
Modules ON CardDefinitions.ModuleID = Modules.ID
LEFT JOIN
Users ON Cards.UserID = Users.ID
WHERE
CardDefinitions.Active = 1

Example stored procedure
CREATE PROCEDURE [dbo].[Cards_GetById]
@cardId INT
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
SELECT
DISTINCT ID, Name, [Permissions]
FROM
v_CardDefinitions
WHERE
ID = @cardId
END

So to summarise the approach.
- Create a VIEW of the data that JOINs all the necessary tables
- Create a stored procedure that SELECTs data from the VIEW by filtering the VIEW using WHERE clauses
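To illustrate the reuse point, a second stored procedure can SELECT from the same VIEW with just a different WHERE clause. The procedure name below is hypothetical; the UserID column comes from the VIEW defined above.

```sql
CREATE PROCEDURE [dbo].[Cards_GetByUser]
	@userId INT
AS
BEGIN
	SET NOCOUNT ON;

	-- Same VIEW as Cards_GetById; only the filter differs
	SELECT DISTINCT
		ID, Name, [Permissions]
	FROM
		v_CardDefinitions
	WHERE
		UserID = @userId
END
```

Neither procedure needs to know anything about the JOINs between CardDefinitions, Cards, Modules and Users.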
This is an approach that I use regularly as it simplifies the stored procedures I need to create.
I recently had a requirement to update multiple rows with the same value. We have a table that stores information about documents (Excel documents, Word documents, text documents, images, reports etc). Every document has an owner associated with it. This person has admin privileges over the document. After a discussion with one of our users, they wanted the ability to change the owner of a document. Doing this at the level of a single document is straightforward. However, the user wanted this for multiple documents. For example, if a user is due to leave the business, they wanted the ability to change the owner of all that user's documents to a new owner.
I therefore needed the ability to pass a list of document IDs into a stored procedure. The stored procedure would then change the owner for all the documents in the list to the specified owner. Passing in the comma-delimited list of document IDs wouldn't be difficult, as this is essentially a long string. The tricky part would be to iterate through the items in the list i.e. to fetch each document ID from the comma-delimited list so that the owner can be updated.
The first thing I needed to do was create a function that could iterate through the list. I created a Table-Valued Function (TVF) called Split to achieve this. If you don't already know, a TVF is a function that returns a table (as the name suggests). In our case, it returns a two-column table containing a unique ID and an item from the list. So if there are 10 items in the list, there will be 10 rows in the table returned by our TVF.
CREATE FUNCTION [dbo].[Split]
(
@List nvarchar(2000),
@SplitOn nvarchar(5)
)
RETURNS @RtnValue table
(
Id int identity(1,1),
Value nvarchar(100)
)
AS
BEGIN
While (Charindex(@SplitOn,@List)>0)
Begin
Insert Into @RtnValue (value)
Select
Value = ltrim(rtrim(Substring(@List,1,Charindex(@SplitOn,@List)-1)))
Set @List = Substring(@List,Charindex(@SplitOn,@List)+len(@SplitOn),len(@List))
End
Insert Into @RtnValue (Value)
Select Value = ltrim(rtrim(@List))
Return
END

The function has two parameters. The first is the comma-delimited list of document IDs
@List = '1, 2, 3, 4, 5'

The second parameter is the delimiter. In this case we are passing a comma-delimited list, hence the delimiter is a comma.
@SplitOn = ','

The function loops through the list, locating the next item by searching for the next occurrence of the delimiter. It keeps doing this until it cannot find any more occurrences. Each item it finds between the current and next delimiter is inserted into the table that will be returned by the TVF.
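A quick way to sanity-check the function is to SELECT from it directly. For the sample list above this should return five rows, with the leading spaces trimmed away by the LTRIM/RTRIM calls:

```sql
SELECT Id, Value FROM dbo.Split('1, 2, 3, 4, 5', ',')

-- Expected result:
-- Id  Value
-- 1   1
-- 2   2
-- 3   3
-- 4   4
-- 5   5
```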
We next need to write a stored procedure that invokes our Split Table-Valued Function.
CREATE PROCEDURE [dbo].[Documents_UpdateOwner]
@owner INT,
@documentids NVARCHAR(1000)
AS
BEGIN
UPDATE
Documents
SET
UploadedBy = @owner
WHERE
ID IN (SELECT CONVERT(INT, Value) FROM Split(@documentids, ','))
END

There are two parameters to the stored procedure. The first is the ID of the new owner for the documents. The second is a comma-delimited list of IDs for the documents whose owner we wish to change. The items returned from the Split TVF are in string format, so if we need to compare against data in another format we need a conversion. In our case we are comparing against an INT column and therefore convert each item from NVARCHAR to INT. Obviously no conversion would be needed if we were comparing against string data.
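Calling the stored procedure is then a single statement. For example, to move three documents to a new owner (the ID values here are made up for illustration):

```sql
EXEC [dbo].[Documents_UpdateOwner]
	@owner = 42,
	@documentids = '101,102,103'
```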
I have since used this Table-Valued Function in other stored procedures where I need to iterate through a list of items. It's a very efficient way of updating multiple rows. Instead of having to make multiple calls to a stored procedure to update each document's owner, I can make one call and update all of them at once. This is a neat way to handle those scenarios where you need to update data from a list of items.
I recently had some plumbing work done in my house that made me think of a similarity between software development and plumbing. I realise they are fundamentally different beasts, but bear with me. Whilst talking to my plumber, he was showing me the differences between the work he had done, and the work done on one of the other houses in the street where I live. Even as a complete novice I could see the differences he was describing. He wasn't trying to be disrespectful or mean to the other plumber (he didn't know him as he had never met him), but merely demonstrating how high his quality of work was using a direct example.
- The holes made in the brickwork in my house were neat and the pipes fitted tightly through with no gaps. In the other house they were rough and there were gaps where the pipes came through.
- Where brickwork needed replacing outside my house, it was replaced with identically coloured bricks, and you couldn't see any difference when looking at the wall. On the other house, the bricks had been replaced with differently coloured bricks, and in a way that broke the interlacing (bricks are laid in an overlapping pattern for strength).
- There were no pipes running outside my house. The pipes running outside the other house were left totally exposed to the elements as they were not protected with lagging.
I'm sure there were similar differences inside the houses too.
The point I am making is that my plumber showed care. His work was of a very high standard and demonstrated diligence and work ethic. The other plumber was satisfied with far lower standards. For him, close was good enough.
This same comparison can also be made with software development. When I write code, I take care to ensure that my code is well organised, structured and readable. I ensure that there are unit tests that exercise an adequate level of code coverage. I implement best practices and aim to be consistent.
When I look at a piece of code, I can very quickly determine whether care was put into it. Sloppy, ill-thought-out code that is inconsistent and unstructured is among the signals that reveal such a lack of care. Even as a novice, you can still demonstrate a level of care within your work. This is not about how knowledgeable or experienced you are, but how diligent you are. It is entirely possible to write code with care and attention to detail despite being inexperienced.
As a professional software engineer, I want others to look at my code and think "Hey this guy has put a lot of effort and care into writing this". It will have my name against it. I have high standards, and I expect the same from every other developer on the team. I have taken it upon myself to write the coding standards document that we all follow as a team. Not by dictatorship, but by democracy.
When you have checked in your code, take a moment to reflect on what another developer would think of it. What would they think when looking at your code? What does your code say about you and your work ethic? Our bread and butter is our code. The care, love and diligence that we use to craft it speaks volumes about us as professional software developers. Make sure that when another developer looks at your code, at the very least they will say that you cared about what you were doing.
modified 8-Mar-19 12:24pm.
Following on from an earlier article[^] I wrote about versioning a .NET Core 2.0 application, I have now had to revise my approach, since the method I used for that version of the application is not supported in .NET Core 2.2. In that article, I demonstrated how to use a tool called setversion[^] to version a .NET Core 2.0 application. After upgrading our application to .NET Core 2.2, I found that this tool is no longer supported.
Instead of using the setversion tool, I am using the dotnet publish command-line utility. When using this command-line utility, you are able to specify a version number.
I am still using the same build script as described in my previous article, and this is invoked from our TFS build server in the same manner. Just to reiterate, within TFS you have the ability to pass arguments to your Windows batch files. I am passing the build version number $(Build.BuildNumber) as the argument.
I then invoke my Windows batch file (called setversion.bat):
@echo off
cls
ECHO Setting version number to %1
cd <projectFolder>
dotnet restore
dotnet publish <project>.csproj --configuration Release /p:Version=%1
This all works perfectly, and the deployed application assemblies are stamped with the correct version number.
In my previous article Sending Push Notifications with Azure Notification Hub[^] I briefly described our rationale for selecting Azure Notification Hub over alternatives. I have now fully implemented an ASP.NET Web API service for sending push notifications as well as managing their associated tags.
The service provides the following functionality.
- Sends push notifications to either Android or iOS devices (with or without tags)
- Adds tags
- Removes tags
If you aren't familiar with the concept of tags where push notifications are concerned, you aren't alone. I hadn't heard of them either until I started working with push notifications. The concept is surprisingly simple, yet provides great flexibility in how you target where your push notifications are sent.
When a device is registered for push notifications (via code running on the device), you can optionally assign tags to the device registration. These are a list of characteristics (or interests) that the device wishes to receive push notifications about. Tags can either be set by the user (perhaps via a preferences page where they can tick boxes to select the items they wish to receive push notifications about) or by the backend (where we can set characteristics to allow us to target specific devices when sending push notifications).
In our case, we have implemented the latter, i.e. we are adding tags that relate to the user's device to allow us to send targeted push notifications. For example, we have added tags that specify the user's ID, their company ID etc. This allows us to send a push notification to a specific user's device (by specifying the user's ID) or to all the users of a specific company (by specifying the company ID).
When a push notification is sent, you can specify a tag alongside your push notification message. The push notification is then only sent to registered devices that have expressed an interest in that particular tag. So in our case, we can send a message to a specific user by supplying their ID as the tag. Or we can send a push notification and supply the company ID, thus ensuring that it is only sent to users of that specific company. We can slice and dice the demographics of our user base in any way we find meaningful by simply registering the device with the desired tag(s).
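As a rough sketch (not our actual service code), sending a tagged notification from a .NET backend looks something like the following, assuming the Microsoft.Azure.NotificationHubs NuGet package. The tag values and payloads here are hypothetical, and the exact send methods (e.g. SendFcmNativeNotificationAsync versus the older SendGcmNativeNotificationAsync) depend on the package version:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.NotificationHubs;

public class PushSender
{
    private readonly NotificationHubClient _hub;

    public PushSender(string connectionString, string hubName)
    {
        // Connection string and hub name come from the Azure portal
        _hub = NotificationHubClient.CreateClientFromConnectionString(connectionString, hubName);
    }

    // Sends the message only to devices registered with the given tag,
    // e.g. a user tag such as "user:1234" or a company tag such as "company:42"
    public async Task SendToTagAsync(string message, string tag)
    {
        string applePayload = "{\"aps\":{\"alert\":\"" + message + "\"}}";
        string androidPayload = "{\"data\":{\"message\":\"" + message + "\"}}";

        await _hub.SendAppleNativeNotificationAsync(applePayload, tag);
        await _hub.SendFcmNativeNotificationAsync(androidPayload, tag);
    }
}
```

Sending without a tag (to every registered device) is simply a matter of omitting the tag argument on those same methods.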
This is a powerful way of decomposing the demographics of your user base. You can explicitly categorise your users by the tags they have registered with, which allows us to send targeted push notifications, right down to a specific user's device.
The service that I have implemented manages these tags, as well as providing the ability to send the push notifications themselves. The service therefore allows the backend to add and / or remove tags from a user's device. For example, when a user logs in on a device, the service is invoked to register them with various tags according to the information we hold on them. Likewise, we will remove those tags when they sign out.
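The tag management half of the service can be sketched roughly as follows, assuming the installation-based API of the Microsoft.Azure.NotificationHubs package (the installation IDs and tag names are placeholders, not our real values):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.NotificationHubs;

public class TagManager
{
    private readonly NotificationHubClient _hub;

    public TagManager(NotificationHubClient hub)
    {
        _hub = hub;
    }

    // Called when a user logs in: attach identifying tags to the device installation
    public async Task AddTagsAsync(string installationId, params string[] tags)
    {
        Installation installation = await _hub.GetInstallationAsync(installationId);
        installation.Tags = installation.Tags ?? new List<string>();

        foreach (string tag in tags)
        {
            if (!installation.Tags.Contains(tag))
            {
                installation.Tags.Add(tag);
            }
        }

        await _hub.CreateOrUpdateInstallationAsync(installation);
    }

    // Called when the user signs out: strip the identifying tags again
    public async Task RemoveTagsAsync(string installationId, params string[] tags)
    {
        Installation installation = await _hub.GetInstallationAsync(installationId);

        if (installation.Tags != null)
        {
            installation.Tags = installation.Tags.Except(tags).ToList();
            await _hub.CreateOrUpdateInstallationAsync(installation);
        }
    }
}
```

At login we might call AddTagsAsync(installationId, "user:1234", "company:42"), and the matching RemoveTagsAsync call at sign-out keeps the hub's view of the device in step with the user's session.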
This process is very straightforward, yet gives us an incredible level of flexibility for sending targeted push notifications to our users. If you haven't already looked into the concept of push notification tags, then I'd definitely have a look at them. They're a great idea.
In the latest version of the Xamarin Forms app that I am working on, we wanted to send push notifications to the devices. There were a couple of approaches we could have taken, the key candidates being Twilio (which we are already using for sending SMS messages) and Azure Notification Hub. After some initial exploration, the clear choice was Azure Notification Hub. Unsurprisingly, it had tight integration with Xamarin Forms and the Microsoft ecosystem, and was very straightforward to configure and get working.
There were also very good examples of how to make the necessary code changes to the respective Android and iOS projects to ensure we got this working quickly.
The beauty of working with Azure Notification Hub is that it abstracts us away from the underlying details of the Android and iOS platforms. Once we had made the necessary configuration and setup changes to enable push notifications for each platform, we integrated the platform specific push notification engines into Azure Notification Hub. From that point onwards, we only have to work with Azure Notification Hub. This gives us a far simpler and cleaner abstraction over our notification setup.
It is very simple to set up and send test push notifications to your registered devices using Azure Notification Hub. We have also integrated App Center event tracking for all device registrations and push notification sends. This gives us a helicopter view of what our code is doing under the hood, and helps us diagnose any errors should they arise.
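The App Center event tracking mentioned above is essentially a one-liner from shared code, assuming the Microsoft.AppCenter.Analytics package; the event name and properties in this sketch are illustrative rather than taken from our actual code:

```csharp
using System.Collections.Generic;
using Microsoft.AppCenter.Analytics;

public static class PushDiagnostics
{
    // Logs a custom event to App Center so device registrations show up
    // in the analytics dashboard alongside crashes and errors
    public static void TrackDeviceRegistered(string platform, string hubName)
    {
        Analytics.TrackEvent("PushDeviceRegistered", new Dictionary<string, string>
        {
            { "Platform", platform },  // e.g. "Android" or "iOS"
            { "Hub", hubName }
        });
    }
}
```

The optional properties dictionary is what makes these events useful for diagnosis, as you can slice the events by platform or hub in the App Center portal.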
The step-by-step tutorials I used can be found here[^].
So if you're looking to implement push notifications in your mobile app, give Azure Notification Hub a try.
With the imminent release of our latest mobile app, I thought I'd summarise how we ensured high levels of quality and demonstrated that the software was correct. I'm not going to write an article justifying the case for unit testing (it should go without saying that unit testing is a fundamental part of the development process - if not, you're doing it wrong), but rather explain how we implemented unit testing within the software for the app.
The architecture I favour when designing an application is to firstly reduce the surface area of the client[^]. Simply put, this entails keeping the UI code as sparse as possible and removing any and all code that is involved with the domain. The UI should ONLY contain code that relates to the UI. While this sounds straightforward, I have lost count of the number of times I've come across code bases where the UI contains code from the domain and/or the data layer.
In relation to a Xamarin Forms mobile app, you should keep the code in the Views as sparse as possible. The UI code should only invoke your domain code, it should NEVER implement it. Your Xamarin Views should contain code for manipulating the various UI controls, populating them with data etc. As soon as there is a need for anything beyond this, refactor the code and place it in a completely separate layer of the app. Within the context of a Xamarin Forms app, I created separate folders for such things as the models, services, entities etc. These were completely separate from the Views.
To enforce this separation of concerns, we adopted the MVVM design pattern. I won't go into great detail about the pattern here (there are many articles out there already). MVVM stands for
Model - View - View-Model
More correctly it could be named VVMM (View -> View-Model -> Model) as this is the order in which they relate to each other (in terms of dependency). The Model should have no knowledge of the View-Model. The View-Model should have no knowledge of the View. This is important when implementing an MVVM application, as it reduces the dependencies between the various parts of the application.
The View in an MVVM designed app is the UI element, or in the case of a Xamarin Forms app, the Views. Only UI code should be placed in the Views.
The View-Model is where the domain logic resides. All UI controls should be bound to properties in the View-Model. The code that provides your UI controls with data, hides/shows UI elements etc. should all be implemented here. This way, you can unit test those rules and ensure they are correct, without the need for the UI to be present. This means you don't have to keep using the simulator or a physical device to test the domain rules of your app. You should be able to unit test these rules in the absence of the UI, and in complete isolation from other parts of the application. The unit tests should require minimal setup, and any dependencies should be injected to remove hard-wired dependencies. This is good old-fashioned dependency injection, and it is a vital design pattern when implementing unit tests. This ensures the correctness of your domain.
The Model is concerned with the data, and therefore maps your data entities into classes. The Model will contain such things as definitions for customer, order, supplier etc. The Model should not be concerned with how it is used by the View-Model or View. For example, you may have an Order class which contains an order date, stored within the Model as a date type. The fact that this date is displayed as a string in the UI is of no concern to the Model. Any conversions needed to map Model properties into UI elements should be implemented by the View-Model (a conversion may be needed by several elements or Views, so it makes sense to place conversion code within a View-Model where it can be invoked from multiple places). Again, by placing them in the View-Model, these conversions can be unit tested with complete independence from the UI. You can also write unit tests against the Model to ensure that the values you set against it match those that are returned. So if you set the order date of your Order to a specific date, you can assert that this date is returned in the unit test. This ensures the correctness of your underlying data.
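To make the Order example above concrete, here is a minimal, hypothetical sketch of the pattern (class and property names are illustrative, not from our actual code base): the Model stores the order date as a DateTime, and the View-Model exposes the display string the View binds to, so the conversion can be asserted in a plain unit test with no UI present.

```csharp
using System;
using System.Globalization;

// Model: holds the raw data and knows nothing about how it is displayed
public class Order
{
    public DateTime OrderDate { get; set; }
}

// View-Model: the View binds to OrderDateDisplay; the date-to-string
// conversion lives here so it can be unit tested without any UI
public class OrderViewModel
{
    private readonly Order _order;

    // The Model is injected, so a unit test can supply a known Order
    public OrderViewModel(Order order)
    {
        _order = order;
    }

    public string OrderDateDisplay =>
        _order.OrderDate.ToString("dd/MM/yyyy", CultureInfo.InvariantCulture);
}
```

A unit test can then construct the View-Model with an Order dated 8 March 2019 and assert that OrderDateDisplay returns "08/03/2019", with no simulator or device involved.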
Unit testing a mobile app need not be difficult as long as you have carefully designed and architected the various moving parts and separated the key concerns. Implementing an architecture that supports separating out the various concerns is vital (layering). It's also useful to implement a design pattern that enforces such layering (such as MVC, MVVM). You should aim to keep your UI as sparse as possible, and place all code that is not involved in the UI elsewhere within the application.
I've been developing mobile apps for the Android and iOS platforms for several years now. I have used both Telerik Platform (now retired) and Xamarin Forms, and both are excellent development platforms. Most recently, I have been developing apps using Xamarin Forms. Most of the code in a Xamarin Forms app is contained within a single, shared project. This code is shared between the Android and iOS apps. When you require platform specific behaviour, you place that code in the Android or iOS specific project as required.
During the development of the latest app, we have hit several issues, as you would expect. Some small, some not so small. Android development is pretty painless and intuitive, and conforms to well defined best practices and standards. We have hit a few snags with Android, but these have been relatively small and easy to fix.
Apple, however, is a whole different can of worms. Nothing they do seems to conform to any well defined standard or best practice. They have this habit of almost deliberately ignoring the well defined and understood patterns and practices from other development platforms, and doing it "their way". It's fair to say that the "Apple way" is usually vastly more time-consuming, complicated and error-prone. The Apple motto seems to be the total inverse of Occam's razor.
When given two or more ways of solving a problem, always choose the worst option.
From provisioning profiles and certificates to asset catalogues (I have never encountered a worse way of storing images than this), the "Apple way" is never simple, straight-forward or intuitive.
Nearly every issue or bug we have encountered has been with the iOS version of the app (on both Telerik Platform and Xamarin Forms). The Apple platform just doesn't seem as robust as Android (which just works).
I am assuming that the majority of Apple developers don't get much exposure to other development environments, and probably build mainly Apple apps. They therefore never get to experience how things "should" be. If you only know the "Apple way" of doing things, then you have nothing else for comparison.
I have worked within development for approaching 20 years now, and in that time have used pretty much every platform, tool and technology at some point. I therefore have a broad knowledge of what is considered "best practice" by my exposure to the huge number of technologies over the years. I know what works, and how things ought to work. I can spot efficiency, good design, simplicity and elegance from afar.
This is why I am of the opinion that the Apple way just sucks. Doing something differently merely for the sake of it is not innovative. There are very good reasons why certain ideas become best practice within the development field. It's because they work. And not just work, but are well understood and accepted by those working within the industry. They have been put to the test, and been successful.
In all my years as a professional software developer, engineer and architect, I can honestly say that I have never come across a development platform as poor as that provided by Apple. If you genuinely think Apple make great development products, then I'd suggest having a look at how everyone else builds their development tools. Microsoft and Google for example build excellent development tools, and they employ industry best practices and standards in their processes and workflows.
Unfortunately, while Apple remains a player in the mobile app space, developers such as myself will just have to put up with the "Apple way" of doing things. I think Apple would do well to take a look around at the other players in their industry and take some inspiration from them. Until they do, they will continue to frustrate developers who find the "Apple way" cumbersome, time consuming and inefficient.
Whenever I hear discussions relating to the prevalent censorship and bias at the hands of the tech giants (Facebook, Twitter, Google et al), an argument I hear repeated is that they're private companies and can do whatever they want. Yes, they are private companies, but I don't think that's a sufficiently powerful nor persuasive argument for letting them off the hook. If you're unaware of the bias and censorship within Silicon Valley, then read my article[^] where I cover these issues.
Here's why I think anyone proposing that particular argument is wrong.
- Google is the number one search engine across the entire planet, and as such has a huge share of the internet-search market. They can control (and censor/filter) their searches to disseminate their own political narrative with ease. Unlike going to the local baker's to buy a cake, where if you get refused for some reason you can just go to the baker next door and try again, there is no meaningful next door to Google. Saying Google is a private company and can therefore have total control over what they do is a little naive. Google are very secretive about how their algorithms work and will no doubt refute any claim that their searches are biased. But you only need to compare the results from Google with those of a neutral search engine (such as DuckDuckGo) to see the stark contrast when searching for political terms (I covered this in my previous article).
- The tech giants are more than just tech companies. They are highly influential agents that shape our cultural, political and social landscapes. They step far outside the technical arena in how they shape and influence our day-to-day lives. Many people today get their news from their social media platform of choice e.g. Facebook, Twitter or via organic search via Google. This places them in very influential positions. Rather than merely informing us about the state of current events, they can influence them to fit their own political agenda. This is no longer acting as a neutral observer, but an agent of change and influence.
- As we have recently seen with the de-platforming of Gab.com, the tech giants will collude to crush their competitors. Gab has been de-platformed by (amongst others) Microsoft, Apple, Google, PayPal and Patreon. If this happened in any other industry, there would quite rightly be a public outcry. For some reason, this behaviour seems to be accepted within the tech industry (but only if you have the "right" politics). You can't have choice in the marketplace when the technical oligarchs of Silicon Valley will actively crush that competition. So the argument that "private companies can do what they want" only really applies when there is true competition and an open and fair marketplace. Silicon Valley provides none of these.
So stating that the tech giants are private companies, for me at least, doesn't constitute a valid argument when considered against the points I've made here. They do not operate within the boundaries of a market where there is anything approaching competition. They have huge power and influence that they wield to perpetuate their political agenda. It is this same power that they use (in collusion with other tech giants) to silence and crush their competitors.
I'll keep posting my usual technical articles, but from time to time I will continue to delve into the political side of things with articles such as these. I'm genuinely interested to hear other people's opinions on these matters so feel free to share and discuss your own views on these topics.
The latest version of the app (which will replace the current app in the app stores) is nearing completion. We are into user-acceptance testing with key stakeholders from around the business. The journey from beginning the app several months ago to now has involved a great deal of learning. Although we had an existing app on which to base our development efforts, that's where the similarities ended. Many of the technologies used for the new app were either brand new to us, or very different from when we last used them.
- Xamarin: Although I have used Xamarin previously (long before Microsoft acquired it), it is vastly different now from what it was then. It's fair to say that in its current Microsoft incarnation, much of the Android and iOS specifics are abstracted away from the developer, and it bears little resemblance to the version I used all those years ago. So whilst I needed to refresh my knowledge of Xamarin, as it had changed substantially since I last used it, it was brand new to the rest of the development team.
- App Center: This is Microsoft's build/test/deploy center for mobile apps, and an absolutely brilliant tool. We used it throughout our development lifecycle for all of our diagnostics and debugging. We added tracking for all our events, service calls and exception handling. App Center allows you to set up and configure analytics for your crash reporting as well as for event tracking. This was very useful when we needed to diagnose exceptions and errors during the development cycle. We also configured our Azure DevOps build to deploy to App Center, so with each code check-in, upon a successful build, we would have an Android and iOS release ready for testing.
- Telerik DataForm: A means of simplifying the development of your data-entry forms. You define the properties of your data-entry form in your model class (and decorate your properties with the necessary validation rules and label text). This model then forms the basis of your data-entry form. Telerik DataForm takes your model and generates the necessary UI controls, and hence your data-entry form, including the validation rules and label text. Your UI is therefore built from the programmatic definition of the underlying model. This is an incredibly powerful paradigm: it frees the developer to focus on the model's rules and validation, and delegates the building of the UI to Telerik. It is not suitable for every form, but for simple, static data-entry forms it is perfect. Telerik DataForm implements the MVVM design pattern, thus your forms consist of the following logical pieces.
- View (the XAML layout and code-behind)
- View-Model (where you define the rules for your data-entry form)
- Model (where you define the data to which your UI elements are to be bound)
- Azure AD B2C (Identity Provision): We had previously set up Azure AD B2C (Business-to-Consumer) for one of our line-of-business web apps. This allowed us to delegate the login functionality to Azure. Rather than implementing our own login functionality, we configured the web app to use Azure AD B2C instead. This gives us an incredibly secure app, as you would expect: we are leveraging the same login infrastructure that is used daily by millions of Office 365 users. We decided to use the same Azure AD B2C functionality in our mobile app. This gives us far higher security and scalability, and we don't have to write a single line of code. Perfect!
We also trialled Azure DevOps for this project. All our source code, build and release definitions were defined here. Although I have used Team Foundation Server previously, this was my first time using Azure DevOps, and my first time defining builds and releases for Android and iOS.
So it's fair to say that we had many (steep) learning curves on this project. Despite that though, they were the right decisions, as the new app puts us in a far stronger position both technically and strategically. From the development platform to the technology ecosystem, the new app is a far stronger proposition.
For the record, and before I embark on this article, I would like it noted that I am a professional software engineer who works within the field of software development. I have done so for nearly two decades. I am a geek with a genuine passion for technology. I get enthused by technology, and wouldn't want to be in any other field.
With that out of the way, let's get on with the article. I don't generally write about politics, and for very good reason. Like religion, politics can be a very controversial subject. It can be polemic and can often escalate into hyperbolic arguments. I have my political views, but don't wish to use this platform to air them. I do, however, from time to time, voice them over on my Twitter and Gab feeds. Over the last decade, I have seen many small, incremental changes from many of the tech giants that have made me question whether they provide a net positive for the world. Unless you have lived under a rock for the past few decades, you cannot have failed to realise how immersive technology is in our everyday lives. We use technology for our personal lives, social lives, communications, gaming, entertainment, searching for news and information and so on.
Over the past decade, the tech giants including Google, Facebook and Twitter have come to dominate not just the technical arena, but the social, cultural and political ones as well. It is no secret that these technical corporations are liberal and left leaning in their political makeup. How can an organisation that is composed of thousands of people be said to have a single political bias? Surely with so many people working for them, you would think there would be large variation in political diversity? It would seem that this is far from the case. Despite being told that "Diversity is our strength" by those on the political left, this doesn't apply to political diversity. Yes there may be gender, religious and racial diversity, but there is very little in the way of political diversity. And herein lies the problem.
Twitter CEO Jack Dorsey has openly admitted that there is 'left leaning bias' within Twitter, but then goes on to state that this doesn't influence company policy. I think Jack is being more than a little economical with the truth if he thinks Twitter's left leaning bias doesn't affect company policy. If you're a conservative, a Trump supporter, Republican, or right-of-centre in your political compass, it is fair to say that Twitter can be a very unwelcoming place. In fact, it can often be a downright hostile place. Many right leaning Twitter users have faced bans, shadow bans or been outright kicked off the platform (Alex Jones, Milo Yiannopoulos, Gavin McInnes, James Woods (the actor - although he has since been reinstated) and Jesse Kelly) to name just a few. Even President Trump is not immune from the threat of being kicked off the platform[^].
New York Times editorial board member Sarah Jeong made many openly anti-white, anti-male tweets[^] earlier this year, but didn't receive a ban or even a suspension. Some of her tweets included:
- “#cancelwhitepeople”
- “1. White men are bulls—. 2. No one cares about women. 3. You can threaten anyone on the internet except cops.”
- “Oh man. It’s sick how much joy I get from being cruel to old white men”
- “Dumba— f—ing white people marking up the internet with their opinions like dogs pissing on fire hydrants.”
It should be noted that Sarah Jeong's account is a verified, blue check-marked account. So whilst Twitter bans people from its platform for wrong-think in many other areas (particularly identity politics), it rewards people like Sarah Jeong by verifying their accounts. As long as your racism is towards white people, and your sexism is towards men, then you're all good. In the world of Twitter, hate speech does not include white men.
Back in 2017, Google sacked one of its software engineers - James Damore - for sending out a memo that related to Google's diversity policies. Specifically, it related to the gender differences between men and women, and why women are under-represented in the field of software engineering. To anyone who has read (and understood) the science of gender differences, it won't come as any surprise that men have a greater interest in this field than women. Men (on average) have a greater interest in "things" (cars, computers etc) and will tend to gravitate to professions including STEM (science, technology, engineering, mathematics). Whereas women (on average) have a greater interest in "people" and tend to gravitate to professions such as law, medicine, social care etc. There is nothing inherently wrong with any of this. If you accept that men and women are different (and there are many who don't accept this self-evident premise), then it stands to reason that their biological differences will lead to differences in their average proclivities and interests. Google, however, would seem not to accept this. It is this hive mind that has been referred to as Google's Ideological Echo Chamber[^].
Other examples of Google's bias include the fact that they recognise International Women's Day (by displaying an appropriate image on their home page), but don't recognise International Men's Day. There are more virtue signalling points to be gained from recognising the former than the latter.
Google searches are notoriously biased in the search results they return. In just one specific example, when asked to define the term "nationalism", the results between Google (politically biased) and DuckDuckGo (politically neutral) couldn't be more stark[^]. This was just for a single term. Imagine scaling this up to the millions of searches carried out on the Google platform every day. At this point Google stops being a search engine and instead becomes a political tool, giving you the results it wants you to have. To me this is terrifying. Google is the most powerful internet platform on the planet (forget Twitter, Facebook, Microsoft). Google owns the internet. The fact that it is so blatantly partisan reminds me of Big Brother in 1984. I no longer use Google for my search engine. I now use DuckDuckGo.
In the US, free speech is protected under their First Amendment. This covers speech that could be defined by some as offensive. However, none of the tech giants allow free speech on their platforms. All of them have very strict policies that set out rules for what is permissible speech. These are in fact rules for policing speech. I am an ardent advocate of free speech. I would much rather all ideas (both good and bad) were transparent, and out in the open in the marketplace of ideas. Not all ideas or ideologies are equal, and the best way to counter the bad ideas is to subject them to public criticism and ridicule. I think the US First Amendment protecting free speech is one of the greatest inventions of our time. Something I would dearly love to see protected in the UK (where I live).
The problem with defining hate speech and / or offensive speech is that hate and offence are very subjective terms. And who gets to decide what is hateful / offensive? What one person may find offensive, another person may not. To my mind at least, the best way to counter this is to allow all speech (apart from speech that directly advocates violence), and then let people exercise their own free speech to criticise and ridicule a bad idea or ideology. Protecting certain ideas whilst allowing criticism of others is both prejudicial and counter to free speech, not to mention utterly hypocritical. But this is exactly where all social media platforms are right now. The worst offender for this is surely Twitter.
Enter Gab. Gab is a social media platform not too dissimilar to Twitter. It hit the headlines recently when it came to light that the Pittsburgh shooter had vented many of his extreme views on the platform before going on his shooting rampage[^] at a synagogue, killing 11 people. Gab came in for a lot of controversy over the events, and the entire tech industry promptly rounded on it: its hosting providers (including Microsoft) dropped it, its app was de-platformed by both Google and Apple, payment processor PayPal cut ties, and the list goes on. Gab advocates free speech (and is the only social media platform that does), but it certainly does NOT advocate violence. Its creator Andrew Torba is very clear on this. I suspect that many of the tech giants were simply looking for a reason to de-platform Gab, and the shootings played right into their hands. It is worth noting that the shooter also had accounts on Facebook and Twitter too. Having a competitor that advocated free speech (when they don't) was always going to end in a retaliatory strike from the elites at Silicon Valley. In my opinion, the (over)reaction from the Silicon Valley tech giants was unfair, unjust and completely unfounded.
There's a famous phrase that states "If you're not the one paying for the service, then you're not the customer". And this phrase could almost be Facebook's mission statement. What started as an ambitious social media platform with some great features and concepts has over the years transformed into little more than a marketing tool for businesses to sell us their products and services. It's impossible to scroll through your timeline without being bombarded with ads. Many of these ads, it is worth noting, come directly from your Google searches. It was reported in early 2018 that the big data company Cambridge Analytica had harvested the personal data of millions of Facebook profiles without their consent[^] and used the data for political purposes. The scandal eventually led to Facebook founder Mark Zuckerberg appearing before the United States Congress to testify. However, as this was a voluntary agreement on his part, many simply dismissed the hearing as a dog and pony show which was never going to trigger any criminal proceedings. Are social media giants held to different standards than everyone else? I wonder what the outcome would have been had the scandal involved a tobacco company, for example. It's easy to see how going after a tobacco company could generate much kudos and back-patting.
In a recent survey it was found that a majority of Americans don’t think social networks are good for the world[^]:
Quote: "the number of people who think social media is a net positive for society is down to 40 percent."
This is not entirely unexpected. Many people are now beginning to see how much power these tech giants wield, and how much influence they hold. Not just politically, but socially and culturally. They dominate our landscape and every part of our lives. I recognise and appreciate the technical advances made by the tech giants, but I have genuine concerns that they are now overstepping their boundaries of responsibility. We are slowly and inexorably sleepwalking into a dystopian, Orwellian world where we are under constant surveillance. Where our personal data reduces us to a mere commodity. Where we are told what to think and what to say. Where the social, cultural and political norms are dictated to us. Free thought and free expression are being eroded by the tech industry. They promulgate their own political narratives, and destroy all those they disagree with. They don't take kindly to any form of competition, and will beat into submission anyone who dares to create a competing technology. Is this really where we want to be? Technology naturally has a part to play in shaping our social and cultural fabric, but that should not extend to dictating it by force. We are giving far too much power and influence to the Silicon Valley elites. It is high time we put ourselves back in charge.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
We are currently in the middle of re-building our existing mobile app. Probably the most important form in the app is the Vehicle Inspection form. This form allows a driver to fill out a vehicle inspection from their mobile device and to submit the results. In our current app (which is an Apache Cordova hybrid app developed using JavaScript in conjunction with Kendo UI controls) we generate an HTML page from the inspection metadata. This allows us to use all the HTML controls such as:
- textboxes
- checkboxes
- dates
- radiobuttons
- dropdowns
We then capture the driver's responses using Javascript, and submit these responses to our backend system.
Our new mobile app, however, is being developed using Xamarin Forms. All of our form controls use Telerik UI controls. We knew we wanted to replicate as closely as possible the implementation of the current app. The vehicle inspection is a critical piece of functionality, and it works extremely well. The challenge therefore would be to try to find something that replicated this same implementation in Xamarin Forms.
Whilst investigating how we would reproduce this I came across the WebView. This is a view for displaying HTML content inside the app. Unlike the OpenUri() method which navigates the user to a web page using the app's in-built browser, the WebView displays HTML content "inside" the app. This sounded like what I needed.
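As a minimal sketch of the idea (the markup below is illustrative, not our real inspection HTML), generated HTML can be handed to a WebView via an HtmlWebViewSource:

```csharp
// Minimal sketch: render generated HTML inside the app with a WebView.
// The HTML string here is illustrative only.
var webView = new WebView
{
    Source = new HtmlWebViewSource
    {
        Html = "<html><body><input type='text' id='vehicleReg' /></body></html>"
    }
};
```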
Generating the HTML to render the vehicle inspection was the easy part. I had this working quite quickly. Using the same logic for creating the HTML controls in our existing app (which uses JavaScript) I was able to mimic this using C# to achieve exactly the same output in the new app. The problem came when I wanted to submit my responses. I looked at the simple example in the Microsoft documentation, but this didn't provide nearly enough clarity on how to proceed. I tried injecting JavaScript functions into the generated HTML, but this only seemed to work for functions that didn't interact with the DOM. However, retrieving the responses required interaction with the DOM.
There doesn't seem to be much information anywhere on this particular topic. I looked through the usual suspects (Stack Overflow, Xamarin forums) but to no avail.
I then stumbled across an article that went into a lot more detail on how to Use Javascript with a WebView[^]. Reading through this and looking at the example code gave me sufficient knowledge to work out how to retrieve the responses from the HTML generated vehicle inspection.
Here are the functions I wrote that enable me to retrieve the responses.
private async Task<string> GetValueFromTextbox(string controlId)
{
    // webView is the WebView instance declared on the page (e.g. x:Name="webView")
    return await webView.EvaluateJavaScriptAsync($"document.getElementById('{controlId}').value;");
}
private async Task<string> GetValueFromCheckbox(string controlId)
{
    // Note: the result comes back as the string "true" or "false"
    return await webView.EvaluateJavaScriptAsync($"document.getElementById('{controlId}').checked;");
}
private async Task<string> GetValueFromRadioButton(string controlName)
{
    return await webView.EvaluateJavaScriptAsync($"document.querySelector('input[name=\"{controlName}\"]:checked').value;");
}
private async Task<string> GetValueFromDropdown(string controlId)
{
    return await webView.EvaluateJavaScriptAsync($"document.getElementById('{controlId}').options[document.getElementById('{controlId}').selectedIndex].value;");
}

I have now got this working and am able to submit the responses that have been entered into the HTML-generated vehicle inspection.
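As a usage sketch (the control IDs and the PostResponsesAsync helper are hypothetical, not from our codebase), a submit handler can gather the responses with these functions and hand them to the backend:

```csharp
// Hypothetical submit handler: read the driver's responses out of the
// generated HTML and post them to the backend. Control IDs are examples only.
private async Task SubmitInspectionAsync()
{
    var registration = await GetValueFromTextbox("vehicleReg");
    var lightsOk = await GetValueFromCheckbox("lightsOk"); // "true" or "false"

    // PostResponsesAsync is an assumed wrapper around our RESTful Web API services.
    await PostResponsesAsync(registration, lightsOk);
}
```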
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
This article assumes that the reader is already familiar with the MVVM software design pattern. If you are not familiar with this design pattern, then it's worth reading up on it first, before proceeding with this article. There are many descriptions of this design pattern, including this one[^]. It is useful to understand the design pattern from a purely conceptual perspective before looking at the various technical implementations of it. By understanding the design pattern at a conceptual level, you will find it far easier to comprehend its implementation details.
I have used the MVVM design pattern previously. In fact, I have used the MVVM pattern within our current mobile app. For this, I used Kendo UI controls in conjunction with JavaScript. This particular implementation uses what is known as an observable. An observable (which is based on the Observer design pattern[^]) is an object that maintains a list of dependents (called observers) and notifies them of any changes in state. It is this notification system that provides the two-way notification (or binding) that is essential to the MVVM design pattern.
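As a conceptual sketch only (this is not the Kendo or Telerik implementation), an observable boils down to an object that keeps a list of observers and notifies each of them on every state change:

```csharp
using System;
using System.Collections.Generic;

// Conceptual observable: maintains a list of observers (simple callbacks here)
// and notifies each of them whenever its state changes.
public class Observable<T>
{
    private readonly List<Action<T>> _observers = new List<Action<T>>();
    private T _value;

    public T Value
    {
        get => _value;
        set
        {
            _value = value;
            foreach (var observer in _observers)
                observer(value); // notify each dependent of the new state
        }
    }

    public void Subscribe(Action<T> observer) => _observers.Add(observer);
}
```

A ViewModel bound to such an object subscribes once and then receives every subsequent change, which is exactly the notification mechanism MVVM relies on.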
With our latest incarnation of the mobile app now well underway, we have come to the point where we can start building our data entry forms. I have so far implemented the underpinning infrastructure and architecture which enables the app to consume our services, save data to local storage using SQLite and send emails from the app. All of this is now fully implemented and working.
We have several data entry forms within our app that allow the user to submit data to our backend services. These include forms for submitting:
- mileages
- service, repair and MOT bookings
- vehicle inspections
As we have already done so in our previous mobile app, we will be using the MVVM design pattern to implement these data entry forms.
We will implement the data entry forms using XAML and Telerik controls. We could have used the native Xamarin UI controls, but there is a greater selection of Telerik controls, and they provide a consistent API and are easily themeable. Although the implementation uses Telerik controls and XAML, the underlying concepts can be applied with any UI technology.
I'll use an example that refers to a simple data entry form that allows a user to enter a message which is sent to the backend service. The message may be to request information for example. This trivial example containing just the one UI control should suffice to demonstrate how the MVVM pattern can be implemented.
I tend to begin the development of a new data entry form from the Model and work backwards from there i.e. Model -> ViewModel -> View.
All Models inherit from the same base Model class. This base Model class inherits from NotifyPropertyChangedBase, which is a Telerik class that supports behaviour similar to INotifyPropertyChanged.
public class BaseFormModel : NotifyPropertyChangedBase
{
}

This ensures that all Models used by the data entry forms will support the ability to raise events when a property on the Model changes. These changes are then notified to the ViewModel.
Models used by the data entry forms also inherit from the following interface.
public interface IFormData<T>
{
T CreateDefaultModel();
}

By implementing this interface, the Model must contain the method CreateDefaultModel(). This method is used by the ViewModel to supply a default Model (containing default values) for when the View (the XAML form) is first displayed to the user. The interface is generic, which allows it to work with any type of Model.
Here's the Model for the "Message Us" data entry form. For the purposes of this simple example I have removed much of the code for clarity.
public class MessageUsModel : BaseFormModel, IFormData<MessageUsModel>
{
private string _messageToSend;
[DisplayOptions(Header = MessageUsModelConstants.MessageHeader)]
[NonEmptyValidator(MessageUsModelConstants.MessageError)]
public string MessageToSend
{
get => _messageToSend;
set
{
if (_messageToSend == value) return;
_messageToSend = value;
OnPropertyChanged();
}
}
public MessageUsModel CreateDefaultModel()
{
return new MessageUsModel
{
_messageToSend = ""
};
}
}

The attributes decorating the public property MessageToSend are Telerik specific and define the validation rules / messages for the property. These rules / messages are then enforced by the View. Using this particular implementation of MVVM, the data rules are therefore defined at the level of the Model (which makes sense). Whenever a new value is set on the MessageToSend property, OnPropertyChanged() is invoked, raising the property-changed event. This updates the state of the ViewModel that is bound to the Model.
Moving onto the ViewModel, we define the base behaviour for all our ViewModels in our base class.
public abstract class ViewModelBase<T> : NotifyPropertyChangedBase where T : new()
{
public T FormModel = new T();
public abstract Task PostCompleteTask();
}

I have used an abstract class that inherits from the same Telerik class as the base Model class, i.e. NotifyPropertyChangedBase. The public field FormModel is a reference to the Model, and is used by the ViewModel whenever it needs to refer to the Model. The method PostCompleteTask() is invoked by the ViewModel when the form is ready to be submitted. As this is an abstract method, it must be implemented by each inheriting subclass. This provides consistency across all of our ViewModels: the actual work performed by each ViewModel is always defined within this method.
Here's the ViewModel for the "Message Us" class. For the purposes of this simple example I have removed much of the code for clarity.
public class MessageUsViewModel : ViewModelBase<MessageUsModel>
{
public MessageUsModel MessageUsModel;
public MessageUsViewModel()
{
this.MessageUsModel = this.FormModel.CreateDefaultModel();
}
public override async Task PostCompleteTask()
{
}
}

The public field MessageUsModel is the reference to our Model. This is initially populated with a default instance in the class constructor by invoking the method CreateDefaultModel() (which we saw earlier) on the FormModel member (which we also saw earlier).
this.MessageUsModel = this.FormModel.CreateDefaultModel();

When the user has finished entering their message and is ready to submit the form, clicking the form's submit button invokes the PostCompleteTask() method, which performs whatever processing is necessary (in our case, all form data is submitted to our backend services using RESTful Web API services).
Finally, here's the XAML for the View and the code-behind.
[XamlCompilation(XamlCompilationOptions.Compile)]
public partial class MessageUsView : ContentPage
{
public MessageUsViewModel Muvm;
public MessageUsView()
{
InitializeComponent();
this.Muvm = new MessageUsViewModel();
this.BindingContext = this.Muvm;
}
private async void DataFormValidationCompleted(object sender, FormValidationCompletedEventArgs e)
{
dataForm.FormValidationCompleted -= this.DataFormValidationCompleted;
if (e.IsValid)
{
await this.Muvm.PostCompleteTask();
}
}
private void CommitButtonClicked(object sender, EventArgs e)
{
dataForm.FormValidationCompleted += this.DataFormValidationCompleted;
dataForm.CommitAll();
}
}

And the XAML code.
<input:RadDataForm x:Name="dataForm" CommitMode="Immediate" />
<input:RadButton x:Name="CommitButton" Text="Save" Clicked="CommitButtonClicked" IsEnabled="True"/>

The important part to note is the setting up of the binding between the View and the ViewModel in the constructor. This establishes two-way binding, so that any changes in the View are reflected in the ViewModel and vice versa. These changes are also reflected in the underlying Model.
this.Muvm = new MessageUsViewModel();
this.BindingContext = this.Muvm;

When the user clicks the Submit button, the actions implemented within the ViewModel's PostCompleteTask() method are invoked.
This is a fairly simple example. In a real-world use case there would undoubtedly be more complexity, but it should serve as a useful demonstration of using the MVVM design pattern within a Xamarin mobile app. The fact that we are using Telerik UI controls doesn't change the core concepts discussed. MVVM is a powerful design pattern that is perfectly suited to data entry forms.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
I recently had a need to consume a private NuGet feed in one of our Azure DevOps build pipelines. This was for our Xamarin Forms mobile app's build pipeline. We wanted to use a Telerik UI NuGet package in our app. In order to add a reference to this NuGet package to your project, you first need to add your Telerik credentials into Visual Studio. This ensures that you are a fully paid-up Telerik subscriber with access to the NuGet package.
I therefore needed to update the build pipeline to fetch this private NuGet package. After a bit of trial and error (and a few failed builds) I got this working. In Azure DevOps I needed to update the NuGet restore build task to also fetch the Telerik NuGet package.
- Add a NuGet restore task to your build pipeline (if you don't already have one). This task needs to come before you build the project.
- Set the path to the project in the relevant textbox
- Select the option "Feeds in my NuGet.config" (this is important, as it allows you to specify credentials for consuming external NuGet packages)
You should now see a Manage link which will allow you to configure the credentials for your private NuGet feed. Clicking on this link opens up the Service Connections that are available for your build pipeline. Add a new service connection of type NuGet. In the dialog box that is displayed, click the option for Basic Authentication and enter the following information:
- Connection name
- Feed URL
- Username
- Password
Click OK to save these credentials.
Back in your build pipeline's NuGet restore task, you should now be able to select these credentials in the dropdown. What Azure DevOps will now do is merge these credentials into its default NuGet.config file (or into the one you have specified under "Path to NuGet.config"). Either way, whatever credentials you have specified will be merged into the NuGet.config file.
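For reference, the merged result looks something like the following NuGet.config fragment. The feed URL and key name here are assumptions based on Telerik's public NuGet endpoint at the time of writing; in the pipeline itself, Azure DevOps injects the service-connection credentials for you:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- The key name is arbitrary; the value is the private feed's URL -->
    <add key="Telerik" value="https://nuget.telerik.com/nuget" />
  </packageSources>
  <packageSourceCredentials>
    <Telerik>
      <add key="Username" value="%TELERIK_USERNAME%" />
      <add key="ClearTextPassword" value="%TELERIK_PASSWORD%" />
    </Telerik>
  </packageSourceCredentials>
</configuration>
```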
And that's basically all there is to it. Your build pipeline is now able to consume NuGet packages from private feeds.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|