The solution to this particular problem was not as difficult as I at first thought. Our web app contains dynamic toolbars that are, in fact, Razor forms bound to page handlers. Each toolbar item is a form containing a button that invokes the corresponding Razor page handler and thus services the request.
For example, a toolbar may contain buttons for adding, deleting, or updating a particular entity.
<form asp-page-handler=@toolbar.PageHandler method=@toolbar.Method>
<button class="btn btn-default">@toolbar.DisplayText</button>
</form>
As can be seen from the example code, the toolbar is built entirely from dynamic data. This is because each toolbar menu item is tied to a permission, and the ASP.NET Web API service that returns the toolbar items contains all the business logic for deciding which toolbar items to return based on the user's permissions.
This all works perfectly. However, I ran into a problem when I needed to add a confirmation dialog to the Delete toolbar menu item. Adding a confirmation dialog using jQuery was simple enough; the problem was that every toolbar item is linked to a Razor page handler. For example, the Edit toolbar menu item was linked to the OnPostEdit page handler, which then implemented whatever code was necessary to service the user's request to edit the entity. None of the other toolbar items required a confirmation dialog, and the toolbars are defined entirely by the service, with every item wired up to its page handler in the same way.
The Delete toolbar item needed a confirmation ("Are you sure you want to delete this item?"), and the most obvious solution was to implement this using jQuery, but that would break the pattern I was using for the other toolbar items.
I eventually came across a simple solution that solved the problem. I could add an onclick to the toolbar items that needed a confirmation dialog.
<form asp-page-handler=@toolbar.PageHandler method=@toolbar.Method onclick="return confirm('Are you sure you want to delete this?')">
<button class="btn btn-default">@toolbar.DisplayText</button>
</form>
Clicking the toolbar item now brings up a confirmation dialog. If the user clicks OK, the Razor page handler is still invoked - which is exactly what I wanted. If the user clicks Cancel, nothing happens.
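Because the toolbars are data-driven, the onclick can also be applied conditionally. The sketch below assumes hypothetical RequiresConfirmation and ConfirmationText properties on the toolbar item - they are not part of the code above, they simply illustrate one way of keeping the confirmation itself data-driven (Razor omits the onclick attribute when the expression evaluates to null).
@* Sketch only - RequiresConfirmation and ConfirmationText are assumed properties *@
<form asp-page-handler=@toolbar.PageHandler method=@toolbar.Method
      onclick="@(toolbar.RequiresConfirmation ? $"return confirm('{toolbar.ConfirmationText}')" : null)">
<button class="btn btn-default">@toolbar.DisplayText</button>
</form>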
This very simple addition to the Razor form gives you a confirmation before the page handler code is invoked. No need for jQuery - just a very simple solution that I have now implemented and which works perfectly.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
As part of the web app I'm currently developing using ASP.NET Core 2.0, I needed to allow the user to upload files to the server. The file upload control I am using is the Kendo UI Upload control, but the underlying process will be similar irrespective of the UI control that you are using.
To achieve this I am using an ASP.NET Core page handler. Inside the form that posts to this page handler I have placed the Kendo UI Upload control. The form specifies the name of the page handler and the HTTP method to use.
The below code is the Razor (.cshtml) syntax (simplified) that demonstrates how I have created the page handler and the Kendo UI Upload control.
<form asp-page-handler="upload" method="post">
@(Html.Kendo().Upload()
.Name("files")
)
<button>Save</button>
</form>
In the example above, note that the name of the page handler is "upload" and the HTTP method is POST. Without going into a full description of ASP.NET Core page handlers, the handler name and the HTTP method together dictate the name of the handler method in the Razor page's code-behind.
When the user clicks the Save button the files that have been specified for upload will be posted to the ASP.NET Core "upload" page handler.
public void OnPostUpload(IEnumerable<IFormFile> files)
{
}
Note: the name of the Kendo UI Upload control MUST be the same as the name of the parameter received by the page handler, i.e. "files" in this example. The files are posted to the page handler when the form is submitted, and you can then process them in any way you want. Note also that the parameter type is different from previous versions of ASP.NET, which used HttpPostedFileBase. With ASP.NET Core the posted files are of type IEnumerable<IFormFile>.
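For completeness, here is a minimal sketch of how the posted files might be processed inside the handler. The target folder, the empty-file check and the async naming are my own illustrative choices, not code from my application (the "Async" suffix still maps to the "upload" handler).
public async Task OnPostUploadAsync(IEnumerable<IFormFile> files)
{
    // Illustrative target folder - adjust to suit your own application
    var uploadFolder = Path.Combine(Directory.GetCurrentDirectory(), "uploads");
    Directory.CreateDirectory(uploadFolder);
    foreach (var file in files ?? Enumerable.Empty<IFormFile>())
    {
        if (file.Length == 0) continue; // skip empty uploads
        var targetPath = Path.Combine(uploadFolder, Path.GetFileName(file.FileName));
        using (var stream = new FileStream(targetPath, FileMode.Create))
        {
            await file.CopyToAsync(stream); // stream the upload to disk
        }
    }
}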
ASP.NET Core makes handling file uploads very simple and straightforward. Doing so is even easier using the Kendo UI Upload control, which reduced the amount of code I had to write. Files can be uploaded asynchronously and in multiples too.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
I have been using Razor page handlers recently in our web application to handle the interaction between client-side code and server-side code. There have previously been several ways of achieving this using AJAX, and this is still possible with ASP.NET Core 2.0, as I will demonstrate later in this article. There is also a TagHelper that allows you to invoke a page handler from within your Razor page. I will demonstrate both of these approaches in this article.
The purpose of this article is not to give a detailed, step-by-step account of both of these methods. Instead, I will give an overview of them so you can see how they both work, then investigate them further if you want to implement them in your own application.
By way of introduction, a page handler is a page method that responds to a particular HTTP method e.g. Get, Post, Delete, Put etc. Page handler names are preceded by On. So the default page handlers include OnGet, OnPost etc. The name of the handler is appended to the default page handler name e.g. OnGetCustomer would be a page handler that is invoked to retrieve a specific customer. OnPostOrder would be used to post an order.
It is definitely worth looking at page handlers in more depth. I will assume the reader is familiar with page handlers. If this is not the case, please read up on them first. Okay, with that out of the way, let's dive into the detail of the article and describe how Razor page handlers can be used within an application.
I will start by looking at the ASP.NET Core TagHelper method, which is the most straightforward. It allows you to bind a client-side event, such as a button click, to a server-side page handler. Here is a very simple form that you would define in the .cshtml page to invoke a page handler.
<form asp-page-handler="Customers" method="GET">
<button class="btn btn-default">List Customers</button>
</form>
In the example above I have created a simple form that fetches a list of customers. The name of the page handler implied by the TagHelper syntax would therefore be OnGetCustomers. Here is the definition of the page handler in the .cshtml.cs file.
public void OnGetCustomers()
{
}
ASP.NET Core 2.0 allows you to add multiple page handlers to the same page. You could therefore add page handlers for adding, editing, deleting and viewing customers, all on the same page.
You can also pass parameters to page handlers. So if you wanted to fetch a specific customer you could achieve this using the following code example.
<form asp-page-handler="Customer" method="GET">
<button class="btn btn-default">Get Customer</button>
<input id="handler_parameter" type="hidden" name="selectedCustomerID" value="0"/>
</form>
public void OnGetCustomer(int selectedCustomerID)
{
}
Obviously you would need to set the value of the input control to some meaningful value. In my particular case, I set the value when the click event of a Kendo UI TreeView is raised. In the click event for the Kendo control I use jQuery to set the value to the ID of the currently selected item in the TreeView. Then, when the user wants to perform an action on the item (edit, delete, view etc.), the ID is passed to the page handler.
Here is the Kendo UI TreeView click event handler that runs when an item is selected.
function onDocumentViewNode(data) {
$("#handler_parameter")[0].value = data.id;
}
It is worth noting that the name of the input control must be the same as the name of the parameter on the page handler. In the above example they are both set to "selectedCustomerID". If they do not match, nothing is passed to the page handler.
Another way to use Razor page handlers is by using AJAX. With AJAX you are able to invoke requests using GET, POST etc. These can be RESTful requests for example. With ASP.NET Core 2.0 they can also be Razor page handlers.
Here is a simple example of invoking a Razor page handler using AJAX.
$.ajax({
type: "GET",
url: "/Customer?handler=customer&selectedCustomerID="+ data.id,
contentType: "application/json",
dataType: "json",
success: function (response) {
},
error: function (response) {
console.log(JSON.stringify(response));
}
});
Razor page handlers open up massive opportunities for creating highly flexible applications. The interaction between the client code and the server code is baked into the very fabric of the .NET Core framework. To achieve such seamless interaction previously would have involved writing a lot of custom code, much of it probably spaghetti-like or very clunky. With ASP.NET Core, interaction between the client and server is now seamless and extremely easy to achieve. Razor page handlers are highly flexible (you can respond to any HTTP verb) and very performant. They also lead to cleaner code (there is far less code to write), and can be unit tested (by separating the code into further layers). Using them is really a no-brainer.
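For the AJAX call above to have something useful in its success callback, the OnGetCustomer handler shown earlier would need to return data rather than render the page. Here is a minimal sketch of a JSON-returning variant; the JsonResult return type is a common choice, and the customer lookup itself is just a placeholder.
public JsonResult OnGetCustomer(int selectedCustomerID)
{
    // Placeholder lookup - replace with a call to your own service or repository
    var customer = new { Id = selectedCustomerID, Name = "Example Customer" };
    return new JsonResult(customer);
}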
I use both of these methods in my web applications, and they allow me to write very flexible code. I have recently used both of them to interact with Kendo UI controls which give the application a much higher degree of responsiveness and flexibility.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Within our web application, we are using the HttpContext.Session object to store certain items of information. Although we make minimal use of this object (it is after all, global data), there are times when it just makes sense to store certain kinds of information in the session. For example, when the user logs into the application, we grab their email address and store this in the session. This obviously won't change and so is a prime candidate for storing within the session.
All of our services require that the user's email address is passed as a parameter so we can determine which user is making the request. Our functions therefore retrieve the email address from session storage. This works great, but it led to a problem when I came to unit test the functions, as I was unable to access session storage from my unit tests. Googling the problem revealed that there are several ways in which this can be achieved, and after a bit of trial and error I came up with the following solution. I didn't want to use a mocking framework, as I wanted to keep the unit tests as small and simple as possible. Although adding a mocking framework would have given me a lot more functionality to play with, I was only interested in mocking the HttpContext.Session object. The solution I have used is a vanilla approach that doesn't use any external frameworks, making it possible for anyone to implement.
First off, I created a class that implemented the ISession interface. This is the same interface that the HttpContext.Session object implements.
public class MockHttpSession : ISession
{
readonly Dictionary<string, object> _sessionStorage = new Dictionary<string, object>();
string ISession.Id => throw new NotImplementedException();
bool ISession.IsAvailable => throw new NotImplementedException();
IEnumerable<string> ISession.Keys => _sessionStorage.Keys;
void ISession.Clear()
{
_sessionStorage.Clear();
}
Task ISession.CommitAsync(CancellationToken cancellationToken)
{
throw new NotImplementedException();
}
Task ISession.LoadAsync(CancellationToken cancellationToken)
{
throw new NotImplementedException();
}
void ISession.Remove(string key)
{
_sessionStorage.Remove(key);
}
void ISession.Set(string key, byte[] value)
{
_sessionStorage[key] = Encoding.UTF8.GetString(value);
}
bool ISession.TryGetValue(string key, out byte[] value)
{
    // Use the dictionary's own TryGetValue so that a missing key doesn't throw
    if (_sessionStorage.TryGetValue(key, out var stored) && stored != null)
    {
        value = Encoding.UTF8.GetBytes(stored.ToString());
        return true;
    }
    value = null;
    return false;
}
}
My functions have an optional parameter which takes an instance of an ISession object. If one is not passed as an argument, the function simply uses HttpContext.Session instead.
public void MyFunction(ISession context = null)
{
if (context == null)
{
context = HttpContext.Session;
}
string email = context.GetString("UserEmail");
}
Then in my unit tests I create an instance of the MockHttpSession class from above and pass it as an argument to the functions I wish to unit test. In the example below I am unit testing a ViewComponent that retrieves the user's email from the HttpContext.Session object.
[TestMethod]
public async Task InvokeAsyncTest()
{
MainMenuViewComponent component = new MainMenuViewComponent();
var mockContext = MockHttpContext();
var result = await component.InvokeAsync(mockContext);
Assert.IsNotNull(result);
}
private static ISession MockHttpContext()
{
MockHttpSession httpcontext = new MockHttpSession();
httpcontext.SetString("UserEmail", "unittest@mycompany.com");
return httpcontext;
}
I have now implemented this same pattern for all my ViewComponents, and it works absolutely perfectly. I can easily unit test any code that makes use of the HttpContext.Session object without a problem.
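For context, here is a minimal sketch of how the ViewComponent under test might accept the session. The body is illustrative only - the real component contains the menu-building logic - but it shows how the optional ISession parameter lets a MockHttpSession be supplied from a unit test.
public class MainMenuViewComponent : ViewComponent
{
    public async Task<IViewComponentResult> InvokeAsync(ISession session = null)
    {
        // Fall back to the real session when no mock is supplied
        session = session ?? HttpContext.Session;
        string email = session.GetString("UserEmail");
        // Illustrative placeholder - the real component would build the menu model for this user
        var menu = await Task.FromResult(new { Email = email });
        return View(menu);
    }
}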
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
While developing our new web application, we wanted to take a component-based approach and build up the user interface from small, discrete UI components. So instead of having monolithic Razor Pages containing many different controls, we thought it would be a far better design approach to develop the UI from smaller, discrete components that we could then re-use in other parts of the application.
I initially looked into the concept of partials in ASP.NET Core, and while these are great for re-using static markup, they're not so great for building dynamic, data-driven content such as menus (which would be the first component I would in fact be developing).
Where your requirement is to re-use dynamic and / or data driven content, then the correct design approach is to use a ViewComponent. From the Microsoft documentation[^]
Quote: New to ASP.NET Core MVC, view components are similar to partial views, but they're much more powerful. View components don't use model binding, and only depend on the data provided when calling into it. A view component:
- Renders a chunk rather than a whole response.
- Includes the same separation-of-concerns and testability benefits found between a controller and view.
- Can have parameters and business logic.
- Is typically invoked from a layout page.
View components are intended anywhere you have reusable rendering logic that's too complex for a partial view, such as:
- Dynamic navigation menus // bingo!! this is what we're looking for!!
I won't copy the entire list here; I've posted the link to the documentation so you can have a read of it for yourself.
So our menu tree structure is handled by a ViewComponent. All the business logic for building a user-specific menu is contained within the ViewComponent, and the ViewComponent returns the menu tree structure. This is then displayed by the Razor Page that is associated with the ViewComponent.
So building our application's menu is encapsulated in a re-usable, discrete and unit-testable ViewComponent. Going forwards, we will use ViewComponents for all of our UI components, and build up our Razor Pages from multiple ViewComponents.
This gives us huge benefits.
- Encapsulate the underlying business logic for a Razor Page in a separate component
- Allow for the business logic to be unit-tested
- Allow for the UI component to be re-used across different forms
- Leads to cleaner code with separation of concerns
Here's a (very) simplified example of how we've used a ViewComponent to build our menu tree structure. Note that all exception handling, logging etc has been removed for brevity.
public class MenuItemsViewComponent: ViewComponent
{
public async Task<IViewComponentResult> InvokeAsync(int parentId, string email)
{
var response = await new MenuServices().GetModulesItemsForUser(parentId, email);
return View(response);
}
}
The ViewComponent calls one of our ASP.NET Web API services to retrieve the menu tree for the specified menu level and user. It then returns this wrapped inside an instance of IViewComponentResult, which is one of the supported result types returned from a ViewComponent.
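The Razor markup that follows binds to a Common.Models.MainMenuModels model. Its exact definition isn't shown here, but from the markup it can be assumed to look something like this sketch.
// Illustrative sketch only - inferred from the Razor markup below, not the actual model definition
public class MainMenuModels
{
    public List<MenuItem> MenuItems { get; set; }
}
public class MenuItem
{
    public string Routing { get; set; }      // target Razor page used by asp-page
    public string DisplayText { get; set; }  // text shown on the menu link
}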
Here is the (very) simplified Razor Page that displays the output from the ViewComponent. Note that all styling has been removed for brevity.
@model Common.Models.MainMenuModels
@if (Model != null && Model.MenuItems != null && Model.MenuItems.Any())
{
foreach (var menuitem in Model.MenuItems)
{
<a asp-page=@menuitem.Routing>@menuitem.DisplayText</a>
}
}
And finally here's how we invoke the ViewComponent from our layout page.
@await Component.InvokeAsync("MenuItems", 0, "myemail@company.co.uk") I am very impressed with the ViewComponent concept. From a design point-of-view, it is the correct approach if you are building forms that contain any sort of dynamic content. By allowing for clean separation of concerns and supporting unit testing, you can ensure your applications are far more robust and less likely to fail in production. These are just a couple of reasons why you should consider using ViewComponent's in your own ASP.NET Core 2.0 applications. Why not give them a try.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
I previously wrote about Azure Application Insights[^] in an article where I talked about how we would be using it within our ASP.NET Web API services. We use it in that context to monitor our services for such things as availability, performance and to record metrics such as the number of requests our services are processing. And Azure Application Insights gives me all of this, and a whole lot more besides.
This time round we were exploring logging engines for our latest ASP.NET Core 2.0 Razor Pages web application. We wanted something that would record the various events that our application would generate, as well as enable us to debug and diagnose errors and / or exceptions as they occurred.
We looked at various logging engines such as ELMAH and log4net. Each would have satisfied our requirements, but when we looked into the feature set of Application Insights, there was absolutely no comparison. Application Insights won the contest hands down. Straight out of the box it measures practically everything you need without writing a single line of code. On top of that, you can write your own custom events, traces, exception handlers etc.
I've added several custom logging methods for monitoring and measuring our application whilst it's running, as well as exception logging. All of this helps us to debug and diagnose issues, helps with development as we can add traces throughout the application, allows us to monitor and measure the health of the application and to generate exception reports when the application encounters errors.
Within the Azure portal, you can open your Application Insights blade and dice and slice this data any way you want. You can drill top down from your management summary type data (showing broad trends and metrics), right into the nitty gritty detail of an individual request or exception. The data can be filtered in an almost infinite number of ways and presented in multiple formats (or downloaded for offline use). And it's fast. Despite the vast amount of data that is collected, filtering it and querying is surprisingly fast. Even complex queries of the data are returned blazingly fast.
To use Application Insights you first need to download the Application Insights package from NuGet. Once installed, you can then start using it in your application.
Here's a single example of a trace event I have implemented. I use it to trace the requests to our ASP.NET Web API services. We pass an event name, the duration of the request (so we can monitor for performance) and a dictionary of custom properties (which I use to pass the request arguments).
public class LoggingService
{
private readonly TelemetryClient _telemetryClient = new TelemetryClient();
public void TrackEvent(string eventName, TimeSpan timespan, IDictionary<string, string> properties = null)
{
var telemetry = new EventTelemetry(eventName);
telemetry.Metrics.Add("Elapsed", timespan.TotalMilliseconds);
if (properties != null)
{
foreach (var property in properties)
{
telemetry.Properties.Add(property.Key, property.Value);
}
}
// Send a single event that carries both the custom properties and the Elapsed metric
this._telemetryClient.TrackEvent(telemetry);
}
}
And here's the method being invoked within the application.
stopwatch.Restart(); // stopwatch and service are existing Stopwatch and LoggingService instances
try
{
var response = await new MyService().GetMyData(param1, param2);
}
catch (Exception ex)
{
service.TrackException(ex);
throw;
}
finally
{
stopwatch.Stop();
var properties = new Dictionary<string, string>
{
{ "param1", param1 },
{ "param2", param2 }
};
service.TrackEvent("GetMyData", stopwatch.Elapsed, properties);
}
You can add traces for events, exceptions, diagnostics etc. All of this data is recorded and available for you to filter, dice and slice in any way you need it.
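The usage example above also calls service.TrackException, which isn't shown in the LoggingService listing. A minimal sketch of what such a method might look like is below; it simply forwards the exception (and any custom properties) to the TelemetryClient.
public void TrackException(Exception ex, IDictionary<string, string> properties = null)
{
    // Forward the exception to Application Insights
    this._telemetryClient.TrackException(ex, properties);
}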
If you're looking for a logging engine in your application, then you need to check out Application Insights. It does everything we need and a whole bunch more, and is highly configurable and fast. The question is not why you should use it, but why you shouldn't use it!
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
In a previous article I demonstrated how to write flexible code[^] for n-tier designed applications. In this article, I want to describe how I approached designing my code for our ASP.NET Core 2.0 Razor Pages application. My key goal was to separate out the various concerns, and in particular keep the UI code separate from the business logic code.
We are using Razor Pages in our current app, and all the business logic is encapsulated within our ASP.NET Web API services which are invoked by the Razor Pages. A Razor Page is backed by a PageModel class which supplies much of the "plumbing" logic behind the Razor Page. For example, the PageModel class contains such things as the Response, the Request, ViewData, PageContext, HttpContext. But no business logic. So this article will describe how I have approached surfacing business logic within my Razor Pages.
It is worth noting that I have deliberately used a very simple example for clarity and to keep the article nice and simple.
The first thing I did was to create a base PageModel class for the Razor Pages. As stated earlier, all Razor Pages are backed by a PageModel class as in the following code.
public class IndexModel : PageModel
{
}
So I created a base PageModel class that I would use instead. My base PageModel class inherits from the default Microsoft.AspNetCore.Mvc.RazorPages.PageModel, but adds the ability to specify a completely separate class which will contain all the business logic.
public class PageModelBase<T>: PageModel where T : PageModelService, new()
{
public readonly T Service = new T();
}
I wanted consistency in my design, so I created a base service class that would be instantiated by the PageModel classes. I have called this the PageModelService class. In the example code above, I am creating an instance of this backing service class in my PageModel. The PageModelService class is where I will place all my business logic code (which in my case calls my ASP.NET Web API services). This separation ensures that the business logic code is kept out of the UI code, and is therefore also unit-testable.
Here's my PageModelService class definition.
public abstract class PageModelService
{
protected abstract string ModuleName { get; }
}
I only have one property defined (the name of the module to which the Razor Page belongs), but you can define as many properties, methods etc. as your application needs. Remember, this is the base PageModelService class, so only place code here that is applicable to all your Razor Pages.
Here's an example class definition for a PageModelService that sets the ModuleName property in the constructor.
public class ExamplePageModelService : PageModelService
{
public ExamplePageModelService ()
{
ModuleName = "Example Module;
}
protected override string ModuleName { get; }
}
Finally, here's a Razor Page using the new PageModelBase class and specifying the backing service (which will be instantiated by the base class).
public class ExamplePageModel : PageModelBase<ExamplePageModelService>
{
public void OnPost()
{
}
public void OnGet()
{
}
}
This very simple design pattern allows us to separate the UI code from the business logic code within the Razor Pages, and allows the business logic to be unit tested too. So if you're building Razor Pages and want to keep your UI code separate from your business logic code, give this design pattern a try. You are encouraged to amend the code to suit your own specific requirements, but feel free to use it as a starting point.
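To make the relationship concrete, here is a sketch of how a page handler might call into the backing service. The GetCustomers method is hypothetical - it simply stands in for whatever business logic (or Web API call) your own PageModelService exposes.
public class ExamplePageModelService : PageModelService
{
    public ExamplePageModelService()
    {
        ModuleName = "Example Module";
    }
    protected override string ModuleName { get; }
    // Hypothetical business-logic method - in reality this would call an ASP.NET Web API service
    public IList<string> GetCustomers() => new List<string> { "Customer A", "Customer B" };
}
public class ExamplePageModel : PageModelBase<ExamplePageModelService>
{
    public IList<string> Customers { get; private set; }
    public void OnGet()
    {
        // The UI code simply asks the backing service for the data it needs
        Customers = Service.GetCustomers();
    }
}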
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
As part of all my builds I provide code coverage. After executing the unit tests, I then perform code coverage over the source code. I won't go into the reasons why this is necessary (I've written about code coverage in previous articles and almost certainly will again), but for me it's a vital part of every build. Without code coverage you have no idea how effective your unit testing strategy is.
I use a tool called dotCover to provide code coverage. This is a utility from JetBrains (the same people who create ReSharper) and is a genuinely brilliant tool. It integrates into Visual Studio and can be run from the command line as part of a script (which is exactly how I use it in our builds).
Unfortunately I couldn't get it working with our ASP.NET Core 2.0 application. The documentation states that it should work with .NET Core but after many attempts I just couldn't get it working. There is a command-line utility called dotnet that ships with .NET Core that can be used to perform all manner of functions, such as creating projects, testing projects, adding references to projects etc. I was already using dotnet to execute my unit tests, and thought that I'd try using it for my code coverage also.
As it happens, there is an open-source utility that integrates directly with dotnet and performs code coverage. It's called coverlet and is available on GitHub here[^]. There are examples of using the utility here[^].
One issue that I came across was executing this from Team Foundation Services (TFS). Despite working without error from the command-line on the build server, I was getting an error when executing it as part of the build from TFS. I managed to eventually resolve this by explicitly specifying the name of the project and suppressing dotnet test from performing a build and restore.
dotnet test "MyProject.Tests.csproj" --no-build --no-restore /p:CollectCoverage=true This generates a JSON file containing the generated code coverage. Whilst this is not as pretty as the output from dotcover (which is simply exceptional) it at least works and gives me code coverage for our .NET Core 2.0 project.
Another problem encountered, another problem solved
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Code coverage is great and all, but we don't get too hung up on it in a fast-paced, ever-changing environment with frequent releases.
I guess what I am saying is that really good unit tests and code coverage are desirable but, in my experience, rarely obtained.
|
|
|
|
|
In my projects, unit testing and code coverage are always obtained
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
As I have mentioned in previous articles, I am using Team Foundation Services 2015 (TFS2015) to build our apps, and our latest ASP.NET Core 2.0 web app is no different. I've already run into the issue of versioning the app from the build process (which I have covered here[^]).
The next problem I ran into was getting the projects to build. By default, TFS2015 will use the latest version of MSBuild it knows about unless you specify a different version - and by different, read earlier, i.e. the VS2013 or VS2012 version. To enable TFS2015 to build the project you need to specify the exact location of the version of MSBuild you need to use. Thankfully, TFS2015 gives you this option (under the Advanced tab on the MSBuild task).
Before you can do this though, you need to install the Visual Studio 2017 SDK tools and APIs which will then install the required version of MSBUILD (which at the time of writing is version 15).
The next problem I ran into was executing the unit tests within our TFS2015 pipeline. The TFS2015 Visual Studio Test task wasn't producing the required test output. I tried several tweaks and variations but none of them worked. After some reading around and looking at posts on Stack Overflow, it was suggested that using the dotnet command-line tool would allow me to execute our unit tests. After some playing around with the various settings I eventually got this working and was finally able to publish our test results to the TFS2015 dashboard.
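I haven't reproduced our exact build step here, but a command along the following lines is one way to get dotnet test to produce a results file (in trx format) that a TFS Publish Test Results task can then pick up - the project name and file name are placeholders.
dotnet test "MyProject.Tests.csproj" --logger "trx;LogFileName=testresults.trx"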
The biggest problem I have had so far is that I have really struggled to find solutions to the problems I have faced. This is largely because .NET Core 2.0 is still (relatively) new. As the uptake increases I am sure many of these issues and problems will become better known and be addressed or have workarounds provided. That said, I've learned a great deal about how .NET Core 2.0 works under the covers because I've had to dig deep to find these solutions.
.NET Core 2.0 is a different beast to standard .NET in many ways, and each day I find one (or more) of these differences. It's a pleasure to work with though, and is a genuinely fantastic environment in which to develop applications. A happy developer is a productive one.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
This is definitely something that caught me out. We are using Azure Active Directory Business-to-Consumer (Azure AD B2C) in our latest web app for all user identity, including signup, signin and password reset. After configuring and setting up the required policies (specifying what information we wanted returned in the token upon success), I then set about trying to retrieve the JWT token that is returned from Azure AD B2C so that I would know the identity of the logged-in user.
Retrieving this token proved a bit more difficult than I originally thought. I checked the response headers and couldn't find the token. I checked through the documentation and couldn't find any examples or explanation of how to retrieve the token.
Using the browser's built-in debugging tools and Telerik Fiddler, I could see that the token was being posted to the /signin-oidc endpoint (which is the default endpoint for OpenId Connect applications).
I did eventually come across this article[^] which seemed a likely candidate. Unfortunately, when attempting to follow the instructions I got an error when running the application. Our configuration didn't seem to work with the example code given in the article.
Eventually I came across this article[^]. The important part of the article is the code snippet below.
@{
ViewData["Title"] = "Security";
}
<h2>Secure</h2>
<dl>
@foreach (var claim in User.Claims)
{
<dt>@claim.Type</dt>
<dd>@claim.Value</dd>
}
</dl>
Basically, the claims returned from Azure AD B2C are contained within the user object's Claims property.
User.Claims
By iterating through this collection I was able to retrieve all the claims that I had configured in our Azure policies.
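If you only need a single claim rather than the whole list, you can pull it out directly. The claim type below ("emails") is the one Azure AD B2C commonly uses for the email address, but check the claim types actually returned by your own policies before relying on it.
// Retrieve a single claim value - the claim type depends on your B2C policy configuration
var email = User.Claims.FirstOrDefault(c => c.Type == "emails")?.Value;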
I don't know why this critical piece of the jigsaw is so sparsely documented. Without knowing which user has logged into our web app, we are pretty much unable to provide any meaningful functionality. Determining the identity of the user is the critical piece of functionality provided by any Identity Provider.
I hope this article helps out at least a few other developers.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
When creating our ASP.NET Core 2.0 application, one of the first tasks I had was to create a build for the application. The build will mature and grow over time and acquire additional tasks (such as unit testing steps, deployments to our Azure web hosting etc). But for the time being, I was only creating the most basic of builds for the application to perform Continuous Integration and to deploy to a testing endpoint.
The first problem I encountered was how to version the application using our Team Foundation Services (TFS) build process. Versioning in .NET Core 2.0 does not work the same way as it does in earlier versions of .NET, so I couldn't simply reuse the PowerShell script I use for versioning my other .NET applications - it didn't work.
After reading through lots of documentation and Stack Overflow posts, I came across a solution that works and which I have now implemented within the build.
Here's a link to a utility called dotnet-setversion[^] that will version your .NET Core 2.0 application. After adding the reference to your project, you simply invoke the utility and pass it the version number as a parameter. I achieved this within our build process by adding a new step which invokes a Windows batch file. This batch file invokes the utility which then versions our application.
Within TFS you have the ability to pass arguments to your Windows batch files. I am passing the build version number $(Build.BuildNumber) as the argument.
I then invoke my Windows batch file (called setversion.bat)
@echo off
cls
ECHO Setting version number to %1
cd <projectFolder>
dotnet restore
dotnet setversion %1
This all works perfectly, and the deployed application assemblies are stamped with the correct version number.
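For context, my understanding is that dotnet-setversion works by rewriting the version information in the project file, so after this step the csproj contains something along these lines (the version number shown is purely illustrative):
<PropertyGroup>
  <Version>1.2.3.4</Version>
</PropertyGroup>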
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Since my last article I've spent some considerable time researching various technologies with a view to deciding which ones the development team will use for the development of the next version of the fleet management web application. The current fleet management web application is somewhat past its sell by date and in need of some fresh blood in the form of shiny new technology.
Many of the technologies we intend to use are already in place.
- We want to build a suite of RESTful services using ASP.NET Web API so that we can reuse the services in other applications such as mobile applications. These will be written using C#.
- We will use an Azure Service Bus for all updates where applicable and necessary. We will decide which ones fit the criteria on a case-by-case basis.
- Images used by the Document Manager (think of a fleet management version of Dropbox) will use Azure Blob Storage. There will be a blob container for each client who uses the web application so as to keep their documents completely separate from each other.
- All relational data will use Azure SQL storage.
- Authentication processes (signup / signin / forgot password) will use Azure Active Directory Business-2-Consumer (or Azure AD B2C for short). This is a new technology we haven't used before. Rather than implement our own authentication code (which has nothing to do with fleet management) we will use the Azure offering instead. This gives us huge security benefits. Our web app will be secured by the vastly superior Azure infrastructure which already authenticates billions of logins on a daily basis as part of Office365. It also comes with analytics to monitor unusual login activity such as login attempts during unusual parts of the day. These rules can be fine tuned via AI to provide ever more powerful security by learning what constitutes an invalid login. We get full replication of our users across multiple geolocations for failover and business continuity. This scale of security gives us peace of mind that our application will be in safe hands. To develop this in-house would be astronomically expensive, yet is highly affordable using Azure.
The missing piece of the jigsaw was what technology we were going to use to build out the front-end. We looked at React, and in particular Angular. We spent some considerable time looking at Angular in fact. A huge variety of different types of application have been built using it, including data-entry web apps similar to our own. In the end it didn't really fit our current technology stack, and I wasn't particularly impressed with the clustermess that is npm. For all of its imperfections and foibles, I think nuget is the better package manager, and versioning the various pieces of the application under source control (Team Foundation Services 2015) would fall into my area of responsibility.
I suggested building the front-end using Razor. I had already developed a full templating engine using Razor for the mobile app, so I was familiar with using Razor syntax. When we looked into the current .NET stacks available, we weighed up the pros and cons of whether to use ASP.NET Core 2.0. We had read some very positive reviews and watched plenty of webinars that sold us on using ASP.NET Core 2.0. It was Microsoft's latest, bleeding edge framework, and contained some fantastic new features.
One of these new features was the Razor Pages project template. This comes with ASP.NET Core 2.0 and allows you to build front-end applications without having to rely on the scaffolding of MVC. Razor Pages are controller-less Razor pages; they are very flexible and lightweight, and exactly what we were looking for.
I'm excited by the technology stack that we are using. Some of the choices are old friends that we have used before (ASP.NET Web API, Azure SQL and Blob storage, for example), and some are friends in waiting (Azure AD B2C and ASP.NET Core 2.0, for example). I'll write more articles about the various technologies as I get my hands dirty with them. At this point in time, however, I can honestly say that we have chosen best-of-breed, industry-best, (b)leading-edge technologies for our next generation of applications. I'm genuinely excited to be working as a software developer right now.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
As part of a previous project I had to implement a login service for a mobile app. I wrote the service using ASP.NET Web API in C#. The service returned a user JSON object if the login was successful. All well and good. As part of my current project, I have a need to write another login service for a completely different application. This time I need to write a login service for a web application.
Looking at the requirements of the new web app login, it occurred to me that the code would be very similar to the mobile app login service. They both needed the ability to locate a user (based on their email address) and check if the email that was entered by the user matched the email held in the User table in the database.
So I began thinking how I could make my login code more flexible and generic so that I could accomplish the same login functionality using different user types (a mobile app user and a web app user). I eventually came up with a solution that uses interfaces and generics, and gives me the flexibility to write multiple login services for any user type.
The following code is based around an n-tier architecture where there is a data abstraction layer that handles all interaction with the back-end database, and a business abstraction layer that enforces the domain logic rules.
Before going any further, for clarity I have deliberately omitted any code relating to authentication, security, permissions etc. When reading through the code, please bear this in mind. Do not copy and paste the code as-is.
So let's start off by describing how the data has been made flexible. For this I created an interface that would be implemented by the data classes responsible for fetching the user from the User table.
public interface ILoginData<TEntity, TModel>
{
TEntity FindByUsername(TModel model);
}
TModel represents the user model that is passed into the function, and TEntity is the user that is returned from the User table. Both of these should be POCO (Plain Old C# Object) classes. Although I have specified a different type for the parameter and the return type, you could quite easily use the same type for both; I wanted to keep them separate so that the input and output objects can differ.
An example implementation of TModel could be as follows.
public class UserModel
{
public string Email { get; set; }
public string Password { get; set; }
}
A similar definition can be used for the TEntity object too.
Any class that implements this interface therefore has to provide a mechanism for locating a user. This is the first step towards implementing our flexible login service.
An example class definition of a data class that implements our interface is as follows.
public class UsersData : ILoginData<UserEntity, UserModel>
{
public UserEntity FindByUsername(UserModel model)
{
UserEntity result = null;
if (!string.IsNullOrEmpty(model?.Email))
{
result = this.GetUserByEmail(model.Email);
}
return result;
}
}
Next we need to implement a business service that invokes our data layer class. We need another interface to ensure that each of our login classes contains the same method.
public interface ILoginService<T>
{
T LoginUser(string username, string password);
}
This is the actual login method being declared here. Each login class must implement a function called LoginUser that returns a user object. The parameters in this particular example are the username (email) and password, but your own function may take different parameters as required.
public class WebLoginService : ILoginService<UserEntity>
{
private readonly ILoginData<UserEntity, UserModel> _data;
public UserEntity LoginUser(string username, string password)
{
UserEntity result = null;
if (!string.IsNullOrEmpty(username) && !string.IsNullOrEmpty(password))
{
UserModel user = new UserModel
{
Email = username,
Password = password,
};
result = this._data.FindByUsername(user);
if (result != null)
{
result.IsAuthenticated = string.CompareOrdinal(password, result.Password) == 0;
}
}
return result;
}
}
The code above is an example implementation of our interface. Note the line of code below.
result = this._data.FindByUsername(user);
This is invoking our data class from earlier. We know that our data class contains this method because the reference to our data object is of type ILoginData<UserEntity, UserModel>.
I have implemented two login classes using this design (a mobile app and a web app). It gives me the flexibility to substitute different models and entities into the code. I have also implemented the following constructor which allows me to pass in different implementations of the data class so that I can unit test the login functionality without having to touch the database.
public WebLoginService(ILoginData<UserEntity, UserModel> data)
{
this._data = data;
}
I invoke this constructor from my unit tests, passing in a different implementation of my data class.
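For illustration, a test double for the data class might look like the sketch below. The class name, the canned values and the assumption that UserEntity exposes Email and Password properties are mine - they are not taken from the original code.
// Hypothetical in-memory implementation of ILoginData used from unit tests
public class FakeUsersData : ILoginData<UserEntity, UserModel>
{
    public UserEntity FindByUsername(UserModel model)
    {
        // Return a canned user so the login logic can be tested without touching the database
        return new UserEntity
        {
            Email = model?.Email,
            Password = "password123"
        };
    }
}
// Usage from a unit test:
// var service = new WebLoginService(new FakeUsersData());
// var user = service.LoginUser("test@example.com", "password123");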
This design can be used for implementing other functionality besides login code. In fact, it can be used whenever you have a dependency between one class and another. Instead of relying on concrete types, you rely on an interface and code against that instead. It gives you the ability to provide different types of models and entities, as well as fully supporting unit testing.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
I was talking with a friend a while ago, and he was lambasting his place of work for allowing a member of the team to ride roughshod over the rest of the development team. This particular individual had the final say in practically every technical decision and senior colleagues (managers) would hang onto his every word. They took everything that came out of his mouth as gospel. I asked how this could be. His reply was somewhat startling. He knew more than the managers, and by that token alone he was deemed a technical demagogue.
And that's all there was to it. By simply knowing more than those above him, he was afforded decision-making responsibility far above his station. If anyone else tried to criticise those decisions, they were undermined by either (or both) the individual in question or the managers.
The fault here lies with both the individual and the managers. The individual was overly confident in their technical prowess and didn't think they needed to seek the opinions of their peers. And the managers never bothered to ask if the opinions were representative of the group or just the individual. They were happy to simply go along with what they were being told. After all, he's the technical expert so he should know what he's talking about and we trust him implicitly to make those decisions.
Where this falls down is that no one can know everything, no matter how much they think they do. There are always alternative ways of doing things, different ways of looking at the same problem. Hearing other voices and opinions is paramount to maintaining a healthy team. When one voice overshadows all the others, it quickly leads to a toxic working environment, not to mention bad decisions which may cost the business further down the line. Expecting a single individual to know everything (as the managers clearly did) is a dereliction of their managerial duty to both the rest of the team and, ultimately, the business.
You can't run a high-performing business on the back of a few individuals. A business is analogous to a team, where everyone has a part to play. The combined knowledge and skills of a team will always easily surpass those of even the best individuals. A business succeeds by having strength in depth, and that strength comes in the form of the whole team, not just a selection of them (and especially not just one of them).
I have covered the topic of collaboration previously when I talked about software teams being either democracies or dictatorships. It is foolhardy in the extreme to take your opinions from mere individuals, no matter how talented you think they are. A good idea can come from anywhere and from anyone.
Instead of placing your trust squarely on a single individual, place it on the team as a whole. I guarantee you will see better results instantly.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
It's impossible to predict with any certainty what lies ahead in 2018, but there are probably a few things that I can predict with at least a modicum of certainty.
- Migrate the mobile apps to a new development platform. Unfortunately the platform we currently use for building, testing and deploying our apps to the app stores will be retiring in 2018. This means that we will need to migrate our apps to a new platform, and (depending on what technology we opt for) possibly have to rebuild them from scratch.
That said, we will only need to rebuild the UI for the apps, as all the business intelligence is delivered to our apps in the form of RESTful APIs. It is the RESTful APIs that provide all the business logic and services to the apps. The apps themselves are nothing more than simple front-ends that make calls to these RESTful APIs. Good design is a great thing.
So investigating alternative development platforms will almost certainly need to happen in 2018. What we won't be doing however, is making the mistake of going native. As I've made the point in another article, unless you have the resources, skills and a need for native development, then cross-platform is (usually) the sensible way to go.
Currently NativeScript is looking like a good option. It provides a truly native app, requires skills that are the same or similar to those needed for web development and has a great development ecosystem and workflow for development, testing and deployment. It also has the ability to do web builds and testing in the cloud, thus alleviating the need to keep your local development environment up-to-date.
- Building the replacement for our enterprise web application. This is the core fleet management system used both internally and externally by our clients. It provides a helicopter view of your fleet as well as the ability to drill down to any level of detail as required. It also provides all the reports which supply the information needed to make business-critical decisions. This is now getting a bit long in the tooth. With so many modern tools and frameworks around, it's time it was updated. We're looking at going for a rich client-side architecture (for example Angular), rather than a heavy server-side architecture (for example MVC).
Although we haven't made any concrete commitments, it's looking likely that Angular will be used for the next generation of development. This will be coupled with a suite of supporting RESTful APIs that will provide the business intelligence. This is a very similar architecture to the one we already use within our mobile apps, so we know it works. The front-end (Angular) will provide a rich user-experience to the user, but all the intelligence will be provided from the supporting RESTful APIs.
- Migrating ever more development related infrastructure to Azure. Much of the development of the mobile apps was heavily reliant on Azure services. This included an Azure service bus, web hosting, web jobs, functions, SQL and blob storage. More recently I've been making use of Azure's SendGrid service for providing email functionality to the apps. I am quite sure that during the course of 2018 I will continue to make use of Azure's excellent development platform. Have I mentioned how much I love Azure? I absolutely love Azure. I can honestly say that Azure is one of the best development technologies I have used in a very long time, and it makes me genuinely excited every time I use it.
I am sure there will be many more projects and work that has not been mentioned here, including many surprises throughout 2018. Some things will no doubt go wrong, but (hopefully) many more will go right. Such is the nature of software development.
Until the next time, have a wonderful Christmas and a very prosperous New Year.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Following on from my previous article[^] where I described how I used the Azure SendGrid service to send emails, I will now describe how I created the emails themselves. I wanted some way of templating the emails so that I could use a standard layout containing placeholders for information that I would supply at run-time.
In the old days of word processing, this was called a mail merge: you have a templated document and data is supplied to fill in the blanks. Instead of a Word document I had an HTML document, but the same principle applies. The HTML document contains placeholders where I supply data at run-time. In my case these templated HTML documents are Razor templates.
Razor is a language that lets you create document templates mixing static markup and code. Typically, the static markup is HTML and the code is C# or VB.NET. These can be as simple or as complex as you need. You have the full power of the .NET framework at your disposal, alongside HTML markup and CSS styling, so you can really go to town and create some amazing looking content. A full description of Razor is beyond the scope of this document, but there are plenty of resources where you can dig deeper into this technology.
When I initially began looking into using Razor, I wanted the ability to create certain templated layouts using HTML, and then at run-time I would provide data to the layouts. This would allow me to create an HTML document that I could then set as the body of the email.
For the purposes of this article I will use the trivial Razor document I created for my unit test fixtures.
@model Common.Models.EmailRequests.UnitTestModel
<h1>This is a Unit Test Email Razor Template</h1>
<h2>User Details</h2>
<h3>Name: @Model.Name</h3>
<h3>Company: @Model.Company</h3>
<h3>Telephone: @Model.Telephone</h3>
<h2>Car Details</h2>
<h3>Registration: @Model.Registration</h3>
<h3>Description: @Model.Description</h3>
And here is the corresponding model that is used to supply the data.
namespace Common.Models.EmailRequests
{
    public class UnitTestModel
    {
        public string Registration { get; set; }
        public string Description { get; set; }
        public string Name { get; set; }
        public string Forename { get; set; }
        public string Surname { get; set; }
        public string Telephone { get; set; }
        public string Company { get; set; }
    }
}
All very simple and straightforward.
The next step then is to actually create a templated document containing real data. During my investigations I came across RazorEngine[^]. This is a templating engine built on Microsoft's Razor parsing engine that allows you to use Razor syntax to build dynamic templates. It gives you a layer of abstraction that makes using the Razor parsing engine extremely simple, taking care of the mundane chores of compiling and running your templates. It can be installed into your Visual Studio project using the NuGet package manager. Describing RazorEngine could take up an entire article on its own, so be sure to check out the linked GitHub page for further information about this excellent tool.
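For reference, assuming the package ID as published on NuGet, it can be added from the Package Manager Console like so:
Install-Package RazorEngine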
I wrote the following function which uses RazorEngine to create a fully populated Razor document using the specified Razor template and model.
// Requires the RazorEngine NuGet package (using RazorEngine; using RazorEngine.Templating;)
// plus System.IO for the file access below.
public string RunCompile(string rootpath, string templatename, string templatekey, object model)
{
    string result = string.Empty;
    if (string.IsNullOrEmpty(rootpath) || string.IsNullOrEmpty(templatename) || model == null) return result;
    string templateFilePath = Path.Combine(rootpath, templatename);
    if (File.Exists(templateFilePath))
    {
        string template = File.ReadAllText(templateFilePath);
        // If no cache key is supplied, generate one so the compiled template can still be cached.
        if (string.IsNullOrEmpty(templatekey))
        {
            templatekey = Guid.NewGuid().ToString();
        }
        // Compile the template (or fetch it from the cache) and run it against the supplied model.
        result = Engine.Razor.RunCompile(template, templatekey, null, model);
    }
    return result;
}
The above function returns a string containing the HTML markup for a fully transformed Razor document. Here's an example of what is returned.
<h1>This is a Unit Test Email Razor Template</h1>
<h2>User Details</h2>
<h3>Name: Mr Unit Test</h3>
<h3>Company: Unit Test Company</h3>
<h3>Telephone: 01536536536</h3>
<h2>Car Details</h2>
<h3>Registration: UT01UNIT</h3>
<h3>Description: Ford Mustang</h3>
This then becomes the body of the email.
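To tie the pieces together, here is a minimal usage sketch. It assumes the RunCompile function above lives on a hypothetical EmailTemplateService class, and the template folder, file name and sample values are purely illustrative.
// A minimal sketch, assuming RunCompile is exposed on a hypothetical EmailTemplateService
// class and that UnitTestEmail.cshtml sits in an illustrative Templates folder.
var model = new Common.Models.EmailRequests.UnitTestModel
{
    Name = "Mr Unit Test",
    Company = "Unit Test Company",
    Telephone = "01536536536",
    Registration = "UT01UNIT",
    Description = "Ford Mustang"
};

var templateService = new EmailTemplateService();
string emailBody = templateService.RunCompile(
    rootpath: @"C:\MyApp\Templates",          // folder containing the Razor templates
    templatename: "UnitTestEmail.cshtml",     // the template shown earlier
    templatekey: "unit-test-email",           // cache key so the compiled template is reused
    model: model);

// emailBody now contains the transformed HTML, ready to be used as the body of the email.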
An excellent article that I found very useful was this one[^]. Pay particular attention to Chapter 3, where the author describes the performance gains when caching is implemented. Loading a Razor template from disk, as with loading any resource, takes time, and doing so on every request can be relatively slow. It is far better to cache your Razor templates so that they can be loaded far more quickly.
Although I have only described a very simple Razor template and model, the same principles can be applied to far more complex templates. RazorEngine really does take the hard work out of driving the Razor parsing engine, and if you are considering using the Razor parsing engine then I would definitely suggest using RazorEngine.
I have now implemented a fully functioning HTML templating engine, together with massively scalable email delivery, thanks to Azure SendGrid and Razor.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
For the latest development of our mobile apps, we needed to replace the current service we use for sending out emails. The current email service is scheduled to go into retirement, leaving us with the task of replacing the email functionality on the apps. As we currently make extensive use of Azure for many of our other development services (service bus, webjobs, functions, blob and SQL storage) I thought I'd investigate to see if Azure provided an email service we could use. And sure enough it does.
The email service provided by Azure is called SendGrid. As with every other service provided by Azure, it has excellent integration with the .NET ecosystem. You initially configure your SendGrid service in the Azure portal. As part of this configuration you need to create an API key. It is this API key that you then provide to your application when making SendGrid email requests in your code.
To integrate your application with Azure's SendGrid service, you also need to download and install the SendGrid NuGet package. Once installed, you can start sending emails from your application.
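The package can be added from the Package Manager Console (assuming the SendGrid package ID as published on NuGet):
Install-Package SendGrid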
Example code for using the Azure SendGrid email service
The following code has been taken from a very simple console application I created as a proof of concept.
// Requires the SendGrid NuGet package (using SendGrid; using SendGrid.Helpers.Mail;)
// and runs inside an async method so that await can be used.
var msg = new SendGridMessage();
msg.SetFrom(new EmailAddress("joe.bloggs@company.co.uk", "Development Manager"));
var recipients = new List<EmailAddress>
{
    new EmailAddress("fred.smith@company.co.uk", "Software Developer")
};
msg.AddTos(recipients);
msg.SetSubject("Please ignore - Testing the SendGrid C# Library");
msg.AddContent(MimeType.Html, "<p>Hello World!</p>");
var client = new SendGridClient("place_your_api_key_here");
var response = await client.SendEmailAsync(msg);
Console.WriteLine($"Response from SendGrid demo email: {response.StatusCode}");
Console.WriteLine("Press any key to finish.......");
Console.ReadKey();
It really is as easy as that!
With just a few lines of code you have your own email service from which to send your application's emails. At the time of writing you get 35k emails per month for free before you incur any costs. So unless you are heavily into marketing email campaigns, this should be sufficient for most needs.
I've said it before, and I'll keep saying it. Azure is one of the greatest development platforms I have had the pleasure of using. Setting up and configuring SendGrid was very easy and there is plenty of online documentation and examples.
In a future article I'll describe how I used Razor to provide templating functionality for the emails that we send using Azure SendGrid. Until then, happy coding.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
I've spent the past few years building apps for various enterprises. These have generally been data entry apps, covering sectors such as fleet management and enforcement officers (formerly called bailiffs). Most enterprise apps tend to be fairly simple, mainly allowing CRUD operations to be performed.
In such cases, x-platform development is the obvious choice. Most enterprise apps don't require native functionality and most perform fairly unsophisticated data entry.
The key question then is when to go native and when to go x-platform when building an app for the enterprise.
When to use x-platform technology.
- Your app doesn't require native device functionality either now or in the foreseeable future. Don't build native "just because you might need it later". That breaks the YAGNI (you ain't gonna need it) principle of software development. You are adding substantially to your development costs for something that may never happen. If you have designed and developed your app with good separation of concerns, then it shouldn't be an onerous task to build the front-end of the app natively in the future if requirements change. If your app is poorly designed, then obviously switching across to native later on will incur more significant development costs.
- Your app requirements are fairly simple. If you are developing an app that will allow users to perform simple CRUD operations then this doesn't require native development. X-platform tooling builds these sorts of apps easily. Input forms and grids are easily achieved. Building a simple CRUD app natively makes no sense at all. The marginal gains in performance and UI / UX will be completely overshadowed by the significantly increased development costs.
- You have a small development team and don't have the resources to build native apps. Unless you are a large development team such as Facebook, with the required specialist skills for developing native apps, building x-platform allows you to build, test and ultimately release multiple versions of the app simultaneously, i.e. to the Apple and Google stores at the same time. Far too many times I've heard the phrase "We have an app for platform X but not for Y. We're still working on Y". Unless you have the resources and skills to release to all your intended platforms at the same time, then chances are you've made the wrong technical decision.
Making the wrong choice with regards to your mobile app can be costly. Building natively means you are effectively doubling your development resources, both in terms of time and cost. These are not trivial costs. Unless you have a specific reason for building your app natively, then you should seriously consider going x-platform. There are many options to take. Some of the more current x-platform tools even build native UI controls for the target device, giving the end user an almost identical experience.
There are obviously very good reasons for building your app natively, but in my experience, general purpose data entry apps for the enterprise don't meet those requirements. In such cases, x-platform will be the better choice. Even if you have the necessary skills and a team capable of building such an app natively, it still doesn't mean that you should. And if you don't have the necessary team size and skills, then you almost certainly shouldn't go native unless you can afford to have the work outsourced.
Before deciding what tools and technologies to use when building your next enterprise app, be sure to very carefully consider the costs and benefits involved. Making the wrong decision can be costly.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
I stumbled across a tweet recently that got me thinking about how we apply consistency when building software applications. I think most software developers would agree that consistency is by and large a good thing. Being consistent helps us learn and understand how the code works. Code in one part of an application will work in a similar fashion to code in other parts of the application as they are "consistent". So if we already understand how one part of the application works, we will more quickly understand how other areas work. We can then extend this analogy across different applications, domains and even technologies.
However, we shouldn't slavishly follow these patterns just for the sake of consistency. We also need to bear in mind what is appropriate. What may have worked and been appropriate in one part of the application may not be appropriate for other parts of the application. In such cases it is perfectly acceptable to be inconsistent.
Like standards, being consistent sets out guidelines and general modes of operation and structure. These enable us to develop our applications in such a way that they reuse the modes of operation and structure that went before. But that doesn't necessarily imply that all future development will benefit from those modes of operation and structure. In fact, the exact opposite may be true.
Balance is needed. Whilst consistency is certainly a good thing, pursuing it for its own sake at the expense of what is appropriate will lead to poorly constructed software. And this is where experience comes into play: being able to weigh up the pros and cons of each possible solution and find the one that fits best. There is no rule of thumb here. Where consistency and appropriateness trade off against each other will depend entirely on the specifics of the application.
So when building that shiny new application, just bear in mind that you shouldn't be slavishly consistent, and that you need to balance consistency with what is appropriate.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
I have recently been designing the RESTful APIs for a new application I'm involved in building. There are likely to be dozens or even hundreds of APIs required by the application once it is fully complete. My goal when designing these RESTful APIs was to implement them in such a way as to reduce the exposed surface area to the client, such that fewer RESTful APIs would be required to fulfill all service requests.
My thought process got me wondering if a single RESTful endpoint would be sufficient to handle all CRUD operations. This would need to handle multiple data types such as users, quotes, vehicles, purchase orders etc. (the application is aimed at the fleet management sector). Usually a single endpoint would be created for each of the different data types, i.e. a single RESTful endpoint for handling all driver CRUD operations, another for handling all vehicle CRUD operations, and so on.
As I stated previously though, I wanted to design the RESTful APIs in such a way as to reduce the exposed surface area, and therefore to perform all these CRUD operations using a single RESTful API.
After some trial and error, I got this working using what turned out to be a simple design pattern. I'll explain the design pattern for the GET (read) operations, and leave the others as an exercise for the reader to work out.
For each GET operation I pass two parameters. The first parameter identifies the type of query that is required and is a unique string identifier. It can hold values such as "getuserbyemail", "getuserpermissions", "getallusers". The second parameter is a JSON structure containing key-value pairs of the values needed to fulfill the GET operation. As such it can contain a user's email address, a user's ID, a vehicle registration and so on.
Example JSON structure.
{"QuerySearchTerms":{"email":"test@mycompany.co.uk"}}
The code for the GET request receives these two parameters on the querystring. After some initial validation checks (such as ensuring the request is authorised, time-bound and that both parameters are valid), it then processes the request.
The first querystring parameter informs the RESTful API what type of request is being made, and therefore which elements to extract from the JSON structure (which is the second querystring parameter). Here is the structure that is passed to the GET request, implemented in C#. This structure can easily be (de)serialised and passed as a string parameter to the request.
[DataContract]
public class WebQueryTasks
{
    [DataMember]
    public Dictionary<string, object> QuerySearchTerms { get; set; }

    public WebQueryTasks()
    {
        this.QuerySearchTerms = new Dictionary<string, object>();
    }
}
Here is the skeleton code for the GET request. For clarity I have removed the logging and the validation checks, and kept the code as simple as possible.
public string WebGetData(string queryname, string queryterms)
{
    try
    {
        // Deserialise the second querystring parameter into the WebQueryTasks structure.
        WebQueryTasks query = ManagerHelper.SerializerManager().DeserializeObject<WebQueryTasks>(queryterms);
        if (query == null || !query.QuerySearchTerms.Any())
        {
            throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.BadRequest, new HttpError("Unable to deserialise search terms.")));
        }
        object temp;
        string webResults;
        // The first querystring parameter determines which query to run and which
        // key-value pairs to extract from the search terms.
        switch (queryname.ToLower())
        {
            case WebTasksTypeConstants.GetCompanyByName:
                webResults = this._userService.GetQuerySearchTerm("name", query);
                if (!string.IsNullOrEmpty(webResults))
                {
                    temp = this._companiesService.Find(webResults);
                }
                else
                {
                    throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.BadRequest, new HttpError("Unable to locate query search term(s).")));
                }
                break;
            case WebTasksTypeConstants.GetUserByEmail:
                webResults = this._userService.GetQuerySearchTerm("email", query);
                if (!string.IsNullOrEmpty(webResults))
                {
                    temp = this._userService.FindByEmail(webResults);
                }
                else
                {
                    throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.BadRequest, new HttpError("Unable to locate query search term(s).")));
                }
                break;
            default:
                throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.BadRequest,
                    new HttpError($"Unknown query type {queryname}.")));
        }
        // All data is returned to the client as JSON.
        var result = ManagerHelper.SerializerManager().SerializeObject(temp);
        return result;
    }
    catch (Exception ex)
    {
        // Logging of ex has been removed for clarity.
        throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.BadRequest, new HttpError("Exception servicing request.")));
    }
}
I have applied the same design pattern to all the requests (POST, PUT, GET and DELETE). I pass in the same two parameters on the querystring, and the RESTful API determines what needs to be processed, and fetches the relevant values from the JSON structure to process it. All data is returned in JSON format.
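To illustrate the shape of a request from the client's point of view, here is a hedged sketch; the base address and "data" route are assumptions for illustration only, not taken from the actual application.
// A hedged client-side sketch (the base address and "data" route are assumptions)
// showing how the two querystring parameters are passed to the single GET endpoint.
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class WebGetDataClient
{
    public static async Task<string> GetUserByEmailAsync(string email)
    {
        // First parameter: the unique string identifier for the type of query.
        string queryName = "getuserbyemail";

        // Second parameter: the serialised WebQueryTasks structure containing the search terms.
        string queryTerms = "{\"QuerySearchTerms\":{\"email\":\"" + email + "\"}}";

        using (var client = new HttpClient { BaseAddress = new Uri("https://api.example.com/") })
        {
            string url = $"data?queryname={Uri.EscapeDataString(queryName)}" +
                         $"&queryterms={Uri.EscapeDataString(queryTerms)}";

            // The response body is the JSON-serialised entity returned by WebGetData.
            return await client.GetStringAsync(url);
        }
    }
}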
I have found this design pattern to be extremely flexible, extensible and easy to work with. It allows any type of request to be made in a very simple manner. I have implemented full CRUD operations on a number of different data types, all without a problem, using this design pattern.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
Software teams come in many different shapes and sizes, and I have probably worked with most of them at one time or another in my nearly twenty years of working in software. One particular dynamic that I have come across in software teams is where the decision making responsibility lies. In a true democracy, every member of the team is involved in making decisions. Every member of the team brings with them a unique blend of skills and knowledge, and this ensures that decisions will be made across as wide a spectrum as possible. It also ensures that everyone feels valued, and that their opinion has been considered in the decision making process. To form part of the decision making process you must therefore be fully involved with current events, their ramifications and their likely impact on the team. In short, every member of the team needs to be fully engaged.
This is how self-organising teams are born. Having worked (and continue to work) in such teams, I personally find these to be the most efficacious and highly performant. Opinions are sought from a wide range of individuals, thus limiting the chances that an unsuitable or poorly formed decision will be made.
Contrast this with a dictatorship. This is where the majority of decisions are made by a single individual within the team. Usually this will be a senior software developer who has good knowledge of the applications, tools and technologies. As good as this individual may be, they are no match for the combined skills and knowledge of the entire team. No single member of the team can know everything (no matter how much they may believe this). There is no place for vanity and arrogance on a software team. As they say, pride comes before a fall.
These teams are ultimately born out of a failure of management. There are insufficient checks and balances in place to ensure that a wide range of opinions are sought before decisions are made. And whilst some decisions may be the right ones, there will be many that are ill considered or just plain wrong because the dictator failed to solicit the rest of the team for opinions. This is as much a fault of the management as it is the dictator.
Unfortunately I have worked in such dictatorial teams previously. No single developer should be in sole charge of decision making responsibility for an entire team. Opinions should be sought from across the team, as every one's contribution is important.
Just as political dictatorships cannot match democratic ones, dictatorial software teams are no match for democratic ones. Self-organising teams are never born out of dictatorships; they are always born from democracies.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
A pattern I came across a few years ago for updating data is to use what is called an UpSert stored procedure. An UpSert stored procedure combines the insertion of new rows with updating them. Rather than have two stored procedures, one for inserting and one for updating, you simply have one that does both.
The benefit is that this leads to application code that doesn't need to concern itself with determining whether a particular entity exists or not. Instead of writing code to determine whether a particular entity exists in the table, and then calling the insert or update stored procedure as appropriate, you simply invoke the UpSert stored procedure and let it determine whether to insert or update the table.
Why write application code to do this when your database can do it orders of magnitude faster? Here's an example of how an UpSert stored procedure works.
-- =============================================
-- Author: Dominic Burford
-- Create date: 21/09/2017
-- Description: Upsert a user
-- =============================================
CREATE PROCEDURE [dbo].[Users_Upsert]
    @username VARCHAR(128),
    @email VARCHAR(128)
AS
BEGIN
    -- Are we inserting a new record or updating an existing one?
    SELECT ID FROM Users
    WHERE Email = @email

    IF @@ROWCOUNT = 0
    BEGIN
        INSERT INTO Users
        (
            UserName,
            Email
        )
        VALUES
        (
            @username,
            @email
        )
    END
    ELSE
    BEGIN
        UPDATE Users
        SET UserName = @username,
            Updated = GETDATE()
        WHERE Email = @email
    END
END
GO
This pattern also works well with RESTful APIs. Whenever you want to insert / update data, you don't need to write code that determines whether the entity exists and then invoke the appropriate POST or PUT method; your code will always be an HTTP POST. This leads to far cleaner, simpler code. It also works well with service bus architectures where you don't care about the type of update you are performing, as it's just a fire-and-forget call to the database.
The resulting code will also be quicker, as you have delegated the responsibility for determining if an entity exists or not to the database, which obviously can make such a judgement many times faster than your application code.
I use this pattern frequently throughout my applications, and particularly when designing and developing RESTful APIs. The pattern can be used in practically any application though, as I use the same pattern in web apps, mobile apps and console apps.
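As a closing illustration, here is a minimal ADO.NET sketch of invoking the stored procedure from application code; the connection string is an assumption, purely for illustration.
// A minimal ADO.NET sketch (connection string is an assumption) showing that the
// application simply calls Users_Upsert and lets the database decide whether to
// insert or update.
using System.Data;
using System.Data.SqlClient;

public static class UserRepository
{
    public static void UpsertUser(string username, string email)
    {
        using (var connection = new SqlConnection("Server=.;Database=MyAppDb;Integrated Security=true"))
        using (var command = new SqlCommand("dbo.Users_Upsert", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.Add("@username", SqlDbType.VarChar, 128).Value = username;
            command.Parameters.Add("@email", SqlDbType.VarChar, 128).Value = email;

            connection.Open();
            // No need to check whether the user already exists - the UpSert handles both cases.
            command.ExecuteNonQuery();
        }
    }
}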
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
As part of our build process we run several hundred unit tests. Once these have completed execution, we then run code coverage analysis. This gives us a raw figure of the percentage of the code that has been exercised by the unit tests. Currently this is running at over 90% code coverage.
Even if we had 100% code coverage, this doesn't mean the code is immune to faults. Whilst 100% code coverage is a good figure to aim for, it doesn't imply that your unit tests have tested your entire codebase. How can this be? Surely having 100% code coverage means you have exercised every line of code? In fact this is where an obsession with code coverage can lead to over-confidence in your testing strategy.
Here's a simple example.
int counter = GetNewCounterValue();
if (counter == 0)
{
}
In the example above, we can easily write a single unit test that will exercise all lines of code. We just ensure that when we arrange our unit test we inject a value of zero into the test harness. By doing so, our unit test will enter the if condition and exercise all lines of code. But what about the implicit else branch? Shouldn't we test that also? The answer is, of course, yes we should. So we also need to write another unit test that injects a non-zero value into the test harness. So although our first test exercised all lines of code and therefore gave us 100% code coverage, we needed two tests to give us full conditional (branch) coverage.
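To make the point concrete, here is a hedged sketch of those two tests; the CounterProcessor class and the use of xUnit are illustrative assumptions, not code from our actual test suite.
// A hedged sketch (the CounterProcessor class and the use of xUnit are illustrative
// assumptions) showing why two tests are needed for full branch coverage.
using Xunit;

public class CounterProcessor
{
    // Returns true when the zero branch is taken, false for the implicit else branch.
    public bool Process(int counter)
    {
        if (counter == 0)
        {
            return true;
        }
        return false;
    }
}

public class CounterProcessorTests
{
    [Fact]
    public void Process_WithZeroCounter_TakesIfBranch()
    {
        // This single test already gives 100% line coverage of Process...
        Assert.True(new CounterProcessor().Process(0));
    }

    [Fact]
    public void Process_WithNonZeroCounter_TakesImplicitElseBranch()
    {
        // ...but this second test is needed for full conditional (branch) coverage.
        Assert.False(new CounterProcessor().Process(42));
    }
}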
This is where using code coverage alone can be a blunt tool. It is a useful indicator, and can be used to measure relative code coverage between different parts of the code. For example, it can be useful to see where your unit tests are weak, and where they are strong (relative to each other). But code coverage shouldn't be used as an absolute value on its own. In isolation it is pretty meaningless. Its real value comes when it is used to give comparative measurements of code coverage throughout the codebase.
It's also important to know the critical areas of the code, and to ensure that these areas have adequate test coverage. For example, it's probably important that your login functionality is adequately tested, as this is critical to the security of the application. So you probably want to invest more time and effort in ensuring that these critical areas of the code are tested more thoroughly than other, less critical areas. Not all areas of the code are equal. So not all tests are equal either.
So whilst it's important to have unit tests, it's also important to ensure that all branches of the code are covered (not just the lines of code), and that the more critical areas of the code have adequate test coverage relative to the less critical areas.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter