I've spent the past few years building apps for various enterprises. These have generally been data-entry apps, covering sectors including fleet management and enforcement agents (formerly called bailiffs). Most enterprise apps tend to be fairly simple, mainly allowing CRUD operations to be performed.
In such cases, x-platform development is the obvious choice. Most enterprise apps don't require native functionality, and most perform fairly unsophisticated data entry.
The key question then is when to go native and when to go x-platform when building an app for the enterprise.
When to use x-platform technology.
- Your app doesn't require native device functionality, either now or in the foreseeable future. Don't build native "just because you might need it later". That breaks the YAGNI (you ain't gonna need it) principle of software development. You are adding substantially to your development costs for something that may never happen. If you have designed and developed your app with good separation of concerns, then it shouldn't be an onerous task to build the front-end of the app natively in the future if requirements change. If your app is poorly designed, then obviously changing across to native later on will incur more significant development costs.
- Your app requirements are fairly simple. If you are developing an app that will allow users to perform simple CRUD operations, then this doesn't require native development. X-platform tooling builds these sorts of apps easily; input forms and grids are readily achieved. Building a simple CRUD app natively makes no sense at all. The marginal gains in performance and UI / UX will be completely overshadowed by the significantly higher development costs.
- You have a small development team and don't have the resources to build native apps. Unless you are a large development team such as Facebook with the required specialist skills for developing native apps, then building x-platform allows you to build, test and ultimately release multiple versions of the app simultaneously i.e. to the Apple and Google stores at the same time. Far too many times I've heard the phrase "We have an app for platform X but not for Y. We're still working on Y". Unless you have the resources and skills to release to all your intended platforms at the same time, then chances are you've made the wrong technical decision.
Making the wrong choice with regards to your mobile app can be costly. You are effectively doubling your development resources, both in terms of time and cost. These are not trivial costs. Unless you have a specific reason for building your app natively, you should seriously consider going x-platform. There are many options to choose from. Some of the more current x-platform tools even build native UI controls for the target device, giving the end user an almost identical experience.
There are obviously very good reasons for building your app natively, but in my experience, general purpose data entry apps for the enterprise don't meet those requirements. In such cases, x-platform will be the better choice. Even if you have the necessary skills and a team capable of building such an app, it still doesn't mean that you should. And if you don't have the necessary team size and skills, then you almost certainly shouldn't go native unless you can afford to have this work outsourced.
Before deciding what tools and technologies to use when building your next enterprise app, be sure to very carefully consider the costs and benefits involved. Making the wrong decision can be costly.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
I stumbled across a tweet recently that got me thinking about how we apply consistency when building software applications. I think most software developers would agree that consistency is by and large a good thing. Being consistent helps us learn and understand how the code works. Code in one part of an application will work in a similar fashion to code in other parts of the application as they are "consistent". So if we already understand how one part of the application works, we will more quickly understand how other areas work. We can then extend this analogy across different applications, domains and even technologies.
However, we shouldn't slavishly follow these patterns just for the sake of consistency. We also need to bear in mind what is appropriate. What may have worked and been appropriate in one part of the application may not be appropriate for other parts of the application. In such cases it is perfectly acceptable to be inconsistent.
Like standards, consistency sets out guidelines and general modes of operation and structure. These enable us to develop our applications in such a way that they reuse the modes of operation and structure that went before. But that doesn't necessarily imply that all future development will benefit from these modes of operation and structure. In fact, the exact opposite may be true.
Balance is needed. Whilst consistency is certainly a good thing, pursuing it for its own sake at the expense of what is appropriate will lead to poorly constructed software. And this is where experience comes into play: being able to weigh up the pros and cons of each possible solution, and find the one that fits best. There is no rule of thumb here. Where consistency and appropriateness trade off against each other will depend entirely on the specifics of the application.
So when building that shiny new application, just make sure that you aren't being slavishly consistent, and that you balance consistency with what is appropriate.
I have recently been designing the RESTful APIs for a new application I'm involved in building. There are likely to be dozens or even hundreds of APIs required by the application once it is fully complete. My goal when designing these RESTful APIs was to implement them in such a way as to reduce the surface area exposed to the client, so that fewer APIs would be required to fulfill all service requests.
My thought process got me wondering if a single RESTful endpoint would be sufficient to handle all CRUD operations. This would need to handle multiple data types such as users, quotes, vehicles and purchase orders (the application is aimed at the fleet management sector). Usually a separate endpoint would be created for each of the different data types, i.e. one RESTful endpoint for handling all driver CRUD operations, another for handling all vehicle CRUD operations.
As I stated previously though, I wanted to design the RESTful APIs in such a way as to reduce the exposed surface area, and therefore to perform all these CRUD operations using a single RESTful API.
After some trial and error, I got this working using what turned out to be a simple design pattern. I'll explain the design pattern for the GET (read) operations, and leave the others as an exercise for the reader to work out.
For each GET operation I pass two parameters. The first identifies the type of query required and is a unique string identifier. It can hold values such as "getuserbyemail", "getuserpermissions" or "getallusers". The second is a JSON structure containing key-value pairs of the values needed to fulfill the GET operation. As such it can contain a user's email address, a user's ID, a vehicle registration and so on.
Example JSON structure:

{"QuerySearchTerms":{"email":"test@mycompany.co.uk"}}

The code for the GET request receives these two parameters on the querystring. After some initial validation checks (such as ensuring the request is authorised, time-bound and that both parameters are valid), it then processes the request.
The first querystring parameter informs the RESTful API what type of request is being made, and therefore which elements to extract from the JSON structure (the second querystring parameter). Here is the structure that is passed to the GET request, implemented in C#. It can easily be (de)serialised and passed as a string parameter on the request.
[DataContract]
public class WebQueryTasks
{
    [DataMember]
    public Dictionary<string, object> QuerySearchTerms { get; set; }

    public WebQueryTasks()
    {
        this.QuerySearchTerms = new Dictionary<string, object>();
    }
}

Here is the skeleton code for the GET request. For clarity I have removed the logging and validation checks, and kept the code as simple as possible.
public string WebGetData(string queryname, string queryterms)
{
    try
    {
        WebQueryTasks query = ManagerHelper.SerializerManager().DeserializeObject<WebQueryTasks>(queryterms);
        if (query == null || !query.QuerySearchTerms.Any())
        {
            throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.BadRequest, new HttpError("Unable to deserialise search terms.")));
        }

        object temp;
        string webResults;

        switch (queryname.ToLower())
        {
            case WebTasksTypeConstants.GetCompanyByName:
                webResults = this._userService.GetQuerySearchTerm("name", query);
                if (!string.IsNullOrEmpty(webResults))
                {
                    temp = this._companiesService.Find(webResults);
                }
                else
                {
                    throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.BadRequest, new HttpError("Unable to locate query search term(s).")));
                }
                break;
            case WebTasksTypeConstants.GetUserByEmail:
                webResults = this._userService.GetQuerySearchTerm("email", query);
                if (!string.IsNullOrEmpty(webResults))
                {
                    temp = this._userService.FindByEmail(webResults);
                }
                else
                {
                    throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.BadRequest, new HttpError("Unable to locate query search term(s).")));
                }
                break;
            default:
                throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.BadRequest, new HttpError($"Unknown query type {queryname}.")));
        }

        return ManagerHelper.SerializerManager().SerializeObject(temp);
    }
    catch (HttpResponseException)
    {
        // preserve the specific error responses thrown above
        throw;
    }
    catch (Exception)
    {
        throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.BadRequest, new HttpError("Exception servicing request.")));
    }
}

I have applied the same design pattern to all the requests (POST, PUT, GET and DELETE). I pass in the same two parameters, and the RESTful API determines what needs to be processed, fetching the relevant values from the JSON structure to process it. All data is returned in JSON format.
I have found this design pattern to be extremely flexible, extensible and easy to work with. It allows any type of request to be made in a very simple manner. I have implemented full CRUD operations on a number of different data types without a problem using this design pattern.
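To illustrate the client side of this pattern, here is a minimal sketch of how a caller might build the request URL for the single GET endpoint. The api/webdata route, the QueryClient class name and the use of System.Text.Json are all assumptions for illustration only; the service code above uses its own SerializerManager.

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

public static class QueryClient
{
    // Builds the querystring for the single GET endpoint: the first
    // parameter names the query, the second carries its terms as JSON
    public static string BuildRequestUrl(string queryname, Dictionary<string, object> searchTerms)
    {
        var payload = new Dictionary<string, Dictionary<string, object>>
        {
            ["QuerySearchTerms"] = searchTerms
        };
        string queryterms = JsonSerializer.Serialize(payload);
        return $"api/webdata?queryname={queryname}&queryterms={Uri.EscapeDataString(queryterms)}";
    }

    public static void Main()
    {
        var terms = new Dictionary<string, object> { ["email"] = "test@mycompany.co.uk" };
        Console.WriteLine(BuildRequestUrl("getuserbyemail", terms));
    }
}
```

The JSON is URL-escaped so it can travel safely as a single querystring value; the service then deserialises it back into the WebQueryTasks structure.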
Software teams come in many different shapes and sizes, and I have probably worked with most of them at one time or another in my nearly twenty years of working in software. One particular dynamic that I have come across in software teams is where the decision making responsibility lies. In a true democracy, every member of the team is involved in making decisions. Every member of the team brings with them a unique blend of skills and knowledge, and this ensures that decisions will be made across as wide a spectrum as possible. It also ensures that everyone feels valued, and that their opinion has been considered in the decision making process. To form part of the decision making process you must therefore be fully involved with current events, their ramifications and likely impact on the team. In short, every member of the team needs to be fully engaged.
This is how self-organising teams are born. Having worked (and continuing to work) in such teams, I personally find these to be the most efficacious and highly performant. Opinions are sought from a wide range of individuals, thus limiting the chances that an unsuitable or poorly formed decision will be made.
Contrast this with a dictatorship. This is where the majority of decisions are made by a single individual within the team. Usually this will be a senior software developer within the team who has good knowledge of the applications, tools and technologies. As good as this individual may be, they are no match for the combined skills and knowledge of the entire team. No single member of the team can know everything (no matter how much they may believe this). There is no place for vanity and arrogance on a software team. As they say, pride comes before a fall.
These teams are ultimately born out of a failure of management. There are insufficient checks and balances in place to ensure that a wide range of opinions are sought before decisions are made. And whilst some decisions may be the right ones, there will be many that are ill considered or just plain wrong because the dictator failed to solicit the rest of the team for opinions. This is as much a fault of the management as it is the dictator.
Unfortunately I have worked in such dictatorial teams previously. No single developer should be in sole charge of decision making responsibility for an entire team. Opinions should be sought from across the team, as every one's contribution is important.
In the same way that political dictatorships cannot match democracies, dictatorial software teams are no match for democratic ones. Self-organising teams are never born out of dictatorships; they are always born from democracies.
A pattern I came across a few years ago for updating data is to use what is called an UpSert stored procedure. An UpSert stored procedure combines the insertion of new rows with updating them. Rather than have two stored procedures, one for inserting and one for updating, you simply have one that does both.
The benefit is that application code doesn't need to concern itself with determining whether a particular entity exists or not. Instead of checking whether the entity exists in the table and then calling the insert or update stored procedure as appropriate, you simply invoke the UpSert stored procedure and let it determine whether to insert or update the table.
Why write application code to do this when your database can do it many times faster? Here's an example of how an UpSert stored procedure works.
-- =============================================
-- Author: Dominic Burford
-- Create date: 21/09/2017
-- Description: Upsert a user
-- =============================================
CREATE PROCEDURE [dbo].[Users_Upsert]
    @username VARCHAR(128),
    @email VARCHAR(128)
AS
BEGIN
    -- are we inserting a new record or updating an existing one?
    SELECT ID FROM Users
    WHERE Email = @email

    IF @@ROWCOUNT = 0
    BEGIN
        INSERT INTO Users
        (
            UserName,
            Email
        )
        VALUES
        (
            @username,
            @email
        )
    END
    ELSE
    BEGIN
        UPDATE Users
        SET UserName = @username,
            Updated = GETDATE()
        WHERE Email = @email
    END
END
GO

This pattern also works well with RESTful APIs. Whenever you want to insert or update data, you don't need to write code that determines whether the entity exists and then invoke the appropriate POST or PUT method; your code will always be an HTTP POST. This leads to far cleaner, simpler code. It also works well with service-bus architectures where you don't care about the type of update you are performing, as it's just a fire-and-forget call to the database.
The resulting code will also be quicker, as you have delegated the responsibility for determining if an entity exists or not to the database, which obviously can make such a judgement many times faster than your application code.
I use this pattern frequently throughout my applications, and particularly when designing and developing RESTful APIs. The pattern can be used in practically any application though, as I use the same pattern in web apps, mobile apps and console apps.
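The insert-or-update semantics can also be mirrored in application code. The following is a toy C# sketch (the class and method names are mine, purely for illustration, and a dictionary stands in for the Users table): the caller never checks whether the entity already exists.

```csharp
using System;
using System.Collections.Generic;

public static class UpsertExample
{
    // Toy in-memory stand-in for the Users table: email -> username
    public static readonly Dictionary<string, string> Users =
        new Dictionary<string, string>();

    // A single call inserts a new entry or updates an existing one,
    // just as the stored procedure does
    public static void Upsert(string email, string username)
    {
        Users[email] = username;
    }

    public static void Main()
    {
        Upsert("test@mycompany.co.uk", "dominic");  // inserts
        Upsert("test@mycompany.co.uk", "dburford"); // updates the same entry
        Console.WriteLine(Users.Count);                   // 1
        Console.WriteLine(Users["test@mycompany.co.uk"]); // dburford
    }
}
```

The point is the same as with the stored procedure: existence checking is pushed down to the storage layer, so the caller has exactly one code path.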
As part of our build process we run several hundred unit tests. Once these have completed execution, we then run code coverage analysis. This gives us a raw figure of the percentage of the code that has been exercised by the unit tests. Currently this is running at over 90% code coverage.
Even if we had 100% code coverage, this doesn't mean the code is immune to faults. Whilst having 100% code coverage is a good figure to aim for, it doesn't imply that your unit tests have tested your entire codebase. How can this be? Surely having 100% code coverage means you have exercised every line of code? In fact, this is where obsession over code coverage can lead to over-confidence in your testing strategy.
Here's a simple example.
int counter = GetNewCounterValue();
if (counter == 0)
{
    // handle the zero case
}

In the example above, we can easily write a single unit test that exercises all lines of code. We just ensure that when we arrange our unit test, we inject a value of zero into the test harness. By doing so, our unit test will enter the if condition and exercise every line. But what about the implicit else branch? Shouldn't we test that also? The answer is of course yes. So we also need to write another unit test that injects a non-zero value into the test harness. So although our first test exercised all lines of code and therefore gave us 100% code coverage, we needed two tests to give us full conditional (branch) coverage.
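To see the difference concretely, here is a hedged sketch; the Clamp function is hypothetical and simply stands in for the code under test. With the zero input alone, every line of Clamp executes, yet the false branch of the if remains untested.

```csharp
using System;

public static class CoverageExample
{
    // Hypothetical function under test: every line executes when counter == 0,
    // but the false branch of the if is only taken for non-zero input
    public static int Clamp(int counter)
    {
        int result = counter;
        if (counter == 0)
        {
            result = -1;
        }
        return result;
    }

    public static void Main()
    {
        // Test 1: exercises the true branch and, on its own, 100% of the lines
        Console.WriteLine(Clamp(0)); // -1
        // Test 2: exercises the implicit else, required for full branch coverage
        Console.WriteLine(Clamp(5)); // 5
    }
}
```

A line-coverage tool would report 100% after the first call alone; a branch-coverage tool would not.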
This is where using code coverage alone can be a blunt tool. It is a useful indicator, and can be used to measure relative code coverage between different parts of the code. For example, it can be useful to see where your unit tests are weak, and where they are strong (relative to each other). But code coverage shouldn't be used as an absolute value on its own; in isolation it is pretty meaningless. Its real value comes when used to give comparative measurements of code coverage throughout the codebase.
It's also important to know the critical areas of the code, and ensure that these areas have adequate testing coverage. For example, it's probably important that your login functionality is adequately tested, as this is critical to the security of the application. So you probably want to invest more time and effort in ensuring that these critical areas of the code are tested more thoroughly than other, less critical areas. Not all areas of the code are equal, so not all tests are equal either.
So whilst it's important to have unit tests, it's also important to ensure that all branches of the code are covered (not just the lines of code), and that the more critical areas of code have adequate testing coverage relative to less critical areas.
Well it's been a long time coming, but the app that I've been working on has finally been released to the app stores. It was released to the Apple and Google app stores this morning. We'll inform our customers this week, and that's when we'll start to see the fruits of our labours (hopefully get some positive feedback).
The app has been developed using Telerik Platform in conjunction with web technologies such as HTML, CSS and Javascript. The front-end controls use Kendo UI and implement the MVVM pattern for binding to the respective Javascript properties. The app employs Apache Cordova for cross-platform development, allowing us to target multiple mobile platforms from the one codebase.
All functionality is served to the app via ASP.NET Web API RESTful services hosted on Azure. These Azure services employ an Azure Service Bus to ensure scalability and responsiveness to the user. The app itself contains no logic or functionality of its own (nor should it). All the business rules and data are served via services.
During the testing lifecycle, many issues were discovered and fixed. Some small, some not so small. The testing cycle took several months as many people from around the business were involved in the testing. And of course, there were the usual last minute changes to consider too (can we have that in blue).
The app itself has been developed for the Fleet Management sector and allows registered drivers to perform such tasks as updating their mileage, request a booking, MOT or service. They can submit vehicle inspections, contact their account manager, contact us in the event of a breakdown and many other driver related services.
But at long last, the app has hit the app stores and can be downloaded and installed. To use the app however, you need an account on our system, otherwise there will be no way for you to login. It's been a long time coming, but at last the app has been released (some people might say it escaped!).
Time to get on with other things now.
When you're driving down the street, you presumably always err on the side of caution. When waiting to exit a junction or roundabout, you wait until it's safe to pull out into the traffic, even if the approaching car may be indicating to turn into your junction. As you approach a junction and you see a car waiting to pull out, you instinctively keep a cautious eye open in case the car suddenly decides to pull out.
When you learn to drive a car, you are taught to drive defensively, and most people continue to drive this way throughout their lives. Expect the unexpected; never assume anything or take anything for granted. When we drive, we assume that everyone around us may make mistakes.
By the same token, Defensive Programming works along the same lines. Your code should always expect the unexpected, and never make any assumptions. This is the he(art) of Defensive Programming.
Wikipedia: Defensive programming is a form of defensive design intended to ensure the continuing function of a piece of software under unforeseen circumstances. Defensive programming practices are often used where high availability, safety or security is needed.
Never trust input from a user or an external application. Both of these are completely outside the control of the programmer, and therefore you have absolutely no control over what will be input. If you don't control the input, then you must assert its validity.
Never assume the user will enter valid input. In fact, it is far safer to assume the exact opposite. Assume the user will enter complete garbage and ensure this garbage is rejected by the application with a suitable error message. This immediately makes the application more robust by protecting it against users entering garbage for input. If the user is supposed to enter a date, ensure that this is all they can enter. If the user is supposed to enter a number, ensure this is all they can enter. I'm sure you get the idea. The same rule applies to inputs coming from external systems. If your application integrates with another system, ensure that any inputs are stringently asserted beforehand.
Use Assert() statements to ensure the inputs meet the expected types and values, and only proceed if they do. Otherwise throw a meaningful error. Be as strict as necessary: only values that meet the exact format expected by the application should be allowed through; everything else should be rejected. Instead of asserting for what is invalid, assert for what is valid, and process accordingly. There is probably a much longer list of ways that something can be wrong than ways in which it can be right, so it is better to assert for the valid cases.
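As a hedged sketch of asserting for what is valid, the following accepts only one exact date format and rejects everything else. The class name, method name and format string here are illustrative, not from any particular codebase.

```csharp
using System;
using System.Globalization;

public static class InputValidator
{
    // Whitelist validation: only an exact ISO-style date is accepted;
    // everything else is rejected outright
    public static bool TryGetValidDate(string input, out DateTime date)
    {
        return DateTime.TryParseExact(input, "yyyy-MM-dd",
            CultureInfo.InvariantCulture, DateTimeStyles.None, out date);
    }

    public static void Main()
    {
        DateTime parsed;
        Console.WriteLine(TryGetValidDate("2017-09-21", out parsed)); // True
        Console.WriteLine(TryGetValidDate("garbage", out parsed));    // False
    }
}
```

Note the rule is phrased positively (what is valid) rather than as a list of invalid cases.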
Before consuming a resource, ensure that the resource is valid. For example, when accessing data from a database connection, check that the database connection is valid and is open.
SqlConnection connection = new SqlConnection("myDatabaseConnectionString");
connection.Open(); // no check that the connection opened successfully
SqlCommand cmd = new SqlCommand("myStoredProcedure", connection);

When reading values from a database, don't assume that there is any data, and don't assume that the data is valid. There may be no data, or the values returned may be null, for example.
SqlDataReader reader = cmd.ExecuteReader();
string firstName = (string)reader["firstName"]; // no check that any rows were returned, or that the value is non-null

Don't assume that a variable contains a valid value, or that it has even been properly initialised.
MyMoneyClass money;
decimal funds = money.GetFunds(); // money has never been initialised

Defensive Programming is a mindset as much as it is a programming paradigm. Assume nothing, assert everything, expect the unexpected.
I'm celebrating my first year at Grosvenor as a Senior Software Engineer. The year has gone by so quickly that you don't often take the time to sit back and savour your accomplishments. I thought I would break this rule and do just that. So here is a (very) brief summary of just a few of the key projects I've been involved in during the previous year, and my first year at Grosvenor.
I've been involved with their mobile app development. This gave me my first introduction to Telerik Platform, a cross-platform mobile development platform that uses Apache Cordova. I've previously used Xamarin for such things, so it made a nice change to learn a new mobile platform technology. Under the covers, Telerik Platform uses web technology, i.e. HTML, CSS and Javascript. The UI controls are built using the Kendo UI framework and implement the MVVM pattern to bind the UI controls to the corresponding Javascript properties. Having previously used the MVC design pattern with ASP.NET, it was nice to use a different pattern. I have to say that I found the MVVM pattern very simple and straightforward to use.
I introduced DevOps using Team Foundation Server (TFS). I setup and configured builds for the key applications, implementing continuous integration and continuous deployment as part of these processes. There are different endpoints for development, staging and production, each with its own TFS deployment configuration. The build processes are quite complex, involving over a dozen separate build tasks. We now have uniform and consistent builds across all products across the business. Whereas previously these applications were built and deployed manually by a developer, now this process is entirely automated. This ensures consistency between builds, not to mention simplifying the process and reducing the manual burden on the developers.
I also introduced unit-testing into the software development life-cycle. This has been a major change to the way software is developed. Unit tests are used for both development as well as the build process. All new code must have associated unit tests, and these need to be checked in as part of the build process. The build process performs a code-coverage analysis, giving detailed reporting of the areas of code that are not covered by unit tests. The minimum code coverage is 70%. At the time of writing the code coverage across the application is over 90%.
None of these processes existed until I implemented them.
When the decision was made to re-develop the mobile app offering, the key functionality driving the new proposition was to integrate the mobile app with the enterprise applications at the back-end. To achieve this required architecting an entire suite of ASP.NET Web API RESTful services utilising a service bus architecture. All the RESTful services consume and return data in JSON format. The architecture is highly scalable and available due in large part to the fact that it makes substantial use of many Azure services including hosting, service bus, functions, webjobs and a SQL database. As the implementer and architect of this process, I feel immensely proud.
I've thoroughly enjoyed the challenges, projects and the people I have worked with over the previous year and I hope the next year brings many more challenges.
I was recently reading an article on this subject which included feedback from other software architects. What was interesting was the lack of consensus on the topic. There were quite a few strong opinions raised on both sides of the discussion.
As a professional software architect, should you also write code? The argument goes that if you aren't writing code, you become increasingly detached from the applications you are designing and architecting. This leads to the Architecture Astronaut, a term first coined by Joel Spolsky back in 2001. The Architecture Astronaut constantly tries to think in higher and higher (and increasingly less relevant) abstractions. The end result is that the role performed by those particular architects is redundant.
The counter argument is that by continuing to write code, you keep your development skills up-to-date and therefore maintain a greater degree of relevance. After all, to be a good software architect, you also need to know how to implement good software systems right?
I must say I'm quite divided on the subject. I agree that as a software architect there is definitely merit in continuing to hone your development skills and ensuring that they are kept up-to-date. However, is it necessary to write production code to do this? Keeping your skills relevant and up-to-date is one thing, but shipping production-strength code is another.
A software architect doesn't write code in the same quantity as the software developer. This should be fairly obvious. If your primary function within the organisation is software architect, then you will naturally spend most of your time on architecture related activities. If your primary function is software developer, then you will spend most of your time on development related activities.
So it should come as no surprise that the software architect who specialises in architecture, should be better at architecture than they are as a developer. And conversely, the software developer should be better at development than they are at architecture.
So I would conclude that a software architect should most definitely keep their development skills relevant and current, but that this shouldn't necessarily involve writing code that is going to ship to paying customers. I'm sure there are many less critical applications (such as internal applications) that would allow the software architect to keep their skills current, without compromising the quality and integrity of customer-facing applications.
Critical to any business is its data. Data is king. So it's vitally important to ensure you have a plan to restore your data should anything untoward happen to it. From accidental user error, to application error, to an outage, fire or flood. There are many ways in which data can be lost or its integrity compromised.
So it's important to ensure you have regular backups, and that you perform regular restores of that data. After all, you don't want to find out after you have lost all your data, that there is a problem with your backup making it impossible to restore it.
If you are using Azure SQL Database (ASD) for your data storage, there are a range of options available to you. I won't go through all of them, as there are plenty of articles online already. I'll just describe the options I have chosen for our particular application and business needs.
ASD provides several business continuity features, including automated backups and optional database replication. Each feature has different characteristics for estimated recovery time (ERT) and potential data loss for recent transactions. Understanding these ensures you can make an informed decision with regard to the needs of the business.
The business continuity needs of the business will depend on several factors including:
- Is the data mission critical?
- Is the data bound to an SLA? Will the loss of data result in financial liability?
- Does the data have a low rate of change? (the data changes infrequently such that losing data for a certain period of time is acceptable)
- Is the data cost sensitive?
In conjunction with the estimated recovery time (ERT) mentioned earlier, there are two other important factors to understand when considering business continuity.
- Recovery Time Objective (RTO) is the maximum acceptable time before the application fully recovers from a disruptive event
- Recovery Point Objective (RPO) is the maximum amount of recent data updates (time interval) the application can tolerate losing when recovering after the disruptive event
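To make these two objectives concrete, here is a minimal sketch (in Python, with entirely illustrative numbers - they are not Azure guarantees) of how a backup schedule can be checked against an RTO and RPO:

```python
def worst_case_data_loss_min(log_backup_interval_min: int) -> int:
    """Worst-case minutes of recent updates lost if the database is lost
    just before the next transaction log backup runs."""
    return log_backup_interval_min

def meets_objectives(rto_min: int, rpo_min: int,
                     estimated_recovery_min: int,
                     log_backup_interval_min: int) -> bool:
    """A recovery option is acceptable only if recovery completes within
    the RTO and the worst-case data loss stays within the RPO."""
    return (estimated_recovery_min <= rto_min and
            worst_case_data_loss_min(log_backup_interval_min) <= rpo_min)

# Hypothetical example: the business tolerates 60 minutes of downtime (RTO)
# and 15 minutes of lost updates (RPO); log backups run every 10 minutes
# and the estimated recovery time is 30 minutes.
print(meets_objectives(rto_min=60, rpo_min=15,
                       estimated_recovery_min=30,
                       log_backup_interval_min=10))  # True
```

Note that if the business tightened the RPO to 5 minutes, the 10-minute log backup interval alone would no longer satisfy it, and a different continuity feature would be needed.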
ASD automatically creates database backups at no additional charge. They occur straight out of the box; you don't need to do anything to make them happen. Database backups are an essential part of any business continuity plan because they protect your data from accidental corruption or deletion. If you need to keep your backups longer than the default storage period, then you can configure a long-term backup retention policy. The default retention policy on the Basic tier is 7 days, whilst for the Standard and Premium tiers it is 35 days.
ASD creates full, differential and transaction log backups. The transaction log backups generally occur every 5 - 10 minutes, with the frequency based on the performance level and amount of database activity. Transaction log backups in conjunction with full or differential backups, allow you to restore to a specific point-in-time to the same server that hosts the database.
In addition to getting automated backups, I then configured Geo-Replication. Active Geo-Replication (AGR) enables you to configure readable secondary databases in the same or different data centre locations (or regions). Secondary databases are available for querying and for fail-over in the case of a data centre outage, or in the event of being unable to connect to the primary database. When you configure a secondary database, you give it a name and login credentials, as you would with any other database. This allows you to connect to a secondary database in exactly the same way as you would the primary (or any other) ASD. After a fail-over, the new primary has a different connection endpoint.
So in the event of a disruptive event that causes the outage of the data centre that hosts your ASD, you can automatically fail-over to a secondary database in a completely separate region. You are able to configure up to four of these secondary databases. You can initiate fail-over to any one of these secondary databases. Once fail-over is activated to one of your secondary databases, this then becomes the new primary database. All other linked secondary databases automatically link to the new primary. You can configure automatic fail-over or manual fail-over, whichever best suits the needs of the application and the business.
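The connection-endpoint point is worth emphasising: after a fail-over, the application must be able to reach the new primary. The general pattern can be sketched as follows (Python for illustration only, with a stubbed connect() and hypothetical endpoint names - real code would use the database driver and the endpoints Azure actually assigns):

```python
PRIMARY = "myapp-primary.database.example.net"
SECONDARIES = ["myapp-eu-west.database.example.net",
               "myapp-us-east.database.example.net"]

def connect(endpoint: str, reachable: set) -> str:
    """Stub: 'connects' only if the endpoint is currently reachable."""
    if endpoint not in reachable:
        raise ConnectionError(endpoint)
    return f"connected:{endpoint}"

def connect_with_failover(reachable: set) -> str:
    """Try the primary first, then each secondary in turn."""
    for endpoint in [PRIMARY] + SECONDARIES:
        try:
            return connect(endpoint, reachable)
        except ConnectionError:
            continue
    raise ConnectionError("no endpoint reachable")

# Simulate a regional outage: the primary is down, so the app
# falls through to the first reachable secondary.
print(connect_with_failover({"myapp-us-east.database.example.net"}))
# prints "connected:myapp-us-east.database.example.net"
```

The same ordering logic applies whether fail-over is automatic or manual: the application simply needs a deterministic list of endpoints to try.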
I haven't even scratched the surface of ASD and its business continuity features. I hope to return to this topic in a future article. As I've said before, everything about Azure is fantastically easy to use and configure (either through the Azure portal, Azure Powershell or the REST API), and this is certainly true with regard to its database features. If your data is important to you, then check out the features in Azure SQL Database.
As a Senior Software Engineer with many years' experience, I am involved in every aspect of the life-cycle of a piece of software, from design through to implementation, testing and delivery.
A question that I am often asked by various colleagues is "What makes a Senior Software Engineer?". There is no single or simple answer to this question. I am sure that every Senior Software Engineer will answer this differently. They will consider depth and breadth of knowledge or years of service amongst other traits. Both of these are perfectly reasonable and sensible answers. I would say that it all boils down to one trait.
A Junior Software Engineer builds using frameworks and architectures. A Senior Software Engineer builds the frameworks and architectures.
I think this statement cuts to the core of the difference between Junior and Senior. A Junior will take the frameworks and architectures that are available to them, and build applications with them. A Senior will build the frameworks and architectures used by the Juniors. They enable the Juniors to do their day-to-day job by building the tools and providing the structure they need.
Where I currently work, we have developed a mobile app for the car fleet sector. The mobile app needs to consume various services to retrieve and / or update data. These services need to be highly secure, available and scalable. The services also need to be consumed by web applications as well as the mobile app, so they need to be consumable by any device capable of using the HTTP protocol.
The final solution utilised a service bus architecture in conjunction with ASP.NET Web API. The service bus was bound to a web enabled listener which monitored new service requests as they were created, and routed the service request to the appropriate endpoint. The mobile app sent many different types of data to these services, so the services needed to be flexible enough to handle any type of incoming data, and be extensible enough so that additional data types could be added later downstream.
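The routing idea at the heart of that design can be sketched in miniature (in Python here, purely for illustration - the real system is ASP.NET Web API plus a service bus, and every name below is hypothetical). Handlers register against a message type, and the listener dispatches each incoming request to the matching endpoint, so new data types can be added downstream without touching the dispatcher:

```python
handlers = {}

def register(message_type: str):
    """Decorator: register a handler function for a given message type."""
    def wrap(fn):
        handlers[message_type] = fn
        return fn
    return wrap

@register("vehicle_check")
def handle_vehicle_check(payload: dict) -> str:
    return f"vehicle check stored for {payload['registration']}"

@register("mileage_update")
def handle_mileage_update(payload: dict) -> str:
    return f"mileage set to {payload['miles']}"

def dispatch(message: dict) -> str:
    """Route a message to its handler; unknown types are rejected."""
    handler = handlers.get(message["type"])
    if handler is None:
        raise ValueError(f"no endpoint for {message['type']}")
    return handler(message["payload"])

print(dispatch({"type": "mileage_update", "payload": {"miles": 42017}}))
# prints "mileage set to 42017"
```

Adding a new data type later is then just a matter of registering another handler, which is the extensibility property described above.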
It should be obvious that creating such an architecture is beyond what a Junior would be capable of producing, which is why a Senior should instead be tasked with creating such an architecture. Only someone with sufficient knowledge, design and architectural skill would be capable of architecting, designing and implementing such a complex piece of software. There are many moving parts requiring a deep understanding of the system and its interactions with the various other components. Appropriate abstractions need to be created, coupled with suitable design patterns, base classes and structure.
When I first started out as a software developer all those years ago as a novice straight from university, such a challenge would have scared me half to death. I wouldn't have known where to start. Now I relish such challenges, and enjoy building the frameworks and architectures which are used by the rest of the team. It takes time to gain the requisite skills, knowledge and confidence. Over time, you are slowly able to create bigger and more complex software systems. From your first "Hello world" to building an entire framework or architecture takes many years of continued learning and mastery of your craft.
So to become a Senior, you need to enable other software engineers. Enabling others is the path to becoming a truly great Senior Software Engineer.
This article follows on from my previous article[^], where I described various qualities that, whilst they may be absent from a job description, are nevertheless important and worthwhile trying to gauge in an interview scenario.
This article will describe a few of the common mistakes I have run into whilst interviewing candidates for the role of a software developer. The points I will raise could probably apply to any candidate interviewing for any role though, as they are quite general in nature.
- You had better have done some research on the company before turning up for the interview. I actually had a candidate turn up to an interview a few years ago who hadn't even looked at the company website. They knew nothing about the company or what we did. This is just plain rude. If a company is considering you for a role, it doesn't take much effort to do some basic research. I always do this; it's courteous and shows a level of diligence and respect. You should always be able to answer the question "So what do you know about our company?". If you can't, then go home.
- If you don't know the answer to a question, don't try to blag the answer. I don't tend to ask questions about syntax or such like as I think they are a waste of time. I tend to favour more open-ended questions that ask what experience you have with a particular technology, or what you understand by a particular term or concept e.g. what do you understand by Test Driven Development? If you don't know, it is far better to just say you don't know. Trying to blag the answer just leaves the interviewer with the impression that this is how you will approach your work if you were offered the role. That you would just blag your way through your projects within the business. This does NOT create a good impression.
- Rambling answers that don't really answer the question. Sometimes, if the candidate thinks they can answer the question, they will talk at great length and throw every buzzword into the answer that they can think of. So if the question was related to Test Driven Development, they might throw in Agile, Scrum and anything else that they think might earn them brownie points. Keep your answers concise and on-topic. Giving a rambling answer that veers across many other topics and goes on for too long is not good for anyone. Use an example, give analogies, draw on your own experience. But just make sure you answer the question. And as with the previous point, a rambling answer does NOT create a good impression.
- Always have a few questions to ask. At the end of most interviews it is common for the interviewer to ask the candidate if they have any questions they would like to ask. It shows that you are interested in the role if you have a few of these, and especially if one of them relates to what has been discussed during the interview as it shows that you were paying attention. Don't ask questions about salary as this shows you may be more motivated by money than the role. Ask instead about current projects or challenges faced by the development team for example. You could then follow this up with how your own knowledge and skills could help with these.
I have been interviewed many times, and so fully understand how nerve wracking the experience can be. I have had to write code, solve puzzles, fix an application that contained various errors, undertake aptitude tests, been grilled by technically very capable developers straight out of university, been interviewed by heads of department, directors, and everything in between. Mastering the black art of the interview is far from easy, but by following a few simple rules of thumb you can improve your chances of grabbing that dream role.
When interviewing a candidate for a developer role, we all know that we need to find out their technical abilities, their level of knowledge and their goals. I think these can be taken as a given. But there are several other often overlooked qualities that in my humble opinion are equally important.
Passion. Yes, we often hear about this one, but it's true. When I talk to a candidate I want to see them get genuinely excited about what they are talking about. I want to see the light come on behind their eyes and the fire igniting in their belly. They should be fully invested in what they do, and be looking to give it their all. What I don't want is a pedestrian 9 - 5 type person. Someone who thinks that putting in the required hours is sufficient.
Cares about what they do. Creating software (or indeed creating anything at all) requires a level of investment. It represents what you do, and how much you care about your craft. If my name is associated with something, I want it to be the best. It should be obvious to anyone looking at my code and the software that I have created, that I cared about it. I invested the time and energy to produce the best that I could in the time that I had. I didn't just throw something together, but instead that I crafted something that I could take pride in. If you don't take pride in what you do, then you can't care about it.
Going the extra mile. If you are passionate and care about what you do, then it should follow that you are willing to go the extra mile. That you are willing to make sacrifices to get the result that you want. This can be anything from reading up on a topic during your own time, getting into work early, leaving work a bit later or working through the occasional lunch. All of these things are sometimes necessary to ensure that you hit that deadline, that you meet that milestone.
I don't expect anyone to work long, silly hours or weekends. That's not what I'm saying. But I do expect someone to make the occasional sacrifice to bring a project in on time. If a project is slipping, then I would expect a developer to put in extra effort to try to pull it back. If they're not willing to make those sacrifices, then they don't really care about what they do. And more importantly, they don't really care about the rest of the team either. After all, a developer who works as part of a team, needs to consider how their input affects the output of the team. If they're not pulling their weight, then it's not just their own output that suffers, but that of the whole team.
I appreciate that these qualities are difficult to quantify and gauge during an interview, but I believe that they are important nonetheless. Unfortunately, it may take time to really gauge just how far someone meets these qualities. So whilst it's important to interview for the traditional abilities such as skill and knowledge, it's also important to gauge how invested and passionate they are, and how far they are willing to go to get the job done.
Have you ever been deep in a task, really focused and in the zone, only for someone to come along and say "Would you mind having a look at this problem for me please". This probably happens several times a day. And each time it happens, you lose time. You lose time while you try to get your head around the new issue you've been asked to look at, and you lose time again trying to retrace where you were previously so you can get yourself back in the zone on the original task.
The time it takes for you to re-focus on the original task (after having already lost time looking at the problem you were interrupted for) is called thrashing. It takes time for the brain to get back into gear and re-focus on what you were doing previously; you don't just switch immediately from one task to the next. This lost time is a constant cause of consternation.
Unfortunately, thrashing is inevitable. You are always going to be asked to look at other problems and issues, all whilst being deeply focused on your current task. But whilst it is inevitable, it can be reduced with a change of working culture.
At a previous company where I worked, the Development Team were only allowed to be interrupted in the afternoons. The mornings were off limits to all members of staff, except under exceptional circumstances. So basically, the developers were left alone in the mornings to get on with their work, allowing them to focus on their project work. In the afternoons, you were allowed to interrupt them to look at any other issues or problems that arose.
So if an issue was raised in the morning, the person would have to wait until the afternoon to raise it with the appropriate member of the Development Team.
Over the course of a typical day, thrashing can cost a developer 10, 20, 30 minutes. Over the period of a week, this can run into hours. It's not the time it takes to resolve the issue that is the problem, it's the time it takes for the developer to break focus and then re-focus that is the problem. It's inevitable that the unexpected will arise, that things will go wrong and break, and require the assistance of a member of the Development Team to resolve. That's a given. However, to mitigate the impact this has on the developer, and reduce the cost of lost time to the business, it's surely far better to schedule these times.
This is better for the developer (as they can focus on their project work during set periods without interruption), and better for the business (as it reduces the time lost due to thrashing).
So to beat the thrashing, schedule periods of time when the Development Team are not interruptable, and periods when they can be interrupted. Simples.
When I implemented the original image storage functionality for the mobile app by developing an ASP.NET Web API service, I knew that ultimately I wanted this functionality to use Azure Blob Storage (ABS). We already use many other Azure services (Service Bus, Functions, WebJobs etc) and so it seemed a natural fit to also use Azure for storing the images sent from the mobile app.
Initially I wasn't sure how ABS would integrate with our Web API services from an architectural point-of-view. After some advice from some highly respected colleagues (you know who you are - Andy Deacon and Steve Evans), I settled on the simplest and most effective approach: the mobile app uploads the images to ABS, then passes the blob ID into the backend service as part of the message that is created by the mobile app (all form submission data that is sent from the mobile app is packaged up into a message object which contains all the user-entered information). So with this in mind, I began exploring how this could be achieved.
Unfortunately, due to the pressures of timescales, I didn't have sufficient time to implement a solution using ABS. I wasn't familiar enough with it, and needed to spend some time researching around it and getting to grips with it. Maybe go through some example code and read through the documentation.
Now that I've finally got round to this, I've developed a complete suite of ASP.NET Web API services for uploading, downloading, listing and deleting blobs from ABS. And yet again, I am very impressed by just how rich the API is for integrating our Web API services with Azure. Setting up and configuring the ABS containers was straightforward. I created one for unit-testing and one for production. I added the ABS connection strings and container names to the web.config file (you don't want these hard-coded into your application code). I then created the necessary Web API controllers (and associated unit-tests) for allowing the mobile app to integrate with ABS.
The images are uploaded as serialised JSON objects (to enable the mobile app to consume the services), which are de-serialised by the Web API controllers. Once de-serialised into a type that is capable of integrating with ABS (such as a file stream), the necessary ABS API methods are invoked.
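That round trip can be sketched as follows (Python for illustration only; the production services are ASP.NET Web API, the JSON field names here are hypothetical, and I'm assuming a base64 text encoding for the binary content). The image bytes are encoded into the JSON message on the device, then de-serialised server-side into a stream suitable for a blob upload:

```python
import base64
import io
import json

def to_json_message(image_bytes: bytes, blob_name: str) -> str:
    """Package raw image bytes as a JSON payload the service can accept.
    JSON can't carry raw bytes, hence the base64 text encoding."""
    return json.dumps({
        "blobName": blob_name,
        "content": base64.b64encode(image_bytes).decode("ascii"),
    })

def to_stream(message: str):
    """De-serialise the JSON message back into a (name, stream) pair,
    ready to hand to a blob-storage upload API."""
    data = json.loads(message)
    return data["blobName"], io.BytesIO(base64.b64decode(data["content"]))

# Round trip a (fake) image payload.
name, stream = to_stream(to_json_message(b"\x89PNG...", "inspection-123.png"))
print(name)  # prints "inspection-123.png"
```

The design choice here is that the wire format stays plain JSON end to end, which is what allows the same services to be consumed by the mobile app and web applications alike.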
As I have come to expect from Azure, all of this functionality works seamlessly with the .NET ecosystem. The infrastructure for integrating with ABS is now code complete. All that is left now is to make the necessary changes to the mobile app to support these new services. These will be rolled out when we begin working on the new version of the mobile app (timescales TBD).
Azure is one of the best development platforms I have used in a long while. It's extremely powerful, has full support with Visual Studio and the .NET ecosystem, and is easy to setup and configure.
If you're building high volume enterprise applications which need to be scalable and available, then Azure is definitely worth a look.
This is the second multi-platform app that I have developed during the last 12 months. The apps have been developed using the cross-platform development environment Telerik Platform in conjunction with Apache Cordova and Kendo UI. They have been published to both Android and Apple stores.
All well and good.
In testing the app, several problems and defects were discovered. Some required additional development resource and were genuine defects in the code, but the majority were down to inconsistencies in the behaviour of Apple devices. That is to say, we discovered many problems during the testing cycle where the problem only applied to Apple, and not Android. In fact, we didn't discover a single unique fault on the Android platform at all.
Everything about Apple is convoluted, cumbersome and far more difficult than it needs to be. Contrast this with Android, which just works. From setting up the development accounts, to setting up the testing environment, to provisioning the metadata for testing, to making specific amendments to the app to cater for Apple only (such as the issues we found with the way Apple handles local database storage, or the way it handles UI interaction), the entire platform is a headache to work with as a developer.
If this was any other platform, I wouldn't work with it. I'm a developer, and my job is to create software, not to have to wrestle with the idiosyncrasies of a particular mobile platform that won't play by the rules, and insists on creating its own rules instead. It's like having to deal with a petulant teenager, rather than a mature adult. I'm quite surprised that the Apple platform exhibits so many idiosyncrasies when it should be a stable and mature platform by now.
At least Android works, that at least is something.
This is a scenario[^] I have touched on previously when discussing code coverage[^]: two units of software, such as two functions, each work as expected when unit tested independently, but produce entirely unexpected results when they interact with each other. This underscores why integration tests are every bit as vital to the production of high quality software as unit tests.
Without integration tests, you won't find out how the various pieces of software interact until they find their way into the end product. In which case you had better hope and pray your testing team finds the problems first. If they don't, then you can bet your last dollar that your customers will, and that is the worst outcome of all.
The build for our ASP.NET Web API services has over 200 unit tests, but also many integration tests that ensure that the various pieces all work together. This is why having 100% code coverage is not enough. Testing the various pieces of software in isolation is not sufficient. You also need tests that will mimic how the functionality is invoked by the end user within the end product. If you don't have such tests, then your testing coverage is quite simply inadequate.
When developing a new piece of software, you need to be mindful of how you intend to test it. This should not be an afterthought, but something that you are conscious of during the entire life-cycle of the new piece of software. If you are using a TDD approach, then this will form part of your process of software development. It is usually more difficult to retro-fit a unit testing framework around your code after it has been written than to do so from the very beginning. Even if you are not using a TDD approach, if you are in the habit of writing well designed software that adheres to the SOLID principles of software development, then applying a unit testing framework should not present many obstacles.
So by all means, have as many unit tests as your application requires, but also be mindful of how the various pieces of software will ultimately interact with each other in the real world when used by the end users.
Remember, if you're not writing unit tests, you're doing it wrong.
I wrote an article[^] recently about creating a strong development team. Complementary to that article, I think it's also important to build a team that strives for success. A team that wants to be the best. Where excellence is the determining factor in a project's success. A strong team of developers who are striving to create the best solutions is capable of anything.
Certain individuals are content with muddling along without ever really breaking into a sweat. They get the job done but will never set the world alight or go out with all guns blazing. They are happy to be mediocre. Close is good enough. The definition of success for these people is "It works".
I call this the Mediocre Mindset. They have low expectations and standards, and aren't willing to put in the extra effort to create something really exciting and breathtaking. They don't invest in themselves and don't put in the effort to keep their skills up-to-date. They are happy using that technology from years ago. It keeps them ticking over and that's good enough.
These people don't push any boundaries, challenge the status quo, think outside the box or put in extra effort to achieve a goal. They don't pull out all the stops and give it their all to meet a deadline. Accepting mediocrity as the standard for success will ultimately harm the business. It won't take much for your competitors to beat you squarely when your goal is "It works".
I would much rather have someone constantly questioning me, pushing me, challenging me. It is well known that in many sports the key to getting better is to participate with someone who is better than you. As a cyclist I know this only too well. If you cycle with people who ride at the same pace as you, you will simply continue to ride at the same pace. You won't get any faster. If you cycle with people who ride faster than you, then you'll get faster as you'll be forced to keep up with them. You may struggle at first, and it may take several weeks / months of hard effort and training, but eventually, you will be able to keep up with the faster riders. The improvements can be made if you have the desire to make them.
This same analogy applies equally well to software development (and probably most areas of human endeavour).
Surround yourself with people who won't accept anything less than the best as the definition of success. People who will strive to create the best solutions, will invest their time and energies researching new and emerging technologies, who propose new and exciting solutions and bring fresh ideas to the table. I want to see fire in someone's belly. I want to see their eyes light up when talking about a project.
Is there a better way to create that application? How can that legacy application be improved? How can we speed up that process? Can that manual task be automated in any way? These are people who are constantly looking for ways to improve the working environment, processes, tools and technologies.
What I cannot bear to hear is "Well that's the way it's always worked". As if that was somehow a sufficient explanation for never improving anything. By the same argument, why bother to drive to work, when you could get a horse and cart. After all, that works too right? The difference of course, is that one can make the same journey in much less time than the other. If time isn't a factor, then by all means, use a horse and cart to get to that meeting.
If you have individuals who meet the definition of the Mediocre Mindset then try pushing them, challenging them. See how they respond. Maybe they have never truly been challenged and therefore adopted and cultivated an attitude of low expectation. By pushing them and challenging them they may respond accordingly and rise up to the challenges you are giving them. In which case you have successfully raised them up from mediocrity. If they don't respond, then you may be in trouble. Maybe they need smaller challenges and more gentle pushing.
I believe that everyone can improve themselves. Everyone can push that little bit harder. Meet ever greater challenges. Whilst some people may already be highly responsive to such an environment, even those that are totally new can become supreme advocates if coaxed and coached in the right manner.
Given the right encouragement and positive feedback, people can become inspired to achieve greater goals beyond their normal expectations.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
This is something I have seen come up a few times in online forums and discussions. How much code coverage is enough? There isn't a simple, straightforward answer to this question.
Ideally you would be aiming for 100% code coverage, such that every line of code in the code-base is exercised by at least one unit test. But line coverage is not the only code coverage measurement.
I recently ran into an issue where a particular function was failing. I was surprised as the function was covered by several unit tests, and so I would have thought that any problems with the function would have been picked up by one or more of the unit tests. After some investigation I soon discovered that the problem was the result of the function being invoked with arguments that were causing it to fail. Whilst the arguments were perfectly valid, they were in a format that the function wasn't expecting.
Simply put, the output from the first function was the input to the second. And whilst both functions were unit tested independently, and both passed, what was missing was a unit test in which the first function's output was fed into the second. Such a test would have highlighted the issue much earlier.
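A minimal JavaScript sketch of this failure mode (the functions and formats here are hypothetical, not the actual production code): each function passes its own unit tests, but the format produced by the first is not the format expected by the second.

```javascript
// Hypothetical example: formatDate passes its unit tests,
// extractYear passes its unit tests, yet composing them fails.

// Produces dates as "DD/MM/YYYY".
function formatDate(date) {
    var d = ('0' + date.getDate()).slice(-2);
    var m = ('0' + (date.getMonth() + 1)).slice(-2);
    return d + '/' + m + '/' + date.getFullYear();
}

// Expects dates as "YYYY-MM-DD" and returns the year.
function extractYear(text) {
    var parts = text.split('-');
    return parseInt(parts[0], 10);
}

// Each function is covered in isolation:
//   formatDate(new Date(2017, 0, 31)) === "31/01/2017"  -- passes
//   extractYear("2017-01-31") === 2017                   -- passes
//
// But the composition, which is how production actually calls them,
// silently produces the day instead of the year:
var year = extractYear(formatDate(new Date(2017, 0, 31))); // 31, not 2017
```

A single integration test exercising the composed call would have caught the mismatch immediately, even though line coverage of both functions was already 100%.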
After finding the issue, it was quickly fixed and subsequent unit tests have now been written to test for this particular scenario. So it's always important to be aware of how data flows through your application. It is not sufficient to unit test all the functions in isolation, when in reality there exists a network of inter-connected functions all invoking each other in different ways.
So by all means, unit test your data layer and ensure that it gives the correct output from the specified input. But you also need to be sure that your data layer gives the correct results when invoked from your business layer, and that your business layer gives the correct results when invoked from the user-interface layer.
Measuring your code coverage by line coverage alone is a blunt instrument. Knowing how those functions are invoked, and testing those scenarios, is equally important. Basically, you need end-to-end coverage: verify that your data layer gives the correct results by invoking it from the user-interface layer and tracing the execution path all the way through the application.
It's not the quantity of your unit tests that is important, but the quality of those unit tests.
Just as Fred Brooks recounted the Mythical Man-Month in his famous 1975 essay, so this article will take up the equally mythical role of the Full Stack Developer. We've all seen the job adverts wanting a Full Stack Developer: that person who can craft visually stunning user-interfaces, write elegant, clean code, and build a highly scalable, lightning-fast production database. Is it just me, or does this seem rather far-fetched?
Much of this demand comes from companies wishing to reduce their costs, namely their staffing costs. Rather than advertise and employ for each of the roles separately, they try to minimise those costs by employing a single developer who can do all of them. The market then responds to this demand in the form of developers cross-skilling as best they can into areas that they are not familiar with, not experienced at, or just plain have no interest in. But to get a job they are forced to assume the role of Full Stack Developer, as that is what the market demands.
I don't care who you are, or how good you think you are. No one is equally adept at all of these skills. They are all fundamentally different. Yes, they may all be involved in creating a software application, but that's where the similarity ends. If you ever needed brain surgery, you probably wouldn't want the cardiologist to take over in the event the brain surgeon was ill. But why not? After all, aren't they all just different forms of medicine?
But this is precisely what people expect from software developers. Rather than understanding that these are all different areas of speciality, requiring different skills and knowledge, they are lumped together into a general-purpose skill set. That brilliant graphics designer who can create stunning user-interfaces also has to cobble together a workable database, an area in which they have little interest or knowledge.
The Full Stack Developer is essentially a compromised role. For example, whilst the successful candidate may be a brilliant software developer, they may also have poor user-interface skills. And whilst they may get the job done and create an acceptable user-interface, it will lack the visual appeal and immediacy of a true specialist in the field. People use specialists all the time. When you visit the doctor and it turns out you require a consultant in a particular field, you will be referred to a specialist in that branch of medicine, and everyone is absolutely fine with that.
Yet for some unknown reason, the software industry is being driven by an insatiable demand for general-purpose jacks-of-all-trades, when in reality what it really needs is more specialists. Software is an increasingly diverse, complex, growing and specialised industry. It's an industry that covers the web, artificial intelligence, database technology, avionic software, mobile apps, the Internet of Things etc. It is a huge and ever expanding industry. As such, people tend to specialise in areas where they have an interest and are passionate, just like in so many other industries.
If you hire a Full Stack Developer, don't be surprised if your applications come with weaknesses that mirror the gaps in the skill set of the person you hired to do the job. What's worse, those weaknesses may be in areas you can't see, e.g. the database or the code. In which case, you have no idea how good or bad the application really is. Until it fails, that is.
In my previous article[^] I described how I resolved a dependency problem between two different projects using NuGet: an assembly created by one project was required as a dependency by another. The build of the donor project created and published a package that was consumed by the build of the recipient project.
Although this all worked perfectly, the one slight drawback was that my solution relied on a network share as the location where the packages were published. This didn't seem very satisfactory. A better solution would be to publish the packages to an HTTP endpoint, i.e. a web-based package repository of some kind. The package repository needed to be private: the assemblies I needed to publish weren't intended for general use, only for the other members of the development team in our own applications. So a public NuGet repository wasn't an option.
There are several ways of achieving this. Visual Studio Team Services (VSTS) package management, a private NuGet server or a third party package management service such as myget[^] all provide the ability to host your own private packages. The last option incurs costs (albeit fairly trivial costs unless you are scaling up), and we don't currently use VSTS services. So that left a private NuGet server as the proposed solution.
This works by creating your own ASP.NET web application, which you then host on your internal network or cloud infrastructure. The key feature in creating this web application is that you must add the NuGet.Server package to it. This allows the web application to serve as a package manager. A full description of how to create this web application can be found here[^].
It didn't take long before I had the web application up and running. The next step was to update both the donor build script (to publish the package to the newly created NuGet server), and the recipient build script (to install the package from the NuGet server).
There are two settings in the web.config file that are worth mentioning.
<add key="requireApiKey" value="true"/> - requireApiKey - by default this is set to true, meaning you need to specify an API key when pushing / deleting packages. This ensures that only authorised applications (those you have entrusted with the private API key) can push and / or delete packages on your NuGet server. In our case, the NuGet server is hosted inside our firewall and only accessed by our build scripts, so we didn't require this functionality and I set the value to false accordingly.
<add key="allowOverrideExistingPackageOnPush" value="false"/> - allowOverrideExistingPackageOnPush - by default this is set to false, indicating that you cannot push the same package version to the NuGet server more than once. I ran into this behaviour while manually queuing builds to test that they were correctly publishing the package, as I was getting an error when attempting to publish the same package twice. For now it's set to true while I'm still testing things out and manually queuing builds, but I'll probably reset it to its default once the server has been up and running for a while.
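Taken together, the relevant appSettings entries in our web.config currently look something like this (the values reflect the choices described above):

```xml
<appSettings>
  <!-- No API key required: the server sits behind our firewall
       and is only reachable by our build scripts -->
  <add key="requireApiKey" value="false" />
  <!-- Temporarily allow re-pushing the same package version
       while manually queuing test builds -->
  <add key="allowOverrideExistingPackageOnPush" value="true" />
</appSettings>
```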
Publishing your assemblies to a NuGet server simplifies your DevOps process considerably. Each time the donor build is triggered, a new version of the package is created and published to the NuGet server. It is then up to the recipient build to determine which version of the assembly (or package) it wishes to consume. The recipient build does not automatically pull the latest package; this is a manual intervention under the control of DevOps.
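In practice, that version selection is just an edit to the package entry in the recipient solution's packages.config; a sketch (the package id and version here are illustrative):

```xml
<?xml version="1.0" encoding="utf-8"?>
<packages>
  <!-- Pin to the tested version; bump deliberately after regression testing -->
  <package id="OurCompany.SharedAssembly" version="1.2.0" targetFramework="net45" />
</packages>
```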
When the recipient build is ready to use the new version of an assembly, it will do so in a development environment first to make sure there are no breaking changes, and to undertake sufficient regression testing. This is also where unit tests come into their own. If all your unit tests pass with the new package, then you can be fairly confident that everything works. The caveat being, your level of confidence is directly proportional to the quality of your unit tests.
The NuGet server works exactly as expected, and each build publishes / installs packages to and from the NuGet server perfectly. Yet again, the integration between the Team Foundation Server (TFS) 2015 build scripts and the private NuGet server is as tight as you would expect from Microsoft. They just work, and that's all you can ask for.
modified 2-Mar-17 6:54am.
One of our .NET solutions recently needed to consume an assembly produced by one of our other solutions. So the output from one solution became an input to the other. At first I thought I'd add a build step to the Team Foundation Server 2015 (TFS2015) build that simply copied the file from one solution across to the other. But this didn't seem like a very good solution. For starters, the build would be copying a development version of the assembly which hadn't been properly tested (although the build had executed various unit tests against it). Also, this rather basic approach didn't allow any control over the version of the assembly consumed by the consuming solution.
A far better proposal was to use NuGet for this. After all, resolving dependencies in a structured manner is precisely what NuGet does. So I investigated how to achieve this. The basic process is for the donor solution to package and publish the assembly to a known location. The recipient solution then installs the assembly from this location.
So first off, I needed to add two additional build steps to the TFS2015 build process of the donor solution.
- NuGet Packager - creates the NuGet package from the specified project
- NuGet Publisher - publishes the NuGet package to the specified location (in my case I published the NuGet package to a network share)
This was the easy bit, and I got this working pretty quickly. After a few test builds I was happy that the donor build was publishing the NuGet package and versioning it to the specified location.
The second part was to add a build step to the recipient solution which would install the NuGet package from this location.
- NuGet Installer - installs the NuGet package into the location specified (the recipient solution's package folder).
This part proved to be a bit trickier as I wasn't sure what the correct way of doing this was. Do I create a single folder for all the NuGet packages? Or create a separate folder for each where the folder name contains the version number? I also wasn't sure what format the NuGet installer would be expecting. So I had to try various options, including changing the NuGet restore parameters, adding a NuGet.config file and updating the packages.config file. I opted to specify the NuGet source location and package directory as NuGet arguments on the NuGet Installer build step.
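For reference, the NuGet.config route that I tried along the way looks something like this, a minimal sketch pointing the package source at the network share the donor build publishes to:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- The network share the donor build publishes packages to -->
    <add key="InternalShare" value="\\network\share\nuget" />
  </packageSources>
</configuration>
```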
My NuGet arguments look something like this.
-verbosity detailed -source "\\network\share\nuget" -packagesdirectory "Solution\Main\packages"
After a certain amount of trial and error, and reading through the online documentation for NuGet, I eventually managed to get the assembly to install in the recipient build.
It's definitely worth spending the time to figure out how NuGet works, as it provides a very good solution for handling assembly dependencies between your various solutions. And of course, the TFS2015 build system has excellent support for NuGet, so it works seamlessly within the Microsoft development ecosystem.
After volunteering earlier in the year to document the coding standards for the development team, I have eventually managed to honour that commitment. It certainly hasn't been an easy task either. It wasn't until I started that I realised just what a substantial task I had volunteered myself for.
We have several applications for which we are responsible. These are mainly web and mobile applications. The primary programming languages are C# (for all the Web API RESTful services), VB.NET (for the legacy back-office enterprise application) and HTML / CSS / Javascript (for the mobile apps).
I focused primarily on documenting C# and Javascript. It is the long term goal to eventually re-write the legacy VB.NET web application, and move the team entirely over to C# development, so I didn't want to waste any effort documenting coding standards for a language that we will eventually stop using.
The most difficult part of drafting a coding standards document is deciding what level of detail to go into. Too much detail and you risk stifling creativity by having developers slavishly follow the numerous standards to the letter. Too little detail and you risk inconsistent code, by giving insufficient guidance as to what constitutes acceptable code.
So I tried to strike a balance between these two competing demands to create a document that allowed the developer to be creative whilst simultaneously giving enough guidance so as to create consistent code that conformed to best practice.
The document covered areas including naming conventions, layout and organisation, language features, best practices, architecture and design. The aim of any coding standards document is to bring consistency, so that code produced by developer A will look the same as that produced by developer B. Even though the actual code that either developer produces will be different, it should look the same in terms of the criteria mentioned above.
The document will be a perpetual work in progress. Rather than a static document that rarely gets updated, I'm aiming for one that is fluid and can (and should) be updated when necessary.
Another question that arose is who should own the coding standards document, and how should it get updated? Currently, yours truly owns the coding standards document, but it will be updated by consensus. If something contained within the coding standards document needs to be updated, then this will be agreed by the team as a whole, and not just enforced by a single individual.
The first draft of the coding standards document has now been released for the rest of the team to provide feedback, and any updates made as necessary. After that, the document will be put into production and updated thereafter whenever required.
As I near completion of the latest version of the mobile app I have been working on recently, I can take the time to reflect on the architectural challenges that I faced, and how I conquered them.
The mobile app was developed for the fleet management sector and was a complete re-write of the existing offering, with many moving parts. It allows users to send requests from their mobile devices so that the data they submit can be processed by the back-end line-of-business application: booking an MOT or service, updating a mileage, or completing an inspection, for example. So the challenge was to devise an architecture that would guarantee this data arrived at its destination, was capable of scaling to meet future demand, and was highly responsive. You don't want to guarantee delivery of data if doing so becomes a time-consuming process that gives the user the impression of a sluggish application. Conversely, you don't want a highly responsive application which cannot guarantee delivery of the data, or where the data arrives corrupted.
Not an easy challenge by any stretch of the imagination.
To make matters even more difficult, the back-end line-of-business application is a legacy VB.NET application built around an equally legacy version of SQL Server. I had to factor in these constraints from the outset as they were critical to the overall architecture.
The first decision was what technology to use to implement the services. Although I have used WCF (Windows Communication Foundation) extensively in the past, we needed a technology built around HTTP that could easily consume JSON payloads, and the services would be consumed from mobile apps implemented using Apache Cordova and Javascript. So the decision was made to go with ASP.NET WebAPI. This would allow us to build up the necessary suite of services using HTTP as the transport protocol (the clients being mobile apps, where HTTP is ubiquitous) and exchange information using JSON rather than XML, JSON being a far closer fit when the client is written in Javascript.
All services required by the mobile app would be implemented using ASP.NET WebAPI and all data would be exchanged using JSON.
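On the client side, a call from the Cordova app then amounts to serialising a plain JavaScript object and posting it to a Web API endpoint. A simplified sketch (the endpoint, field names and helper functions here are illustrative, not the actual app code):

```javascript
// Builds the JSON payload for a hypothetical mileage-update request.
function buildMileageUpdate(vehicleId, mileage) {
    return JSON.stringify({
        vehicleId: vehicleId,
        mileage: mileage,
        submittedUtc: new Date().toISOString()
    });
}

// Posts a JSON payload to the Web API. The XMLHttpRequest is injected
// so the function can be exercised without a network (and in unit tests).
function postJson(url, payload, xhr) {
    xhr.open('POST', url);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(payload);
}

// Usage from the app:
// postJson('/api/mileage', buildMileageUpdate('V123', 45210), new XMLHttpRequest());
```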
The next decision was where to host the WebAPI services? It was suggested (by yours truly) that we should look into using Azure for our hosting. Although we already had hosting with another supplier, it was agreed that we would use Azure for hosting as we were already looking into other areas of the Azure development platform. Although it is not strictly necessary to host your services on Azure to reap the benefits and have access to the many other services it has to offer, it's fair to say that they just work better if you do.
The infrastructure offered by Azure would be vastly superior to any we had in-house or with our other hosting supplier. I added a separate deployment for Azure to our TFS 2015 build process. After some initial configuration to allow the build process to access the Azure hosting environment, you are then good to go. This build process doesn't automatically deploy to Azure, as this is our production environment. Instead, deployments to Azure are triggered on an ad-hoc basis when needed.
The next challenge was how to guarantee that data sent from the mobile app would be received by the back-end line-of-business application? The levels of resilience needed by the app would require a service bus architecture. All messages sent from the mobile app would be added to an Azure Service Bus queue, where they could be subsequently picked up and processed. A service bus architecture has many advantages over traditional service delivery.
- Far higher degree of resilience
- The disconnected nature of a service bus means that you are not waiting for a response from the server (fire-and-forget)
- Able to process far higher loads
- Able to scale massively if necessary
- You pay for what you use
- Azure Service Bus has excellent integration with the .NET ecosystem, so you can leverage its services from a .NET application with ease
Plus many more.
So I implemented a WebAPI service that was capable of adding messages to the Azure Service Bus. Each time data was submitted from a mobile app it would invoke this service.
I next needed to decide how I would retrieve the messages placed on the Azure Service Bus. Although it is perfectly possible to write an application that listens to the Service Bus for incoming messages, it seemed a far better idea to use an Azure Function bound to the Service Bus, so that each time a message was added to the queue it would invoke the function. Implementing the listener as an Azure Function reduces the burden on our local infrastructure and means the listener is available at all times.
The next big challenge was how to ensure the data received from the mobile app was in a meaningful state and could be processed by the back-end line-of-business application. All data sent from the mobile app contained only a fraction of the data needed for it to be processed by the back-end line-of-business application. It became necessary therefore to supplement the data for it to be of any use to the back-end line-of-business application.
This required the addition of a separate service that would take the bare-bones incoming data from the mobile app and supplement it with further data before writing it into the back-end line-of-business application database. The development of such a large, enterprise architecture was far from straightforward and had more than its fair share of challenges, but each was met with steely determination until a solution was found and developed. It is not easy trying to mentally unpack and unpick such a large, unwieldy and difficult set of problems; many times I had to take a step back to give them due consideration. Architecture is a difficult enterprise, made even harder with so many moving parts.
This project was certainly one of the most enjoyable I have worked on in a long time. It gave me great exposure to the Azure platform from a development perspective, and to service bus architecture, Azure Service Bus in particular. Getting to work on such a variety of problems, shiny technologies and architectural patterns was great fun, and I enjoyed every minute of it.