Well, it's been a long time coming, but the app I've been working on was finally released to the Apple and Google app stores this morning. We'll inform our customers this week, and that's when we'll start to see the fruits of our labours (and hopefully some positive feedback).
The app has been developed using Telerik Platform in conjunction with web technologies such as HTML, CSS and JavaScript. The front-end controls use Kendo UI and implement the MVVM pattern for binding to the respective JavaScript properties. The app employs Apache Cordova for cross-platform development, allowing us to target multiple mobile platforms from a single codebase.
All functionality is served to the app via ASP.NET Web API RESTful services hosted on Azure. These Azure services employ an Azure Service Bus to ensure scalability and responsiveness to the user. The app itself contains no logic or functionality of its own (nor should it). All the business rules and data are served via services.
During the testing lifecycle, many issues were discovered and fixed. Some were small, some not so small. The testing cycle took several months, as many people from around the business were involved. And of course, there were the usual last-minute changes to consider too ("can we have that in blue?").
The app has been developed for the Fleet Management sector and allows registered drivers to perform tasks such as updating their mileage and requesting a booking, MOT or service. They can also submit vehicle inspections, contact their account manager, contact us in the event of a breakdown, and use many other driver-related services.
But at long last, the app has hit the app stores and can be downloaded and installed. To use the app, however, you need an account on our system; otherwise there is no way for you to log in. It's been a long time coming, but the app has finally been released (some people might say it escaped!).
Time to get on with other things now.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
When you're driving down the street, you presumably always err on the side of caution. When waiting to exit a junction or roundabout, you wait until it's safe to pull out into the traffic, even if an approaching car is indicating to turn into your junction. As you approach a junction and see a car waiting to pull out, you instinctively keep a cautious eye open in case the car suddenly decides to pull out.
When you learn to drive a car, you are taught to drive defensively, and most people continue to drive this way throughout their lives. Expect the unexpected; never assume anything or take anything for granted. When we drive, we assume that everyone around us can make mistakes.
Defensive Programming works along the same lines. Your code should always expect the unexpected, and never make any assumptions. This is the he(art) of Defensive Programming.
Wikipedia: Defensive programming is a form of defensive design intended to ensure the continuing function of a piece of software under unforeseen circumstances. Defensive programming practices are often used where high availability, safety or security is needed.
Never trust input from a user or an external application. Both are completely outside the control of the programmer, and therefore you have absolutely no control over what will be input. If you don't control the input, then you must assert its validity.
Never assume the user will enter valid input. In fact, it is far safer to assume the exact opposite. Assume the user will enter complete garbage, and ensure this garbage is rejected by the application with a suitable error message. This immediately makes the application more robust by protecting it against users entering garbage for input. If the user is supposed to enter a date, ensure that this is all they can enter. If the user is supposed to enter a number, ensure this is all they can enter. I'm sure you get the idea. The same rule applies to inputs coming from external systems. If your application integrates with another system, ensure that any inputs are stringently asserted beforehand.
Use Assert() statements to ensure the inputs meet the expected types and values, and only proceed if they do; otherwise throw a meaningful error. Be as strict as necessary: only values that meet the exact format expected by the application should be allowed through. Everything else should be rejected. Rather than asserting for what is invalid, assert for what is valid, and process accordingly. There are probably far more ways for an input to be wrong than there are for it to be right, so it is better to assert for the valid cases.
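As a minimal sketch of this approach (the method name and the expected date format are purely illustrative), a guard clause might look like this:
using System;
using System.Globalization;

public static class InputGuard
{
    // Hypothetical example: accept only the one date format the
    // application expects, and reject everything else with a
    // meaningful error.
    public static DateTime ParseBookingDate(string input)
    {
        if (string.IsNullOrWhiteSpace(input))
            throw new ArgumentException("A booking date is required.", nameof(input));

        DateTime date;
        if (!DateTime.TryParseExact(input, "yyyy-MM-dd",
                CultureInfo.InvariantCulture, DateTimeStyles.None, out date))
            throw new ArgumentException("The date must be in yyyy-MM-dd format.", nameof(input));

        return date;
    }
}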
Before consuming a resource, ensure that the resource is valid. For example, when accessing data from a database connection, check that the database connection is valid and is open.
SqlConnection connection = new SqlConnection("myDatabaseConnectionString");
connection.Open();
// Verify that the connection actually opened before using it.
if (connection.State != ConnectionState.Open)
    throw new InvalidOperationException("The database connection is not open.");
SqlCommand cmd = new SqlCommand("myStoredProcedure", connection);
When reading values from a database, don't assume that there is any data, or that the data is valid. There may be no rows, or the values returned may be null, for example.
SqlDataReader reader = cmd.ExecuteReader();
// Check that a row exists, and that the value is not null, before reading it.
string firstName = reader.Read() ? reader["firstName"] as string : null;
Don't assume that a variable contains a valid value, or that it has even been properly initialised.
MyMoneyClass money;                 // declared but never initialised
decimal funds = money.GetFunds();   // fails - the variable holds no valid instance
Defensive Programming is a mindset as much as it is a programming paradigm. Assume nothing, assert everything, expect the unexpected.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
modified 29-Jun-17 5:03am.
|
|
|
|
|
I'm celebrating my first year at Grosvenor as a Senior Software Engineer. The year has gone by so quickly, and you don't often take the time to sit back and savour your accomplishments. I thought I would break this rule and do just that. So here is a (very) brief summary of just a few of the key projects I've been involved in during my first year at Grosvenor.
I've been involved with their mobile app development, which gave me my first introduction to Telerik Platform, a cross-platform mobile development platform that uses Apache Cordova. I've previously used Xamarin for such things, so it made a nice change to learn a new mobile platform technology. Under the covers, Telerik Platform uses web technologies, i.e. HTML, CSS and JavaScript. The UI controls are built using the Kendo UI framework and implement the MVVM pattern to bind the UI controls to the corresponding JavaScript properties. Having previously used the MVC design pattern with ASP.NET, it was nice to use a different pattern. I have to say that I found the MVVM pattern very simple and straightforward to use.
I introduced DevOps using Team Foundation Server (TFS). I set up and configured builds for the key applications, implementing continuous integration and continuous deployment as part of these processes. There are different endpoints for development, staging and production, each with its own TFS deployment configuration. The build processes are quite complex, involving over a dozen separate build tasks. We now have uniform and consistent builds across all products in the business. Whereas previously these applications were built and deployed manually by a developer, the process is now entirely automated. This ensures consistency between builds, not to mention simplifying the process and reducing the manual burden on the developers.
I also introduced unit testing into the software development life-cycle. This has been a major change to the way software is developed. Unit tests are used both during development and as part of the build process. All new code must have associated unit tests, and these are checked in and executed as part of the build. The build performs a code-coverage analysis, giving detailed reporting of the areas of code that are not covered by unit tests. The minimum code coverage is 70%; at the time of writing, the coverage across the application is over 90%.
None of these processes existed until I implemented them.
When the decision was made to re-develop the mobile app offering, the key functionality driving the new proposition was integrating the mobile app with the enterprise applications at the back-end. Achieving this required architecting an entire suite of ASP.NET Web API RESTful services utilising a service bus architecture. All the RESTful services consume and return data in JSON format. The architecture is highly scalable and available, due in large part to its substantial use of Azure services, including hosting, Service Bus, Functions, WebJobs and a SQL database. As the architect and implementer of this solution, I feel immensely proud.
I've thoroughly enjoyed the challenges, projects and the people I have worked with over the previous year and I hope the next year brings many more challenges.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
I was recently reading an article on this subject which included feedback from other software architects. What was interesting was the lack of consensus on the topic. There were quite a few strong opinions raised on both sides of the discussion.
As a professional software architect, should you also write code? The argument goes that if you aren't writing code, you become increasingly detached from the applications you are designing and architecting. This leads to the Architecture Astronaut[^], a term coined by Joel Spolsky back in 2001. The Architecture Astronaut constantly tries to think in higher and higher (and increasingly less relevant) abstractions. The end result is that the role performed by those particular architects becomes redundant.
The counter argument is that by continuing to write code, you keep your development skills up-to-date and therefore maintain a greater degree of relevance. After all, to be a good software architect, you also need to know how to implement good software systems, right?
I must say I'm quite divided on the subject. I agree that as a software architect there is definite merit in continuing to hone your development skills and ensuring they are kept up-to-date. However, is it necessary to write production code to do this? Keeping your skills relevant and up-to-date is one thing, but shipping production-strength code is another.
A software architect doesn't write code in the same quantity as the software developer. This should be fairly obvious. If your primary function within the organisation is software architect, then you will naturally spend most of your time on architecture related activities. If your primary function is software developer, then you will spend most of your time on development related activities.
So it should come as no surprise that the software architect who specialises in architecture, should be better at architecture than they are as a developer. And conversely, the software developer should be better at development than they are at architecture.
So I would conclude that a software architect should most definitely keep their development skills relevant and current, but that this shouldn't necessarily involve writing code that is going to ship to paying customers. I'm sure there are many less critical applications (such as internal tools) that would allow the software architect to keep their skills current without compromising the quality and integrity of customer-facing applications.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
Critical to any business is its data. Data is king. So it's vitally important to have a plan to restore your data should anything untoward happen to it, whether accidental user error, application error, an outage, fire or flood. There are many ways in which data can be lost or its integrity compromised.
So it's important to ensure you have regular backups, and that you perform regular test restores of that data. After all, you don't want to find out after you have lost all your data that there is a problem with your backup, making it impossible to restore.
If you are using Azure SQL Database (ASD) for your data storage, there are a range of options available to you. I won't go through all of them, as there are plenty of articles online already. I'll just describe the options I have chosen for our particular application and business needs.
ASD provides several business continuity features, including automated backups and optional database replication. Each feature has different characteristics for estimated recovery time (ERT) and potential data loss for recent transactions. Understanding these ensures you can make an informed decision with regard to the needs of the business.
The business continuity needs of the business will depend on several factors including:
- Is the data mission critical?
- Is the data bound to an SLA? Will the loss of data result in financial liability?
- Does the data have a low rate of change? (the data changes infrequently such that losing data for a certain period of time is acceptable)
- Is the data cost-sensitive?
In conjunction with the estimated recovery time (ERT) mentioned earlier, there are two other important factors to understand when considering business continuity:
- Recovery Time Objective (RTO) is the maximum acceptable time before the application fully recovers from a disruptive event
- Recovery Point Objective (RPO) is the maximum amount of recent data updates (time interval) the application can tolerate losing when recovering after the disruptive event
ASD automatically creates database backups at no additional charge. They occur straight out of the box; you don't need to do anything to make them happen. Database backups are an essential part of any business continuity plan because they protect your data from accidental corruption or deletion. If you need to keep your backups longer than the default storage period, you can configure a long-term backup retention policy. The default retention period on the Basic tier is 7 days, whilst for the Standard and Premium tiers it is 35 days.
ASD creates full, differential and transaction log backups. The transaction log backups generally occur every 5-10 minutes, with the frequency based on the performance level and the amount of database activity. Transaction log backups, in conjunction with full or differential backups, allow you to restore to a specific point in time on the same server that hosts the database.
In addition to getting automated backups, I then configured Geo-Replication. Active Geo-Replication (AGR) enables you to configure readable secondary databases in the same or different data centre locations (or regions). Secondary databases are available for querying and for fail-over in the case of a data centre outage, or in the event of being unable to connect to the primary database. When you configure a secondary database, you give it a name and login credentials, as you would with any other database. This allows you to connect to a secondary database in exactly the same way as you would the primary (or any other) ASD. After a fail-over, the new primary has a different connection endpoint.
So if a disruptive event causes an outage of the data centre that hosts your ASD, you can fail over to a secondary database in a completely separate region. You can configure up to four of these secondary databases and initiate fail-over to any one of them. Once fail-over is activated, the chosen secondary becomes the new primary database, and all other linked secondary databases automatically link to the new primary. You can configure automatic or manual fail-over, whichever best suits the needs of the application and the business.
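To give a flavour of what this means for application code, here is a minimal sketch (the connection strings and class name are hypothetical) of falling back to a readable secondary when the primary can't be reached:
using System.Data.SqlClient;

public static class ResilientConnection
{
    // Try the primary endpoint first; if it is unreachable (e.g. during a
    // data centre outage), fall back to a readable secondary so that
    // read-only work can continue while fail-over is initiated.
    public static SqlConnection Open(string primaryConnectionString,
                                     string secondaryConnectionString)
    {
        try
        {
            var connection = new SqlConnection(primaryConnectionString);
            connection.Open();
            return connection;
        }
        catch (SqlException)
        {
            var connection = new SqlConnection(secondaryConnectionString);
            connection.Open();
            return connection;
        }
    }
}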
I haven't even scratched the surface of ASD and its business continuity features; I hope to return to this topic in a future article. As I've said before, everything about Azure is fantastically easy to use and configure (whether through the Azure portal, Azure PowerShell or the REST API), and this is certainly true of its database features. If your data is important to you, then check out the features in Azure SQL Database.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
As a Senior Software Engineer with many years' experience, I am involved in every aspect of the life-cycle of a piece of software. From design through to implementation and testing, I get involved in every part of the creation and delivery of a software project.
A question that I am often asked by various colleagues is "What makes a Senior Software Engineer?". There is no single or simple answer to this question. I am sure that every Senior Software Engineer will answer this differently. They will consider depth and breadth of knowledge or years of service amongst other traits. Both of these are perfectly reasonable and sensible answers. I would say that it all boils down to one trait.
A Junior Software Engineer builds using frameworks and architectures. A Senior Software Engineer builds the frameworks and architectures.
I think this statement cuts to the core of the difference between Junior and Senior. A Junior will take the frameworks and architectures that are available to them, and build applications with them. A Senior will build the frameworks and architectures used by the Juniors. They enable the Juniors to do their day-to-day job by building the tools and providing the structure they need.
Where I currently work, we have developed a mobile app for the car fleet sector. The mobile app needs to consume various services to retrieve and/or update data. These services need to be highly secure, available and scalable. The services also needed to be consumed by web applications as well as the mobile app, so they had to be accessible to any device capable of using the HTTP protocol.
The final solution utilised a service bus architecture in conjunction with ASP.NET Web API. The service bus was bound to a web-enabled listener which monitored new service requests as they were created, and routed each request to the appropriate endpoint. The mobile app sent many different types of data to these services, so the services needed to be flexible enough to handle any type of incoming data, and extensible enough that additional data types could be added downstream.
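A much-simplified sketch of that listener pattern (the queue name, property name and request types are illustrative, using the Azure Service Bus client library of the time) might look like this:
using System.Configuration;
using Microsoft.ServiceBus.Messaging;

string connectionString = ConfigurationManager.AppSettings["ServiceBusConnectionString"];
QueueClient client = QueueClient.CreateFromConnectionString(connectionString, "servicerequests");

var options = new OnMessageOptions { AutoComplete = false };
client.OnMessage(message =>
{
    // Each request carries a property identifying its type, allowing the
    // listener to route it to the appropriate endpoint handler.
    switch ((string)message.Properties["RequestType"])
    {
        case "MileageUpdate":  /* route to the mileage endpoint */  break;
        case "BookingRequest": /* route to the booking endpoint */  break;
        default: message.DeadLetter(); return;
    }
    message.Complete();
}, options);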
It should be obvious that creating such an architecture is beyond what a Junior would be capable of producing, which is why a Senior should be tasked with creating it. Only someone with sufficient knowledge, design and architectural skill would be capable of architecting, designing and implementing such a complex piece of software. There are many moving parts, requiring a deep understanding of the system and its interactions with the various other components. Appropriate abstractions need to be created, coupled with suitable design patterns, base classes and structure.
When I first started out as a software developer all those years ago as a novice straight from university, such a challenge would have scared me half to death. I wouldn't have known where to start. Now I relish such challenges, and enjoy building the frameworks and architectures which are used by the rest of the team. It takes time to gain the requisite skills, knowledge and confidence. Over time, you are slowly able to create bigger and more complex software systems. From your first "Hello world" to building an entire framework or architecture takes many years of continued learning and mastery of your craft.
So to become a Senior, you need to enable other software engineers. Enabling other software engineers is the path to becoming a truly great Senior Software Engineer.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
This follows on from my previous article[^], where I described various qualities that, whilst they may be absent from a job description, are nevertheless important and worth trying to gauge in an interview scenario.
This article will describe a few of the common mistakes I have run into whilst interviewing candidates for the role of software developer. The points I raise could probably apply to any candidate interviewing for any role, though, as they are quite general in nature.
- You had better have done some research on the company before turning up for the interview. I actually had a candidate turn up to an interview a few years ago who hadn't even looked at the company website. They knew nothing about the company or what we did. This is just plain rude. If a company is considering you for a role, it doesn't take much effort to do some basic research. I always do this; it's courteous and shows a level of diligence and respect. You should always be able to answer the question "So what do you know about our company?". If you can't, then go home.
- If you don't know the answer to a question, don't try to blag it. I don't tend to ask questions about syntax or suchlike, as I think they are a waste of time. I tend to favour more open-ended questions that ask what experience you have with a particular technology, or what you understand by a particular term or concept, e.g. what do you understand by Test Driven Development? If you don't know, it is far better to just say so. Trying to blag the answer just leaves the interviewer with the impression that this is how you would approach your work if you were offered the role: that you would blag your way through your projects within the business. This does NOT make a good impression.
- Don't give rambling answers that fail to answer the question. Sometimes, if the candidate thinks they can answer the question, they will talk at great length and throw in every buzzword they can think of. So if the question was related to Test Driven Development, they might throw in Agile, Scrum and anything else they think might earn them brownie points. Keep your answers concise and on-topic. A rambling answer that veers across many other topics and goes on for too long is no good for anyone. Use an example, give analogies, draw on your own experience, but make sure you answer the question. And as with the previous point, a rambling answer does NOT make a good impression.
- Always have a few questions to ask. At the end of most interviews, it is common for the interviewer to ask the candidate if they have any questions. Having a few shows that you are interested in the role, especially if one of them relates to what was discussed during the interview, as it shows you were paying attention. Don't ask questions about salary, as this suggests you may be more motivated by money than by the role. Ask instead about current projects or challenges faced by the development team, for example. You could then follow this up with how your own knowledge and skills could help with these.
I have been interviewed many times myself, and so fully understand how nerve-wracking the experience can be. I have had to write code, solve puzzles, fix an application that contained various errors, and undertake aptitude tests. I have been grilled by technically very capable developers straight out of university, and interviewed by heads of department, directors, and everything in between. Mastering the black art of the interview is far from easy, but by following a few simple rules of thumb you can improve your chances of grabbing that dream role.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
When interviewing a candidate for a developer role, we all know that we need to find out their technical abilities, their level of knowledge and their goals. I think these can be taken as a given. But there are several other, often overlooked, qualities that in my humble opinion are equally important.
Passion. Yes, we often hear about this one, but it's true. When I talk to a candidate, I want to see them get genuinely excited about what they are talking about. I want to see the light come on behind their eyes and the fire igniting in their belly. They should be fully invested in what they do, and be looking to give it their all. What I don't want is a pedestrian, 9-to-5 type of person; someone who thinks that putting in the required hours is sufficient.
Cares about what they do. Creating software (or indeed creating anything at all) requires a level of investment. It represents what you do, and how much you care about your craft. If my name is associated with something, I want it to be the best. It should be obvious to anyone looking at my code and the software that I have created, that I cared about it. I invested the time and energy to produce the best that I could in the time that I had. I didn't just throw something together, but instead that I crafted something that I could take pride in. If you don't take pride in what you do, then you can't care about it.
Going the extra mile. If you are passionate and care about what you do, then it should follow that you are willing to go the extra mile; that you are willing to make sacrifices to get the result you want. This can be anything from reading up on a topic in your own time, to getting into work early, leaving a bit later, or working through the occasional lunch. All of these are sometimes necessary to ensure that you hit that deadline, that you meet that milestone.
I don't expect anyone to work long, silly hours or weekends. That's not what I'm saying. But I do expect someone to make the occasional sacrifice to bring a project in on time. If a project is slipping, then I would expect a developer to put in extra effort to try to pull it back. If they're not willing to make those sacrifices, then they don't really care about what they do. And more importantly, they don't really care about the rest of the team either. After all, a developer who works as part of a team needs to consider how their input affects the output of the team. If they're not pulling their weight, then it's not just their own output that suffers, but that of the whole team.
I appreciate that these qualities are difficult to quantify and gauge during an interview, but I believe that they are important nonetheless. Unfortunately, it may take time to really gauge just how far someone meets these qualities. So whilst it's important to interview for the traditional abilities such as skill and knowledge, it's also important to gauge how invested and passionate they are, and how far they are willing to go to get the job done.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
Have you ever been deep in a task, really focused and in the zone, only for someone to come along and say "Would you mind having a look at this problem for me please?". This probably happens several times a day. And each time it happens, you lose time. You lose time while you try to get your head around the new issue you've been asked to look at, and you lose time again trying to retrace where you were previously so you can get yourself back in the zone on the original task.
The time it takes for you to re-focus on the original task (after having already lost time looking at the problem you were interrupted for) is called thrashing. It takes time for the brain to get back into gear and re-focus on what you were doing previously; you don't just switch immediately from one task to the next. This lost time is a constant cause of consternation.
Unfortunately, thrashing is inevitable. You are always going to be asked to look at other problems and issues, all whilst being deeply focused on your current task. But whilst it is inevitable, it can be reduced with a change of working culture.
At a previous company where I worked, the Development Team were only allowed to be interrupted in the afternoons. The mornings were off limits to all members of staff, except under exceptional circumstances. So basically, the developers were left alone in the mornings to get on with their work, allowing them to focus on their project work. In the afternoons, you were allowed to interrupt them to look at any other issues or problems that arose.
So if an issue was raised in the morning, the person would have to wait until the afternoon to raise it with the appropriate member of the Development Team.
Over the course of a typical day, thrashing can cost a developer 10, 20, 30 minutes. Over the period of a week, this can run into hours. It's not the time it takes to resolve the issue that is the problem; it's the time the developer spends cycling between focus and re-focus. It's inevitable that the unexpected will arise, that things will go wrong and break, and that the assistance of a member of the Development Team will be required to resolve them. That's a given. However, to mitigate the impact this has on the developer, and reduce the cost of lost time to the business, it's surely far better to schedule these times.
This is better for the developer (as they can focus on their project work during set periods without interruption), and better for the business (as it reduces the time lost due to thrashing).
So to beat the thrashing, schedule periods of time when the Development Team cannot be interrupted, and periods when they can. Simples.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
When I implemented the original image storage functionality for the mobile app by developing an ASP.NET Web API service, I knew that ultimately I wanted this functionality to use Azure Blob Storage (ABS). We already use many other Azure services (Service Bus, Functions, WebJobs etc) and so it seemed a natural fit to also use Azure for storing the images sent from the mobile app.
Initially I wasn't sure how ABS would integrate with our Web API services from an architectural point of view. After some advice from some highly respected colleagues (you know who you are - Andy Deacon and Steve Evans), it became clear that the simplest and most effective approach would be for the mobile app to upload the images to ABS, then pass the blob ID to the back-end service as part of the message created by the mobile app (all form submission data sent from the mobile app is packaged up into a message object which contains all the user-entered information). So with this in mind, I began exploring how this could be achieved.
Unfortunately, due to the pressures of timescales, I didn't have sufficient time to implement a solution using ABS at that point. I wasn't familiar enough with it, and needed to spend some time researching it and getting to grips with it, perhaps going through some example code and reading through the documentation.
Now that I've finally managed to get round to it, I've developed a complete suite of ASP.NET Web API services for uploading, downloading, listing and deleting blobs in ABS. And yet again, I am very impressed by just how rich the API is for integrating our Web API services with Azure. Setting up and configuring the ABS containers was straightforward. I created one for unit testing and one for production. I added the ABS connection strings and container names to the web.config file (you don't want these hard-coded into your application code). I then created the necessary Web API controllers (and associated unit tests) to allow the mobile app to integrate with ABS.
The images are uploaded as serialised JSON objects (to enable the mobile app to consume the services), which are de-serialised by the Web API controllers. Once de-serialised into a type capable of integrating with ABS (such as a file stream), the necessary ABS API methods are invoked.
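As a rough sketch of the upload path (the class, method and blob naming scheme are mine, using the Azure Storage client library), the service-side code looks something along these lines:
using System;
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public class ImageStore
{
    private readonly CloudBlobContainer container;

    public ImageStore(string connectionString, string containerName)
    {
        // The connection string and container name come from web.config.
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        container = account.CreateCloudBlobClient().GetContainerReference(containerName);
    }

    // Uploads the image bytes (de-serialised from the JSON payload) and
    // returns the generated blob ID so it can be referenced in the
    // message object later.
    public string Upload(byte[] imageBytes)
    {
        string blobName = Guid.NewGuid().ToString("N");
        CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
        using (var stream = new MemoryStream(imageBytes))
        {
            blob.UploadFromStream(stream);
        }
        return blobName;
    }
}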
As I have come to expect from Azure, all of this functionality works seamlessly with the .NET ecosystem. The infrastructure for integrating with ABS is now code complete. All that is left now is to make the necessary changes to the mobile app to support these new services. These will be rolled out when we begin working on the new version of the mobile app (timescales TBD).
Azure is one of the best development platforms I have used in a long while. It's extremely powerful, has full support with Visual Studio and the .NET ecosystem, and is easy to setup and configure.
If you're building high volume enterprise applications which need to be scalable and available, then Azure is definitely worth a look.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
This is the second multi-platform app that I have developed during the last 12 months. The apps have been developed using the cross-platform development environment Telerik Platform in conjunction with Apache Cordova and Kendo UI. They have been published to both Android and Apple stores.
All well and good.
In testing the app, several problems and defects were discovered. Some were genuine defects in the code and required additional development effort, but the majority were down to inconsistencies in the behaviour of Apple devices. That is to say, many of the problems we discovered during the testing cycle applied only to Apple, and not to Android. In fact, we didn't discover a single fault unique to the Android platform at all.
Everything about Apple is convoluted, cumbersome and far more difficult than it needs to be. Contrast this with Android, which just works. From setting up the development accounts, to setting up the testing environment, to provisioning the metadata for testing, to making Apple-specific amendments to the app (such as the issues we found with the way Apple handles local database storage, or the way it handles UI interaction), the entire platform is a headache for a developer to work with.
If this was any other platform, I wouldn't work with it. I'm a developer, and my job is to create software, not to have to wrestle with the idiosyncrasies of a particular mobile platform that won't play by the rules, and insists on creating its own rules instead. It's like having to deal with a petulant teenager, rather than a mature adult. I'm quite surprised that the Apple platform exhibits so many idiosyncrasies when it should be a stable and mature platform by now.
At least Android works, that at least is something.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
This is a scenario[^] I have touched on previously when discussing code coverage[^]: two units of software, such as two functions, work as expected when unit tested independently, but problems arise when the two functions interact with each other. This can sometimes produce entirely unexpected results, and it underscores why integration tests are every bit as vital to the production of high-quality software as unit tests.
Without integration tests, you won't find out how the various pieces of software interact until they find their way into the end product. In which case, you had better hope and pray your testing team finds the problems first. If they don't, then you can bet your last dollar that your customers will, and that is the worst outcome of all.
The build for our ASP.NET Web API services has over 200 unit tests, but also many integration tests that ensure the various pieces all work together. This is why having 100% code coverage is not enough. Testing the various pieces of software in isolation is not sufficient; you also need tests that mimic how the functionality is invoked by the end user within the end product. If you don't have such tests, then your test coverage is quite simply inadequate.
When developing a new piece of software, you need to be mindful of how you intend to test it. This should not be an afterthought, but something you are conscious of during the entire life-cycle of the new piece of software. If you are using a TDD approach, then this will form part of your process of software development. It is usually more difficult to retro-fit a unit-testing framework around your code after it has been written than to do so from the very beginning. Even if you are not using a TDD approach, if you are in the habit of writing well-designed software that adheres to the SOLID principles of software development, then applying a unit-testing framework should not present many obstacles.
So by all means, have as many unit tests as your application requires, but also be mindful of how the various pieces of software will ultimately interact with each other in the real world when used by the end users.
Remember, if you're not writing unit tests, you're doing it wrong.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
I wrote an article[^] recently about creating a strong development team. Complementary to that article, I think it's also important to build a team that strives for success. A team that wants to be the best, where excellence is the determining factor in a project's success. A strong team of developers striving to create the best solutions is capable of anything.
Certain individuals are content with muddling along without ever really breaking into a sweat. They get the job done but will never set the world alight or go out with all guns blazing. They are happy to be mediocre. Close is good enough. The definition of success for these people is "It works".
I call this the Mediocre Mindset. They have low expectations and standards, and aren't willing to put in the extra effort to create something really exciting and breathtaking. They don't invest in themselves and don't put in the effort to keep their skills up-to-date. They are happy using that technology from years ago. It keeps them ticking over and that's good enough.
These people don't push any boundaries, challenge the status quo, think outside the box or put in extra effort to achieve a goal. They don't pull out all the stops and give it their all to meet a deadline. Accepting mediocrity as the standard for success will ultimately harm the business. It won't take much for your competitors to beat you squarely when your goal is "It works".
I would much rather have someone constantly questioning me, pushing me, challenging me. It is well known that in many sports the key to getting better is to participate with someone who is better than you. As a cyclist I know this only too well. If you cycle with people who ride at the same pace as you, you will simply continue to ride at the same pace. You won't get any faster. If you cycle with people who ride faster than you, then you'll get faster as you'll be forced to keep up with them. You may struggle at first, and it may take several weeks / months of hard effort and training, but eventually, you will be able to keep up with the faster riders. The improvements can be made if you have the desire to make them.
This same analogy applies equally well to software development (and probably most areas of human endeavour).
Surround yourself with people who won't accept anything less than the best as the definition of success. People who will strive to create the best solutions, will invest their time and energies researching new and emerging technologies, who propose new and exciting solutions and bring fresh ideas to the table. I want to see fire in someone's belly. I want to see their eyes light up when talking about a project.
Is there a better way to create that application? How can that legacy application be improved? How can we speed up that process? Can that manual task be automated in any way? These are people who are constantly looking for ways to improve the working environment, processes, tools and technologies.
What I cannot bear to hear is "Well, that's the way it's always worked", as if that were somehow a sufficient explanation for never improving anything. By the same argument, why bother driving to work when you could take a horse and cart? After all, that works too, right? The difference, of course, is that one can make the same journey in much less time than the other. If time isn't a factor, then by all means use a horse and cart to get to that meeting.
If you have individuals who fit the definition of the Mediocre Mindset, then try pushing them, challenging them. See how they respond. Maybe they have never truly been challenged, and have therefore adopted and cultivated an attitude of low expectation. By pushing and challenging them, they may respond accordingly and rise up to the challenges you are giving them, in which case you have successfully raised them up from mediocrity. If they don't respond, then you may be in trouble; maybe they need smaller challenges and more gentle pushing.
I believe that everyone can improve themselves. Everyone can push that little bit harder. Meet ever greater challenges. Whilst some people may already be highly responsive to such an environment, even those that are totally new can become supreme advocates if coaxed and coached in the right manner.
Given the right encouragement and positive feedback, people can become inspired to achieve greater goals beyond their normal expectations.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
This is something I have seen come up a few times in online forums and discussions: how much code coverage is enough? There isn't a simple, straightforward answer to this question.
Ideally you would be aiming for 100% code coverage, such that every line of code in the code-base is exercised by at least one unit test. But line coverage is not the only code coverage measurement.
I recently ran into an issue where a particular function was failing. I was surprised, as the function was covered by several unit tests, and so I would have thought that any problems with the function would have been picked up by one or more of them. After some investigation, I discovered that the problem was the result of the function being invoked with arguments that were causing it to fail. Whilst the arguments were perfectly valid, they were in a format the function wasn't expecting.
Simply put, the output from the first function was the input to the second function. And whilst both functions were unit tested independently and both gave positive results, what was missing was a test in which the first function fed its output to the second. Such a test would have highlighted the issue much earlier.
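To illustrate (a hypothetical reconstruction, not the actual code), the situation looked something like this, and the final test below is the one that was missing:
using System;
using System.Globalization;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class Formatter
{
    // Produces "dd/MM/yyyy" dates - perfectly valid, and its own unit tests pass.
    public static string Format(DateTime date) => date.ToString("dd/MM/yyyy");
}

public static class Parser
{
    // Expects "yyyy-MM-dd" dates - this also passes its own unit tests.
    public static DateTime Parse(string text) =>
        DateTime.ParseExact(text, "yyyy-MM-dd", CultureInfo.InvariantCulture);
}

[TestClass]
public class RoundTripTests
{
    [TestMethod]
    public void FormatterOutput_IsAcceptedByParser()
    {
        // Exercises the two functions together; this test fails, exposing
        // the format mismatch that the isolated unit tests missed.
        var original = new DateTime(2017, 6, 29);
        Assert.AreEqual(original, Parser.Parse(Formatter.Format(original)));
    }
}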
After the issue was found, it was quickly fixed, and unit tests have since been written to cover this particular scenario. So it's always important to be aware of how data flows through your application. It is not sufficient to unit test all the functions in isolation when in reality there exists a network of inter-connected functions all invoking each other in different ways.
So by all means, unit test your data layer and ensure that it gives the correct output from the specified input. But you also need to be sure that your data layer gives the correct results when invoked from your business layer, and that your business layer gives the correct results when invoked from the user-interface layer.
Measuring your code coverage by line coverage alone is a blunt instrument. Knowing how those functions are invoked, and testing those scenarios, is equally important. Basically, you need end-to-end coverage: test that your data layer gives the correct results by invoking the user-interface layer and tracing the execution path all the way through the application.
It's not the quantity of your unit tests that is important, but the quality of those unit tests.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
Just as Fred Brooks recounted the Mythical Man-Month in his famous essay back in 1975, this article will take up the equally mythical role of the Full Stack Developer. We've all seen the job adverts wanting the Full Stack Developer: that person who can craft visually stunning user-interfaces, write elegant, clean code, and build a highly scalable, lightning-fast production database. Is it just me, or does this seem rather far-fetched?
Much of this demand comes from companies wishing to reduce their costs, namely their staffing costs. Rather than advertise and employ for each of the roles separately, they try to minimise those costs by employing a single developer who can do all of them. The market then responds to this demand in the form of developers cross-skilling as best they can into areas they are not familiar with, not experienced in, or just plain have no interest in. But to get a job they are forced to assume the role of Full Stack Developer, because that is what the market demands.
I don't care who you are, or how good you think you are. No one is equally adept at all of these skills. They are all fundamentally different. Yes, they may all be involved in creating a software application, but that's where the similarity ends. If you ever needed brain surgery, you probably wouldn't want the cardiologist to take over in the event the brain surgeon was ill. But why not? After all, aren't they all just different forms of medicine?
But this is precisely what people expect from software developers. Rather than understanding that these are all different areas of speciality, requiring different skills and knowledge, they are all lumped together into a general-purpose skill set. That brilliant graphics designer who can create stunning user-interfaces also has to cobble together a workable database: an area in which they have little interest or knowledge.
The Full Stack Developer is essentially a compromised role. For example, whilst the successful candidate may be a brilliant software developer, they may also have poor user-interface skills. And whilst they may get the job done and create an acceptable user-interface, it will lack the visual appeal and immediacy of a true specialist's work. People use specialists all the time. When you visit the doctor and it turns out you require a consultant in a particular field, you are referred to a specialist in that branch of medicine, and everyone is absolutely fine with that.
Yet for some unknown reason, the software industry is being driven by an insatiable demand for general-purpose jacks-of-all-trades, when in reality what it really needs is more specialists. Software is an increasingly diverse, complex, growing and specialised industry, covering the web, artificial intelligence, database technology, avionic software, mobile apps, the Internet of Things, etc. It is a huge and ever-expanding industry. As such, people tend to specialise in areas where they have an interest and are passionate, just like in so many other industries.
If you hire a Full Stack Developer, don't be surprised if your applications come with weaknesses that reflect the weaknesses in the skill set of the person you hired to do the job. What's worse, those weaknesses may be in areas you can't see, e.g. the database or the code. In which case, you have no idea how good or bad the application really is, until it fails, that is.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
|
|
|
|
|
In my previous article[^] I described how I resolved a dependency problem between two different projects using NuGet. An assembly created by one project was required as a dependency by another project. The build of the donor project created and published a package that was consumed by the build of the recipient project.
Although this all worked perfectly, the one slight drawback was that my solution relied on a network share as the location where the packages were published. This didn't seem very satisfactory. A better solution would be to publish the packages to an HTTP endpoint, i.e. a web-based package repository of some kind. The package repository needed to be private: the assemblies weren't intended for general use, only for the other members of the development team to consume in our own applications. So a public NuGet repository wasn't an option.
There are several ways of achieving this. Visual Studio Team Services (VSTS) package management, a private NuGet server, or a third-party package management service such as myget[^] all provide the ability to host your own private packages. The last option incurs costs (albeit fairly trivial ones unless you are scaling up), and we don't currently use VSTS. So that left a private NuGet server as the proposed solution.
This works by creating your own ASP.NET web application, which you then host on your internal network or cloud infrastructure. The key step in creating this web application is adding the NuGet.Server package to it, which allows the web application to serve as a package manager. A full description of how to create this web application can be found here[^].
It didn't take long before I had the web application up and running. The next step was to update both the donor build script (to publish the package to the newly created NuGet server), and the recipient build script (to install the package from the NuGet server).
There are two settings in the web.config file that are worth mentioning.
<add key="requireApiKey" value="true"/> - requiredApiKey - by default this is set to true. This means that you need to specify an API key when pushing / deleting packages. This ensures that only authorised applications (those you have entrusted with the private API key) can push and / or delete packages to your NuGet server. In our case, the NuGet server is hosted inside our firewall and only accessed by our build scripts. So we didn't require this functionality. So I set this value to false accordingly.
<add key="allowOverrideExistingPackageOnPush" value="false"/> - allowOverrideExistingPackageOnPush - by default this is set to false, indicating that you cannot push the same package to the NuGet server more than once. I ran into this behaviour when testing that the builds were correctly publishing the package. I was manually queuing builds and so ran into this behaviour as I was getting an error when attempting to publish the same package to the NuGet server. I'll probably reset it back to its default once it has been up and running for a while, but for now, it's set to true while I'm still testing it out and manually queuing builds.
Publishing your assemblies to a NuGet server simplifies your DevOps process considerably. Each time the donor build is triggered, a new version of the package is created and published to the NuGet server. It is then up to the recipient build to determine which version of the assembly (or package) it wishes to consume. There is no automatic pull of the latest package by the recipient build; this is a manual intervention under the control of DevOps.
When the recipient build is ready to use the new version of an assembly, it will do so in a development environment first, to make sure there are no breaking changes and to undertake sufficient regression testing. This is also where unit tests come into their own: if all your unit tests pass with the new package, then you can be fairly confident that everything works. The caveat being that your level of confidence is directly proportional to the quality of your unit tests.
The NuGet server works exactly as expected, and each build publishes / installs packages to and from the NuGet server perfectly. Yet again, the integration between the Team Foundation Server (TFS) 2015 build scripts and the private NuGet server is as tight as you would expect from Microsoft. They just work, and that's all you can ask for.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
modified 2-Mar-17 6:54am.
|
|
|
|
|
One of our .NET solutions recently needed to consume an assembly produced by another of our solutions, so the output from one solution became an input to the other. At first I thought I'd simply add a build step to the Team Foundation Server 2015 (TFS2015) build that copied the file from one solution across to the other, but this didn't seem like a very good solution. For starters, the build would be copying a development version of the assembly which hadn't been properly tested (although the build had executed various unit tests against it). Also, this rather basic approach didn't allow any control over the version of the assembly consumed by the recipient solution.
A far better proposal was to use NuGet for this. After all, resolving dependencies in a structured manner is precisely what NuGet does. So I investigated how to achieve this. The basic process is for the donor solution to package and publish the assembly to a known location. The recipient solution then installs the assembly from this location.
So first off, I needed to add two additional build steps to the TFS2015 build process of the donor solution.
- NuGet Packager - creates the NuGet package from the specified project
- NuGet Publisher - publishes the NuGet package to the specified location (in my case I published the NuGet package to a network share)
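The NuGet Packager step can pack from the project file or from a .nuspec file. For illustration, a minimal .nuspec (with a hypothetical package id) looks something like this:
<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyCompany.SharedAssembly</id>
    <version>1.0.0</version>
    <authors>Our Team</authors>
    <description>Assembly shared between our solutions.</description>
  </metadata>
</package>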
This was the easy bit, and I got it working pretty quickly. After a few test builds I was happy that the donor build was versioning the NuGet package and publishing it to the specified location.
The second part was to add a build step to the recipient solution which would install the NuGet package from this location.
- NuGet Installer - installs the NuGet package into the location specified (the recipient solution's package folder).
This part proved to be a bit trickier as I wasn't sure what the correct way of doing this was. Do I create a single folder for all the NuGet packages? Or create a separate folder for each where the folder name contains the version number? I also wasn't sure what format the NuGet installer would be expecting. So I had to try various options, including changing the NuGet restore parameters, adding a NuGet.config file and updating the packages.config file. I opted to specify the NuGet source location and package directory as NuGet arguments on the NuGet Installer build step.
My NuGet arguments look something like this.
-verbosity detailed -source "\\network\share\nuget" -packagesdirectory "Solution\Main\packages"
After a certain amount of trial and error, and reading through the online documentation for NuGet, I eventually managed to get the assembly to install in the recipient build.
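An alternative to passing -source and -packagesdirectory as arguments is a NuGet.config alongside the solution. A minimal sketch, using the same network share:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="InternalShare" value="\\network\share\nuget" />
  </packageSources>
</configuration>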
It's definitely worth spending the time to figure out how NuGet works, as it provides a very good solution for handling assembly dependencies between your various solutions. And of course, the TFS2015 build system has excellent support for NuGet, so it works seamlessly within the Microsoft development ecosystem.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
After volunteering earlier in the year to document the coding standards for the development team, I have eventually managed to honour that commitment. It certainly hasn't been an easy task either. It wasn't until I started that I realised just what a substantial task I had volunteered myself for.
We have several applications for which we are responsible. These are mainly web and mobile applications. The primary programming languages are C# (for all the Web API RESTful services), VB.NET (for the legacy back-office enterprise application) and HTML / CSS / Javascript (for the mobile apps).
I focused primarily on documenting C# and Javascript. It is the long term goal to eventually re-write the legacy VB.NET web application, and move the team entirely over to C# development, so I didn't want to waste any effort documenting coding standards for a language that we will eventually stop using.
The most difficult part of drafting a coding standards document is deciding what level of detail to go into. Too much detail and you risk stifling creativity, with developers slavishly following the numerous rules to the letter. Too little detail and you risk inconsistent code, with insufficient guidance as to what constitutes acceptable code.
So I tried to strike a balance between these two competing demands to create a document that allowed the developer to be creative whilst simultaneously giving enough guidance so as to create consistent code that conformed to best practice.
The document covered areas including naming conventions, layout and organisation, language features, best practices, architecture and design. The aim of any coding standards document is to bring consistency, so that code produced by developer A will look the same as that produced by developer B. Even though the actual code that either developer produces will be different, it should look the same in terms of the criteria mentioned above.
The document will be a perpetual work in progress. Rather than a static document that rarely gets updated, I'm aiming for a document that is fluid and can (and should) be updated whenever necessary.
Another question that arose was who should own the coding standards document, and how it should get updated. Currently, yours truly owns the document, but it will be updated by consensus. If something contained within the coding standards needs to change, this will be agreed by the team as a whole, and not just enforced by a single individual.
The first draft of the coding standards document has now been released for the rest of the team to provide feedback, and any updates made as necessary. After that, the document will be put into production and updated thereafter whenever required.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
As I near completion of the latest version of the mobile app I have been working on recently, I can take the time to reflect on the architectural challenges that I faced, and how I conquered them.
The mobile app was developed for the fleet management sector and was a complete re-write of the existing offering, with many moving parts. The app allows users to send requests from their mobile devices so that the data they submit can be processed by the back-end line-of-business application; for example, booking an MOT or service, updating their mileage or completing a vehicle inspection. So the challenge was to devise an architecture that would guarantee this data arrived at its destination, was capable of scaling to meet future demand, and was highly responsive. There is little point guaranteeing delivery of data if doing so is time consuming and gives the user the impression of a sluggish application. Conversely, you don't want a highly responsive application that cannot guarantee delivery of data, or where the data arrives corrupted.
Not an easy challenge by any stretch of the imagination.
To make matters even more difficult, the back-end line-of-business application is a legacy VB.NET application built around an equally legacy version of SQL Server. So I had to factor in these constraints from the outset as they were critical to the overall architecture.
The first decision was which technology to use to implement the required services. Although I have used WCF (Windows Communication Foundation) extensively in the past, we needed a technology that was built around HTTP and could easily consume JSON payloads, and the services had to be consumable from the mobile apps, which were implemented using Apache Cordova and Javascript. So the decision was made to go with ASP.NET Web API. This would allow us to build the necessary suite of services using HTTP as the transport protocol (the clients are mobile apps, where HTTP is ubiquitous) and to exchange information using JSON rather than XML, which is a far more natural fit for a Javascript client.
All services required by the mobile app would be implemented using ASP.NET WebAPI and all data would be exchanged using JSON.
The next decision was where to host the Web API services. It was suggested (by yours truly) that we should look into using Azure for our hosting. Although we already had hosting with another supplier, it was agreed that we would use Azure as we were already looking into other areas of the Azure development platform. It is not strictly necessary to host your services on Azure to have access to the many other services the platform offers, but it's fair to say they work better together if you do.
The infrastructure offered by Azure would be vastly superior to any we had in-house or with our other hosting supplier. I added a separate deployment for Azure to our TFS 2015 build process. After some initial configuration to allow the build process to access the Azure hosting environment, you are then good to go. This build process doesn't automatically deploy to Azure, as this is our production environment. Instead, deployments to Azure are triggered on an ad-hoc basis when needed.
The next challenge was how to guarantee that data sent from the mobile app would be received by the back-end line-of-business application? The levels of resilience needed by the app would require a service bus architecture. All messages sent from the mobile app would be added to an Azure Service Bus queue, where they could be subsequently picked up and processed. A service bus architecture has many advantages over traditional service delivery.
- Far higher degree of resilience
- The disconnected nature of a service bus means that you are not waiting for a response from the server (fire-and-forget)
- Able to process far higher loads
- Able to scale massively if necessary
- You pay for what you use
- Azure Service Bus has excellent integration with the .NET ecosystem, so you can leverage its services from a .NET application with ease
Plus many more.
So I implemented a WebAPI service that was capable of adding messages to the Azure Service Bus. Each time data was submitted from a mobile app it would invoke this service.
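A stripped-down sketch of that service (names are illustrative, error handling is omitted, and the connection string would normally come from configuration):
using System.Web.Http;
using Microsoft.ServiceBus.Messaging;
using Newtonsoft.Json.Linq;

public class SubmitRequestController : ApiController
{
    [HttpPost]
    public IHttpActionResult Post([FromBody] JToken request)
    {
        // The service does nothing more than drop the payload onto the
        // queue; all actual processing happens out-of-band.
        QueueClient client = QueueClient.CreateFromConnectionString(
            "connectionString", "myQueue");
        client.Send(new BrokeredMessage(request.ToString()));
        return Ok();
    }
}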
I next needed to decide how I would retrieve the messages that were placed on the Azure Service Bus. Although it is perfectly possible to write an application that listens to the Azure Service Bus for incoming messages, it seemed a far better idea to make use of an Azure Function bound to the Azure Service Bus. Each time a message is added to the queue it invokes the Azure Function. Implementing the listener as an Azure Function reduces the burden on our local infrastructure and is guaranteed to be available at all times.
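For illustration, a Service Bus-triggered function looks something like this (names and bindings are illustrative, and exact attribute details vary with the Functions runtime version):
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

public static class ProcessServiceBusMessage
{
    // Fires each time a message lands on the queue; no polling code needed.
    [FunctionName("ProcessServiceBusMessage")]
    public static void Run(
        [ServiceBusTrigger("myQueue", Connection = "ServiceBusConnection")] string message,
        TraceWriter log)
    {
        log.Info($"Received message: {message}");
        // Hand the message off for supplementing and routing from here.
    }
}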
The next big challenge was how to ensure the data received from the mobile app was in a meaningful state and could be processed by the back-end line-of-business application. All data sent from the mobile app contained only a fraction of the data needed for it to be processed by the back-end line-of-business application. It became necessary therefore to supplement the data for it to be of any use to the back-end line-of-business application.
This required the addition of a separate service that takes the bare-bones incoming data from the mobile app and supplements it with further data before writing it into the back-end line-of-business application database. The development of such a large, enterprise architecture was far from straightforward and had more than its fair share of challenges, but each one was met with steely determination until a solution was found and developed. It is not easy to mentally unpack and unpick such a large and unwieldy set of circumstances and problems, and many times I had to take a step back to give them due consideration. Architecture is a difficult discipline, made even harder when there are so many moving parts.
This project was certainly one of the most enjoyable I have worked on for a long time. It's given me great exposure to the Azure platform from a development perspective. It provided great exposure to service bus architecture and in particular Azure Service Bus. Getting to work on such a variety of problems, shiny technologies and architectural patterns was great fun and I enjoyed every minute of it.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
Whilst recently viewing the code coverage results from one of our applications, I was looking for areas with poor code coverage to see if there was any way to improve them. One area that can be difficult to unit test is exception conditions. If you are implementing structured exception handling using try / catch blocks, then it can be challenging to unit test the code contained within the catch block. Although most (if not all) unit testing frameworks contain mechanisms for testing exceptions, it can be difficult to set up the conditions that will trigger an exception.
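For example, MSTest can assert that a method throws the expected exception. Here's a sketch using the classes shown later in this post, and assuming (purely for illustration) that the data layer throws an ArgumentException for an invalid driver id:
[TestMethod]
[ExpectedException(typeof(ArgumentException))]
public void GetPreviousMileages_InvalidDriverId_Throws()
{
    MyBusinessService service = new MyBusinessService(new MyUnitTestDataClass());
    service.GetPreviousMileages(-1);
}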
I tend to follow the rule of thumb that states "If you aren't going to handle an exception then don't catch it". There is little point catching an exception if all your code does is throw the exception back up the call stack. In the case of my ASP.NET WebAPI application, all the controllers contain structured exception handling. This is the last chance saloon to catch any exceptions before handing control back to the client, so it makes sense to catch exceptions on the part of the application exposed to the client. I also log all exceptions so that I can later diagnose them.
I also catch exceptions in my data layer. I implement a retry mechanism on those methods that do not use the Azure Service Bus (a service bus architecture automatically implements retries: if an exception is thrown, the request is placed back on the queue where it can be re-tried). These are the only specific areas of the application where I have implemented structured exception handling.
When implementing the business layer, I wanted to ensure I could unit test the various methods without the data layer having to actually connect to the data itself, so I implemented an architecture that allowed this from the ground up. Each business layer class holds a reference to an interface that defines the data handling methods. My unit tests implement this interface with test-specific versions of the methods under test, and this implementation is injected into the constructor of the business layer class at run time. The business layer class contains a default constructor which instantiates the default SQL Server data layer class, plus a constructor which accepts any instance of the interface. So with good design from the very outset, it is perfectly possible to unit test your entire business layer using constructor injection.
This is the definition of the SQL Server data class. Notice it implements the IMyInterface interface.
public class MyDataClass: BaseData, IMyInterface
{
}
This is the definition of the unit test data class. Notice it also implements the IMyInterface interface.
public class MyUnitTestDataClass: IMyInterface
{
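// The IMyInterface methods (GetPreviousMileages and GetDriverVehicles) are
// implemented here with canned test data rather than calls to SQL Server.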
}
Here is the definition of the IMyInterface interface.
public interface IMyInterface
{
List<Mileage> GetPreviousMileages(int driverId);
List<Driver> GetDriverVehicles(int driverId);
}
The business layer class then implements constructor injection so that it can accept either a concrete instance of the unit test implementation or the SQL Server implementation.
public class MyBusinessService
{
private readonly IMyInterface _data;
public MyBusinessService()
{
this._data = new MyDataClass();
}
public MyBusinessService(IMyInterface data)
{
this._data = data;
}
// Business methods delegate to whichever IMyInterface implementation was injected.
public List<Mileage> GetPreviousMileages(int driverId)
{
return this._data.GetPreviousMileages(driverId);
}
}
The unit tests instantiate the business layer by injecting a unit test specific instance of the interface, as in the following example.
[TestMethod]
public void MyTestMethods()
{
MyBusinessService service = new MyBusinessService(new MyUnitTestDataClass());
var result = service.GetPreviousMileages(123);
Assert.IsNotNull(result);
Assert.IsTrue(result.Any());
}
I will delve more into unit testing in future articles as it's an area where huge gains can be made in the quality of your software. Unit testing also forces you to write code in such a way that it is testable in the first place, and this in itself is a good reason to adopt it within a development team. If you're not writing unit tests, you're doing it wrong.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
After nearly twenty years as a professional software developer, working in many different industries and a range of diverse companies, it seems to me at least that there are certain key ways in which to build a strong development team. By strong I mean a team that is motivated and hyper-productive. The former generally leads to the latter. So how do you build a strong development team? Here are my thoughts.
Share knowledge and information
To be productive you need to know as much as possible about the various applications, tools, processes etc. that are used by your development team. Even if you are a self-employed contractor, you still need an understanding of these things. Keeping information to yourself in a misguided attempt to make yourself indispensable just means you will get calls out of hours, on weekends or when you are on holiday, as you are the only person with that knowledge. Have regular sessions where you take turns going through areas where each of you is an expert, and freely share your knowledge. Or where you discuss new ideas or upcoming technologies. If you have read an interesting article then share it with your co-workers. Make sharing ideas part of the fabric of the team.
Set good habits and be a good example
Be the example that other members of the team want to follow. Be professional and conscientious. Don't let sloppy habits or maverick individuals descend on the team. Set good habits with regards to testing, building, shipping, developing and every other aspect of the development cycle. Set the bar high and maintain it. Ensure you have a coding standards document and that everyone knows where it is and has read it. Have regular code reviews and make them part of the development process. Ensure that testing is not an afterthought and that quality is baked in from the ground up, not sprayed on afterwards. Developing software to a high standard should be part of the culture of the development team and inculcated so that it remains that way. Writing good software is a habit. Habits require regular reinforcement. Don't let standards slip.
Everyone working to their strengths
As developers, we all have particular skills and knowledge where we excel, and areas where we are weaker. So match the skills to the appropriate developer when working on projects. That's not to say that weaker areas shouldn't be worked on and gaps closed, but simply being pragmatic in working to the strengths of the various individuals within the team. As knowledge is gained and weaker areas become strengths, then those developers can be assigned different tasks that relate to their newly acquired skills.
Make learning a habit
This applies especially to anyone working within software development. Keep your skills up-to-date and encourage your co-workers to do the same. Discuss that article you read the evening before and share a link to it with the other members of the team. Don't ever let your skills become stale, or worse, obsolete. If you are not moving forwards, then you are going backwards. Progress stops for no-one, and whilst you may be happy plodding along using those legacy skills today, you don't know what's around the corner tomorrow. Make reading articles or books a regular occurrence. Subscribe to the newsletter from your favourite community. Contribute to an open-source project. Write technical articles and share your knowledge. Become active in any of the many online development communities; I am a regular contributor to the CodeProject community, for example. Keep learning and keep it fun.
Strong development teams don't just happen by accident. It takes time and effort to build a strong team. But if you are prepared to make the effort, then it is well worth it. Strong teams thrive on challenges. They foster a can-do attitude. They get stuff done. If that's the type of team you want to be a part of, then start building a strong team today.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
In the ASP.NET Web API services I've been developing, I ran into an issue whereby the POST data I was sending wasn't being received by the Controller. The parameter of the Controller was a string as I was sending JSON data, but this wasn't working. I kept getting the error that the POST data was null when I was expecting it to contain my JSON data.
After much head scratching and Googling I eventually found a very useful article that explained my problem and the various solutions. I opted to change my Web API parameter to JToken as this was a simple fix and got me working again quickly. I ran into the problem because I need to post differently shaped JSON documents to the Web API. The endpoint is also invoked from our mobile apps with Javascript, which necessitates sending the data as a JSON document. So changing the parameter from string to JToken fixed the issue.
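A minimal sketch of the change (the controller name is hypothetical):
using Newtonsoft.Json.Linq;
using System.Web.Http;

public class DriverRequestController : ApiController
{
    // Binding to JToken accepts a JSON document of any shape, where
    // binding to string was arriving as null.
    [HttpPost]
    public IHttpActionResult Post([FromBody] JToken request)
    {
        if (request == null)
            return BadRequest("No JSON payload received.");
        return Ok();
    }
}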
The next problem I encountered was when invoking the Web API endpoint from our mobile app. I was using an AJAX call to POST data but was getting an error. It turned out I was falling foul of standard browser security: browsers prevent a web page from making AJAX requests to another domain. This restriction is called the same-origin policy, and it prevents a malicious site from reading sensitive data from another site. However, sometimes you might want to let other sites call your Web API. In our case, we wanted our Web API to be invoked from mobile app clients.
In order to allow this, I needed to enable CORS on the Web API that was being invoked by the mobile apps (there was only one). Cross-Origin Resource Sharing (CORS) is a W3C standard that allows a server to relax the same-origin policy. Using CORS, a server can explicitly allow some cross-origin requests while rejecting others. CORS is a safe and reliable technique for allowing cross-origin requests.
I installed the CORS Nuget package and enabled CORS on the Controller, and everything now works. We can post data from the mobile app to our legacy back-end system via Web API endpoints (which in turn use an Azure Service Bus).
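For reference, enabling CORS looks roughly like this (the package is Microsoft.AspNet.WebApi.Cors, the origin below is a placeholder, and config.EnableCors() must also be called in WebApiConfig.Register):
using System.Web.Http;
using System.Web.Http.Cors;

[EnableCors(origins: "https://our-mobile-app.example.com", headers: "*", methods: "*")]
public class DriverRequestController : ApiController
{
    // Actions as before; cross-origin requests from the listed origin
    // are now permitted by the browser.
}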
Now that the last pieces of the puzzle have been solved it's good to see all the moving parts in the architecture working together.
Our mobile app user fills out a form and submits the data. The data is packaged into a JSON document and posted to our Web API, which adds a message to an Azure Service Bus queue. An Azure Function bound to the Azure Service Bus picks up the message and passes the request to a routing engine, which determines where the request needs to go. Requests from our mobile app are routed to our back-end legacy system, where a task is created from the request so that a member of staff can process the mobile app user's data.
This all happens seamlessly and is incredibly responsive, thanks to a carefully thought out architecture designed by yours truly.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
I recently had to de-serialize a JSON string that contained a date. The object contained valid DateTime properties and was then serialised. When it was subsequently passed to the receiving application, the serialised object could not be de-serialised as the dates were not in a valid format.
Example of the problem.
string jsonDate = new JavaScriptSerializer().Serialize(DateTime.Now);
DateTime dt = new JavaScriptSerializer().Deserialize<DateTime>(jsonDate);
When viewing the JSON representation of the date it would appear as follows.
/Date(1484904895490)/
The fix that eventually solved the problem was to use the JsonConvert.SerializeObject() function with an IsoDateTimeConverter instead.
var isoConvert = new IsoDateTimeConverter { DateTimeFormat = "yyyy-MM-dd HH:mm:ss" };
string isoJsonDate = JsonConvert.SerializeObject(DateTime.Now, isoConvert);
DateTime dt = JsonConvert.DeserializeObject<DateTime>(isoJsonDate, isoConvert);
This solution works with any object that contains dates (or even those that don't include dates). I now use this code in my serialization function throughout my application.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
Whilst testing the Azure Function I had written to listen for incoming messages on an Azure Service Bus queue, I ended up with quite a few messages landing in the Dead Letter Queue, the result of messing around whilst trying to get something to work.
Just to be clear, I am using an Azure Service Bus queue, not its sibling the topic / subscription. My task was to figure out how to remove these messages. Even though it was only a test queue I was playing around with, I thought it would be a useful exercise to learn how to clear messages from the Dead Letter Queue, as this is something I would almost certainly need to know once we went live with the production queue.
As with many tasks relating to the Microsoft stack, there seemed to be many different ways of achieving this, and it was getting confusing trying to find which one was applicable to my particular circumstances.
After some trial and error I eventually managed to get the following code to work. Basically, you connect to your Dead Letter Queue in exactly the same way as your normal queue, but you need to concatenate "$DeadLetterQueue" to the queue name.
[TestMethod]
public void ClearDeadLetterQueue()
{
string deadLetterQueueName = "myQueue/$DeadLetterQueue";
QueueClient client = QueueClient.CreateFromConnectionString("connectionString",
deadLetterQueueName, ReceiveMode.PeekLock);
BrokeredMessage receivedMessage;
// Receive with a short timeout so the loop ends once the queue is empty,
// and Complete each message so it is removed from the queue.
while ((receivedMessage = client.Receive(TimeSpan.FromSeconds(5))) != null)
{
receivedMessage.Complete();
}
}
After running this code in my unit test, I successfully cleared all the messages from my Dead Letter Queue.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter
Several of my previous posts have documented some of the challenges and proposed solutions that I've encountered whilst migrating our ASP.NET Web API services over to Azure. Here is a summary of the entire journey thus far.
As part of the work involved in delivering the new version of a mobile app to the market, we decided that we wanted to migrate the underlying infrastructure to the Azure platform as it would provide newer, faster infrastructure. This involved two primary objectives.
- hosting the ASP.NET Web API services on Azure
- implementing a service bus architecture with Azure Service Bus
Hosting the ASP.NET Web API services on Azure involved creating an Azure Web Site which would host the services. I subsequently made the necessary changes to the Team Foundation Server 2015 deployments by creating a new release / deployment for Azure. Each time a build is triggered, we deploy the new version to our Azure endpoint. Using Azure for hosting our services ensures maximum levels of availability and scalability, ensuring we can meet not just current demand but future demand too.
Configuring Application Insights to monitor our services was simple. Although services don't need to be hosted on Azure to use Application Insights, it is easier if they are. We now have constant monitoring in place, giving us regular metrics on the health and diagnostics of our services.
Implementing a service bus architecture using Azure Service Bus proved challenging as it requires a mental shift in how you think about services. In a traditional service architecture, one service synchronously invokes another. In a service bus architecture, services do not communicate directly with each other. Instead, all requests are added to a service bus queue, where they are picked up and processed by a separate out-of-band service.
All data submitted from mobile devices through the app would be routed to an ASP.NET Web API service that would simply add the request onto the Azure Service Bus queue. This ensured all service requests would be highly responsive as the service was doing nothing more than adding a message to a queue and would be available to service further requests almost immediately. The actual processing of the request was fulfilled by a separate service.
Due to the disconnected nature of service bus architectures, whereby the recipient and client applications communicate via a service bus rather than directly with each other, the recipient application cannot know when a client application has submitted a request. What is needed is a mechanism that constantly watches the Azure Service Bus for new messages. I achieved this using an Azure Function bound to the Azure Service Bus, listening for incoming messages.
Upon receiving a new message from the Azure Service Bus, the message is processed by a routing service that routes the request to the appropriate ASP.NET Web API. A querystring parameter tells the routing service which destination service the request needs to go to, ensuring requests are forwarded to the endpoints where the necessary business logic is implemented. The routing service also adds a high degree of flexibility for future development.
Using a combination of Azure's web hosting, Service Bus and Functions I have successfully delivered an end-to-end solution for processing requests from a mobile app which adds significant levels of scalability, resilience and responsiveness. As the chief architect on the project, and deeply involved in the majority of the implementation, as well as all aspects that involved Azure, I feel proud and excited by the results of the project. It has forced me to learn new ideas and technologies and has been a lot of fun.
"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare
Home | LinkedIn | Google+ | Twitter