There is no doubt that the emergence of cloud computing has been a game changer for application development. In this article I look at the specific differences between the traditional on-premises deployment model and cloud computing, and describe what those differences mean for application development.
The key differences
On-premises:
- Adding hardware is expensive
- Cost is largely independent of usage
- Reliability is under our control
- Physical security of the data is possible

Cloud:
- Adding hardware is cheap
- Cost is directly related to usage
- Reliability is SLA driven
- Physical security of the data is not possible
Dealing with problems
Traditionally we specified and bought hardware with fault tolerance built in, and wrote fault-intolerant software to run on top of it. Developers made assumptions about availability, bandwidth and response times, and any problems arising were dealt with on a break-fix basis (often by improving the hardware specification).
In a cloud deployment model the promise of hardware fault tolerance is replaced with a service level agreement - a percentage-based promise. This means your software should be fault tolerant too: you need to retry operations when transient faults are encountered, and you need to think about graceful degradation if you don't get the hardware reliability you expected. A pull-the-plug exception handling methodology is no longer viable.
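A common way to handle transient faults is to retry with exponential backoff, falling back to graceful degradation only when the retries are exhausted. The sketch below is illustrative, not tied to any particular cloud SDK; the `TransientError` type and `flaky` operation are hypothetical stand-ins for whatever transient failures your platform reports.

```python
import random
import time

class TransientError(Exception):
    """A fault that may succeed on retry (e.g. a network timeout)."""

def retry(operation, max_attempts=5, base_delay=0.1):
    """Call operation(), retrying with exponential backoff on TransientError."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: let the caller degrade gracefully
            # back off exponentially, with jitter to avoid synchronized retries
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

# Example: a hypothetical operation that fails twice before succeeding
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("temporary outage")
    return "ok"

result = retry(flaky)  # retries twice, then returns "ok"
```

The jitter matters at scale: if many instances retry on the same schedule, they hammer the recovering service in lockstep.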
Some companies, in an effort to break developers' overconfidence in hardware, go as far as deploying a chaos monkey that deliberately reduces the predictability and reliability of the infrastructure they are working on.
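Netflix's Chaos Monkey terminates whole instances; a much smaller-scale analogue of the same idea is to inject failures at the call level in test environments, so fault-handling paths are exercised routinely. This wrapper is my own illustrative sketch, not how any real chaos tool works.

```python
import random

def chaotic(operation, failure_rate=0.2, rng=random.random):
    """Wrap an operation so it randomly fails, forcing callers to handle faults.

    rng is injectable so tests can make the failures deterministic.
    """
    def wrapped(*args, **kwargs):
        if rng() < failure_rate:
            raise ConnectionError("chaos: injected failure")
        return operation(*args, **kwargs)
    return wrapped

# Code that survives this wrapper in testing has working retry/fallback paths.
lookup = chaotic(lambda key: key.upper(), failure_rate=0.2)
```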
When the hardware is sitting in your own data centre the actual purchase cost tends to be hidden from the application development team - it is typically a sunk cost that is amortized and spread across the whole company. This is not the case for a cloud deployment: you are typically charged for whatever you use in terms of storage, network I/O and compute time.
This means you want your application to be able to scale up when demand increases, but also to scale back down again (releasing unused resources) when demand decreases.
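The scale-down half is the part that saves money, and it is easy to forget. A minimal sketch of the decision logic, assuming a queue-depth metric and a hypothetical `per_instance_capacity` figure for your workload:

```python
def desired_instances(queue_depth, per_instance_capacity,
                      min_instances=1, max_instances=20):
    """Pick an instance count proportional to demand, within fixed bounds.

    Returning a smaller number when queue_depth falls is what releases
    the unused (and billed-for) resources.
    """
    needed = -(-queue_depth // per_instance_capacity)  # ceiling division
    return max(min_instances, min(max_instances, needed))
```

Real autoscalers add smoothing (cooldown periods, scale-down delays) so brief spikes and dips don't cause thrashing; cloud platforms generally provide this as a managed feature rather than leaving it to application code.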
You also need to design your application to avoid bottlenecks. How this is achieved varies on a case-by-case basis, but the underlying thought should always be "how can I make this process parallel?"
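When the per-item work is independent, the parallel version can be as simple as swapping a loop for a pool. A generic sketch (the `process` function here is a hypothetical stand-in for your real per-item work):

```python
from concurrent.futures import ThreadPoolExecutor

def process(item):
    # stand-in for real, independent per-item work
    # (e.g. an I/O-bound call to a storage or web service)
    return item * item

items = range(8)

# pool.map preserves input order while running items concurrently
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process, items))
```

The harder design work is upstream of this: restructuring the process so the items really are independent, with no shared mutable state forcing them back into sequence.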
In a cloud-deployed environment you cannot guarantee that your application will be hosted on a specific machine. This means that any intra-application communication needs to go through a messaging provider or a message queue, with location independence as a primary design consideration.
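Location independence falls out naturally when components address a named queue rather than each other. The sketch below uses an in-process `queue.Queue` purely as a stand-in for a hosted queue service (for example Azure Storage Queues); in a real deployment the producer and consumer could run on different machines and neither would know it.

```python
import json
import queue

# Stand-in for a hosted queue service; the producer and consumer only
# ever address the queue, never a specific machine.
broker = queue.Queue()

def send(message_body):
    """Producer side: serialize and enqueue a message."""
    broker.put(json.dumps(message_body))

def receive():
    """Consumer side: dequeue and deserialize the next message."""
    return json.loads(broker.get())

send({"order_id": 42, "action": "ship"})
msg = receive()
```

Serializing to JSON (rather than passing objects) keeps the two sides free to be deployed, scaled and even rewritten independently.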
Coding for a land of plenty
Cloud deployment makes an enormous amount of computing power available, so it is worth considering how to code for a land of plenty.
In practice this means storing all the data that comes into your system, not just the data you currently have a known need for. Tools such as HDInsight make it practical to analyse that data later, when a use for it emerges.
Coding for a time of scarcity
There is one resource that has not become more plentiful in a cloud-deployed scenario: developers. Fortunately there is a large amount of existing code that can be leveraged, either open source or as paid plug-in services. The developer's mantra here should be "only write the code that only you can write".
2014-08-13 Initial ideas