Posted 6 Dec 2016



A Quick Introduction to Server-less Computing

7 Dec 2016 · CPOL · 10 min read
Understand what Amazon AWS Lambda and Azure functions are and how they emerged.


The very first website I was involved with was back in the late 1990s, before some people even knew what a website was. At the time, the project was massive ... I mean, really really big. Having decided what the service was to do, we then put together a project plan and a budget. These were our project cost estimates back then:

  • Purchase of servers: $250,000
  • Hosting of servers for first year: $75,000
  • Engineers and other 'people' related costs: $350,000

In addition, to give our project every chance of success, we had to move to a new office nearer the digital exchange so we could get a FAST GUARANTEED 128k <ahem> speed Internet connection.... it was a *big* project.... :)

[Image: punch cards]

Don't get me wrong, it wasn't something from the dark ages ... it didn't involve punch-cards or the like, but it was a big project at the time... by today's standards, it was a basic website - back then, it was a career move!

So, for what in retrospect was a pretty basic website, it seemed extraordinarily complex and BIG, and it used correspondingly big resources. Yesterday, I had an idea for a new SaaS website ... and after spending 30 minutes slapping AngularJS and EntityFramework into place to spin up my CRUD, here's what I did... in Visual Studio, 'File -> New -> Website (cloud publish)'.

[Image: publishing the new site straight from Visual Studio]

That's pretty awesome... my first website cost over half a million dollars, and took months and months to push through to completion. Yesterday, I published one in a few minutes that will cost me less than a dollar a month to run... 

Gone are the days when I had to publish to a particular folder, FTP the code/DLLs and supporting files up to my server host, beg them to allow me to include some kind of new non-standard plugin or DLL that I *really really needed* for this particular website (so I could get paid)... and then hope everything connected as it should and that my website would deploy. Only the Gods themselves could help me when (not if) something went wrong, because there was very little support for logging or debugging on hosted machines at that stage ... and even now, some hosts are still in the dark ages in this regard.

Evolution of Hosting and Cloud Offerings...

So how did we get to this stage where we can do web-stuff at the speed of light, and what exactly is serverless computing and a cloud function? Let's step through the progression of things over the past few years and see how things emerged...

Local Servers 

15 years ago, it was commonplace to have a server in your office (or home) that was connected full-time to the web - by this, I mean you had a physical server, with a fixed IP, that physically sat in your office and was exposed to the web. Some places still have this kind of setup, but it's becoming increasingly rare, for obvious reasons. When we wanted to publish a website or expose a service on the net, it was really easy - primarily because we had *control* ... we had direct access to the physical machine, and could do *what we liked* on it ... install this or that DLL, this or that service, or that exotic ActiveX plugin that your website simply could NOT live or work without....

The problem with this setup was that it was wasteful, expensive, and left you managing the entire infrastructure, not just the website you built.

Co-hosted Servers

Someone then came up with the idea of renting out 'rack space' in their data centres to people who had their own physical machines and wanted to offload some of the management work. This meant that you still owned the physical machines, but you could send them to a dedicated space where people knew how to keep the 'lights on' for Internet-facing servers, and would also take care of backups, database connectivity, keeping servers patched, security, etc. Stuff that, to be honest, website developers really shouldn't need to worry about... it should just be part of the plumbing at this stage...

The thing with co-hosting your own server, however, is that it's still your server, and while someone else is managing the machine and its related overhead, it's not terribly efficient ... the machine is not being used to its full potential, and perhaps 80%+ of its compute life cycle is spent just lying idle. It's money being wasted that could be put to better use elsewhere.

Shared Servers

The next step in the evolution of things was shared servers. This is where a rack-space hosting company installed some fancy-pants software on *their own* hardware/operating system that allowed them to isolate certain folders/processes and share resources among a number of different website customers. This had the immediate benefit of using the server for a lot more of the time, at the cost of flexibility for the customer. The customer got access to a 'control panel', and used this to interface with the services they could install/run/upload, etc.

One of the major problems with shared servers is that things are shared ... the operating system is shared, the machine's resources are shared... and if, for example, one process goes 'rogue' and hogs 100% of the CPU for a period of time, well, any other websites/processes on that particular machine will suffer as a consequence... not a great situation really.

Virtual Servers

Some clever person (I'm pretty sure it wasn't a rabbit) then came up with the clever idea of virtual machines. This is where we have a system that can very specifically isolate parts of a server's hardware and present them to an operating system, which sees these isolated parts of the server 'as the entire server' ... think about that - it's like we take a snapshot of an entire machine, break it into chunks, and hand each chunk to a full operating system and say 'this is all you get'. This was very cool, because we could now, for example, take a powerful machine with one motherboard, 1 TB of HD space and 32 GB of RAM, and virtually split it into, say, 8 virtual machines with 2 GB of RAM and 100 GB of HD space each ... any one of which would be more than enough for a reasonable instance of SQL Server, say, and still have enough carved out to run the underlying virtual machine host operating system itself.

Virtual servers worked great, and still do, but they still have a limitation - each one is a full-blown virtual operating environment ... you need to spin up a full OS for every service that needs one. That's kind of wasteful and not optimal on resources (wow, how far we've come already from the days of dedicated machines in your office!)

Containers & Docker

Right then, the next major step in this evolution was the move towards containerisation. What container technology allows us to do is create an isolated block of resources *within a machine* (virtual or otherwise), and share the underlying machine resources, without bleeding into other services that are using the same resources. So in effect, it's like a shared server, but without the side effect of one installed system having unintended control over another due to bad resource management. The other major benefit of a container is that you can specify particular versions of system-level dependencies. Let's say, for example, your system needed a particular version of a DLL or other installable... but the problem is, it's a custom or even older, out-of-date version of that dependency, and it's not compatible with other processes that *share* that resource/dependency on the virtual machine. With containers, you can isolate dependencies like this, and keep them effectively fire-walled from one another. The added benefit of this is that if something *works on your machine*, then you can take a snapshot of this configuration and transport it to an online host (or another developer's machine), where it *will simply work*, no questions asked. This is an incredibly powerful feature of containers and worth checking out for this alone if nothing else.
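To give a flavour of what that dependency pinning looks like in practice, here is a minimal Dockerfile sketch - the base image, library and version numbers are purely illustrative, not a recommendation:

```dockerfile
# Illustrative Dockerfile: every version here is frozen, so the container
# behaves the same on a developer's machine as it does on the online host.
FROM python:3.6-slim

# Pin the exact (possibly older) dependency version this service was tested
# with, isolated from anything else running on the same machine.
RUN pip install requests==2.12.4

COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```

Build the image once, and the 'works on my machine' snapshot travels with it wherever you run it.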

Enter the Function as a Service

The container paradigm moved us into an area where we can have 'surgically sliced-up parts' of the OS just for our simple system. We can use containers to host simple, single services - for example, a SQL Server instance - or more complex arrangements of different services in combination. But sometimes, we need something leaner again. Sometimes, we need a simple little 'thing', where you think: actually, this one particular part of my system/architecture would be better off just being in a shared instance ... doing its thing when I need it, but without the overhead of a virtual machine or even having to manage (and orchestrate) a container. Well, now you can have your cake and eat it ... enter the 'function as a service', or 'server-less computing'. Offered by Amazon AWS as 'Lambda', and by Microsoft Azure as 'Functions', you can now write a simple function, with no supporting website or container, and simply say 'run this when X happens'. This image explains how I feel about this...

[Image: mind blown]

Functions as a service allow us to write a simple function/method that does something on the web, and deploy it to run, without having to worry about the underlying infrastructure, without having to worry about setting up a container or virtual machine, and without having to worry about all of the usual things we need to do to even get to a starting point.

It is called 'server-less computing' because that is simply what it is ... the ability to write a function (or set of functions, in reality) and deploy it to what seems, to us, like a server-less environment. The cloud provider worries about the deployment, isolation and, critically, auto-scaling where necessary. Unlike virtual machines and Azure 'Platform as a Service' type offerings, Functions are not charged by the hour, but rather per execution of the function, metered in tiny fractions of a second. This raises a really interesting question ... in the past (and now, really), we look at our applications and think 'where is the bottleneck ... what is slowing things down' ... well, with the introduction of server-less computing and charging *by the function*, we can now ask 'what FUNCTION is costing the business the most money?' ... really, really interesting stuff. But I digress....
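To make that billing model concrete, here's a minimal sketch of per-execution pricing. The price points are the AWS Lambda list prices around the time of writing, but treat them as illustrative assumptions and check the provider's pricing page for current figures:

```python
# A back-of-the-envelope model of per-execution billing: you pay for
# requests plus compute time (memory x duration), not for idle servers.

def function_cost(invocations, avg_duration_ms, memory_gb,
                  price_per_gb_second=0.00001667,
                  price_per_million_requests=0.20):
    """Estimate the monthly cost of one function (illustrative prices)."""
    gb_seconds = invocations * (avg_duration_ms / 1000.0) * memory_gb
    compute = gb_seconds * price_per_gb_second
    requests = (invocations / 1_000_000.0) * price_per_million_requests
    return compute + requests

# A function run a million times a month, for 200 ms at 128 MB:
monthly = function_cost(1_000_000, 200, 0.125)  # roughly $0.62 for the month
```

Suddenly 'which function costs the most?' is a query you can actually run against your bill.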

Let's say our customer came to us and said 'hey, can we implement a simple thing that when an image is uploaded, we check to see if it's within our specified dimensions, and, if not, we edit/resize the image to make it fit our requirements?'....

Before Functions, we would have had to add this new functionality to our existing web offering in code somewhere, integrate it, and upload the changes. With Functions, we can simply go into an online editor, define an endpoint (in this instance, where to monitor for incoming image files), and write some code that will be run when the image lands. And that's it. No hosting setup, not even 'File | New | Project', for goodness sake! (Well, you can do that, but actually it's not needed.)
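As a rough sketch of what such a function might look like - the event shape, the maximum dimensions and the handler name are all invented for illustration, and the actual pixel work is stubbed out, since in a real function you'd hand that part to an image library such as Pillow:

```python
# Hypothetical function-as-a-service handler for the scenario above:
# fired when an image is uploaded, it checks the dimensions and works out
# the resize needed to fit our spec, preserving the aspect ratio.

MAX_WIDTH, MAX_HEIGHT = 1024, 768  # our 'specified dimensions' (example values)

def fit_within(width, height, max_w=MAX_WIDTH, max_h=MAX_HEIGHT):
    """Return (width, height) scaled to fit the limits, keeping aspect ratio."""
    if width <= max_w and height <= max_h:
        return width, height  # already within spec, nothing to do
    scale = min(max_w / width, max_h / height)
    return int(width * scale), int(height * scale)

def handle_upload(event):
    """Entry point the platform would invoke when an image lands."""
    w, h = event["width"], event["height"]
    new_w, new_h = fit_within(w, h)
    if (new_w, new_h) != (w, h):
        # resize_image(event["path"], new_w, new_h)  # hand off to an image library
        return {"resized": True, "width": new_w, "height": new_h}
    return {"resized": False, "width": w, "height": h}
```

That handler, plus a trigger saying 'run this when a file lands here', is the whole deployment - no project, no container, no server.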

[Image: writing the function in the online editor]

Now, Functions in Azure and AWS are not for everything, and like a lot of new tools, they have the potential to be abused for the wrong things. However, I truly believe that in the area of microservices, Functions hold great promise and are very much worth investigating.

Resources To Get Started

To get stuck in (and you should, really!), go check out Amazon's Lambda offering and Azure Functions. You won't be sorry you did.

Two videos that will bring you up to speed:

[Video: AWS Lambda]

I'll follow this article up shortly with some examples to get you started.



  • Version 1 - 6 Dec 2016: Initial post
  • Version 1.1 - 7 Dec 2016: Change to bob's mind being blown!


This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

Written By
Chief Technology Officer, The DataWorks
United Kingdom United Kingdom
Allen is a consulting architect with a background in enterprise systems. His current obsessions are IoT, Big Data and Machine Learning. When not chained to his desk he can be found fixing broken things, playing music very badly or trying to shape things out of wood. He runs his own company specializing in systems architecture and scaling for big data and is involved in a number of technology startups.

Allen is a chartered engineer, a Fellow of the British Computer Society, and a Microsoft MVP. He writes for CodeProject, C-Sharp Corner and DZone. He is currently completing a PhD in AI and is also a ball-throwing slave for his dogs.
