Here we'll provide a lay of the land, look at concepts that underpin Cloud Native development (concepts like DevOps, microservices, serverless, containerization, Kubernetes, and scalable cloud databases like Cosmos DB), explain how these concepts affect the design and architecture of cloud-native apps, and highlight how they differ from the Wild West approach of Node apps deployed on an ad-hoc basis to VMs.
One major advantage of Node.js is its relatively low barrier to entry. Beginners and seasoned programmers alike can use Node.js to quickly write up small web services.
Startups were especially fast to notice the appeal of Node.js, promptly adopting it for day-to-day tasks. However, it also gained early popularity at large enterprises, like banks, usually among digital innovation teams tasked with modernizing companies’ software approaches.
Node.js is now a serious force to be reckoned with and many of its projects have grown in size and scope. The once popular Wild West, ad-hoc approach of writing a service and deploying it to a virtual machine (VM) is behind us. As companies seek more mature deployment models, the cloud native approach may well be what they are looking for — for many of the same reasons Node.js became so popular to begin with.
Cloud versus Cloud Native
Not everything that runs in the cloud is cloud native. On a high level, cloud native means your cloud provider takes care of any hardware and machine maintenance so you don’t have to. In practice, this means your applications are highly scalable and dynamic.
The Cloud Approach
Running software in the cloud doesn’t automatically make you cloud native. Renting a server in the cloud is not much different from having your own server.
For example, when spinning up a VM and deploying your application, you still have to pick a VM with enough disk space, memory, and processor speed to support your workloads. Furthermore, it’s your own responsibility to keep your operating system (OS), and any other software you install, up to date.
Installing your application probably requires you to install and configure a web server. Scaling is challenging, as it forces you to have a second VM ready that can take on some of the workload, which in turn requires a load balancer and other tools. This is mostly manual work that specialized system administrators usually perform.
The Cloud Native Approach
Now consider cloud native. You pick your cloud service, spin it up, and it’s just there. A web application, for example one written in Node.js, could run as an App Service on Azure. You simply pick your plan, which is comparable to picking the hardware of your VM, but is slightly easier and can be changed later. Then, you deploy your application, which can be as simple as uploading a package. After that, it just runs. When your app is busy, you can scale up by literally pushing a button. Even better, you can set thresholds so Azure scales your app up and down automatically.
You’ll probably need a database too. There are a couple of cloud native database options, like Azure SQL and Cosmos DB. Cosmos DB is interesting as it fully embraces the cloud and is highly scalable. The best part: it offers a MongoDB-compatible API, so it probably already works with your MEAN stack application. You can get a MongoDB-compatible instance up and running in a few clicks.
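In practice, pointing a MEAN app at Cosmos DB is often just a connection-string change. The sketch below shows the idea; the account name and key are placeholders, not a real account:

```javascript
// A local MongoDB URI versus a Cosmos DB (MongoDB API) URI.
// Only the connection string changes; the application code stays the same.
const localUri = 'mongodb://localhost:27017/meanapp';
const cosmosUri =
  'mongodb://my-account:myBase64Key@my-account.mongo.cosmos.azure.com:10255/meanapp?ssl=true';

// Read the target from the environment so the same code runs everywhere.
const mongoUri = process.env.MONGO_URI || localUri;
```

You would then pass `mongoUri` to whatever client your app already uses, such as `mongoose.connect()` or the MongoDB driver's `MongoClient`.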
With cloud native, it’s as easy to set up some infrastructure and host your application as it is to build a service in Node.js. The two complement each other greatly and Node.js runs on many Azure services.
The Power of Automation
One of cloud native’s cool features is that it is easy to automate completely. Your cloud resources can be created and configured by an Azure Resource Manager template (or ARM template), which is a JSON file describing your desired service or services. Another option is PowerShell or batch scripts or, when you run on Linux, PowerShell Core or shell scripts.
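As a sketch of what such a template looks like, here is a heavily trimmed ARM template declaring a single App Service. The resource name, plan name, and API version are illustrative, not a complete working deployment:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Web/sites",
      "apiVersion": "2021-02-01",
      "name": "my-node-app",
      "location": "[resourceGroup().location]",
      "properties": {
        "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', 'my-plan')]"
      }
    }
  ]
}
```

Because the file is plain JSON, it can live in source control next to your code and be deployed repeatably to any environment.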
By automating your infrastructure releases, you can easily spin up new environments and put releases back in the hands of software teams. This decreases the chance of deployment errors while speeding up deployments. Automation allows for the adoption of DevOps: development and operations teams coming together for the shared goal of releasing stable software.
DevOps and cloud native go well together. DevOps teams strive to shorten the development cycle and practice continuous release and deployment. That is, as soon as you commit code to your code repository, usually Git, it’s automatically built and tested (also called continuous integration, or CI) then deployed to various environments (continuous deployment, or CD). Together, this automated building, testing, and deployment is CI/CD.
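For example, a minimal Azure Pipelines definition (`azure-pipelines.yml`) that builds and tests a Node.js project on every commit might look like the following sketch; the Node version and npm scripts are assumptions about your project:

```yaml
# Illustrative CI pipeline: runs on every commit to main.
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: NodeTool@0
    inputs:
      versionSpec: '18.x'
  - script: npm ci
    displayName: Install dependencies
  - script: npm test
    displayName: Run tests
```

A release stage deploying the build to an environment would follow the same pattern, turning the CI pipeline into full CI/CD.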
In ye olden days, operations would take a software package and install it on a server, often without even meeting a developer. In DevOps teams, operations works together with developers to ensure environments check all the boxes needed for the software to run.
Azure DevOps services help DevOps teams implement an Agile or Scrum methodology and CI/CD.
Modularity and Scalability with Microservices
Many applications, especially older applications, were once written as one huge package, also called a monolith. If something broke in that package, everything went down and the entire application was unavailable.
This didn’t always make sense. For example, if the sales module breaks, there’s no reason why helpdesk employees shouldn’t be able to consult client information. Scaling is a challenge in monoliths, as you have to scale everything or nothing.
With the rise of cloud native and DevOps, it became easier to deploy applications, to the point where everything runs fully automatically. This offers new possibilities like microservices. With microservices, instead of writing and deploying one huge application, you write and deploy several smaller, loosely-coupled applications that work together. Then, if one goes down, other parts of the system still function normally.
Microservices enable scaling, as it’s easier to scale parts of a system. As an added bonus, it’s also often easier to maintain a few smaller applications than it is to maintain one giant application, since the latter is probably much more complex. Plus, if you so require, you can write different parts of the program in different languages.
Azure offers various services for running microservices, and all of them fully support Node.js.
Containerization with Docker and Kubernetes
While microservices certainly solve some challenges of traditional software development, they also introduce some new challenges. It’s not easy to run and deploy tens or even hundreds of services, especially when these services have various dependencies or are written in different languages. Containerization aims to solve that problem, and by far the most popular container solution is Docker.
A container is best described as a lightweight VM, though it is not technically a VM. A container runs on a host and, unlike a VM, makes use of the host’s OS. However, a container acts sort of like a sandbox, in that it’s almost completely shut off from the rest of the system. A container can hold a specific version of a specific runtime for a specific language that’s needed to run your application. Another container running on the same host may have that same language, but at another version. A container can pack together all the dependencies you need to run your application. This enables you to run your software in a completely customized container, and simply ship the container for anyone to run.
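As a sketch, a Dockerfile for a Node.js service might look like this; the base image tag, port, and entry file are assumptions about your project:

```dockerfile
# Illustrative Dockerfile for a Node.js service.
FROM node:18-alpine

WORKDIR /app

# Copy and install dependencies first so Docker caches this layer between builds.
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application source.
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```

Anyone with Docker can now build and run the image, with the exact Node version and dependencies you chose, regardless of what is installed on their host.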
The next challenge is dependencies between microservices and ensuring they stay up and running. This is why we have container orchestrators. The most popular by far is Kubernetes, which works perfectly with Docker. With an orchestrator, you can describe which services should run where, how many instances you need, when they need to scale, and more — basically, how your environment should behave.
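As a sketch of such a description, here is a minimal Kubernetes Deployment manifest asking for three replicas of a containerized Node.js service; the names and image reference are placeholders:

```yaml
# Illustrative manifest: run three replicas and keep them running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: node-service
  template:
    metadata:
      labels:
        app: node-service
    spec:
      containers:
        - name: node-service
          image: myregistry.azurecr.io/node-service:1.0.0
          ports:
            - containerPort: 3000
```

The manifest is declarative: you state the desired number of instances, and the orchestrator starts, restarts, and replaces containers as needed to keep reality matching it.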
Azure has cloud native solutions for both Docker and Kubernetes.
Serverless with Azure Functions

Next, there is serverless, which takes the notion of cloud native a step further. The name serverless is a bit of a misnomer, as your applications still run on servers. However, the big difference between serverless and regular services is that, with serverless, when your app is not running, you are not occupying a server and you’re also not paying for it.
The most obvious example of a serverless resource is an Azure Function. This is a bit of code, an actual function, triggered by various means, for example, by an HTTP request. Whenever the Function is triggered, Azure places your code on a server, starts it, runs the code, and shuts the server down after a few seconds of idleness.
The triggers include Service Bus, Cosmos DB, Blob storage, timer, and the aforementioned HTTP trigger. Since your code isn’t constantly running or even installed on a server, startup time takes a performance hit. However, since it’s so easy to run an instance of your code, it’s also easy to scale to extra instances. With Functions, you can run hundreds of instances of your code simultaneously!
Unfortunately, Functions have limits, like the duration they can run, their relatively long startup time, and the fact that your environment is gone after a few minutes, making it challenging to keep any state in your application. However, for some workloads, Functions are a great fit.
Cloud native embraces the quickness and ease that made Node.js so popular in the first place while providing tools for modern software development. With cloud native, you can keep the speed and flexibility of Node.js while turning away from the ad-hoc development and deployment style to build better and more reliable software.
In the next post in this Cloud Native series, we’ll deploy our own Azure Function using Azure DevOps and TypeScript.