Azure Kubernetes Service (AKS) is an excellent service for hosting your microservices in the cloud. In the previous series, you learned how to build a simple containerized Java application and deploy it to a Kubernetes cluster on AKS. If you are not familiar with how to do this, you may want to start with that series before continuing. This article uses Quarkus to create fast, efficient, containerized Java microservices. More specifically, we’ll create an application for managing shipping routes for an imaginary logistics company.
Applications deployed using Kubernetes have their components organized into containers. Each component has minimal dependencies and contains everything needed to run the code. You can also organize containers into a unit that shares networking and storage resources. This collection of containers is known as a pod.
Containers within a pod share the same IP address and a set of storage volumes. They can communicate with each other through localhost network connections or shared files. Think of a pod as a logical host. Generally, you only want to place containers in the same pod if they must be tightly integrated. The containers that run within a pod start and stop together. Once deployed, you can replicate, run, and destroy the pod as needed. When you destroy a pod, you lose its state entirely. If you have any data that you want to persist, you should save it in a resource outside of the pod (such as a persistent volume).
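As a minimal sketch of this idea (the pod, container, and volume names here are illustrative, not part of this article's application), two containers can share files through an emptyDir volume that lives and dies with the pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-storage-pod
spec:
  volumes:
    - name: shared-data
      emptyDir: {}            # exists as long as the pod does; shared by both containers
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader             # can read /data/msg written by the other container
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```

Because the volume is declared at the pod level, both containers see the same files; when the pod is destroyed, the emptyDir contents are lost, which is why durable data belongs in a resource outside the pod.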
Strategies for organizing containers to interact cooperatively have led to several design patterns. Let’s look at three of these patterns.
One commonly used pattern is the sidecar pattern. In the sidecar pattern, one container holds most of an application’s logic, while another holds additional functionality. You might use this pattern when it is not easy to modify the original application. Suppose there is a legacy application that only supports communication over plain HTTP. In that case, you can use the sidecar pattern to add a container that handles SSL, so the legacy application’s traffic is encrypted without changing the application itself. Note that the legacy application does not necessarily know about the other container within its pod.
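A minimal sketch of such a pod might look like the following; the image names are placeholders, and the sidecar is assumed to accept SSL connections on port 443 and forward plain HTTP to the legacy container over localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-with-tls-sidecar
spec:
  containers:
    - name: legacy-app          # speaks plain HTTP on localhost:8080
      image: <legacy-app-image>
    - name: tls-sidecar         # terminates SSL and forwards traffic to localhost:8080
      image: <tls-proxy-image>  # e.g., a reverse proxy configured for SSL
      ports:
        - containerPort: 443
```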
Another typical pattern is the adapter pattern. The adapter pattern gives a service a new interface that is compatible with the client’s needs. For example, say you have a service that uses SOAP-based calls, but your application requires REST-based calls. With the adapter pattern, you could write a REST interface that makes the necessary calls to the SOAP service and translates the data. This pattern is useful when adding third-party components to a solution. With adapters, you can make the API for third-party components conform to the conventions and design of other services that an application uses. This makes the services that an application uses more unified.
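As a hedged illustration of the idea (the service and class names below are hypothetical, and the XML handling is deliberately naive), an adapter might wrap a SOAP-style call and expose the result in a REST-friendly shape:

```java
// Hypothetical sketch of the adapter pattern. None of these names come
// from the article's application; they only illustrate the translation.

// The legacy SOAP-style interface that we cannot change.
interface SoapShippingService {
    String getRouteDistanceXml(String routeId); // returns an XML fragment
}

// The adapter presents the data in the shape the REST layer expects.
class ShippingServiceAdapter {
    private final SoapShippingService soapService;

    ShippingServiceAdapter(SoapShippingService soapService) {
        this.soapService = soapService;
    }

    // Translate the SOAP-style XML response into a plain value that a
    // REST endpoint could return as JSON.
    double getRouteDistanceMiles(String routeId) {
        String xml = soapService.getRouteDistanceXml(routeId);
        // Naive extraction for illustration only; a real adapter would
        // use a proper XML parser.
        int start = xml.indexOf("<miles>") + "<miles>".length();
        int end = xml.indexOf("</miles>");
        return Double.parseDouble(xml.substring(start, end));
    }
}
```

The rest of the application depends only on the adapter, so the SOAP details stay contained in one place.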
The ambassador pattern is similar to the adapter pattern. While the adapter pattern allows an application to provide some interface to the rest of the world, the ambassador pattern allows an application to interact with other resources using a different interface. The ambassador pattern performs the translation to the interface that the application needs.
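Sketched as a pod manifest (all names and ports here are placeholders), the application connects to a port on localhost as if the resource were local, and the ambassador container forwards that traffic to the real external resource:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-ambassador
spec:
  containers:
    - name: app                 # connects to localhost:9000 as if the
      image: <app-image>        # external resource were local
    - name: ambassador          # listens on 9000 and forwards the traffic
      image: <proxy-image>      # to the external resource, translating the
      ports:                    # interface as needed
        - containerPort: 9000
```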
I created the starting point for this project using Maven, with a command along these lines (the group and artifact IDs below are placeholders for your own values).
mvn io.quarkus:quarkus-maven-plugin:1.7.0.Final:create \
    -DprojectGroupId=<your-group-id> \
    -DprojectArtifactId=<your-artifact-id>
This command generates a project that contains a Dockerfile, which we can use to run the project in a container. Within our application, more than one container runs within a pod, which is analogous to having more than one process running on the same machine. We want these containers loaded into the same pod. Containers in the same pod share the same resources, including network resources, and can communicate with each other over localhost. To ensure that both containers end up in the same pod, declare both under the same containers list in the YAML file used for configuring the application. The following is a partial YAML file for a pod with two containers.
spec:
  containers:
    - name: distancecalculator-container
      image: <calculator-image>
    - name: distanceconverter-container
      image: <converter-image>
If you pass this file to the kubectl utility, it loads both containers into the same pod. We will see how to use that utility in a moment.
Setting Up the Project
Now that we have covered some basics, let’s start building our shipping management application. Internally, the application uses miles as the unit of distance for the API calls that it makes. In much of the world, however, kilometers are the standard unit of distance.
You can use the sidecar pattern to translate the calls from miles to kilometers. If the source code of the original application is not available, using the sidecar pattern here would be one way to modify the application’s behavior.
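As a minimal sketch of the logic such a sidecar might apply (the class and method names are illustrative, not taken from the article's source code):

```java
// Illustrative conversion logic for the sidecar. In the Quarkus
// application this would sit behind a REST endpoint; here it is shown
// as a plain class to keep the sketch self-contained.
class DistanceConverter {

    static final double KM_PER_MILE = 1.609344; // exact by definition

    // Convert a distance reported in miles to kilometers.
    static double milesToKilometers(double miles) {
        return miles * KM_PER_MILE;
    }
}
```

The main container keeps reporting miles; the sidecar intercepts the response and applies this conversion before it reaches the caller.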
Creating a Kubernetes Cluster
You may already have a Kubernetes cluster set up from the previous series of articles. If you do, then jump to the next section, "Connecting the Azure Shell to Your Cluster." If you do not, in the Azure portal, in the search text box, type "Kubernetes Service" and select it from the results.
Click +Add to create a new Kubernetes cluster. Select a resource group and give the cluster a name. Then, select Review + Create. On the review screen, select Create. After a few moments, the cluster resources are available.
Connecting the Azure Shell to Your Cluster
With the Azure CLI, you can deploy directly from your machine to your Kubernetes cluster within Azure. The CLI is available for Windows, macOS, and Linux. Download and install the version for your computer’s operating system from the page Install the Azure CLI. After you install it, open a command terminal and connect the CLI to your account with the command "az login." The CLI then prompts you to sign in to your account.
After connecting the shell to your account, install kubectl in the shell with the following command.
az aks install-cli
Open the Cloud Shell and get the credentials for your cluster. Use the following command to do this.
az aks get-credentials --resource-group <resource-group-name> --name <cluster-name>
Verify the command was successful by listing your Kubernetes nodes.
kubectl get nodes
Clone the application source code to the Cloud Shell so that you can deploy it from there.
git clone https:
To deploy the application, use kubectl, the Kubernetes command-line utility, to create the resources described in the YAML file, as follows.
kubectl create -f logistical.yaml
Now the application reports shipping distances in kilometers. One microservice within the pod performs its calculations in miles, while the other transforms the output to kilometers.
If you would like to set up an automated pipeline for deployment, log into your Azure DevOps page. Create a new project by clicking New Project. Enter a project name and description. Then, click Create.
After you create your project, configure your pipelines. From the left panel, select Pipelines.
Then select Deploy to Azure Kubernetes Service.
A dialog box appears asking you to select your cluster, container registry, the port the application uses (port 80), and a friendly name for the cluster. A YAML file containing your configuration appears. Select the option Save and create. After a few moments, the service deployment begins.
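As a rough sketch of the general shape of the generated pipeline YAML (the repository and service-connection names below are placeholders, and the file Azure DevOps generates for you may differ in its exact tasks and structure):

```yaml
# Illustrative shape only; your generated azure-pipelines.yml may differ.
trigger:
  - main

stages:
  - stage: BuildAndDeploy
    jobs:
      - job: Build
        pool:
          vmImage: ubuntu-latest
        steps:
          - task: Docker@2               # build the image and push it to your registry
            inputs:
              command: buildAndPush
              repository: <image-repository>
              containerRegistry: <registry-service-connection>
              tags: $(Build.BuildId)
          - task: KubernetesManifest@0   # roll the new image out to the cluster
            inputs:
              action: deploy
              manifests: logistical.yaml
```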
With this automated pipeline, new builds are deployed automatically whenever there’s a change to the main branch of code. This reduces the effort needed to keep a service up to date.
If you would like to learn more about other patterns for containerized applications, look at the free e-book collection for Kubernetes and Azure. The book "Designing Distributed Systems" focuses on design patterns. The e-book "Hands-on Kubernetes on Azure" contains additional information about managing and deploying container applications on Azure.
In the next article of this series, learn about monitoring and scaling containerized apps.