
Deploy persistent storage on Azure with Kubernetes

12 Dec 2017 · CPOL
A step-by-step guide to deploying persistent storage volumes in Kubernetes on Azure.

Introduction

In the previous article in this DevOps series, we looked at deploying a production-ready Kubernetes cluster on Azure using the 'KubeSpray' project. Having set that up, the next step is to make some data storage resources available to the cluster. We do this by creating an Azure file storage resource, and then linking it to the Kubernetes cluster using 'Secrets'. This article is a walk-through of the process.

Background

I can't really think of any reasonable project I've been involved in that didn't use some data storage facility. When you head into the world of cloud and containers, having a handy C:\ or D:\ drive at your fingertips becomes elusive. You start having to think of things like persistent volumes, cloud-based blob storage and the like. Using Kubernetes, we can set up a data volume that appears to our container resources just like another large remote drive. This is a key piece of the puzzle when pulling different technologies together in an orchestrated manner, as we are doing in this series of articles.

Setting up a Kubernetes data volume on Azure

(1) In the Azure portal, click 'New' and select a general storage account resource.
 


(2) Give the resource a unique name (lowercase), and use the same resource group as your main Kubernetes cluster.
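As a side note, if you prefer scripting your setup, the same storage account can be created with the Azure CLI. A minimal sketch, where the account name 'kubedatastore', resource group 'kube-rg' and location are placeholder values of my own - substitute the ones you are using:

# create a general-purpose storage account in the cluster's resource group
az storage account create \
  --name kubedatastore \
  --resource-group kube-rg \
  --location westeurope \
  --sku Standard_LRS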


(3) The new resource will become available in the resource group - select it for editing.


(4) On the details page, select the 'Files' share section.


(5) On the file share page, click 'Add new', give the share a unique name, and specify the size of file storage you require (in GB).


After saving, you should see the newly created share available for use.
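The share can also be created from the command line. A sketch, assuming the placeholder account name from earlier and the share name 'kubedatashare' used later in this article; note that the CLI will need the account key (which we retrieve in the next step) to authenticate:

# create a 10 GB file share in the storage account
az storage share create \
  --name kubedatashare \
  --account-name kubedatastore \
  --account-key "<storage account key>" \
  --quota 10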


(6) Next we need to get some security keys from the storage account.
Go back to the main resource group list where the account is located and select it.


(7) Once in the storage account, navigate to the 'Access keys' section and copy out the first key and the storage account name.
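For the command-line equivalent, the Azure CLI can list the keys too. A sketch, again assuming the placeholder names from earlier:

# list the access keys for the storage account
az storage account keys list \
  --resource-group kube-rg \
  --account-name kubedatastore \
  --output table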


(8) We can't use these values directly in Kubernetes; they need to be converted using Base64. We can do this by taking each of the items we copied and encoding it. In this example we use the online resource www.base64encode.org
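If you have a Linux shell handy (the Kubernetes master will do), you can encode the values locally instead of using an online tool. Note the -n flag on echo: without it a trailing newline gets encoded into the value, which will quietly break authentication later. The account name shown is the placeholder from earlier; use your own:

# encode the account name and key for use in the Kubernetes secret
echo -n 'kubedatastore' | base64
echo -n '<your storage account key>' | base64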
 


(9) We now need to take this information and give it to Kubernetes in a YAML file.

SSH into the main Kubernetes master machine, and issue the following commands:

sudo su -
apt-get update
apt-get install -y cifs-utils


Now use nano to create a new YAML file:

nano azure-secret.yaml

Into this file, add the following contents, *replacing* the values for accountname and accountkey with the Base64 encoded values of each (be careful with indenting/spacing in YAML files):

 

apiVersion: v1
kind: Secret
metadata:
  name: azure-secret
type: Opaque
data:
  azurestorageaccountname: <your encoded account name>
  azurestorageaccountkey: <your encoded key>
 


After making the changes, use CTRL + O <enter> to write the file, then CTRL + X to exit
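As an aside, kubectl can also build this same secret directly from the raw (unencoded) values, handling the Base64 step for you. A sketch with placeholder values, should you prefer that route:

kubectl create secret generic azure-secret \
  --from-literal=azurestorageaccountname='kubedatastore' \
  --from-literal=azurestorageaccountkey='<your raw storage key>'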

 

This has set up the secret file; now we need to set up the main instruction file. In this case it has been called 'azure.yaml', but it can be given any name.

The contents of this file are as follows:

 

apiVersion: v1
kind: Pod
metadata:
  name: shareddatastore
spec:
  containers:
    - image: kubernetes/pause
      name: azure
      volumeMounts:
        - name: azure
          mountPath: /mnt/azure
  volumes:
    - name: azure
      azureFile:
        secretName: azure-secret
        shareName: kubedatashare
        readOnly: false

 

The important parts that you need to change are:
  • volumeMounts - given a name of 'azure' and an internal virtual mount path. The name should match the name of the volume (defined in the 'volumes' section below).
  • volumes → name - this has been set to a default name of 'azure'. The next entry, 'azureFile', defines the type of storage volume to Kubernetes. The 'secretName' refers to the data in the 'azure-secret.yaml' file we created earlier, and the shareName is the name we gave to the file share we created in step (5). Setting readOnly to false makes the volume available as read/write. You can inspect the full schema for these fields with kubectl, as shown below.
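If you want to check the full set of fields these sections support (or verify your spelling of any of them), kubectl can print the schema documentation for you:

kubectl explain pod.spec.volumes.azureFile
kubectl explain pod.spec.containers.volumeMounts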
         

(10) We now need to pass the secret key to Kubernetes and then set the volume running.

At the command line, send in the following command:

kubectl create -f azure-secret.yaml
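If you want to reassure yourself that the secret registered correctly before moving on, you can list and inspect it; note that the stored values themselves are not printed:

kubectl get secrets
kubectl describe secret azure-secret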

Once that has completed, the secret has been set, so we can send in the command to set up the volume itself.

kubectl create -f azure.yaml

Once that completes, we can test that everything has been set up correctly by examining the available pods:

kubectl get po

This will list the running pods; the 'shareddatastore' pod we just created should show a status of 'Running'.

Finally we can examine how the volume has been implemented to confirm it is as specified

kubectl describe po shareddatastore

Looking at the output of 'describe', you can see important information such as the node (VM) the container is hosted on, the fact that it is a 'secret-based' volume, and that it is connected to an Azure file service.


(11) The volume can now be directly accessed by any container; we will cover how to do this in detail in a later article.
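As a taster, here is a minimal sketch of what such a pod might look like - it mounts the same Azure file share into a stock nginx container, with the pod and container names being placeholders of my own:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-azure-share
spec:
  containers:
    - image: nginx
      name: web
      volumeMounts:
        - name: azure
          mountPath: /usr/share/nginx/html
  volumes:
    - name: azure
      azureFile:
        secretName: azure-secret
        shareName: kubedatashare
        readOnly: false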

If you need them, I have put a shortened version of the instructions in a zip attached to the top of this article. Finally, as usual, if you found the article useful, please give it a vote!

 

Links of Interest

https://kvaes.wordpress.com/2017/05/19/azure-container-service-using-the-azure-file-storage-as-a-persistent-kubernetes-volume/

https://docs.microsoft.com/en-us/azure/container-instances/container-instances-mounting-azure-files-volume

https://kubernetes.io/docs/concepts/storage/volumes/

History

7/Dec/2017 - Version 1

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

