Getting started with the Kubernetes CSI driver
The purpose of this post is to provide an introduction to “Kubernetes CSI” with examples you can use to build your own projects. Before we start, if you need to build your own Kubernetes cluster to follow along with this write-up, you can follow these instructions. The files for this blog post, including the “simplog” Python code, are in the “CSI” folder of this GitHub repository:
https://github.com/cermegno/kubernetes-csi
Prior to CSI, Kubernetes provided in-tree (i.e. as part of the core code) plugins to support volumes. That posed a problem: storage vendors had to align with the Kubernetes release process to fix a bug or to release new features, among other issues.
CSI (Container Storage Interface) is a standard for exposing arbitrary block and file storage systems to containerized workloads on Container Orchestration Systems (COs) like Kubernetes. Using CSI, third-party storage providers such as DellEMC can write and deploy plugins exposing new storage systems in Kubernetes without ever having to touch the core Kubernetes code. This gives Kubernetes users more options for storage and makes the system more secure and reliable.
CSI introduces three important concepts: PV (Persistent Volume), PVC (Persistent Volume Claim) and SC (Storage Class). These are used to provision storage dynamically to pods in Kubernetes. The process is depicted in the diagram below and consists of 3 steps:
- An admin installs the driver and defines a Storage Class, which essentially maps to a certain quality of service the underlying storage provides. It is a similar concept to VMware’s VASA
- A developer creates a Persistent Volume Claim. This is the “right to use” a persistent volume. At this point either an existing volume is reserved for that PVC or a new one is created on the fly
- The developer deploys an app and mounts the Persistent Volume to the pods by referencing the PVC
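As an illustration of the first step, a Storage Class manifest might look like the sketch below. The provisioner name and parameters are assumptions here; they depend on the driver version and your array setup.

```yaml
# Hypothetical Storage Class for an XtremIO-backed cluster.
# The provisioner string is an assumption; check your driver docs.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-xtremio-sc
provisioner: csi-xtremio.dellemc.com   # assumed driver name
reclaimPolicy: Delete                  # delete the array volume when the PV goes away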
For the purpose of this lab our Kubernetes cluster is attached to an XtremIO array over iSCSI. CSI driver installations vary from vendor to vendor and between products; I personally find the XtremIO CSI implementation to be one of the easiest. For installation instructions you can follow:
The CSI installation creates a number of pods in the “kube-system” namespace. You can describe them if you want. Other CSI implementations, like VxFlexOS, create their own namespace and place their infrastructure pods there.
kubectl get pod -n kube-system
The controller and node CSI pods work together to ensure that, when a PVC is created, a volume is created on the storage array and presented to Kubernetes as a PV.
Now let’s examine what storage classes have been created and verify that no PV or PVC exists yet in our namespace
kubectl get sc
kubectl get pv
kubectl get pvc
You can use “kubectl describe” to see more details about a given object, such as a storage class. Notice parameters such as “ReclaimPolicy”
kubectl describe sc csi-xtremio-sc
For this exercise I will use a small application I wrote in Python: a simple logging application called “simplog”. I find it helps in understanding the concepts because the log is a plain text file, which is more accessible than other examples I have seen out there. With this app there is nothing preventing you from opening a terminal session to your pod and looking at the contents of the file. A few details about the app:
- The app writes an entry to a log every time it gets a hit
- A folder called “data” holds the log and will mount to an XtremIO volume if it exists. “data” is a subfolder of the folder that contains the app
- If the folder is not present, the app simply returns “log file not found”. If it does exist, the counter increases with every hit
- The app serves a second “/dump” route that reads back all log entries
- The app is available in containerized form on Docker Hub, so you can test it with Docker as well. Find it at: https://hub.docker.com/r/cermegno/simplog
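For reference, here is a minimal sketch of what an app like this could look like, assuming Flask. The route names and file path follow the bullets above; simplog’s actual code is in the GitHub repository.

```python
# Minimal sketch of a simplog-like app (Flask assumed).
# The real simplog code lives in the blog's GitHub repository.
import os
from flask import Flask

app = Flask(__name__)
LOG_PATH = "data/log.txt"  # "data" is a subfolder next to the app


@app.route("/")
def hit():
    # If the "data" folder (the mounted volume) is absent, say so
    if not os.path.isdir(os.path.dirname(LOG_PATH)):
        return "log file not found"
    with open(LOG_PATH, "a") as log:
        log.write("hit\n")              # one log entry per hit
    with open(LOG_PATH) as log:
        count = sum(1 for _ in log)     # counter grows with every hit
    return f"Counter: {count}"


@app.route("/dump")
def dump():
    # Read back all log entries
    if not os.path.isfile(LOG_PATH):
        return "log file not found"
    with open(LOG_PATH) as log:
        return log.read()
```

In the container the app would be started with app.run() or flask run; 5000 (Flask’s default port) is assumed wherever a port is needed below.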
The first step is to create a PVC. The following YAML file “simplog-pvc.yml” will create a 5GB volume in XtremIO and make it available to Kubernetes as a Persistent Volume.
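The actual file is in the repository; this is a sketch of what such a PVC might look like. The storage class name is an assumption and must match the one created at install time.

```yaml
# Hypothetical "simplog-pvc.yml": claim 5GB from the XtremIO storage class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: simplog-pvc
spec:
  accessModes:
    - ReadWriteOnce          # mounted read-write by a single node
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-xtremio-sc   # assumed storage class name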
Let's "apply" this YAML file and see what it does
kubectl apply -f simplog-pvc.yml
kubectl get pvc
kubectl get pv
At the beginning you might see that the status is pending, as it will take a few seconds to complete. If all goes well, you should see that a PVC and a PV have been created.
If we look at the XtremIO GUI, you can see that the volume has been created; the display name is the same as the PV name in Kubernetes. You can also see that it hasn’t been mapped yet.
The next thing we need to do is to deploy our application by applying the following YAML file
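A sketch of what “simplog-deploy.yml” might contain is shown below; the image tag and container port are assumptions. The mount path matches the “/app/data” folder the app reads from.

```yaml
# Hypothetical "simplog-deploy.yml": run simplog with the PVC mounted at /app/data
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simplog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simplog
  template:
    metadata:
      labels:
        app: simplog
    spec:
      containers:
      - name: simplog
        image: cermegno/simplog       # the Docker Hub image mentioned above
        ports:
        - containerPort: 5000         # assumed app port (Flask default)
        volumeMounts:
        - name: data
          mountPath: /app/data        # the "data" subfolder the app expects
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: simplog-pvc      # the PVC created earlier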
Let’s apply and watch the pods coming up
kubectl apply -f simplog-deploy.yml
kubectl get pod
As you can see, the pod has been created.
Check the XtremIO GUI and verify that the volume has now been mapped.
The final step is to expose the app by applying this service YAML file.
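A sketch of what “simplog-service.yml” might look like; a NodePort service is assumed since the app is reached on a node IP and port below.

```yaml
# Hypothetical "simplog-service.yml": expose simplog on a port of every node
apiVersion: v1
kind: Service
metadata:
  name: simplog
spec:
  type: NodePort         # Kubernetes allocates a node port for external access
  selector:
    app: simplog         # routes traffic to the simplog pods
  ports:
  - port: 5000           # assumed app port
    targetPort: 5000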
Let's apply it
kubectl apply -f simplog-service.yml
Now we can point a browser at the app:
http://<kubernetes_IP_addr>:<NodePort>
We can see what’s happening by attaching a terminal session to the pod and exploring the file system
kubectl exec -it <your_pod_name> -- /bin/bash
cat /app/data/log.txt
Now let’s see how the volume’s lifecycle differs from the pod’s. We could try deleting the pod directly to force Kubernetes to create a new one to satisfy the “desired state” expressed in the deployment. Before deleting it, make a note of the latest value of the counter that the web app displays.
kubectl get pod
kubectl delete pod <pod_name>
kubectl get pod
Verify that the volume is still there with “kubectl” and in XtremIO
kubectl get pod
kubectl get pv
Finally, apply the deployment again and validate that the counter picks up from where it left off.
kubectl apply -f simplog-deploy.yml