
How to create HostPath persistent volume in Kubernetes


This article will guide you through creating a HostPath Persistent Volume in Kubernetes.


You may already know that data in a Pod exists only for the lifetime of the Pod. If the Pod dies, all the data belonging to it goes away as well. So if you want to persist your data beyond the life cycle of the Pod, you need something called a Persistent Volume in Kubernetes.

So let's study how to create a HostPath Persistent Volume, which is very easy to experiment with, and learn the fundamentals of Persistent Volumes along the way.

The following Persistent Volume types are available in Kubernetes, backed by different vendors:

  • GCEPersistentDisk
  • AWSElasticBlockStore
  • AzureFile
  • AzureDisk
  • CSI
  • FC (Fibre Channel)
  • FlexVolume
  • Flocker
  • NFS
  • iSCSI
  • RBD (Ceph Block Device)
  • CephFS
  • Cinder (OpenStack block storage)
  • Glusterfs
  • VsphereVolume
  • Quobyte Volumes
  • HostPath (Single node testing only — local storage is not supported in any way and WILL NOT WORK in a multi-node cluster)
  • Portworx Volumes
  • ScaleIO Volumes
  • StorageOS

As you can see, HostPath should be used only for testing purposes, and it does not support multi-node clusters. In case you want to explore more about Persistent Volumes, you may follow this link.

The basic process for Persistent Volumes is as follows:

  1. The K8s admin creates the Persistent Volume in the cluster.
  2. A user claims it using a Persistent Volume Claim; once claimed, its status becomes “Bound”.
  3. A Pod then uses that volume to store data, which persists across the life cycle of the Pod.

Enough of the theory part, let's jump into the technical steps:

  • Create the persistent volume

In this step we use the following manifest YAML file:

# cat hostpath-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-hostpath
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/kube"

As shown in the definition file above, the size is 1Gi and the path is “/tmp/kube”. Let's create the PV as below:

# kubectl create -f hostpath-pv.yaml
persistentvolume/pv-hostpath created
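As a side note, the hostPath source also accepts an optional `type` field. A variant of the same PV, assuming you want Kubernetes to create the directory on the node if it doesn't already exist, might look like this (a sketch; it is not required for this demo):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-hostpath
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/kube"
    type: DirectoryOrCreate   # create /tmp/kube on the node if it is missing
```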

Recheck the PV and persistent volume claim using below command:

# kubectl get pv,pvc -o wide
NAME                           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE   VOLUMEMODE
persistentvolume/pv-hostpath   1Gi        RWO            Retain           Available           manual                  6s    Filesystem

As you can see, the PV is created with status “Available”. Since we haven't specified a reclaim policy, the default “Retain” is applied, meaning that even if the PVC (Persistent Volume Claim) gets deleted, the PV and its data won't be deleted automatically. We will test that out in a bit.
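The reclaim policy can also be set explicitly in the PV spec via the `persistentVolumeReclaimPolicy` field. A hedged sketch of the relevant fragment (note that `Delete` is generally meaningful only for dynamically provisioned volumes, so for a hostPath PV you would normally keep `Retain`):

```yaml
spec:
  storageClassName: manual
  persistentVolumeReclaimPolicy: Retain   # the default; Recycle and Delete are the other options
  capacity:
    storage: 1Gi
```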

  • Create the Persistent volume claim

In order to use the PV we need to create a Persistent Volume Claim (PVC). Here is the manifest YAML file for it.

# cat pvc-hostpath.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-hostpath
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

Kindly note in the above definition that the claim is only for 100Mi (less than or equal to the size of the PV) and the access mode is “ReadWriteOnce”, the same as that of the PV. Hence we are able to create the PVC as below:

# kubectl create -f pvc-hostpath.yaml
persistentvolumeclaim/pvc-hostpath created

Check the status of pv and pvc.

# kubectl get pv,pvc -o wide
NAME                           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   REASON   AGE   VOLUMEMODE
persistentvolume/pv-hostpath   1Gi        RWO            Retain           Bound    default/pvc-hostpath   manual                  20s   Filesystem

NAME                                 STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
persistentvolumeclaim/pvc-hostpath   Bound    pv-hostpath   1Gi        RWO            manual         4s    Filesystem

You will see that the status of the PV has changed from “Available” to “Bound”.
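The PVC above was matched to the PV by storage class, capacity, and access mode. If several PVs are available, a claim can also be pinned to particular volumes by label. A sketch using the `type: local` label we put on the PV (assuming you create the PVC fresh, since a claim's selector cannot be changed after creation):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-hostpath
spec:
  storageClassName: manual
  selector:
    matchLabels:
      type: local        # only bind to PVs carrying this label
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
```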

  • Create the Pod to utilize this PV as a mount point inside it.
# cat busybox-pv-hostpath.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  volumes:
  - name: host-volume
    persistentVolumeClaim:
      claimName: pvc-hostpath
  containers:
  - image: busybox
    name: busybox
    command: ["/bin/sh"]
    args: ["-c", "sleep 600"]
    volumeMounts:
    - name: host-volume
      mountPath: /tmp/mydata

As described in the Pod definition file, it will create the mount point /tmp/mydata inside the Pod. Let's create the Pod using the above definition file.

# kubectl create -f busybox-pv-hostpath.yaml
pod/busybox created

Check the status and inspect the Pod:

# kubectl get all -o wide
NAME          READY   STATUS    RESTARTS   AGE    IP             NODE        NOMINATED NODE   READINESS GATES
pod/busybox   1/1     Running   0          2m4s   10.244.1.114   kworker01   <none>           <none>

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   35d   <none>

# kubectl describe pod busybox
Name:         busybox
Namespace:    default
Priority:     0
Node:         kworker01/10.253.121.32
Start Time:   Mon, 06 Jul 2020 02:43:16 -0400
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.1.114
IPs:
  IP:  10.244.1.114
Containers:
  busybox:
    Container ID:  docker://6d1cfa9b6440efe2770244d1edc6a78c0dd7649bbf905121e70a013ad3b1dd1e
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:9ddee63a712cea977267342e8750ecbc60d3aab25f04ceacfa795e6fce341793
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
    Args:
      -c
      sleep 600
    State:          Running
      Started:      Mon, 06 Jul 2020 02:43:25 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mydata from host-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-49xz2 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  host-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-hostpath
    ReadOnly:   false
  default-token-49xz2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-49xz2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From                Message
  ----    ------     ----       ----                -------
  Normal  Scheduled  <unknown>  default-scheduler   Successfully assigned default/busybox to kworker01
  Normal  Pulling    64s        kubelet, kworker01  Pulling image "busybox"
  Normal  Pulled     58s        kubelet, kworker01  Successfully pulled image "busybox"
  Normal  Created    58s        kubelet, kworker01  Created container busybox
  Normal  Started    57s        kubelet, kworker01  Started container busybox

In the describe output you can see that the /tmp/mydata mount was created using host-volume from the claim pvc-hostpath. Also, the Pod was scheduled and created on node “kworker01”.
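Incidentally, if a Pod only needs to read the data, the same claim can be mounted read-only. A sketch of the relevant part of the container spec (the rest of the Pod definition stays unchanged):

```yaml
    volumeMounts:
    - name: host-volume
      mountPath: /tmp/mydata
      readOnly: true     # the container sees the volume but cannot write to it
```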

Let's log in to the Pod and create a sample file, in order to demonstrate that the data survives even if the Pod dies.

# kubectl exec -it busybox -- sh
/ # hostname
busybox
/ # cd /tmp/
/tmp # ls
mydata
/tmp # cd mydata/
/tmp/mydata # echo "hello from K8S" > Hello.txt
/tmp/mydata # ls -ltr
total 4
-rw-r--r--    1 root     root            15 Jul  6 06:46 Hello.txt
/tmp/mydata #

In the above demo we created the file “Hello.txt” inside /tmp/mydata. Now let's delete the Pod.

# kubectl delete pod busybox
pod "busybox" deleted
# kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   35d

The Pod got deleted successfully. Let's log in to the node “kworker01”, where the Pod was scheduled earlier, to check whether the data still persists after the deletion of the Pod.

sh-4.2# hostname
kworker01
sh-4.2# cd /tmp
sh-4.2# ls
kube
sh-4.2# cd kube/
sh-4.2# ls
Hello.txt
sh-4.2# cat Hello.txt
hello from K8S
sh-4.2# exit

You can see that our file “Hello.txt” still exists on the node even though the Pod is gone.

So this is all about “How to create HostPath persistent volume” in Kubernetes.

The post How to create HostPath persistent volume in Kubernetes appeared first on UX Techno.


