Kubernetes data persistence: Storage Class creates PVs automatically

Keywords: Nginx vim Kubernetes yum

As shown in the earlier post Storage Volume of Kubernetes, the manual (static) workflow for data persistence in Kubernetes is:
Build the NFS underlying storage -> create a PV -> create a PVC -> create a pod
The container in the pod then has persistent data!
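
For reference, a hand-written PV from that manual workflow might look like the following minimal sketch (the PV name is hypothetical; the NFS server and path match the share built later in this post):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv                   #A manually created (static) PV; hypothetical name
spec:
  capacity:
    storage: 20Gi                 #The full size of this PV
  accessModes:
    - ReadWriteOnce               #RWO: mountable read-write by a single node only
  persistentVolumeReclaimPolicy: Retain
  nfs:                            #Backed by an NFS share
    server: 192.168.1.1
    path: /nfsdata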

Looking at the process above, it seems fine, but a closer look reveals a problem: when a PVC applies to a PV for storage space, the binding is determined by the specified PV's name, access mode, and capacity.

For example, suppose a PV has a capacity of 20Gi and its access mode is RWO (ReadWriteOnce: it may be mounted read-write by a single node only), while a PVC requests only 10Gi. Once that PVC binds to the PV, the remaining 10Gi of the PV is wasted, because the RWO mode allows only a single node to mount it (the sketch below makes this concrete). That is a serious problem in itself, and even setting it aside, manually creating a PV for every claim is tedious. What we want is an automated solution that creates PVs for us, and that solution is Storage Class!
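
To make the waste concrete, here is a sketch of a PVC that would bind to the 20Gi RWO PV above (the claim name is hypothetical):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc                  #Hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce               #Must match the PV's access mode
  resources:
    requests:
      storage: 10Gi               #Binding is whole-PV: once Bound, all 20Gi belong to this claim

Because a PV can be bound by only one PVC at a time, the 10Gi the claim did not ask for is stranded.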

Storage class overview

StorageClass is a Kubernetes resource type. It is a logical grouping created by the administrator to make PV management more convenient; classes can be defined by storage-system performance, overall quality of service, backup policy, and so on. Kubernetes itself attaches no meaning to a class name; it is simply a description chosen by the administrator!

One of the main advantages of StorageClass is support for dynamically created PVs. Users who need persistent storage no longer have to create a PV in advance; they simply create a PVC, which is very convenient and also avoids wasted space!

There are three important concepts in a StorageClass (the sketch after this list shows all three in one manifest):
1) Provisioner: the storage system that provides the storage resources. Kubernetes ships with multiple built-in provisioners whose names are prefixed with "kubernetes.io"; external (custom) provisioners are also supported;
2) Parameters: a storage class uses parameters to describe the storage volumes it provisions; note that valid parameters differ from provisioner to provisioner;
3) reclaimPolicy: the reclaim policy for the dynamically created PVs. Valid values are Delete (the default) and Retain;
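
Putting the three concepts together, a minimal StorageClass manifest might look like the following sketch. The class name is hypothetical; kubernetes.io/aws-ebs is one of the built-in provisioners, and the type parameter is specific to it:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-class                  #Hypothetical class name
provisioner: kubernetes.io/aws-ebs     #1) Provisioner: a built-in one in this sketch
parameters:
  type: gp2                            #2) Parameters: only meaningful to this particular provisioner
reclaimPolicy: Delete                  #3) reclaimPolicy: Delete (default) or Retain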

Next, we will learn the specifics of using a StorageClass through a case study: Nginx data persistence on top of automatically created PVs!

1) Set up NFS shared storage

For convenience, deploy NFS storage directly on the master node!

[root@master ~]# yum -y install nfs-utils rpcbind
[root@master ~]# mkdir -p /nfsdata                 #Create the directory to be exported
[root@master ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@master ~]# systemctl start rpcbind
[root@master ~]# systemctl start nfs-server
[root@master ~]# showmount -e
Export list for master:
/nfsdata *
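
One easy-to-miss point: every worker node that may run a pod mounting the NFS volume also needs the NFS client tools installed. A quick check from a node (node01 is a placeholder hostname):

[root@node01 ~]# yum -y install nfs-utils
[root@node01 ~]# showmount -e 192.168.1.1          #Should list /nfsdata, as on the master above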

2) Create RBAC authorization

This way of automatically creating PVs relies on the RBAC authorization mechanism, which is not described in detail here (a topic for a later post). In short, the provisioner's ServiceAccount must be allowed to manage PVs, PVCs, StorageClasses, events, services, and endpoints, as granted below.

[root@master ~]# vim rbac-rolebind.yaml
apiVersion: v1
kind: Namespace              #Create a namespace named xiaojiang-test
metadata:
  name: xiaojiang-test
---
apiVersion: v1                            #Create a ServiceAccount for authentication
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: xiaojiang-test
---
apiVersion: rbac.authorization.k8s.io/v1        #Create cluster-wide rules (a ClusterRole is cluster-scoped, so it takes no namespace)
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding                #Bind the ServiceAccount to the cluster rules
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: xiaojiang-test
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
[root@master ~]# kubectl apply -f rbac-rolebind.yaml     #Apply the yaml file
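
Optionally, confirm that the three objects exist before moving on:

[root@master ~]# kubectl get sa nfs-provisioner -n xiaojiang-test
[root@master ~]# kubectl get clusterrole nfs-provisioner-runner
[root@master ~]# kubectl get clusterrolebinding run-nfs-provisioner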

3) Create the nfs-client-provisioner Deployment resource

The role of this Deployment: it is essentially an NFS client. It uses the built-in Kubernetes NFS driver to mount the remote NFS server to a local directory inside its own container, and then presents itself as a storage provisioner associated with a StorageClass.

[root@master ~]# vim nfs-deployment.yaml  
apiVersion: apps/v1                        #extensions/v1beta1 is deprecated for Deployments
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: xiaojiang-test
spec:
  replicas: 1                              #Run a single replica
  strategy:
    type: Recreate                         #Kill the old pod before starting a new one
  selector:
    matchLabels:
      app: nfs-client-provisioner          #Required by apps/v1; must match the template labels
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-provisioner            #The ServiceAccount created in the rbac yaml file
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner     #Image used
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes          #Mount point inside the container
          env:
            - name: PROVISIONER_NAME                 #The name this provisioner registers under
              value: lzj-test 
            - name: NFS_SERVER                       #IP address of the NFS server
              value: 192.168.1.1
            - name: NFS_PATH                         #Exported directory on the NFS server
              value: /nfsdata
      volumes:                                       #The NFS share mounted into the container
        - name: nfs-client-root
          nfs:
            server: 192.168.1.1
            path: /nfsdata
[root@master ~]# kubectl apply -f nfs-deployment.yaml                   #Apply the yaml file
[root@master ~]# kubectl get pod -n xiaojiang-test 
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7cf975c58b-sc2qc   1/1     Running   0          6s

4) Create SC (Storage Class)

[root@master ~]# vim test-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: stateful-nfs                 #StorageClass is cluster-scoped, so no namespace is set
provisioner: lzj-test                #Must match the PROVISIONER_NAME env value in the nfs-client-provisioner Deployment
reclaimPolicy: Retain                #Reclaim policy Retain (manual reclamation)
[root@master ~]# kubectl apply -f test-storageclass.yaml
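
Optionally, confirm the class exists (again, no namespace flag is needed for a cluster-scoped resource):

[root@master ~]# kubectl get sc stateful-nfs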

5) Create PVC

[root@master ~]# vim test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  namespace: xiaojiang-test
spec:
  storageClassName: stateful-nfs              #Must match the name of the StorageClass created above
  accessModes:
    - ReadWriteMany                        #Access mode RWX (read-write by many nodes)
  resources:
    requests:
      storage: 100Mi
[root@master ~]# kubectl apply -f test-pvc.yaml
[root@master ~]# kubectl get pvc -n xiaojiang-test
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim   Bound    pvc-267b880d-5e0a-4e8e-aaff-3af46f21c6eb   100Mi      RWX            stateful-nfs   14s
#Ensure that the pvc status is Bound, indicating that the association is successful
[root@master ~]# ls /nfsdata/             #You can see that a corresponding directory is generated under the directory used for nfs storage
xiaojiang-test-test-claim-pvc-267b880d-5e0a-4e8e-aaff-3af46f21c6eb

At this point, PVs are created automatically according to the storage space requested by a PVC. A directory has been generated under the local NFS share; its long name is built from the namespace, the PVC name, and the PV name (${namespace}-${pvcName}-${pvName}). Which pod the space claimed by this PVC is ultimately for does not matter yet!
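
The auto-created PV itself can be inspected as well; its name matches the pvc-... suffix of the directory above:

[root@master ~]# kubectl get pv
[root@master ~]# kubectl describe pv pvc-267b880d-5e0a-4e8e-aaff-3af46f21c6eb     #Shows the NFS server and path backing this PV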

6) Create a pod based on the Nginx image

[root@master ~]# vim nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myweb
  namespace: xiaojiang-test
spec:
  containers:
  - name: myweb
    image: nginx:latest
    volumeMounts:
    - name: myweb-persistent-storage
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: myweb-persistent-storage
    persistentVolumeClaim:
      claimName: test-claim                  #Specify the PVC name to use
[root@master ~]# kubectl apply -f nginx-pod.yaml            
[root@master ~]# kubectl get pod -n xiaojiang-test 
NAME                                      READY   STATUS    RESTARTS   AGE
myweb                                     1/1     Running   0          38s
nfs-client-provisioner-7cf975c58b-sc2qc   1/1     Running   0          60m

7) Test verification

[root@master ~]# kubectl exec -it myweb -n xiaojiang-test -- /bin/bash
root@myweb:/# cd /usr/share/nginx/html/
root@myweb:/usr/share/nginx/html# echo "hello world" > index.html
#Enter the container and write some test data
[root@master ~]# cat /nfsdata/xiaojiang-test-test-claim-pvc-267b880d-5e0a-4e8e-aaff-3af46f21c6eb/index.html 
hello world
#The data shows up in the local NFS directory
[root@master ~]# kubectl exec -it nfs-client-provisioner-7cf975c58b-sc2qc -n xiaojiang-test -- /bin/sh
/ # ls nfs-client-provisioner 
nfs-client-provisioner                        #The provisioner executable that creates PVs automatically
/ # cat /persistentvolumes/xiaojiang-test-test-claim-pvc-267b880d-5e0a-4e8e-aaff-3af46f21c6eb/index.html 
hello world
#The same data is also visible inside the nfs-client container

From the above tests we can see that the web page directory in the Nginx container, the local NFS shared directory, and the mounted directory in the nfs-client container are all backed by the same data.
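
Because the StorageClass was created with reclaimPolicy: Retain, deleting the claim does not destroy the data. A sketch of what you should observe (output omitted):

[root@master ~]# kubectl delete pod myweb -n xiaojiang-test
[root@master ~]# kubectl delete pvc test-claim -n xiaojiang-test
[root@master ~]# kubectl get pv                   #The PV moves to the Released state instead of being deleted
[root@master ~]# ls /nfsdata/                     #The backing directory and its data remain on the NFS server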
