Automatic creation of PVs for K8s data persistence


1. Types of data persistence:
1. emptyDir: can only be used as temporary storage. If a container is deleted the data still exists, but when the Pod is deleted the data is deleted with it (a minimal sketch follows this list).
2. hostPath: rarely used in practice, since it increases the coupling between the Pod and the node.
3. PV and PVC: based here on an NFS service. The PV's state must be Available, the access modes must match, and the storage class names must be the same.
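A minimal emptyDir sketch for item 1 (the names here are hypothetical; the point is that the volume lives and dies with the Pod):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo          #hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache        #scratch space inside the container
  volumes:
  - name: cache
    emptyDir: {}               #removed together with the Pod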
Errors that keep a Pod restarting:
1. Swap was not turned off, which breaks the cluster (the usual fix is sketched after this list).
2. Insufficient memory, which causes running services to restart.
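For the swap problem, one common fix, run on every node (a sketch, not taken from the original environment):
[root@master ~]# swapoff -a                        #disable swap immediately
[root@master ~]# sed -i '/swap/s/^/#/' /etc/fstab  #comment out the swap entry so it stays off after reboot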

Note: the PVs below are created on top of NFS and consumed through PVCs,
and the host directory each PV points to must be created in advance.

2. An experiment: if there are two PVs of different sizes in the cluster, how does a PVC bind to a PV?
1. Create the PVs (two PVs of different sizes, web-pv1 and web-pv2)
[root@master yaml]# vim web1.yaml

kind: PersistentVolume
apiVersion: v1
metadata:
  name: web-pv1
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/web1
    server: 192.168.1.1

[root@master yaml]# mkdir /nfsdata/web1
[root@master yaml]# kubectl apply -f web1.yaml
persistentvolume/web-pv1 created
[root@master yaml]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM             STORAGECLASS   REASON   AGE
web-pv1   1Gi        RWO            Recycle          Available                     nfs                     7s
2. Create the second PV
[root@master yaml]# vim web2.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: web-pv2
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 2Gi
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/web2
    server: 192.168.1.1

[root@master yaml]# mkdir /nfsdata/web2
[root@master yaml]# kubectl apply -f web2.yaml
persistentvolume/web-pv2 created

[root@master yaml]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM             STORAGECLASS   REASON   AGE
web-pv1   1Gi        RWO            Recycle          Available                     nfs                     103s
web-pv2   2Gi        RWO            Recycle          Available                     nfs                     14s

3. Create a PVC
[root@master yaml]# vim web-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: web-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
[root@master yaml]# kubectl apply -f web-pvc.yaml
persistentvolumeclaim/web-pvc created
[root@master yaml]# kubectl get pvc
NAME      STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
web-pvc   Bound    web-pv1   1Gi        RWO            nfs            5s
[root@master yaml]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM             STORAGECLASS   REASON   AGE
web-pv1   1Gi        RWO            Recycle          Bound       default/web-pvc   nfs                     8m59s
web-pv2   2Gi        RWO            Recycle          Available                     nfs                     7m30s
From the experiment above:
When a Kubernetes cluster contains several similar PVs, a PVC considers not only the storage class name and the access mode, but also the requested capacity, and binds to the PV whose size fits the request best (a sketch follows).
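As a hedged illustration (web-pvc2 is a hypothetical name; it assumes both PVs above are still Available), a claim too large for the 1Gi PV skips web-pv1 and binds web-pv2:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: web-pvc2
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1500Mi          #cannot fit in the 1Gi web-pv1, so web-pv2 (2Gi) is chosen
  storageClassName: nfs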

3. Analyse the function and advantages of the StorageClass resource object.
1. Data persistence can be achieved with PV and PVC alone, but this has drawbacks. If a PV has a capacity of 10Gi and its access mode is RWO, and a PVC requests only 5Gi of it, the remaining capacity of that PV is wasted, because the RWO access mode allows the volume to be mounted by only a single node. Creating a PV by hand for every claim is also cumbersome. This is what StorageClass is for: it creates PVs dynamically and automatically.
StorageClass: a StorageClass is a Kubernetes resource type. It is a logical group that administrators create to manage PVs more easily; PVs can be grouped by the performance of the storage system, by integrated quality of service, by backup policy, and so on. Kubernetes itself does not know what the categories mean; they serve purely as descriptions.
Advantages: dynamic PV creation is supported. Users of persistent storage no longer need to create PVs in advance; they simply create a PVC directly, which is very convenient.
The name of a StorageClass object is important, and besides the name there are three key fields (a minimal example follows the link below):
provisioner: the storage plugin that provisions the storage resources. Kubernetes has multiple built-in provisioners whose names are prefixed with "kubernetes.io/"; custom provisioners can also be used.
parameters: a storage class uses parameters to describe the volumes it provisions; note that the parameters differ from provisioner to provisioner.
reclaimPolicy: the reclaim policy applied to the PVs it creates.
More about Storage Class:
https://www.kubernetes.org.cn/pvpvcstorageclass
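A minimal sketch of the three fields (the class name is hypothetical; kubernetes.io/aws-ebs is one of the built-in provisioners, and type: gp2 is a parameter specific to it):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc                   #hypothetical name
provisioner: kubernetes.io/aws-ebs   #a built-in provisioner, "kubernetes.io/" prefix
parameters:
  type: gp2                          #provisioner-specific parameter
reclaimPolicy: Retain                #reclaim policy for the PVs it creates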

2. An experiment that creates PVs automatically:
Run an nginx-based web service with a Deployment resource object, replicas=3. Persist the default document root, and use a storage class to create the PV automatically.

Based on NFS:

1) First, start the NFS service:
[root@master yaml]# yum install -y nfs-utils rpcbind    #Note: install the NFS packages on all three nodes.
[root@master yaml]# vim /etc/exports
/nfsdata  *(rw,sync,no_root_squash)
[root@master yaml]# mkdir /nfsdata
[root@master yaml]# systemctl start rpcbind
[root@master yaml]# systemctl start nfs-server.service
[root@master yaml]# showmount -e
Export list for master:
/nfsdata *
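A quick hedged check from a worker node (assuming the master's address is 192.168.2.50, as used later in the deployment) confirms the export is visible:
[root@node01 ~]# showmount -e 192.168.2.50
Export list for master:
/nfsdata *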
2) Create the RBAC authorization:
[root@master yaml]# vim rbac-rolebind.yaml      #Grant the SC resources permission to operate on the K8s cluster.

kind: Namespace
apiVersion: v1
metadata:
  name: lbs-test
---
apiVersion: v1
kind: ServiceAccount        #Create the RBAC service account and define its permissions.
metadata:
  name: nfs-provisioner
  namespace: lbs-test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
  namespace: lbs-test
rules:
   -  apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
   -  apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
   -  apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
   -  apiGroups: [""]
      resources: ["events"]
      verbs: ["watch", "create", "update", "patch"]
   -  apiGroups: [""]
      resources: ["services", "endpoints"]
      verbs: ["get","create","list", "watch","update"]
   -  apiGroups: ["extensions"]
      resources: ["podsecuritypolicies"]
      resourceNames: ["nfs-provisioner"]
      verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: lbs-test
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

Execute the yaml file:

[root@master yaml]# kubectl apply -f rbac-rolebind.yaml
namespace/lbs-test created
serviceaccount/nfs-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner created
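A hedged sanity check that the authorization objects exist:
[root@master yaml]# kubectl get serviceaccount nfs-provisioner -n lbs-test
[root@master yaml]# kubectl get clusterrole nfs-provisioner-runner
[root@master yaml]# kubectl get clusterrolebinding run-nfs-provisioner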
3) Create the nfs-client-provisioner container:
[root@master yaml]# vim nfs-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: lbs-test
spec:
  replicas: 1       #Number of replicas is 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner       #Specify the service account created above
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner      #The image used
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes        #Mount directory inside the container
          env:
            - name: PROVISIONER_NAME        #Built-in variable: the provisioner's name
              value: lbs-test              #The value (name) of the variable above
            - name: NFS_SERVER              #Built-in variable: the IP of the NFS server
              value: 192.168.2.50
            - name: NFS_PATH                #Built-in variable: the NFS shared directory
              value: /nfsdata
      volumes:              #The NFS server IP and path mounted into the container above
        - name: nfs-client-root
          nfs:
            server: 192.168.2.50
            path: /nfsdata

nfs-deployment:
What it does: it is effectively an NFS client. It mounts the remote NFS server into a local directory through Kubernetes' built-in NFS driver, and then registers itself as a storage provisioner that a StorageClass can reference.

Execute the yaml file:
[root@master yaml]# kubectl apply -f nfs-deployment.yaml
deployment.extensions/nfs-client-provisioner created
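Before moving on, it is worth confirming the provisioner Pod is running (a hedged check; the generated name suffix will differ):
[root@master yaml]# kubectl get pod -n lbs-test
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-xxxxxxxxx-xxxxx    1/1     Running   0          1m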

4) Create the SC (StorageClass) that creates PVs automatically:
[root@master yaml]# vim test-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-nfs
  namespace: lbs-test      #Namespace (StorageClass is actually cluster-scoped, so this field has no effect)
provisioner: lbs-test      #Must match the PROVISIONER_NAME value in the deployment's env
reclaimPolicy: Retain       #The reclaim policy is Retain

Execute the yaml file:
[root@master yaml]# kubectl apply -f test-storageclass.yaml
storageclass.storage.k8s.io/sc-nfs created
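A hedged check that the class is registered (sc is the short name for storageclass):
[root@master yaml]# kubectl get sc
NAME     PROVISIONER   AGE
sc-nfs   lbs-test      10s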
5) Create the PVC:
[root@master yaml]# vim test-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lbs-claim
  namespace: lbs-test

spec:
  storageClassName: sc-nfs   #Must match the name of the StorageClass created above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
After the PVC is created, a PV is created for it automatically, backed by a directory under the NFS share; the provisioner names that directory ${namespace}-${pvcName}-${pvName}:
[root@master yaml]# ls /nfsdata/
lbs-test-lbs-claim-pvc-71262c5a-f866-4bc6-a22f-cd49daf13edf

[root@master yaml]# kubectl get pv -n lbs-test
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                  STORAGECLASS   REASON   AGE
pvc-71262c5a-f866-4bc6-a22f-cd49daf13edf   500Mi      RWX            Delete           Bound         lbs-test/lbs-claim    sc-nfs                  26m
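The claim itself should report Bound as well; a hedged check, reusing the PV name from the output above:
[root@master yaml]# kubectl get pvc -n lbs-test
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
lbs-claim   Bound    pvc-71262c5a-f866-4bc6-a22f-cd49daf13edf   500Mi      RWX            sc-nfs         26m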
6) Run the nginx-based web service with a Deployment resource object, three replicas, and the default document root as the persisted directory. Change the contents of the default home page to your own name, and verify that the data shows up in the automatically created PV directory.
Create the Deployment resource:
[root@master yaml]# vim nginx.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: lbs-web
  namespace: lbs-test
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: lbs-web
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: lbs-web
        persistentVolumeClaim:
          claimName: lbs-claim
Execute the yaml file and view the Pods:
[root@master yaml]# kubectl apply -f nginx.yaml
deployment.extensions/lbs-web created

[root@master yaml]# kubectl get pod -n lbs-test
NAME                                      READY   STATUS    RESTARTS   AGE
lbs-web-6d596b6666-68wls                  1/1     Running   0          2m29s
lbs-web-6d596b6666-k8vz2                  1/1     Running   0          2m29s
lbs-web-6d596b6666-pvppq                  1/1     Running   0          2m29s
Enter a container and set up the page root directory:
[root@master yaml]# kubectl exec -it -n lbs-test lbs-web-6d596b6666-68wls /bin/bash
root@lbs-web-6d596b6666-68wls:/# cd /usr/share/nginx/html/
root@lbs-web-6d596b6666-68wls:/usr/share/nginx/html# echo 123 > index.html
root@lbs-web-6d596b6666-68wls:/usr/share/nginx/html# ls
index.html
root@lbs-web-6d596b6666-68wls:/usr/share/nginx/html# exit

The other two Pods can be checked the same way; since all three replicas mount the same PVC, they share the same index.html.

Check that the data is present in the automatically created PV directory:
[root@master yaml]# cat /nfsdata/lbs-test-lbs-claim-pvc-71262c5a-f866-4bc6-a22f-cd49daf13edf/index.html
123
Access the web service to test:
[root@master yaml]# kubectl get pod -o wide -n lbs-test
NAME                                      READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
lbs-web-6d596b6666-68wls                  1/1     Running   0          11m     10.244.2.7   node02   <none>           <none>
lbs-web-6d596b6666-k8vz2                  1/1     Running   0          11m     10.244.2.9   node02   <none>           <none>
lbs-web-6d596b6666-pvppq                  1/1     Running   0          11m     10.244.2.8   node02   <none>           <none>
[root@master yaml]# curl 10.244.2.7
123

The page root directory of the nginx containers is now associated with the local NFS shared directory,
so the data is not lost when a container or Pod is deleted (a hedged verification is sketched below).
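A final hedged check (pod names will differ in a real run): delete one Pod, let the Deployment recreate it, and curl the replacement; the content survives because every replica mounts the same PVC:
[root@master yaml]# kubectl delete pod -n lbs-test lbs-web-6d596b6666-68wls
pod "lbs-web-6d596b6666-68wls" deleted
[root@master yaml]# kubectl get pod -n lbs-test -o wide     #find the replacement Pod's IP
[root@master yaml]# curl <replacement pod IP>
123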
