Kubernetes persistent storage: PV and PVC, static provisioning of PV/PVC with an nginx example, and PV & PVC in a MySQL persistent storage project

Keywords: Docker, Kubernetes, Operating System, Container

1. Introduction to PV and PVC

Volume provides a good data-persistence scheme, but it still falls short in manageability.
Take the earlier AWS EBS example. To use that Volume, the Pod must know the following in advance:
The Volume comes from AWS EBS.
The EBS volume has been created beforehand, and its exact volume ID is known.
A Pod is usually maintained by an application developer, while the Volume is usually maintained by a storage system administrator. To obtain the information above, the developer must either:
ask the administrator, or
be the administrator himself.
This creates a management problem: the responsibilities of application developers and system administrators become coupled. For a small cluster or a development environment this is acceptable, but as the cluster grows, and especially in a production environment, efficiency and security make this a problem that must be solved. PersistentVolume (PV) and PersistentVolumeClaim (PVC) solve it by decoupling the two roles: the administrator provisions PVs against the backing storage, and developers claim storage through PVCs without needing to know the backend details.

2. Persistent storage through NFS

2.1 Configuring NFS

k8s-master – NFS server

k8s-node1, k8s-node2 – NFS clients

Install nfs on all nodes

yum install -y nfs-utils    #nfs-utils is the CentOS/RHEL package (nfs-common is its Debian/Ubuntu counterpart)

Create a shared directory on the master node

[root@k8s-master k8s]# mkdir /nfsdata

Grant permissions on the shared directory

[root@k8s-master k8s]# chmod 777 /nfsdata    #directories need the execute bit, so 777 rather than 666

Edit exports file

[root@k8s-master k8s]# cat /etc/exports
/nfsdata *(rw,no_root_squash,no_all_squash,sync)

Make the configuration take effect
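Starting the NFS service below reads /etc/exports; if the service were already running, the export table could also be reloaded in place with exportfs (part of nfs-utils):

[root@k8s-master k8s]# exportfs -rv    #re-export all directories in /etc/exports, verbose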

Start rpcbind and nfs (note the order: rpcbind must start first)

[root@k8s-master k8s]# systemctl start rpcbind
[root@k8s-master k8s]# systemctl start nfs

With that preparation done, we have an NFS server on the k8s-master node sharing the directory /nfsdata.
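To confirm the export is visible from a client node (the IP is the master's, the same server address used in the PV spec below):

[root@k8s-node1 ~]# showmount -e 192.168.153.148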

2.2 Creating a PV

The three configuration files (PV, PVC, and Pod) could also be combined into a single file. Note that, with the reclaim policy used here, deleting the Pod and PVC also deletes the files in the shared directory; we will see this in section 3.
Next, create a PV named mypv1. The configuration file nfs-pv1.yml is as follows:

[root@k8s-master ~]# vim nfs-pv1.yml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata
    server: 192.168.153.148  #Specify the address of the machine where the nfs directory is located

Explanation of the fields:

① `capacity` specifies the PV's capacity, 1 GiB here.

② `accessModes` specifies the access mode as `ReadWriteOnce`. The supported access modes are:
ReadWriteOnce – the PV can be mounted read-write by a single node.
ReadOnlyMany – the PV can be mounted read-only by multiple nodes.
ReadWriteMany – the PV can be mounted read-write by multiple nodes.

③ `persistentVolumeReclaimPolicy` specifies the PV's reclaim policy as `Recycle`. The supported policies are:
Retain – the administrator reclaims the volume manually.
Recycle – clears the PV; the effect is equivalent to running `rm -rf /nfsdata/*`.
Delete – deletes the corresponding storage resource on the Storage Provider, e.g. AWS EBS, GCE PD, Azure Disk, OpenStack Cinder Volume, etc.

④ `storageClassName` sets the PV's class to `nfs`. This is equivalent to giving the PV a category; a PVC can then request a PV of that category by naming the same class.

⑤ `nfs` specifies the directory on the NFS server that backs this PV.

Create mypv1:

[root@k8s-master ~]# kubectl apply -f nfs-pv1.yml
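Check the PV's status after creation:

[root@k8s-master ~]# kubectl get pv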


STATUS is Available, indicating that mypv1 is ready and can be claimed by a PVC.

2.3 Creating a PVC

Next, create PVC mypvc1. The configuration file nfs-pvc1.yml is as follows:

[root@k8s-master ~]# cat nfs-pvc1.yml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs


A PVC is simple: you only need to specify the requested capacity, the access mode, and the class of PV you want.

Execute the command to create mypvc1:

[root@k8s-master ~]# kubectl apply -f nfs-pvc1.yml
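Verify the binding:

[root@k8s-master ~]# kubectl get pvc
[root@k8s-master ~]# kubectl get pv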


From the output, you can see that mypvc1 is Bound to mypv1; the claim succeeded.

2.4 Creating a Pod

The PV and PVC have been created above; now the PVC can be used directly in a Pod:

[root@k8s-master ~]# cat pod1.yml 
apiVersion: v1
kind: Pod
metadata:
  name: mypod1
spec:
  containers:
    - name: mypod1
      image: busybox
      args:            #keep the container running so we can exec into it later
      - /bin/sh
      - -c
      - sleep 30000
      volumeMounts:
      - mountPath: "/mydata"
        name: mydata
  volumes:
    - name: mydata
      persistentVolumeClaim:
        claimName: mypvc1


As with a normal Volume, the volume is requested through persistentVolumeClaim in the volumes section, here referencing mypvc1.

Create mypod1 with the command:

[root@k8s-master ~]# kubectl apply -f pod1.yml
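Confirm the Pod reaches the Running state before verifying:

[root@k8s-master ~]# kubectl get pod mypod1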

(As noted earlier, these manifests could also be written into a single file.)

2.5 Verification

[root@k8s-master ~]# kubectl exec -it mypod1 /bin/sh
/ # ls mydata/
/ # echo "youngfit" > mydata/hello.txt
/ # ls mydata/
hello.txt
/ # exit
[root@k8s-master ~]# ls /nfsdata/    #the file also appears in the NFS shared directory, so the volume sharing works
hello.txt
[root@k8s-master ~]# cat /nfsdata/hello.txt 
youngfit
As you can see, the file /mydata/hello.txt created in the Pod has indeed been saved to the directory /nfsdata on the NFS server.
If the PV is no longer needed, you can delete the PVC to reclaim the PV.

You can also try deleting the file from either side; it disappears on both ends.

3. Reclaiming a PV

When a PV is no longer needed, it can be reclaimed by deleting the PVC. Before the PVC is deleted, the PV's status is Bound.

Delete pod

[root@k8s-master pvc]# kubectl delete pod mypod1

Delete pvc

[root@k8s-master pvc]# kubectl delete pvc mypvc1

Check the status of pv again

[root@k8s-master pvc]# kubectl get pv

After the PVC is deleted, the PV's status becomes Available; it is now unbound and can be claimed by a new PVC.
The files under /nfsdata have been deleted.

Because the PV's reclaim policy is set to Recycle, the data was cleared.

That may not be the result we want. If we want to keep the data, we can set the policy to Retain instead.

[root@k8s-master pvc]# vim nfs-pv1.yml
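The only change needed in nfs-pv1.yml is the reclaim-policy line:

persistentVolumeReclaimPolicy: Retain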

[root@k8s-master pvc]# kubectl apply -f nfs-pv1.yml


The reclaim policy is now Retain. Verify its effect with the following steps:

Recreate mypvc1:
[root@k8s-master pvc]# kubectl apply -f nfs-pvc1.yml

Recreate the Pod, referencing mypvc1:
[root@k8s-master pvc]# kubectl apply -f pod1.yml

Enter the Pod and create a file:
[root@k8s-master pvc]# kubectl exec -it mypod1 /bin/sh
/ # echo 'youngfit' > mydata/hello.txt
/ # ls mydata/
hello.txt
/ # exit

Check under the NFS directory:
[root@k8s-master pvc]# ls /nfsdata/
hello.txt
[root@k8s-master pvc]# cat /nfsdata/hello.txt 
youngfit

Delete the Pod:
[root@k8s-master pvc]# kubectl delete -f pod1.yml 
pod "mypod1" deleted
[root@k8s-master pvc]# ls /nfsdata/
hello.txt

Delete the PVC (mypvc1):
[root@k8s-master pvc]# kubectl delete pvc mypvc1
persistentvolumeclaim "mypvc1" deleted
[root@k8s-master pvc]# ls /nfsdata/
hello.txt
[root@k8s-master pvc]# cat /nfsdata/hello.txt 
youngfit

The data is still retained.

Although the data in mypv1 is retained, the PV's status remains Released and it cannot be claimed by another PVC. To reuse the storage resource, delete and recreate mypv1. The delete operation removes only the PV object; the data in the storage space is not deleted.

[root@k8s-master pvc]# ls /nfsdata/
hello.txt
[root@k8s-master pvc]# kubectl delete pv mypv1
persistentvolume "mypv1" deleted
[root@k8s-master pvc]# ls /nfsdata/
hello.txt
[root@k8s-master pvc]# kubectl apply -f nfs-pv1.yml 
persistentvolume/mypv1 created
[root@k8s-master pvc]# kubectl get pod
No resources found in default namespace.
[root@k8s-master pvc]# kubectl get pv


The newly created mypv1 is Available and can be claimed by a PVC.

PV also supports the Delete reclaim policy, which deletes the storage resource on the Storage Provider corresponding to the PV. NFS PVs do not support Delete; providers that do include AWS EBS, GCE PD, Azure Disk, OpenStack Cinder Volume, etc.
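For illustration only (not part of this NFS setup), a statically provisioned PV using the Delete policy might look like the sketch below; the awsElasticBlockStore volume ID is a hypothetical placeholder:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ebs-pv-example
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete   #deleting the PV also deletes the backing EBS volume
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0       #hypothetical volume ID
    fsType: ext4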

4. Static provisioning of PV/PVC: an nginx example

A pitfall I hit here: I had not changed the shared directory from the previous example.
Install NFS on all nodes

yum install -y nfs-utils

The master node acts as the nfs server

[root@k8s-master k8s]# cat /etc/exports
/data/opv *(rw,no_root_squash,no_all_squash,sync)


[root@k8s-master k8s]# mkdir /data/opv
[root@k8s-master k8s]# chmod 777 -R /data/opv

Restart the services

systemctl restart nfs rpcbind

On the master node:

#1. Define the PV
[root@k8s-master pvc2]# cat pv-pod.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /data/opv  #Directory shared by nfs server
    server: 192.168.153.148   #Address of nfs server
[root@k8s-master pvc2]# kubectl apply -f pv-pod.yaml

#2. Define the PVC and the Deployment

[root@k8s-master pvc2]# cat pvc-pod.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: daocloud.io/library/nginx
        #mount the data volume named wwwroot at nginx's html directory
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
    #define the data volume wwwroot, of type persistentVolumeClaim
      volumes:
      - name: wwwroot
        persistentVolumeClaim:
          claimName: my-pvc

The PVC below goes in the same pvc-pod.yaml file (note the `---` document separator) and is matched to a PV by its requested capacity and access mode:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  #must match the claimName referenced in the Deployment above
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
[root@k8s-master pvc2]# kubectl apply -f pvc-pod.yaml
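Before exposing the service, check that the claim bound and that both replicas are running:

[root@k8s-master pvc2]# kubectl get pv,pvc
[root@k8s-master pvc2]# kubectl get pod -l app=nginx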

#3. Expose the port

Exposing the service is optional; it just makes nginx reachable from outside the cluster.

[root@k8s-master pvc2]# cat pv-service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: pv-svc
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 30001
      targetPort: 80
  selector:   #matches the Deployment's Pod label
    app: nginx
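Apply the Service as well:

[root@k8s-master pvc2]# kubectl apply -f pv-service.yaml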

#4. On the NFS server

[root@k8s-master pvc2]# echo youngfit >> /data/opv/index.html 

#5. Visit the service and check the result
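For example, from any machine that can reach the nodes (using the master's IP from this setup; the NodePort is 30001):

[root@k8s-master pvc2]# curl http://192.168.153.148:30001    #should return: youngfit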



5. PV & PVC in a MySQL persistent storage project

The following shows how to provide persistent storage for a MySQL database. The steps are as follows:

  1. Create PV and PVC.
  2. Deploy MySQL.
  3. Add data to MySQL.
  4. Simulate node downtime, and Kubernetes automatically migrates MySQL to another node.
  5. Verify data consistency.

First create the PV and PVC with the following configuration (mysql-pv.yml).
Install NFS on all nodes (already done if you followed the earlier sections):

yum install -y nfs-utils

The master node acts as the nfs server

[root@k8s-master k8s]# cat /etc/exports
/nfsdata/mysql-pv *(rw,no_root_squash,no_all_squash,sync)


[root@k8s-master k8s]# mkdir /nfsdata/mysql-pv
[root@k8s-master k8s]# chmod 777 -R /nfsdata/mysql-pv

Restart the services

systemctl restart nfs rpcbind

Create pv

[root@k8s-master mysqlpv]# cat mysql-pv.yml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata/mysql-pv
    server: 192.168.153.148
[root@k8s-master mysqlpv]# kubectl apply -f mysql-pv.yml

Create pvc

[root@k8s-master mysqlpv]# cat mysql-pvc.yml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
  
[root@k8s-master mysqlpv]# kubectl apply -f mysql-pvc.yml
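Verify that mysql-pvc is Bound to mysql-pv before deploying MySQL:

[root@k8s-master mysqlpv]# kubectl get pv,pvc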

(The PV and PVC manifests can also be put together in a single file.)

Create the Pod

Next, deploy MySQL. The configuration file is as follows:

[root@k8s-master mysqlpv]# cat mysqlpod.yml 
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: daocloud.io/library/mysql:5.7.5-m15 #choose an image that can actually be pulled; configuration is passed via env variables below
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pvc
          
[root@k8s-master mysqlpv]# kubectl apply -f mysqlpod.yml
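Check which node the Pod was scheduled to (the NODE column):

[root@k8s-master mysqlpv]# kubectl get pod -o wide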


The MySQL PVC mysql-pvc is bound to the PV mysql-pv, which is mounted at MySQL's data directory /var/lib/mysql.

In this run, the MySQL Pod was scheduled to k8s-node1.

Test:

① Log in to MySQL in the Pod.

② Create a database and a table.

③ Insert a row of data.

④ Confirm that the data has been written.

Then shut down k8s-node1 to simulate node downtime.

[root@k8s-master mysqlpv]# kubectl exec -it mysql-6654fcb867-mqtcl /bin/bash
root@mysql-6654fcb867-mqtcl:/# mysql -uroot -p'password'
mysql> create database feige;
mysql> create table feige.t1(id int);
mysql> insert into feige.t1 values(2);
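To confirm the write before the failover test (step ④ above):

mysql> select * from feige.t1;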

Simulate the node failure:

[root@k8s-node1 ~]# poweroff

Verify data consistency:

Since k8s-node1 is down, k8s-node2 takes over, and the Pod migration takes some time; I waited about 5 minutes here.
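You can watch the rescheduling happen with the watch flag (Ctrl-C to stop):

[root@k8s-master mysqlpv]# kubectl get pod -o wide -w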

Enter the new Pod: the data still exists, so persistence succeeded.
[root@k8s-master mysqlpv]# kubectl exec -it mysql-6654fcb867-mqtcl /bin/bash
root@mysql-6654fcb867-mqtcl:/# mysql -uroot -p'password'
mysql> select * from feige.t1;
+------+
| id   |
+------+
|    1 |
|    2 |
+------+
2 rows in set (0.01 sec)

The MySQL service is restored and the data is intact.
