Environment:
Hostname | OS version | IP | Docker version | kubelet version | Specs | Role |
---|---|---|---|---|---|---|
master | CentOS 7.6.1810 | 172.27.9.131 | Docker 18.09.6 | v1.14.2 | 2C2G | master node |
node01 | CentOS 7.6.1810 | 172.27.9.135 | Docker 18.09.6 | v1.14.2 | 2C2G | worker node |
node02 | CentOS 7.6.1810 | 172.27.9.136 | Docker 18.09.6 | v1.14.2 | 2C2G | worker node |
centos7 | CentOS 7.3.1611 | 172.27.9.181 | × | × | 1C1G | NFS server |
The deployment of the k8s cluster is detailed in: Centos 7.6 Deploys k8s(v1.14.2) Cluster
For more details, see the k8s learning materials: basic concepts, kubectl commands, and data sharing
1. Volume
1. Concept
A Kubernetes volume is a component of a pod, so it is defined in the pod's specification, much like a container. Volumes are not standalone Kubernetes objects and cannot be created or deleted on their own. A volume is available to all containers in the pod, but each container that needs to access it must first mount it; within each container, the volume can be mounted at any location in its file system.
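To make this concrete, here is a minimal sketch (the pod and volume names are hypothetical, not taken from the examples later in this article) showing a volume declared in the pod spec and mounted by a container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo              # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data          # must reference a volume declared below
      mountPath: /data           # where the volume appears inside this container
  volumes:
  - name: shared-data            # the volume itself is part of the pod spec, not a separate object
    emptyDir: {}
```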
2. Why do you need Volume?
Files on a container's disk are ephemeral, which causes problems when running important applications in containers. First, when a container crashes, kubelet restarts it, but the files inside the container are lost: the container restarts in a clean state from its image. Second, when multiple containers run in the same Pod, they often need to share files. The Volume abstraction in Kubernetes solves both problems.
3. Volume type
Kubernetes currently supports many Volume types. This article tests emptyDir, hostPath, NFS shared storage, and PV with PVC in turn.
2. emptyDir
1. emptyDir concept
emptyDir is the most basic Volume type: a simple empty directory for storing temporary data. If a Pod defines an emptyDir volume, the directory is created when the Pod is scheduled to a Node. As long as the Pod runs on that Node, the emptyDir exists (a container crash does not cause the data in the emptyDir to be lost), but when the Pod is removed from the Node (deleted or migrated), the emptyDir is deleted as well and its data is lost permanently.
Next, an emptyDir volume will be used to share files between two containers in the same pod.
2. Create pod emptydir-fortune
```
[root@master ~]# more emptyDir-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: prod                      #pod label
  name: emptydir-fortune
spec:
  containers:
  - image: loong576/fortune
    name: html-generator
    volumeMounts:                  #the volume named html is mounted at the container's /var/htdocs directory
    - name: html
      mountPath: /var/htdocs
  - image: nginx:alpine
    name: web-server
    volumeMounts:                  #mount the same volume at /usr/share/nginx/html in this container, read-only
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
    ports:
    - containerPort: 80
      protocol: TCP
  volumes:
  - name: html                     #the emptyDir volume named html is mounted into both containers
    emptyDir: {}
[root@master ~]# kubectl apply -f emptyDir-pod.yaml
pod/emptydir-fortune created
[root@master ~]# kubectl get po -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
emptydir-fortune   2/2     Running   0          9s    10.244.2.140   node02   <none>           <none>
```
The pod emptydir-fortune has two containers that mount the same emptyDir volume. The container html-generator writes random content to the volume; file sharing is then verified by accessing the web-server container.
2.1 The loong576/fortune image
```
[root@master ~]# more fortune/Dockerfile
FROM ubuntu:latest
RUN apt-get update ; apt-get -y install fortune
ADD fortuneloop.sh /bin/fortuneloop.sh
ENTRYPOINT /bin/fortuneloop.sh
```
The image is based on ubuntu, and the fortuneloop.sh script is executed when the container starts.
The fortuneloop.sh script:
```
[root@master ~]# more fortuneloop.sh
#!/bin/bash
trap "exit" SIGINT
mkdir /var/htdocs
while :
do
  echo $(date) Writing fortune to /var/htdocs/index.html
  /usr/games/fortune > /var/htdocs/index.html
  sleep 10
done
```
The script writes a random phrase to index.html every 10 seconds.
3. Access nginx
3.1 Create service
```
[root@master ~]# more service-fortune.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service               #service name
spec:
  type: NodePort
  selector:
    app: prod                    #pod label selector; this matches the pod emptydir-fortune
  ports:
  - protocol: TCP
    nodePort: 30002              #static port exposed on each node to provide the service
    port: 8881                   #port the ClusterIP listens on
    targetPort: 80               #container port
  sessionAffinity: ClientIP      #requests from the same client are forwarded to the same backend pod
[root@master ~]# kubectl apply -f service-fortune.yaml
service/my-service created
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          3d17h
my-service   NodePort    10.102.191.57   <none>        8881:30002/TCP   9s
```
3.2 nginx access
```
[root@master ~]# curl 10.102.191.57:8881
Writing is easy; all you do is sit staring at the blank sheet of paper until drops of blood form on your forehead.
                -- Gene Fowler
[root@master ~]# curl 172.27.9.135:30002
Don't Worry, Be Happy.
                -- Meher Baba
```
Conclusions:
- The nginx container successfully reads the content written to the volume by the fortune container; an emptyDir volume can be used to share files between containers.
- The lifetime of an emptyDir volume is tied to the lifetime of its pod, so when the pod is deleted the volume's contents are lost.
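As a side note, an emptyDir can also be backed by memory instead of node disk. A minimal sketch of such a volume definition (the size limit is an illustrative value, not from the example above):

```yaml
volumes:
- name: html
  emptyDir:
    medium: Memory       # back the directory with tmpfs (RAM) instead of node disk
    sizeLimit: 64Mi      # optional cap on how much the volume may use
```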
3. hostPath
1. Concept
hostPath mounts a file or directory from the Node's file system into the Pod. Use hostPath when a Pod needs to access files on its Node. Pods running on the same node and using the same path in their hostPath volumes see the same files.
2. Create pod hostpath-nginx
2.1 Create mount directories
Create the mount directory on each host; run the following on the master and on each node:
```
[root@master ~]# mkdir /data && cd /data && echo `hostname` > index.html
```
2.2 Create pod
```
[root@master ~]# more hostPath-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: prod
  name: hostpath-nginx
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html   #mount point inside the container
      name: nginx-volume                 #mount the volume nginx-volume
  volumes:
  - name: nginx-volume                   #volume name
    hostPath:
      path: /data                        #path on the node's file system to mount
[root@master ~]# kubectl apply -f hostPath-pod.yaml
pod/hostpath-nginx created
[root@master ~]# kubectl get po -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
emptydir-fortune   2/2     Running   0          40m   10.244.2.140   node02   <none>           <none>
hostpath-nginx     1/1     Running   0          16s   10.244.1.140   node01   <none>           <none>
```
3. Access pod hostpath-nginx
```
[root@master ~]# curl 10.244.1.140
node01
```
Conclusions:
- The pod runs on node01 and the response is 'node01', the content of index.html in the mounted /data directory; the container successfully reads the mounted node file system.
- Use hostPath only when you need to read or write system files on a node; never use it to persist data across pods.
- hostPath can persist data, but the data is still lost if the node fails.
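hostPath also accepts an optional type field that controls how the host path is checked or created before mounting. A short sketch reusing the /data path from above:

```yaml
volumes:
- name: nginx-volume
  hostPath:
    path: /data
    type: DirectoryOrCreate   # create /data on the node if it does not exist yet
```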
4. NFS Shared Storage
1. Concept
NFS stands for Network File System. In Kubernetes, NFS can be mounted into a Pod with a simple configuration; data stored on NFS is preserved permanently, and NFS supports concurrent writes.
emptyDir provides file sharing between containers but no persistent storage; hostPath provides file sharing and node-local storage, but it is tied to a node and cannot be shared across nodes. This is where network storage comes in: it persists data independently of the containers and can be accessed from any node in the cluster. This article uses NFS as the example.
2. NFS setup and configuration
For details of the NFS setup, see: NFS Server Construction and Client Connection Configuration under Centos7
After the NFS server is set up and the NFS client software is installed on the cluster hosts, the NFS service can be checked from the master and node hosts:
```
[root@master ~]# showmount -e 172.27.9.181
Export list for 172.27.9.181:
/backup 172.27.9.0/24
```
Running showmount on master, node01, and node02 verifies that the NFS service is working; /backup is the directory exported by the NFS server.
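For reference, the export shown above corresponds roughly to an entry like the following in /etc/exports on the NFS server; the mount options here are assumptions, see the linked article for the actual configuration:

```
# /etc/exports on 172.27.9.181 (options are illustrative assumptions)
/backup 172.27.9.0/24(rw,sync,no_root_squash)
```

After editing /etc/exports, running `exportfs -r` on the server re-exports the shares.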
The NFS test in this article: create a MongoDB pod backed by the NFS share, write data, delete and recreate the pod, then verify that the data is still there.
3. Create pod mongodb-nfs
```
[root@master ~]# more mongodb-pod-nfs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mongodb-nfs
spec:
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: nfs-data            #name of the mounted volume; must match the volume defined below
      mountPath: /data/db       #MongoDB data directory
    ports:
    - containerPort: 27017
      protocol: TCP
  volumes:
  - name: nfs-data              #volume name
    nfs:
      server: 172.27.9.181      #nfs server ip
      path: /backup             #shared directory exported by the nfs server
[root@master ~]# kubectl apply -f mongodb-pod-nfs.yaml
pod/mongodb-nfs created
[root@master ~]# kubectl get po -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
mongodb-nfs   1/1     Running   0          23s   10.244.2.142   node02   <none>           <none>
```
Note that the pod IP is 10.244.2.142.
4. nfs shared storage testing
4.1 Write data to MongoDB
```
[root@master ~]# kubectl exec -it mongodb-nfs mongo
> use loong
switched to db loong
> db.foo.insert({name:'loong576'})
WriteResult({ "nInserted" : 1 })
```
Switch to the db loong and insert a JSON document ({name:'loong576'}).
4.2 View Written Data
```
> db.foo.find()
{ "_id" : ObjectId("5d6e17b018651a21e0063641"), "name" : "loong576" }
```
4.3 Delete pod and rebuild
```
[root@master ~]# kubectl delete pod mongodb-nfs
pod "mongodb-nfs" deleted
[root@master ~]# kubectl apply -f mongodb-pod-nfs.yaml
pod/mongodb-nfs created
[root@master ~]# kubectl get po -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
mongodb-nfs   1/1     Running   0          22s   10.244.2.143   node02   <none>           <none>
```
Delete the pod mongodb-nfs and recreate it. The pod IP changes to 10.244.2.143. Access MongoDB again to verify whether the previously written document is still there.
4.4 New pod reads shared storage data
```
[root@master ~]# kubectl exec -it mongodb-nfs mongo
> use loong
switched to db loong
> db.foo.find()
{ "_id" : ObjectId("5d6e17b018651a21e0063641"), "name" : "loong576" }
```
Even if the pod is deleted and rebuilt, the shared data can still be accessed.
Conclusions:
- NFS shared storage persists data beyond the life of a pod.
- NFS shared storage allows data to be shared across nodes.
5. PV and PVC
1. Concept
PersistentVolume (PV) and PersistentVolumeClaim (PVC) give the K8s cluster a logical abstraction over storage: the Pod configuration can ignore the details of the actual backing storage technology, leaving that work to the PV configurer, i.e. the cluster administrator. The relationship between PV and PVC on the storage side closely mirrors the relationship between Node and Pod on the compute side: PV and Node are resource providers, configured by the cluster administrator as the cluster infrastructure changes, while PVC and Pod are resource consumers, configured by the cluster's users as the needs of their services change.
When cluster users need persistent storage in a pod, they first write a PVC manifest specifying the minimum capacity and the access mode required, then submit it to the Kubernetes API server. Kubernetes finds a matching PV and binds it to the PVC. The PVC can then be used as a volume in a pod; other users cannot use the same PV until it is released by deleting the binding PVC.
2. Create PV
2.1 nfs configuration
nfs server shared directory configuration:
```
[root@centos7 ~]# exportfs
/backup/v1      172.27.9.0/24
/backup/v2      172.27.9.0/24
/backup/v3      172.27.9.0/24
```
Check the NFS configuration from the master and the nodes:
```
[root@master ~]# showmount -e 172.27.9.181
Export list for 172.27.9.181:
/backup/v3 172.27.9.0/24
/backup/v2 172.27.9.0/24
/backup/v1 172.27.9.0/24
```
2.2 PV Creation
```
[root@master ~]# more pv-nfs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 2Gi                          #PV capacity is 2Gi
  volumeMode: Filesystem                  #volume mode; defaults to Filesystem, can also be 'Block' for raw block devices
  accessModes:
  - ReadWriteOnce                         #access mode: can be mounted read/write by a single node
  persistentVolumeReclaimPolicy: Retain   #reclaim policy; Retain means manual reclamation
  storageClassName: nfs                   #class name; a PV of a particular class can only be bound to a PVC requesting that class
  nfs:                                    #NFS shared directory and server IP
    path: /backup/v1
    server: 172.27.9.181
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
spec:
  capacity:
    storage: 2Gi                          #PV capacity is 2Gi
  volumeMode: Filesystem
  accessModes:
  - ReadOnlyMany                          #access mode: can be mounted read-only by many nodes
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /backup/v2
    server: 172.27.9.181
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
spec:
  capacity:
    storage: 1Gi                          #PV capacity is 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce                         #access mode: can be mounted read/write by a single node
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /backup/v3
    server: 172.27.9.181
[root@master ~]# kubectl apply -f pv-nfs.yaml
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
[root@master ~]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   2Gi        RWO            Retain           Available           nfs                     26s
pv002   2Gi        ROX            Retain           Available           nfs                     26s
pv003   1Gi        RWO            Retain           Available           nfs                     26s
```
Create pv001, pv002, and pv003, corresponding to the NFS shared directories /backup/v1, /backup/v2, and /backup/v3 respectively.
A PV can be in one of the following phases:
- Available: a free resource that has not yet been bound to any claim
- Bound: the volume is bound to a claim
- Released: the claim has been deleted, but the resource has not yet been reclaimed by the cluster
- Failed: automatic reclamation of the volume failed
There are three access modes for PV:
- ReadWriteOnce (RWO): the most basic mode; read-write, but the volume can be mounted by only a single node.
- ReadOnlyMany (ROX): the volume can be mounted read-only by many nodes.
- ReadWriteMany (RWX): the volume can be mounted read-write by many nodes. Not every storage backend supports all three modes; the shared modes in particular are less widely supported, and NFS is one of the more common backends that does.
A PV does not belong to any namespace; like a Node, it is a cluster-level resource, unlike Pods and PVCs.
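A quick way to see this scoping difference with kubectl (commands only, output omitted):

```bash
kubectl get pv                    # PVs are cluster-scoped; no namespace is involved
kubectl get pvc -n default        # PVCs live in a namespace (default if none is specified)
kubectl get pvc --all-namespaces  # list claims across all namespaces
```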
3. Create PVC
3.1 PVC Creation
```
[root@master ~]# more pvc-nfs.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mypvc                   #name of the claim; referenced later by the pod's volume
spec:
  accessModes:
  - ReadWriteOnce               #requested access mode; one of the criteria used to select a PV
  volumeMode: Filesystem        #volume mode, consistent with the PV; the volume is used as a file system
  resources:                    #requested size; another criterion used to select a PV
    requests:
      storage: 2Gi
  storageClassName: nfs         #requested class; must match the PV's class or the claim cannot be bound
[root@master ~]# kubectl apply -f pvc-nfs.yaml
persistentvolumeclaim/mypvc created
[root@master ~]# kubectl get pvc
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    pv001    2Gi        RWO            nfs            22s
```
Create the PVC mypvc with access mode ReadWriteOnce and a requested size of 2Gi. RWO, ROX, and RWX indicate the number of worker nodes that can use the volume at the same time, not the number of pods.
3.2 View the selected PV
PVC selection criteria:
PV | accessModes | storage |
---|---|---|
pv001 | √ | √ |
pv002 | × | √ |
pv003 | √ | × |
View the PVs:
```
[root@master ~]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
pv001   2Gi        RWO            Retain           Bound       default/mypvc   nfs                     12m
pv002   2Gi        ROX            Retain           Available                   nfs                     12m
pv003   1Gi        RWO            Retain           Available                   nfs                     12m
```
pv001 is selected because it satisfies the PVC's requirements; pv002's access mode does not match, and pv003's size does not match.
4. Using the PVC in a pod
```
[root@master ~]# more mongodb-pod-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mongodb-pvc
spec:
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: pvc-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP
  volumes:
  - name: pvc-data
    persistentVolumeClaim:
      claimName: mypvc          #must match the name of the PVC created above
[root@master ~]# kubectl apply -f mongodb-pod-pvc.yaml
pod/mongodb-pvc created
[root@master ~]# kubectl get po -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
mongodb-pvc   1/1     Running   0          16s   10.244.2.144   node02   <none>           <none>
```
Create the pod mongodb-pvc using the PVC mypvc; the shared-storage test is the same as the NFS test in section 4.4 and is not repeated here.
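If you want to clean up the test objects afterwards, they can be deleted with the same manifests used to create them (optional):

```bash
kubectl delete -f mongodb-pod-pvc.yaml   # remove the test pod
kubectl delete -f pvc-nfs.yaml           # remove the claim (releases pv001)
kubectl delete -f pv-nfs.yaml            # remove the three PVs
```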
All scripts and configuration files in this article have been uploaded: k8s Practice (7): Volumes and Persistent Storage