GlusterFS is an open-source distributed file system with strong horizontal scaling capability. It can support several petabytes of storage capacity and thousands of clients. Storage servers are interconnected over the network into a single parallel network file system. It is characterized by scalability, high performance and high availability.
Prerequisite: a GlusterFS cluster must already be deployed in the experimental environment, with a storage volume named gv0 created on it.
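Deploying GlusterFS itself is outside the scope of this article, but for reference, a volume such as gv0 is typically created on the GlusterFS nodes roughly as follows. This is only a minimal sketch; the brick path /data/brick1/gv0 and the replica count are assumptions, adjust them to your environment.

# Run on one GlusterFS node (e.g. 10.0.0.41), with glusterd running on both nodes
$ gluster peer probe 10.0.0.42
# gluster may warn that replica 2 volumes are prone to split-brain and ask for confirmation
$ gluster volume create gv0 replica 2 10.0.0.41:/data/brick1/gv0 10.0.0.42:/data/brick1/gv0
$ gluster volume start gv0
$ gluster volume info gv0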
1. Create an Endpoints object with the file glusterfs_ep.yaml
$ vi glusterfs_ep.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs
  namespace: default
subsets:
# Add the IP address of each GlusterFS cluster node
- addresses:
  - ip: 10.0.0.41
  - ip: 10.0.0.42
  ports:
  # Add the GlusterFS port number
  - port: 49152
    protocol: TCP
Apply the yaml:
$ kubectl create -f glusterfs_ep.yaml
endpoints/glusterfs created

# View the created endpoints
[root@k8s-master01 ~]# kubectl get ep
NAME        ENDPOINTS                          AGE
glusterfs   10.0.0.41:49152,10.0.0.42:49152    15s
2. Create a svc for the Endpoints
The Endpoints object lists the GlusterFS cluster nodes. To access these nodes, you also need to create a svc for it.
$ vi glusterfs_svc.yaml
apiVersion: v1
kind: Service
metadata:
  # The name must be the same as the name of the Endpoints object
  name: glusterfs
spec:
  ports:
  - port: 49152
    protocol: TCP
    targetPort: 49152
  sessionAffinity: None
  type: ClusterIP
Apply the yaml:
$ kubectl create -f glusterfs_svc.yaml
service/glusterfs created

$ kubectl get svc
NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
glusterfs   ClusterIP   10.1.104.145   <none>        49152/TCP   20s
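Because this Service defines no selector, Kubernetes associates it with the manually created Endpoints object of the same name. A quick way to confirm the association is shown below; the exact describe output may vary slightly between kubectl versions.

$ kubectl describe svc glusterfs | grep -i endpoints
Endpoints:         10.0.0.41:49152,10.0.0.42:49152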
3. Create a pv for GlusterFS
$ vi glusterfs_pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster
  labels:
    type: glusterfs
spec:
  capacity:
    # Specify the capacity of the pv
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    # Specify the Endpoints name created for GlusterFS
    endpoints: "glusterfs"
    # The path is the name of the volume created in GlusterFS.
    # Log in to the GlusterFS cluster and run "gluster volume list" to view the created volumes.
    path: "gv0"
    readOnly: false
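As the comment above notes, the volume name used in path can be confirmed on any GlusterFS node. Sample output, assuming only gv0 exists in the cluster:

$ gluster volume list
gv0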
Apply the yaml:
$ kubectl create -f glusterfs_pv.yaml
persistentvolume/gluster created

$ kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
gluster   50Gi       RWX            Retain           Available                                   10s
4. Create a pvc for GlusterFS
$ vi glusterfs_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # PVC name (binding to a pv is based on matching capacity and access modes, not on the name)
  name: gluster
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      # Specify the capacity requested by the pvc
      storage: 20Gi
Apply the yaml:
$ kubectl create -f glusterfs_pvc.yaml
persistentvolumeclaim/gluster created

$ kubectl get pvc
NAME      STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
gluster   Bound    gluster   50Gi       RWX                           83s

Although the pvc requested only 20Gi, it binds to the entire 50Gi pv, so the reported capacity is 50Gi.
5. Create an nginx pod that mounts the pvc (nginx-demo.yaml)
$ vim nginx-demo.yaml
---
# Pod
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
    env: test
spec:
  containers:
  - name: nginx
    image: nginx:1.13
    ports:
    - containerPort: 80
    volumeMounts:
    - name: data-gv0
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data-gv0
    persistentVolumeClaim:
      # Bind the specified pvc
      claimName: gluster
Apply the yaml:
$ kubectl create -f nginx-demo.yaml
pod/nginx created

[root@k8s-master01 ~]# kubectl get pods -o wide | grep "nginx"
nginx   1/1   Running   0   2m   10.244.1.222   k8s-node01   <none>   <none>
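To confirm that the GlusterFS volume is actually mounted inside the pod, you can inspect the mount point with kubectl exec. This is only a sanity check; the exact df output depends on your GlusterFS servers.

$ kubectl exec nginx -- df -h /usr/share/nginx/html
# The Filesystem column should show the GlusterFS source, e.g. 10.0.0.41:gv0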
On any client, mount the GlusterFS volume gv0 to /mnt and create an index.html file
$ mount -t glusterfs k8s-store01:/gv0 /mnt/
$ cd /mnt && echo "this nginx store used glusterfs cluster" > index.html
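Since the pod mounts the same gv0 volume through the pvc, the file written on the client should be visible inside the nginx container right away. A quick check from the master node, assuming the pod name nginx used above:

$ kubectl exec nginx -- ls /usr/share/nginx/html
index.html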
Access the pod with curl from the master node
$ curl 10.244.1.222/index.html
this nginx store used glusterfs cluster