1. Docker deployment
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum install docker-ce
service docker start
vim /usr/lib/systemd/system/docker.service   # append --graph /data/docker --storage-driver=overlay to the ExecStart line
systemctl daemon-reload
service docker restart
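A quick way to confirm the change took effect after the restart (a minimal check, assuming the standard docker-ce unit file; the exact ExecStart contents vary by Docker version):

grep ExecStart /usr/lib/systemd/system/docker.service
# expected to show something like: ExecStart=/usr/bin/dockerd --graph /data/docker --storage-driver=overlay
docker info | grep -E 'Docker Root Dir|Storage Driver'
# Docker Root Dir: /data/docker
# Storage Driver: overlay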
- If an older version is needed:
yum install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.0.ce-1.el7.centos.noarch.rpm
yum install docker-ce-17.03.0.ce-1.el7.centos.x86_64 -y
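To confirm which package version actually got installed (a simple check, not part of the original steps):

rpm -qa | grep docker-ce
docker version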
2. Building a Docker Swarm Cluster
1. Select a node as the manager node
[root@jenkins-master ~]# docker swarm init --advertise-addr 192.168.0.46

Running docker swarm join-token worker on the manager node prints the current worker token together with the exact command for adding nodes to the cluster:

[root@jenkins-master ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-5vzw37ummd0h863jddbig798pbq3jpgoepf95g4uenodwhtc7v-543arzx838nj0pgo2yujyfko4 \
    192.168.0.46:2377
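If an additional manager node is needed later, the corresponding manager join token can be printed the same way (standard Docker command; its output is analogous to the worker token above):

[root@jenkins-master ~]# docker swarm join-token manager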
- Add nodes to the swarm cluster (run on each worker node)
[root@jenkins47 ~]# docker swarm join --token SWMTKN-1-5vzw37ummd0h863jddbig798pbq3jpgoepf95g4uenodwhtc7v-543arzx838nj0pgo2yujyfko4 192.168.0.46:2377
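On success, Docker prints a confirmation similar to the following (exact wording may differ between versions):

This node joined a swarm as a worker.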
- View cluster node information
[root@jenkins-master ~]# docker node ls
ID                           HOSTNAME         STATUS   AVAILABILITY   MANAGER STATUS
5s0spx5wonjgsl8k63vfybq1v    zabbix.skong     Ready    Active
ttfs9upr4bmhyt9ssfxfixkwm    jenkins47        Ready    Active
uc40c8karvu3pq2eupllyicz8 *  jenkins-master   Ready    Active         Leader
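For more detail on a single node, including its role and resources, docker node inspect can be used (standard command; the hostname is taken from the listing above):

[root@jenkins-master ~]# docker node inspect --pretty jenkins-master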
3. Configuring Docker Swarm manager nodes for high availability
If the swarm cluster has only one manager node and that node goes down or leaves the cluster, the cluster can no longer be managed. Therefore, in a cluster with N manager nodes, more than half of them (a quorum) must remain available to process requests and keep the cluster working. For example, a cluster with three managers tolerates the loss of one, and a cluster with five tolerates the loss of two.
The manager nodes in Swarm maintain the state of the whole cluster by implementing the Raft consensus algorithm, which guarantees that tasks, storage, and other cluster state stay consistent across all managers.
A Raft cluster usually consists of several servers. At any given time, each server is in one of three states: leader, follower, or candidate. In normal operation there is exactly one leader and the remaining servers are followers. Followers are passive: they issue no requests of their own and only respond to requests from the leader and from candidates. The leader handles all client requests (if a client sends a request to a follower, the follower forwards it to the leader). The third state, candidate, is used to elect a new leader. Raft uses a heartbeat mechanism to trigger leader elections: servers start as followers and remain followers as long as they receive valid RPCs (remote procedure calls) from a leader or candidate. The leader periodically sends heartbeat messages to all followers to maintain its authority. If a follower receives no heartbeat for a certain period (the "election timeout"), it assumes the leader has failed and starts an election to choose a new leader.
[root@jenkins-master ~]# docker node promote zabbix.skong
[root@jenkins-master ~]# docker node ls
ID                           HOSTNAME         STATUS   AVAILABILITY   MANAGER STATUS
5s0spx5wonjgsl8k63vfybq1v    zabbix.skong     Ready    Active         Reachable
ttfs9upr4bmhyt9ssfxfixkwm    jenkins47        Ready    Active
uc40c8karvu3pq2eupllyicz8 *  jenkins-master   Ready    Active         Leader
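The reverse operation, turning a manager back into a worker, is docker node demote (standard command, shown here only as a sketch; it was not run in this setup):

[root@jenkins-master ~]# docker node demote zabbix.skong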
Restart the current Leader and you will see that the Leader role moves to another manager node.
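One way to observe this failover (a sketch based on the statement above, not captured from this environment): stop the Docker service on the current Leader, then list the nodes from another manager and watch the MANAGER STATUS column.

# on the current Leader (jenkins-master)
systemctl stop docker
# on another manager (for example zabbix.skong)
docker node ls    # the Leader mark should move to one of the remaining managers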