1. Description
This document covers docker swarm operations. The environment is a local test system; the machines are listed below, and 172.16.1.13 is the docker swarm manager.
Machine list for the local test environment:
host name | simulated extranet IP | intranet IP | modules to deploy
mini01    | 10.0.0.11             | 172.16.1.11 | tomcat [swarm worker]
mini02    | 10.0.0.12             | 172.16.1.12 | tomcat [swarm worker]
mini03    | 10.0.0.13             | 172.16.1.13 | visualizer (docker swarm status view), Hadoop namenode [swarm manager]
2. docker swarm initialization
According to the plan, operate on machine 172.16.1.13 (mini03):
[root@mini03 ~]# docker swarm init    # works as-is on machines with only one IP
Error response from daemon: could not choose an IP address to advertise since this system has multiple addresses on different interfaces (172.16.1.13 on eth0 and 10.0.0.13 on eth1) - specify one with --advertise-addr
[root@mini03 ~]# docker swarm init --advertise-addr 172.16.1.13    # machines with multiple IPs must specify one, usually the intranet IP
Swarm initialized: current node (yo5f7qb28gf6g38ve4xhcis17) is now a manager.

To add a worker to this swarm, run the following command:
    # execute on the other machines so that they join swarm management
    docker swarm join --token SWMTKN-1-4929ovxh6agko49u0yokrzustjf6yzt30iv1zvwqn8d3pndm92-0kuha3sa80u2u27yca6kzdbnb 172.16.1.13:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
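If a second manager is ever needed, the corresponding join command can be printed at any time; the manager token that docker outputs is elided here:

[root@mini03 ~]# docker swarm join-token manager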
Get the command to join the swarm
[root@mini03 ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:
    # execute on the other machines so that they join swarm management
    docker swarm join --token SWMTKN-1-4929ovxh6agko49u0yokrzustjf6yzt30iv1zvwqn8d3pndm92-0kuha3sa80u2u27yca6kzdbnb 172.16.1.13:2377
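If the worker token leaks, it can be rotated so that old copies stop working; a minimal sketch (the regenerated join command that docker prints is omitted):

[root@mini03 ~]# docker swarm join-token --rotate worker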
3. Initialize the network
Initialize a swarm overlay network and have the system components communicate over it.
[root@mini03 ~]# docker network create -d overlay --attachable zhang
vu07em5fvpuojih6wgckdkdzj
[root@mini03 docker-swarm]# docker network ls    # view networks
NETWORK ID          NAME                DRIVER              SCOPE
fa8a244c6bd5        bridge              bridge              local
51c95dea1e5c        docker_gwbridge     bridge              local
7a7e31f4bce8        host                host                local
5hgg372xwxbl        ingress             overlay             swarm
lmt3pjswf7l0        zhang               overlay             swarm
5ea08e9a282f        none                null                local
[root@mini03 ~]# docker network inspect zhang    # view network details
[
    {
        "Name": "zhang",
        "Id": "xiykborz8hn2td40ykhi20dck",
        "Created": "0001-01-01T00:00:00Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": null
    }
]
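Because the network was created with --attachable, plain containers (not only swarm services) can also join it; a minimal sketch, assuming an alpine image is available locally:

# start a throwaway container attached to the zhang overlay network
[root@mini03 ~]# docker run -dit --name nettest --network zhang alpine sh
# the container receives an address on the zhang overlay subnet
[root@mini03 ~]# docker exec nettest ip addr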
Delete network [use with caution]
Delete the zhang network in docker
[root@mini03 docker-swarm]# docker network rm zhang
zhang
[root@mini03 docker-swarm]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
fa8a244c6bd5        bridge              bridge              local
51c95dea1e5c        docker_gwbridge     bridge              local
7a7e31f4bce8        host                host                local
5hgg372xwxbl        ingress             overlay             swarm
5ea08e9a282f        none                null                local
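Removal fails while services or containers are still attached to the network, so it can help to check first; a Go-template sketch (not from the original) that lists the containers attached on the current host:

[root@mini03 ~]# docker network inspect zhang -f '{{range .Containers}}{{.Name}} {{end}}'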
4. Join or exit swarm management
On mini01 and mini02, execute the join command obtained above:

docker swarm join --token SWMTKN-1-4929ovxh6agko49u0yokrzustjf6yzt30iv1zvwqn8d3pndm92-0kuha3sa80u2u27yca6kzdbnb 172.16.1.13:2377
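Each node can verify that it actually joined; a minimal check, assuming the reported state is active:

[root@mini01 ~]# docker info --format '{{.Swarm.LocalNodeState}}'
active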
4.1. What are the current swarm nodes
[root@mini03 ~]# docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
2pfwllgxpajx5aitlvcih9vsq     mini01     Ready    Active                          17.09.0-ce
zho14u85itt5l2i6cpg8fcd6t     mini02     Ready    Active                          17.09.0-ce
yo5f7qb28gf6g38ve4xhcis17 *   mini03     Ready    Active         Leader           17.09.0-ce
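A node can also be taken out of scheduling for maintenance without removing it from the swarm; a minimal sketch (not part of the original workflow):

[root@mini03 ~]# docker node update --availability drain mini01    # stop scheduling new tasks on mini01, reschedule existing ones
[root@mini03 ~]# docker node update --availability active mini01   # put the node back into scheduling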
4.2. Exit the current swarm node
# operate on the swarm manager mini03
# 2pfwllgxpajx5aitlvcih9vsq is the node ID of mini01 in the swarm, obtained from docker node ls
[root@mini03 ~]# docker node rm --force 2pfwllgxpajx5aitlvcih9vsq    # if docker on mini01 has not stopped its services, the --force option is required
2pfwllgxpajx5aitlvcih9vsq
[root@mini03 ~]# docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
zho14u85itt5l2i6cpg8fcd6t     mini02     Ready    Active                          17.09.0-ce
yo5f7qb28gf6g38ve4xhcis17 *   mini03     Ready    Active         Leader           17.09.0-ce
##########################################
# execute on mini01 so that mini01 completely leaves swarm management
[root@mini01 ~]# docker swarm leave
Node left the swarm.
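A node that has left can rejoin at any time with the worker join command from section 2:

[root@mini01 ~]# docker swarm join --token SWMTKN-1-4929ovxh6agko49u0yokrzustjf6yzt30iv1zvwqn8d3pndm92-0kuha3sa80u2u27yca6kzdbnb 172.16.1.13:2377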
4.3. swarm manager exits swarm
All other nodes need to be removed first, and then the manager can force-leave the swarm:
[root@mini03 ~]# docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
yo5f7qb28gf6g38ve4xhcis17 *   mini03     Ready    Active         Leader           17.09.0-ce
[root@mini03 ~]# docker swarm leave --force    # a swarm manager needs the --force parameter to leave the swarm
Node left the swarm.
[root@mini03 ~]# docker node ls
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
4.4. What are the current swarm services
[root@mini03 ~]# docker service ls    # just an example, not the actual data
ID             NAME                   MODE         REPLICAS   IMAGE                                             PORTS
lq7zkkal6ujt   hadoop_datanode        global       2/2        bde2020/hadoop-datanode:2.0.0-hadoop2.7.4-java8
ph2fu37k886b   hadoop_namenode        replicated   1/1        bde2020/hadoop-namenode:2.0.0-hadoop2.7.4-java8   *:50070->50070/tcp
ca47u5i2ubes   hbase-master           replicated   1/1        bde2020/hbase-master:1.0.0-hbase1.2.6             *:16010->16010/tcp
mkks4oa2ppcn   hbase-regionserver-1   replicated   1/1        bde2020/hbase-regionserver:1.0.0-hbase1.2.6
j4mhizg4j67p   hbase-regionserver-2   replicated   1/1        bde2020/hbase-regionserver:1.0.0-hbase1.2.6
yndrkc2bcpra   hbase_zoo1             replicated   1/1        zookeeper:3.4.10                                  *:2181->2181/tcp
r5ycrvo0zout   spark_spark            replicated   1/1        zhang/spark:latest                                *:4040->4040/tcp,*:7777->7777/tcp,*:8081->8081/tcp,*:18080->8080/tcp
f2v091nz24rg   tomcat_tomcat          global       2/2        zhang/tomcat:latest                               *:6543->6543/tcp,*:9999->9999/tcp,*:18081->8081/tcp
clfpryaerq2l   visualizer             replicated   1/1        dockersamples/visualizer:latest                   *:8080->8080/tcp
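For a single service the per-task view is usually more informative than the summary line; a minimal sketch using a service name from the example above:

[root@mini03 ~]# docker service ps hadoop_namenode                 # which node each task runs on, and its state
[root@mini03 ~]# docker service inspect --pretty hadoop_namenode   # human-readable service definition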
5. swarm label management
5.1. Label addition
According to the initial host and component deployment plan, the labels are assigned as follows; execute on the swarm manager mini03:
# label the mini01 machine
docker node update --label-add tomcat=true mini01
docker node update --label-add datanode=true mini01
docker node update --label-add hbase-regionserver-1=true mini01

# label the mini02 machine
docker node update --label-add tomcat=true mini02
docker node update --label-add datanode=true mini02
docker node update --label-add hbase-regionserver-2=true mini02

# label the mini03 machine
docker node update --label-add spark=true mini03
docker node update --label-add zookeeper=true mini03
docker node update --label-add namenode=true mini03
docker node update --label-add hbase-master=true mini03
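The labels only take effect once services reference them through placement constraints; a minimal sketch, assuming the zhang/tomcat image from the service list in section 4.4:

docker service create --name tomcat \
    --constraint 'node.labels.tomcat == true' \
    --network zhang \
    zhang/tomcat:latest

In a stack file, the same constraint goes under the service's deploy / placement / constraints section.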
5.2. Delete label
Execute on the swarm manager mini03, as shown below:
docker node update --label-rm zookeeper mini03
5.3. View the current labels of the swarm
Execute on the swarm manager mini03:
[root@mini03 ~]# docker node ls -q | xargs docker node inspect -f '{{.ID}}[{{.Description.Hostname}}]:{{.Spec.Labels}}'
6f7dwt47y6qvgs3yc6l00nmjd[mini01]:map[tomcat:true datanode:true hbase-regionserver-1:true]
5q2nmm2xaexhkn20z8f8ezglr[mini02]:map[tomcat:true datanode:true hbase-regionserver-2:true]
ncppwjknhcwbegmliafut0718[mini03]:map[hbase-master:true namenode:true spark:true zookeeper:true]
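For a single node, the same Go template works without the xargs pipeline:

[root@mini03 ~]# docker node inspect mini01 -f '{{.Spec.Labels}}'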
6. View logs
When starting the containers, check the related task status and error messages, for example:
docker stack ps hadoop
docker stack ps hadoop --format "{{.Name}}: {{.Error}}"
docker stack ps hadoop --format "{{.Name}}: {{.Error}}" --no-trunc
docker stack ps hadoop --no-trunc
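The stdout/stderr of a service's tasks can also be followed directly with docker service logs (available on this engine version); a sketch with a service name from the example in section 4.4:

docker service logs -f hadoop_namenode            # follow logs from all tasks of the service
docker service logs --tail 100 hadoop_namenode    # only the last 100 lines per task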