1. Prerequisites for deployment
2. CA and private key
3. Deployment of kubectl
4. Deployment of etcd
5. Deployment of flannel
1. Prerequisites for deployment
The next few articles walk through a binary installation of Kubernetes v1.14.3.
1. Version information
docker: v17.06.0-ce, etcd: v3.2.26, flannel: v0.11.0, Kubernetes: v1.14.3, OS: CentOS 7.3.1611, cfssl: v1.2.0
2. Planning for hosts
[root@master1 work]# vim /etc/hosts
192.168.192.222 master1 etcd www.mt.com
192.168.192.223 master2 etcd www.mt.com
192.168.192.224 master3 etcd www.mt.com
192.168.192.225 node1
192.168.192.226 node2
192.168.192.234 registry
Note: master1, master2, and master3 also serve as the etcd deployment addresses.
Host names are set manually with hostnamectl set-hostname <name> on each host; this is not covered further here.
[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a "src=/etc/hosts dest=/etc/hosts"
3. hosts of ansible
[root@master1 work]# cat /root/udp/hosts.ini
[master]
192.168.192.222 ansible_ssh_user=root ansible_ssh_pass=Your own password
192.168.192.223 ansible_ssh_user=root ansible_ssh_pass=Your own password
192.168.192.224 ansible_ssh_user=root ansible_ssh_pass=Your own password
[master1]
192.168.192.222 ansible_ssh_user=root ansible_ssh_pass=Your own password
[master2]
192.168.192.223 ansible_ssh_user=root ansible_ssh_pass=Your own password
[master3]
192.168.192.224 ansible_ssh_user=root ansible_ssh_pass=Your own password
[node]
192.168.192.225 ansible_ssh_user=root ansible_ssh_pass=Your own password
192.168.192.226 ansible_ssh_user=root ansible_ssh_pass=Your own password
[all]
192.168.192.222 ansible_ssh_user=root ansible_ssh_pass=Your own password
192.168.192.223 ansible_ssh_user=root ansible_ssh_pass=Your own password
192.168.192.224 ansible_ssh_user=root ansible_ssh_pass=Your own password
192.168.192.225 ansible_ssh_user=root ansible_ssh_pass=Your own password
192.168.192.226 ansible_ssh_user=root ansible_ssh_pass=Your own password
4. Initialization script init.sh
[root@master1 work]# vim init.sh
#!/bin/bash
echo 'PATH=/opt/k8s/bin:$PATH' >>/root/.bashrc
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget
systemctl disable firewalld && systemctl stop firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
modprobe ip_vs_rr
modprobe br_netfilter
[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a "src=./init.sh dest=/opt/k8s/bin/init.sh"
[root@master1 work]# ansible all -i /root/udp/hosts.ini -a "sh /opt/k8s/bin/init.sh"
A stable clock source is required; verify with ntpq -np that time synchronization is normal on every machine in the cluster.
5. Kernel optimization
[root@master1 work]# vim /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0          # prohibit use of swap space except when the system hits OOM
vm.overcommit_memory=1   # do not check whether physical memory is sufficient
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=655360000
fs.nr_open=655360000
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a "src=/etc/sysctl.d/kubernetes.conf dest=/etc/sysctl.d/kubernetes.conf"
[root@master1 work]# ansible all -i /root/udp/hosts.ini -a "sysctl -p /etc/sysctl.d/kubernetes.conf"
6. Configuration Planning Information
- Cluster node address: 192.168.192.222 192.168.192.223 192.168.192.224 192.168.192.225 192.168.192.226
- Registry address: 192.168.192.234
- Host names: master1 master2 master3 node1 node2 # correspond to the cluster node addresses above
- ETCD address: https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379
- ETCD inter-node communication ports: master1=https://192.168.192.222:2380,master2=https://192.168.192.223:2380,master3=https://192.168.192.224:2380
- APIServer address: https://127.0.0.1:8443
- Network card interface used: eth0
- ETCD data directory: /data/k8s/etcd/data
- ETCD wal directory: /data/k8s/etcd/wal
- Service segment: 10.244.0.0/16 # non-routable segment, reserved for Service use
- Ports used by services in nodePort mode: 30000-32767
- Cluster DNS Service IP: 10.244.0.2
- Pod segment: 172.30.0.0/16
- DNS domain name: cluster.local
- kubernetes Service IP: 10.244.0.1
- Binary store directory: /opt/k8s/bin
- All operations are performed on the master1 node and then distributed to the other nodes
- /opt/k8s/work/cert # certificate files
- /opt/k8s/work/yaml # YAML manifests
- /opt/k8s/work/service # systemd service files
2. CA and private key
CFSSL is CloudFlare's open-source PKI/TLS toolkit. It includes a command-line tool and an HTTP API service for signing, verifying, and bundling TLS certificates, and is written in Go.
Download address: https://pkg.cfssl.org/
Cluster certificates:
- client certificate: used by a client to authenticate itself to the server (e.g. kubelet)
- server certificate: used by the server to prove its identity to clients
- peer certificate: a two-way certificate used for communication between peers (e.g. etcd cluster members)
Certificate encoding formats:
- PEM (Privacy Enhanced Mail): Base64-encoded ASCII files, commonly used by certification authorities (CAs); extensions .pem, .crt, .cer, .key
- Content is wrapped between "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----"
- DER (Distinguished Encoding Rules): the same data in binary format; extension .der
- CSR: Certificate Signing Request
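To make the two encodings concrete, here is a small throwaway illustration (hypothetical file names demo.pem/demo-key.pem/demo.der; uses the openssl CLI rather than cfssl) that creates a self-signed certificate, shows the PEM markers, and converts it to binary DER:

```shell
# Create a throwaway self-signed certificate (names are for this demo only).
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-key.pem \
    -out demo.pem -days 1 -subj "/CN=pem-demo" 2>/dev/null
# PEM is Base64-encoded ASCII wrapped in BEGIN/END markers:
head -1 demo.pem
# DER is the same certificate re-encoded in binary form:
openssl x509 -in demo.pem -outform DER -out demo.der
```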
1. Install cfssl
Download address: https://github.com/cloudflare/cfssl
# mv cfssl_linux-amd64 /opt/k8s/bin/cfssl
# mv cfssljson_linux-amd64 /opt/k8s/bin/cfssljson
# mv cfssl-certinfo_linux-amd64 /opt/k8s/bin/cfssl-certinfo
2. Introduction to Commands
cfssl:
- bundle: Create a certificate package containing client certificates
- genkey: Generate a key (private key) and CSR (certificate signature request)
- scan: scan a host for TLS problems
- revoke: revoke a certificate
- certinfo: output certificate information for a given certificate, just as the cfssl-certinfo tool does
- gencrl: Generate a new Certificate Revocation List
- selfsign: Generate a new self-signed key and signature certificate
- print-defaults: Print the default configuration that can be used as a template
- serve: Start an HTTP API service
- info: get information about a remote signer
- sign: sign a client certificate, with the given CA and CA keys, and host name
- gencert: Generate a new key and signature certificate
- -ca: specifies the CA certificate file
- -ca-key: specifies the CA private key file
- -config: specifies the JSON signing-policy file
- -profile: selects a profile from the -config file; the certificate is generated according to that profile section
To learn more: https://github.com/cloudflare/cfssl
3. Create Certificate Generation Policy
[root@master1 work]# cd /opt/k8s/work/
[root@master1 work]# vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "26280h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "26280h"
      }
    }
  }
}
Policy Description:
ca-config.json: Multiple profiles can be defined, specifying different expiration times, usage scenarios, and so on
signing: Indicates that a certificate can be used to sign other certificates, CA=TRUE in the generated ca.pem certificate;
server auth: indicates that the client can use the certificate to authenticate the certificate provided by the server;
client auth: indicates that the server can use the certificate to authenticate the certificate provided by the client;
4. Create csr
[root@master1 work]# vim ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HangZhou",
      "L": "HangZhou",
      "O": "k8s",
      "OU": "FirstOne"
    }
  ],
  "ca": {
    "expiry": "26280h"
  }
}
CN (Common Name): kube-apiserver extracts this field from the certificate as the requesting user name (User Name); browsers use it to verify the legitimacy of a website;
O (Organization): kube-apiserver extracts this field from the certificate as the group the requesting user belongs to; kube-apiserver uses the extracted User and Group as the identity for RBAC authorization;
C = <country> country
ST = <state> state/province
L = <city> city
O = <organization> organization/company name
OU = <organization unit> organizational unit/company department
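As a side illustration (not part of the deployment; it uses openssl rather than cfssl, and throwaway file names k.pem/k.csr), the same subject fields can be written into a request and read back, which shows how C/ST/L/O/OU end up in a CSR:

```shell
# Generate a CSR with the same subject fields as ca-csr.json, then inspect it.
openssl req -new -newkey rsa:2048 -nodes -keyout k.pem -out k.csr \
    -subj "/C=CN/ST=HangZhou/L=HangZhou/O=k8s/OU=FirstOne/CN=kubernetes" 2>/dev/null
# Print the subject line stored in the request:
openssl req -in k.csr -noout -subject
```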
5. Generating ca
[root@master1 work]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
[root@master1 work]# ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
# Generates ca-key.pem (private key), ca.pem (certificate), and ca.csr (signing request)
[root@master1 work]# ansible all -i /root/udp/hosts.ini -a "mkdir /etc/kubernetes/cert -pv"
[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a "src=./ca.pem dest=/etc/kubernetes/cert/"
[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a "src=./ca-key.pem dest=/etc/kubernetes/cert/"
[root@master1 cert]# ansible all -i /root/udp/hosts.ini -m copy -a "src=./ca-config.json dest=/etc/kubernetes/cert"
[root@master1 cert]# cfssl certinfo -cert ca.pem    # view certificate contents
[root@master1 cert]# cfssl certinfo -csr ca.csr     # view signing-request contents
3. Deployment of kubectl
1. Copy Binary Files
[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a 'src=/opt/k8s/bin/kubelet dest=/opt/k8s/bin/'
[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a 'src=/opt/k8s/bin/kube-proxy dest=/opt/k8s/bin/'
[root@master1 work]# ansible master -i /root/udp/hosts.ini -m copy -a "src=/opt/k8s/bin/cloud-controller-manager dest=/opt/k8s/bin"
[root@master1 work]# ansible master -i /root/udp/hosts.ini -m copy -a "src=/opt/k8s/bin/apiextensions-apiserver dest=/opt/k8s/bin"
[root@master1 work]# ansible master -i /root/udp/hosts.ini -m copy -a 'src=/opt/k8s/bin/kube-apiserver dest=/opt/k8s/bin/'
[root@master1 work]# ansible master -i /root/udp/hosts.ini -m copy -a 'src=/opt/k8s/bin/kube-scheduler dest=/opt/k8s/bin/'
[root@master1 work]# ansible master -i /root/udp/hosts.ini -m copy -a 'src=/opt/k8s/bin/kube-controller-manager dest=/opt/k8s/bin/'
[root@master1 work]# ansible master -i /root/udp/hosts.ini -m copy -a 'src=/opt/k8s/bin/kubectl dest=/opt/k8s/bin/'
2. Create admin csr
kubectl reads the kube-apiserver address and authentication information from the ~/.kube/config file by default
Create a certificate signing request:
[root@master1 cert]# vim admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HangZhou",
      "L": "HangZhou",
      "O": "system:masters",
      "OU": "FirstOne"
    }
  ]
}
O: set to system:masters; when kube-apiserver receives the certificate it sets the request's Group to system:masters;
The predefined ClusterRoleBinding cluster-admin binds Group system:masters to the cluster-admin Role, which grants access to all APIs;
Because this certificate is used by kubectl only as a client certificate, the hosts field is empty.
3. Generate certificates and private keys
[root@master1 cert]# cfssl gencert -ca=./ca.pem -ca-key=./ca-key.pem -config=./ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
This generates: admin.csr, admin-key.pem, admin.pem
4. Generate kubeconfig file
kubeconfig is the configuration file for kubectl; it contains the apiserver address and CA certificate information.
Default information view:
[root@master1 cert]# kubectl config view
apiVersion: v1
clusters: []          # the kubernetes clusters to access
contexts: []          # the contexts for accessing those clusters
current-context: ""   # the context currently in use
kind: Config
preferences: {}
users: []             # user access information: user name and certificate material

# Set cluster parameters (adds the contents of ca.pem and the server address to the clusters section)
[root@master1 cert]# kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kubectl.kubeconfig

# Set client authentication parameters (adds the admin public and private keys to the users section)
[root@master1 cert]# kubectl config set-credentials admin \
  --client-certificate=./admin.pem \
  --client-key=./admin-key.pem \
  --embed-certs=true \
  --kubeconfig=kubectl.kubeconfig

# Set context parameters (adds a context entry)
[root@master1 cert]# kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=kubectl.kubeconfig

# Set the default context
[root@master1 cert]# kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig
The --embed-certs=true parameter embeds the certificate contents directly into kubectl.kubeconfig.
If it is not set, only the paths to the certificate files are written, and any later copy of the kubeconfig to another machine must be accompanied by separate copies of the certificate files.
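The difference can be illustrated offline (demo.pem below is a stand-in for ca.pem; base64 -w0 is the GNU coreutils flag for unwrapped output):

```shell
# With --embed-certs=false the kubeconfig stores only a file path:
printf 'certificate-authority: /opt/k8s/work/ca.pem\n'
# With --embed-certs=true the PEM bytes are base64-encoded inline, so the
# kubeconfig is self-contained and can be copied anywhere on its own:
printf 'demo certificate bytes\n' > demo.pem     # stand-in for ca.pem
printf 'certificate-authority-data: %s\n' "$(base64 -w0 demo.pem)"
```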
5. Distributing kubectl.kubeconfig
Distribute the kubectl.kubeconfig file to all master nodes:
[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m shell -a "mkdir ~/.kube"
[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./kubectl.kubeconfig dest=~/.kube/config"
4. Deployment of etcd
etcd is used for service discovery, shared configuration, and concurrency control (leader election, distributed locks, and so on). Kubernetes uses etcd to store all of its running data.
1. Distribute binaries
[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./etcdctl dest=/opt/k8s/bin"
[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./etcd dest=/opt/k8s/bin"
2. Create certificates for etcd
[root@master1 cert]# vim etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.192.222",
    "192.168.192.223",
    "192.168.192.224"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HangZhou",
      "L": "HangZhou",
      "O": "k8s",
      "OU": "FirstOne"
    }
  ]
}
[root@master1 cert]# cfssl gencert -ca=./ca.pem -ca-key=./ca-key.pem -config=./ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
[root@master1 cert]# ls etcd*
etcd.csr  etcd-csr.json  etcd-key.pem  etcd.pem
3. Distributing certificates and private keys
[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./etcd.pem dest=/etc/etcd/cert/"
[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./etcd-key.pem dest=/etc/etcd/cert/"
4. Configure service
[root@master1 service]# vim etcd.service.template
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/data/k8s/etcd/data
ExecStart=/opt/k8s/bin/etcd \
  --data-dir=/data/k8s/etcd/data \
  --wal-dir=/data/k8s/etcd/wal \
  --name=NODE_NAME \
  --cert-file=/etc/etcd/cert/etcd.pem \
  --key-file=/etc/etcd/cert/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-cert-file=/etc/etcd/cert/etcd.pem \
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --listen-peer-urls=https://NODE_IP:2380 \
  --initial-advertise-peer-urls=https://NODE_IP:2380 \
  --listen-client-urls=https://NODE_IP:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://NODE_IP:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=master1=https://192.168.192.222:2380,master2=https://192.168.192.223:2380,master3=https://192.168.192.224:2380 \
  --initial-cluster-state=new \
  --auto-compaction-retention=1 \
  --max-request-bytes=33554432 \
  --quota-backend-bytes=6442450944 \
  --heartbeat-interval=250 \
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

[root@master1 service]# sed 's/NODE_NAME/master1/g;s/NODE_IP/192.168.192.222/g' etcd.service.template > ./etcd.service.master1
[root@master1 service]# sed 's/NODE_NAME/master2/g;s/NODE_IP/192.168.192.223/g' etcd.service.template > ./etcd.service.master2
[root@master1 service]# sed 's/NODE_NAME/master3/g;s/NODE_IP/192.168.192.224/g' etcd.service.template > ./etcd.service.master3
[root@master1 service]# ansible master -i /root/udp/hosts.ini -a "mkdir -pv /data/k8s/etcd/data"
[root@master1 service]# ansible master -i /root/udp/hosts.ini -a "mkdir -pv /data/k8s/etcd/wal"
[root@master1 service]# scp etcd.service.master1 root@master1:/etc/systemd/system/etcd.service
[root@master1 service]# scp etcd.service.master2 root@master2:/etc/systemd/system/etcd.service
[root@master1 service]# scp etcd.service.master3 root@master3:/etc/systemd/system/etcd.service
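The per-node sed substitutions can also be written as one loop. This sketch uses a short stub in place of the full etcd.service.template above so it stands alone; the node names and IPs are this cluster's:

```shell
# Stub standing in for the full etcd.service.template written earlier.
cat > etcd.service.template <<'EOF'
--name=NODE_NAME \
--listen-peer-urls=https://NODE_IP:2380 \
EOF
for node in master1:192.168.192.222 master2:192.168.192.223 master3:192.168.192.224; do
    name=${node%%:*}; ip=${node##*:}
    # Same substitution as the one-off sed commands: fill in name and IP.
    sed "s/NODE_NAME/${name}/g;s/NODE_IP/${ip}/g" etcd.service.template > etcd.service.${name}
done
grep -- '--name' etcd.service.master2    # the line now carries --name=master2
```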
5. Explanation of parameters
--name=NODE_NAME                                       # node name
--cert-file=/etc/etcd/cert/etcd.pem                    # etcd server certificate
--key-file=/etc/etcd/cert/etcd-key.pem                 # etcd server private key
--trusted-ca-file=/etc/kubernetes/cert/ca.pem          # CA certificate used to verify client certificates
--peer-cert-file=/etc/etcd/cert/etcd.pem               # certificate for communication between etcd cluster members
--peer-key-file=/etc/etcd/cert/etcd-key.pem            # private key for communication between etcd cluster members
--peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem     # CA certificate used to verify peer certificates
--peer-client-cert-auth                                # require client certificates on peer connections
--client-cert-auth                                     # require client certificates on client connections
--listen-peer-urls=https://NODE_IP:2380                # URL to listen on for peer (node-to-node) communication
--initial-advertise-peer-urls=https://NODE_IP:2380     # peer URL advertised to the rest of the cluster
--listen-client-urls=https://NODE_IP:2379,http://127.0.0.1:2379   # URLs to listen on for client communication
--advertise-client-urls=https://NODE_IP:2379           # client URL advertised to clients
--initial-cluster-token=etcd-cluster-0                 # cluster token; gives each cluster a unique id, so clusters restarted with the same configuration but different tokens will not interact with each other
--initial-cluster=master1=...                          # initial cluster membership, name=peer-URL pairs as in the unit file above
--initial-cluster-state=new                            # marks a brand-new cluster
--auto-compaction-retention=1                          # auto-compaction retention for the MVCC key-value store, in hours; 0 disables auto-compaction
--max-request-bytes=33554432                           # maximum size of a client request, in bytes
--quota-backend-bytes=6442450944                       # backend storage size quota, in bytes
--heartbeat-interval=250                               # heartbeat interval, in milliseconds
--election-timeout=2000                                # election timeout, in milliseconds
6. Start Services
[root@master1 service]# ansible master -i /root/udp/hosts.ini -m shell -a "chmod +x /opt/k8s/bin/*"
[root@master1 service]# ansible master -i /root/udp/hosts.ini -a "systemctl start etcd.service"
[root@master1 service]# ansible master -i /root/udp/hosts.ini -m shell -a "systemctl daemon-reload; systemctl restart etcd.service"
[root@master1 service]# ansible master -i /root/udp/hosts.ini -m shell -a "systemctl enable etcd.service"
7. etcd service validation
[root@master1 service]# ETCDCTL_API=3 /opt/k8s/bin/etcdctl -w table \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  --endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 \
  endpoint status
+------------------------------+------------------+---------+---------+-----------+-----------+------------+
|           ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+------------------------------+------------------+---------+---------+-----------+-----------+------------+
| https://192.168.192.222:2379 | 2c7d8b7aa58766f3 | 3.2.26  | 25 kB   | true      | 29        | 15         |
| https://192.168.192.223:2379 | 257fa42984b72360 | 3.2.26  | 25 kB   | false     | 29        | 15         |
| https://192.168.192.224:2379 | 3410f89131d2eef  | 3.2.26  | 25 kB   | false     | 29        | 15         |
+------------------------------+------------------+---------+---------+-----------+-----------+------------+
[root@master1 cert]# etcdctl --endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 \
  --ca-file=/opt/k8s/work/cert/ca.pem \
  --cert-file=/opt/k8s/work/cert/flanneld.pem \
  --key-file=/opt/k8s/work/cert/flanneld-key.pem \
  member list
3410f89131d2eef: name=master3 peerURLs=https://192.168.192.224:2380 clientURLs=https://192.168.192.224:2379 isLeader=false
257fa42984b72360: name=master2 peerURLs=https://192.168.192.223:2380 clientURLs=https://192.168.192.223:2379 isLeader=false
2c7d8b7aa58766f3: name=master1 peerURLs=https://192.168.192.222:2380 clientURLs=https://192.168.192.222:2379 isLeader=true
5. Deployment of flannel
1. Distribute binaries
[root@master1 service]# ansible all -i /root/udp/hosts.ini -m copy -a "src=../../bin/flanneld dest=/opt/k8s/bin"
[root@master1 service]# ansible all -i /root/udp/hosts.ini -m copy -a "src=/root/k8s/mk-docker-opts.sh dest=/opt/k8s/bin/"
[root@master1 service]# ansible all -i /root/udp/hosts.ini -m shell -a "chmod +x /opt/k8s/bin/*"
2. Create certificates and private keys
[root@master1 cert]# vim flanneld-csr.json
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HangZhou",
      "L": "HangZhou",
      "O": "k8s",
      "OU": "FirstOne"
    }
  ]
}
[root@master1 cert]# cfssl gencert -ca=./ca.pem -ca-key=./ca-key.pem -config=./ca-config.json -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
[root@master1 cert]# ansible all -i /root/udp/hosts.ini -m shell -a "mkdir /etc/flanneld/cert"
[root@master1 cert]# ansible all -i /root/udp/hosts.ini -m copy -a "src=./flanneld.pem dest=/etc/flanneld/cert"
[root@master1 cert]# ansible all -i /root/udp/hosts.ini -m copy -a "src=./flanneld-key.pem dest=/etc/flanneld/cert/"
3. Write the Pod segment information to etcd
[root@master1 cert]# etcdctl --endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 \
  --ca-file=/opt/k8s/work/cert/ca.pem \
  --cert-file=/opt/k8s/work/cert/flanneld.pem \
  --key-file=/opt/k8s/work/cert/flanneld-key.pem \
  mk /kubernetes/network/config '{"Network":"172.30.0.0/16", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}'
{"Network":"172.30.0.0/16", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}
[root@master1 cert]# etcdctl --endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 \
  --ca-file=/opt/k8s/work/cert/ca.pem \
  --cert-file=/opt/k8s/work/cert/flanneld.pem \
  --key-file=/opt/k8s/work/cert/flanneld-key.pem \
  get /kubernetes/network/config
{"Network":"172.30.0.0/16", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}
4. service Configuration
[root@master1 service]# vim flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
Before=docker.service

[Service]
Type=notify
ExecStart=/opt/k8s/bin/flanneld \
  -etcd-cafile=/etc/kubernetes/cert/ca.pem \
  -etcd-certfile=/etc/flanneld/cert/flanneld.pem \
  -etcd-keyfile=/etc/flanneld/cert/flanneld-key.pem \
  -etcd-endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 \
  -etcd-prefix=/kubernetes/network \
  -iface=eth0 \
  -ip-masq
ExecStartPost=/opt/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

[root@master1 service]# ansible all -i /root/udp/hosts.ini -m copy -a "src=./flanneld.service dest=/etc/systemd/system/"
[root@master1 ~]# ansible all -i /root/udp/hosts.ini -m shell -a "systemctl daemon-reload; systemctl restart flanneld.service && systemctl status flanneld.service"
[root@master1 ~]# ansible all -i /root/udp/hosts.ini -m shell -a "systemctl enable flanneld.service"
The mk-docker-opts.sh script writes the Pod subnet information assigned to flanneld into the /run/flannel/docker file; when Docker starts, it uses the environment variables in that file (DOCKER_NETWORK_OPTIONS) to configure the docker0 bridge.
-ip-masq: when flanneld itself sets up the SNAT rule for traffic leaving the Pod network, it sets the --ip-masq option it passes to Docker (via the /run/flannel/docker file) to false.
The SNAT rules flanneld creates are more fine-grained than Docker's: they apply SNAT only to requests destined for addresses outside the Pod segment.
5. View pod segment information
[root@master1 cert]# etcdctl --endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 \
  --ca-file=/etc/kubernetes/cert/ca.pem \
  --cert-file=/etc/flanneld/cert/flanneld.pem \
  --key-file=/etc/flanneld/cert/flanneld-key.pem \
  ls /kubernetes/network/subnets
/kubernetes/network/subnets/172.30.8.0-21
/kubernetes/network/subnets/172.30.96.0-21
/kubernetes/network/subnets/172.30.64.0-21
/kubernetes/network/subnets/172.30.32.0-21
/kubernetes/network/subnets/172.30.56.0-21
View the node IP and flannel interface address corresponding to the Pod segment assigned to a node:
[root@master1 cert]# etcdctl --endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 \
  --ca-file=/etc/kubernetes/cert/ca.pem \
  --cert-file=/etc/flanneld/cert/flanneld.pem \
  --key-file=/etc/flanneld/cert/flanneld-key.pem \
  get /kubernetes/network/subnets/172.30.8.0-21
{"PublicIP":"192.168.192.223","BackendType":"vxlan","BackendData":{"VtepMAC":"f6:e1:42:b9:35:70"}}
# PublicIP: 192.168.192.223 — the node assigned the 172.30.8.0/21 segment
# VtepMAC: f6:e1:42:b9:35:70 — the MAC address of that node's flannel.1 interface
[root@master1 service]# cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=172.30.96.1/21"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.30.96.1/21 --ip-masq=false --mtu=1450"
[root@master1 service]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.30.0.0/16
FLANNEL_SUBNET=172.30.96.1/21
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
flanneld writes its own segment information to /run/flannel/docker.
Docker then uses the environment variables in this file to set up the docker0 bridge, which assigns IPs from this address segment to all Pod containers on the node.
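How Docker consumes that env file can be mimicked locally. This sketch writes a sample file (values copied from the output above; flannel-docker.env is a throwaway name standing in for /run/flannel/docker), sources it the way the docker unit's EnvironmentFile= does, and expands the resulting dockerd options:

```shell
# Sample standing in for /run/flannel/docker, as written by mk-docker-opts.sh.
cat > flannel-docker.env <<'EOF'
DOCKER_OPT_BIP="--bip=172.30.96.1/21"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.30.96.1/21 --ip-masq=false --mtu=1450"
EOF
# The docker unit sources this file and expands $DOCKER_NETWORK_OPTIONS
# on the dockerd command line:
. ./flannel-docker.env
echo "dockerd${DOCKER_NETWORK_OPTIONS}"
```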
Reference web address:
https://github.com/cloudflare/cfssl
https://kubernetes.io/docs/setup/
https://github.com/opsnull/follow-me-install-kubernetes-cluster/blob/master/04.%E9%83%A8%E7%BD%B2etcd%E9%9B%86%E7%BE%A4.md