RKE: Rancher Kubernetes Engine
https://github.com/rancher/rke
RKE is written in Go, so you can simply download the binary for your operating system.
Download address: https://github.com/rancher/rk...
Preparation in advance
1. Ubuntu 16.04.3 LTS is recommended; if you use CentOS 7, version 7.3 or later is recommended.
2. The hostname of each host must be different!
3. Hosts file settings: /etc/hosts must contain a correctly configured 127.0.0.1 localhost entry, and it should list the IP addresses and hostnames of all cluster nodes.
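For example, on a three-node cluster each machine's /etc/hosts might be filled in like this (the IP addresses and hostnames below are placeholders; adjust them to your environment):

```bash
# Run as root. 127.0.0.1 localhost should already be present in /etc/hosts.
cat >> /etc/hosts <<'EOF'
192.168.1.10  master
192.168.1.11  node-1
192.168.1.12  node-2
EOF
```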
Docker
Install Docker through the scripts provided by Rancher (the following are the supported Docker versions and their install scripts):
Docker version | Install script
---|---
18.09.2 | `curl https://releases.rancher.com/... \| sh`
18.06.2 | `curl https://releases.rancher.com/... \| sh`
17.03.2 | `curl https://releases.rancher.com/... \| sh`
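After running the script you can quickly confirm that the installed version matches one of the supported releases above (a minimal check to run on every node):

```bash
# Confirm the Docker daemon is running and check the installed version
systemctl is-active docker
docker version --format '{{.Server.Version}}'
```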
Setting up docker user groups
RKE installs and deploys over SSH tunnels, so passwordless (key-based) SSH login from the RKE host to each node must be set up beforehand. If there are three nodes in the cluster, run the key generation command ssh-keygen once on the RKE machine, then distribute the generated public key to each node with: ssh-copy-id {user}@{ip}.
Create ssh users on each node and add them to the docker group:
useradd dockeruser
usermod -aG docker dockeruser
Note: you must reboot the system for this to take effect; restarting only the Docker service is not enough! After the reboot, the dockeruser user can run the docker run command directly.
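A quick sanity check after the reboot, assuming the user is named dockeruser as above:

```bash
# dockeruser should be listed in the docker group...
id dockeruser
# ...and be able to talk to the Docker daemon without sudo
su - dockeruser -c 'docker ps'
```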
Disable SELinux
SELinux is not installed by default on Ubuntu 16.04, so no settings are required.
1) On CentOS 7, edit the configuration file:
vi /etc/sysconfig/selinux
2) Set SELINUX=disabled; SELinux will be permanently disabled after a reboot.
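On CentOS you can also check the current state, or turn SELinux off for the running session without waiting for the reboot:

```bash
getenforce          # prints Enforcing / Permissive / Disabled
sudo setenforce 0   # switch to Permissive for the current boot only
```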
Setting up IPv4 forwarding
This must be enabled! It is on by default in Ubuntu 16.04, so no setting is required there.
1) On CentOS 7, edit the configuration file:
vi /etc/sysctl.conf
2) Add the following settings:
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
3) Run the following command to apply the settings:
sudo sysctl -p
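To confirm the values are active (note: on some CentOS installs the two bridge keys only appear after the br_netfilter kernel module is loaded, which is an assumption about your kernel setup):

```bash
# The bridge-nf-call keys only exist once the br_netfilter module is loaded
sudo modprobe br_netfilter
# Confirm the values are active
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
```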
Setting up Firewall Policy
Open ports 6443, 2379, and 2380 between the cluster host nodes. If you are just trying things out, you can stop the firewall first:
systemctl stop firewalld
Ubuntu does not enable the UFW firewall by default, so no settings are required. You can also turn it off manually: sudo ufw disable
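If you would rather keep firewalld running on CentOS than stop it, here is a sketch that opens only the ports listed above:

```bash
sudo firewall-cmd --permanent --add-port=6443/tcp   # Kubernetes API server
sudo firewall-cmd --permanent --add-port=2379/tcp   # etcd client traffic
sudo firewall-cmd --permanent --add-port=2380/tcp   # etcd peer traffic
sudo firewall-cmd --reload
```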
Disable Swap
Be sure to disable swap, otherwise the kubelet component will not run.
1) Permanently disable swap
Edit the file directly:
vi /etc/fstab
and comment out the swap entry.
2) Temporarily disable swap:
swapoff -a
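A sketch for verifying swap is off and commenting out the fstab entry non-interactively (assumes the swap line in /etc/fstab contains the word swap):

```bash
# Comment out the swap line in /etc/fstab (keeps a backup at /etc/fstab.bak)
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
# Verify: the Swap row should show 0B
free -h
```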
Enable cgroup
Modify the configuration file /etc/default/grub to enable the cgroup memory quota feature by configuring these two parameters:
GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=memory swapaccount=1"
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
Note: run sudo update-grub to update grub, then reboot for the change to take effect.
Execute under CentOS:
grub2-mkconfig -o /boot/grub2/grub.cfg
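After the reboot you can confirm the kernel picked up both parameters (works on either distribution):

```bash
# Both parameters should appear in the kernel command line after the reboot
grep -o 'cgroup_enable=memory\|swapaccount=1' /proc/cmdline
```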
Set up SSH
1) Create a key on the host where rke is located:
ssh-keygen
2) Distribute the public key of the generated key to each node:
ssh-copy-id dockeruser@xxx.xxx.xx.xx ...
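Before running RKE it is worth confirming that each node accepts the key-based login and that the user can reach the Docker daemon (placeholder address; adjust the key path if you generated a non-default key):

```bash
# Should print the container list without prompting for a password
ssh -i ~/.ssh/id_rsa dockeruser@xxx.xxx.xx.xx docker ps
```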
Write cluster.yml
You can run the command rke_darwin-amd64 config (my local machine is a Mac, so I use the darwin-amd64 build) and follow the prompts to complete the basic configuration.
When the configuration is complete, a cluster.yml will appear in the current directory.
Here is the content of the configuration file (I am configuring a k8s cluster with two nodes and one master; RKE can easily support HA deployment, you just need to specify the controlplane role for multiple nodes in the configuration file):
nodes:
- address: xxx.xxx.xxx.xxx
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: master
  user: dockeruser
  docker_socket: /var/run/docker.sock
  labels: {}
- address: xxx.xxx.xxx.xxx
  port: "22"
  internal_address: ""
  role:
  - worker
  hostname_override: node-1
  user: dockeruser
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    snapshot: null
    retention: ""
    creation: ""
    backup_config: null
  kube-api:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: ""
    pod_security_policy: false
    always_pull_images: false
  kube-controller:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
network:
  plugin: flannel
  options: {}
authentication:
  strategy: x509
  sans: []
  webhook: null
addons: ""
addons_include: []
system_images:
  etcd: rancher/coreos-etcd:v3.3.10-rancher1
  alpine: rancher/rke-tools:v0.1.42
  nginx_proxy: rancher/rke-tools:v0.1.42
  cert_downloader: rancher/rke-tools:v0.1.42
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.42
  kubedns: rancher/k8s-dns-kube-dns:1.15.0
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny:1.15.0
  kubedns_sidecar: rancher/k8s-dns-sidecar:1.15.0
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler:1.3.0
  coredns: rancher/coredns-coredns:1.3.1
  coredns_autoscaler: rancher/cluster-proportional-autoscaler:1.3.0
  kubernetes: rancher/hyperkube:v1.14.6-rancher1
  flannel: rancher/coreos-flannel:v0.10.0-rancher1
  flannel_cni: rancher/flannel-cni:v0.3.0-rancher1
  calico_node: rancher/calico-node:v3.4.0
  calico_cni: rancher/calico-cni:v3.4.0
  calico_controllers: ""
  calico_ctl: rancher/calico-ctl:v2.0.0
  canal_node: rancher/calico-node:v3.4.0
  canal_cni: rancher/calico-cni:v3.4.0
  canal_flannel: rancher/coreos-flannel:v0.10.0
  weave_node: weaveworks/weave-kube:2.5.0
  weave_cni: weaveworks/weave-npc:2.5.0
  pod_infra_container: rancher/pause:3.1
  ingress: rancher/nginx-ingress-controller:0.21.0-rancher3
  ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1
  metrics_server: rancher/metrics-server:v0.3.1
ssh_key_path: ~/.ssh/id_rsa
ssh_cert_path: ""
ssh_agent_auth: true
authorization:
  mode: rbac
  options: {}
ignore_docker_version: false
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
cluster_name: ""
cloud_provider:
  name: ""
prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
monitoring:
  provider: ""
  options: {}
restore:
  restore: false
  snapshot_name: ""
dns: null
Install Kubernetes
rke_darwin-amd64 up
You can see the tool start printing log output. When the following line finally appears:
Finished building Kubernetes cluster successfully
the cluster has been installed successfully.
Verification
When the cluster is installed successfully, RKE creates a kube_config_cluster.yml file in the current directory, which is the kubeconfig file.
By default, the kubeconfig file is called kube_config_cluster.yml. Copy this file to ~/.kube/config on your local machine to use kubectl locally.
Note that the name of the generated kubeconfig file depends on the name of the cluster configuration file. For example, if you use a configuration file named mycluster.yml, the kubeconfig will be named kube_config_mycluster.yml.
export KUBECONFIG=./kube_config_cluster.yml
kubectl get node

NAME     STATUS   ROLES                      AGE     VERSION
master   Ready    controlplane,etcd,worker   6m27s   v1.14.6
node-1   Ready    worker                     6m5s    v1.14.6
Seeing the node information indicates that the installation succeeded.
Add or delete nodes
RKE supports adding or deleting nodes whose roles are worker or controlplane.
1) Add nodes:
To add nodes, you only need to add them to the node list in the cluster configuration file and re-run rke up with the same file (see the sketch after this list).
2) Delete nodes:
To delete nodes, simply remove them from the node list in the cluster configuration file and re-run the rke up command.
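For example, the add/remove workflow boils down to editing the node list and re-running the same command (a sketch with placeholder values, not a complete node entry):

```bash
# 1) Edit cluster.yml and add (or remove) an entry under the "nodes:" list, e.g.:
#    - address: xxx.xxx.xxx.xxx
#      port: "22"
#      role:
#      - worker
#      hostname_override: node-2
#      user: dockeruser
# 2) Re-run RKE against the same file
rke_darwin-amd64 up
```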
High availability
RKE supports high availability. You can specify multiple controlplane hosts in the cluster configuration file, and RKE will deploy the master components on all of them.
By default, the kubelets are configured to connect to the nginx-proxy service address (127.0.0.1:6443), which forwards requests to all master nodes.
To start an HA cluster, you only need to specify multiple hosts with the controlplane role and then start the cluster normally.
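For instance, a three-master HA setup only differs in the nodes section of cluster.yml; a trimmed sketch with placeholder addresses (not a full configuration):

```bash
# Trimmed "nodes:" section of cluster.yml for a three-master HA setup:
# nodes:
# - address: xxx.xxx.xxx.1
#   role: [controlplane, etcd, worker]
# - address: xxx.xxx.xxx.2
#   role: [controlplane, etcd, worker]
# - address: xxx.xxx.xxx.3
#   role: [controlplane, etcd, worker]
# Then bring the cluster up as usual:
rke_darwin-amd64 up
```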
Delete cluster
RKE supports the rke remove command. This command performs the following actions:
Connect to each host and delete the Kubernetes service deployed on it.
Clean up the following directories, where the services store their state, on each host:
- /etc/kubernetes/ssl
- /var/lib/etcd
- /etc/cni
- /opt/cni
Note that this command is irreversible and will completely destroy the Kubernetes cluster.
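Usage mirrors rke up; point it at the same cluster configuration file (binary name as used earlier on macOS):

```bash
# Point rke remove at the same configuration file used for rke up
rke_darwin-amd64 remove --config cluster.yml
```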
Possible problems during installation
ssh: handshake failed: ssh: unable to authenticate, attempted methods [publickey none], no supported methods remain
Check that the user configured in the configuration file can log in to the machine using the specified private key.
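A quick way to reproduce what RKE is doing is to try the same user and key manually; and since the configuration above sets ssh_agent_auth: true, the key also needs to be loaded into your SSH agent (a debugging sketch with a placeholder address):

```bash
# Manually test with the exact key and user from cluster.yml
ssh -i ~/.ssh/id_rsa dockeruser@xxx.xxx.xxx.xxx 'echo ok'
# If ssh_agent_auth: true is set, the key must also be loaded into the agent
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
```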