Installing Kubernetes with kubeadm

Keywords: Docker Kubernetes yum network

Installing Kubernetes 1.13.3 using kubeadm

Environment: two CentOS 7 servers, each with 2 CPU cores and 2 GB of RAM

Host name     IP              Role
k8s-master    192.168.2.7     Kubernetes master node
k8s-node1     192.168.137.4   Kubernetes worker node 1

Docker version: 18.06.1-ce

Kubernetes Version: 1.13.3

(1) Preparations

(1) Disable the firewall on all nodes

systemctl disable firewalld.service 
systemctl stop firewalld.service

(2) Disable SELinux on all nodes

setenforce 0

vi /etc/selinux/config
SELINUX=disabled
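
Alternatively, the same change can be made non-interactively with sed (a convenience one-liner, assuming the file currently contains SELINUX=enforcing):

sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config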

(3) Disable swap on all nodes

swapoff -a
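
Note that swapoff -a only disables swap until the next reboot. To keep it disabled permanently, you can also comment out the swap entry in /etc/fstab, for example:

sed -ri 's/.*swap.*/#&/' /etc/fstab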

(4) Set the hostname on each node

hostnamectl --static set-hostname k8s-master   # run on the master
hostnamectl --static set-hostname k8s-node1    # run on node 1

(5) Add hostname/IP entries to /etc/hosts on all nodes

vim /etc/hosts
192.168.2.7 k8s-master
192.168.137.4 k8s-node1
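
Optionally, verify that the names resolve from each node:

ping -c 1 k8s-master
ping -c 1 k8s-node1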

(2) Installation of docker

(1) Configure a domestic (Aliyun) yum repository for Docker

cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

(2) Install the specified version of docker

yum -y install docker-ce-18.06.1.ce-3.el7

(3) Enable Docker to start on boot and start the Docker service

systemctl enable docker && systemctl start docker

(4) View the docker version

docker --version
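
If the installation succeeded, the output should report version 18.06.1-ce, roughly like the following (the build hash may differ on your system):

Docker version 18.06.1-ce, build e68fc7a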

(3) Installation of kubernetes

(1) Configure the domestic (Aliyun) yum repository for Kubernetes

vim /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

(2) Install kubelet, kubeadm, kubectl, and kubernetes-cni

yum install -y kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3 kubernetes-cni-0.6.0-0
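
To confirm that the expected versions were installed, a quick check such as the following can be used:

kubeadm version
kubelet --version
rpm -q kubernetes-cni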

(4) Preparations for kubeadm init

(1) Pull the images needed to bootstrap Kubernetes (from mirror repositories, since k8s.gcr.io is not directly reachable)

docker pull mirrorgooglecontainers/kube-apiserver:v1.13.3
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.3
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.3
docker pull mirrorgooglecontainers/kube-proxy:v1.13.3
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.6
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64

(2) Re-tag the downloaded images with the default names that kubeadm expects

docker tag mirrorgooglecontainers/kube-apiserver:v1.13.3 k8s.gcr.io/kube-apiserver:v1.13.3
docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.3 k8s.gcr.io/kube-controller-manager:v1.13.3
docker tag mirrorgooglecontainers/kube-scheduler:v1.13.3 k8s.gcr.io/kube-scheduler:v1.13.3
docker tag mirrorgooglecontainers/kube-proxy:v1.13.3 k8s.gcr.io/kube-proxy:v1.13.3
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

(3) Delete the original image tags

docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.3           
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.3  
docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.3           
docker rmi mirrorgooglecontainers/kube-proxy:v1.13.3               
docker rmi mirrorgooglecontainers/pause:3.1                        
docker rmi mirrorgooglecontainers/etcd:3.2.24                      
docker rmi coredns/coredns:1.2.6
docker rmi registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
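
Equivalently, the pull/tag/rmi steps above can be scripted. A minimal sketch covering the images that come from mirrorgooglecontainers (coredns and flannel come from other repositories and are handled separately, as shown above):

for img in kube-apiserver:v1.13.3 kube-controller-manager:v1.13.3 kube-scheduler:v1.13.3 kube-proxy:v1.13.3 pause:3.1 etcd:3.2.24; do
  docker pull mirrorgooglecontainers/$img                  # pull from the mirror
  docker tag mirrorgooglecontainers/$img k8s.gcr.io/$img   # re-tag with the name kubeadm expects
  docker rmi mirrorgooglecontainers/$img                   # drop the mirror tag
done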

(4) Setting Kernel Parameters

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Apply the configuration:

sysctl -p /etc/sysctl.d/k8s.conf
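
If sysctl reports that the net.bridge.* keys do not exist (cannot stat /proc/sys/net/bridge/...), the br_netfilter kernel module is probably not loaded yet; loading it and re-applying usually resolves this:

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf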

(5) Enable kubelet to start on boot and start it

systemctl enable kubelet && systemctl start kubelet

Note that kubelet cannot start properly at this point; it will keep restarting until the cluster is initialized, which is normal. You can check /var/log/messages for error messages. Execute all of the steps above on every Kubernetes node.

(5) Initialize the cluster with kubeadm init

(1) On the master node, execute the initialization command

kubeadm init --kubernetes-version=v1.13.3 --apiserver-advertise-address 192.168.2.7 --pod-network-cidr=10.244.0.0/16

· --apiserver-advertise-address: specifies which IP address of the master is used to communicate with the other cluster nodes.

· --service-cidr: specifies the Service network range, i.e. the IP segment used for load-balancing VIPs.

· --pod-network-cidr: specifies the Pod network range, i.e. the IP segment used by Pods.

· --image-repository: the default image registry for Kubernetes is k8s.gcr.io, which cannot be reached from mainland China. Since version 1.13, the --image-repository parameter (default k8s.gcr.io) can be used to point it at the Aliyun mirror instead: registry.aliyuncs.com/google_containers.

· --kubernetes-version=v1.13.3: specifies the version to install.

· --ignore-preflight-errors=: ignores preflight check errors such as [ERROR NumCPU] and [ERROR Swap]; to ignore these two checks, add --ignore-preflight-errors=NumCPU and --ignore-preflight-errors=Swap (a combined example follows this list).
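
For reference, a combined init command using the optional flags above (Aliyun image mirror, and ignoring the CPU and swap preflight checks) might look like the following; this is a sketch, not the command used in this walkthrough:

kubeadm init --kubernetes-version=v1.13.3 \
  --apiserver-advertise-address=192.168.2.7 \
  --pod-network-cidr=10.244.0.0/16 \
  --image-repository=registry.aliyuncs.com/google_containers \
  --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Swap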

(2) Successful initialization results are as follows

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.2.7:6443 --token i8nxlt.ox0bzax19jak1tyq --discovery-token-ca-cert-hash sha256:02e8fd59a30c53e792f5f822409762bfab5aef329fd24c48f994a20f752c5738

(3) Join the worker node to the cluster (run on k8s-node1)

kubeadm join 192.168.2.7:6443 --token i8nxlt.ox0bzax19jak1tyq --discovery-token-ca-cert-hash sha256:02e8fd59a30c53e792f5f822409762bfab5aef329fd24c48f994a20f752c5738

(4) Configuring kubectl on master node

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile 
echo $KUBECONFIG
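
With KUBECONFIG set, kubectl should be able to reach the API server; a quick sanity check (nodes may show NotReady until the Pod network is installed):

kubectl cluster-info
kubectl get nodes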

(5) Install the Pod network

Installing a Pod network is a prerequisite for Pods to communicate with each other. Kubernetes supports many network schemes; here we choose the classic flannel.

sysctl net.bridge.bridge-nf-call-iptables=1
vim kube-flannel.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: ppc64le
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: s390x
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

Save the file, then apply it:

kubectl apply -f kube-flannel.yaml

Once the Pod network is installed, run the following command to check whether the CoreDNS Pods are running properly; once they are, you can continue with the remaining steps.

(6) View the status of all Pods from the master

kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
kube-system   coredns-78d4cf999f-8p4v8             1/1     Running   1          2d22h   10.244.0.4      k8s-master   <none>           <none>
kube-system   coredns-78d4cf999f-xmnxj             1/1     Running   1          2d22h   10.244.0.5      k8s-master   <none>           <none>
kube-system   etcd-k8s-master                      1/1     Running   1          2d22h   192.168.2.7     k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master            1/1     Running   1          2d22h   192.168.2.7     k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master   1/1     Running   1          2d22h   192.168.2.7     k8s-master   <none>           <none>
kube-system   kube-flannel-ds-amd64-95vw6          1/1     Running   1          2d22h   192.168.2.7     k8s-master   <none>           <none>
kube-system   kube-flannel-ds-amd64-j2xht          1/1     Running   0          41h     192.168.137.4   k8s-node1    <none>           <none>
kube-system   kube-proxy-2d9n8                     1/1     Running   0          41h     192.168.137.4   k8s-node1    <none>           <none>
kube-system   kube-proxy-6v8pz                     1/1     Running   1          2d22h   192.168.2.7     k8s-master   <none>           <none>
kube-system   kube-scheduler-k8s-master            1/1     Running   1          2d22h   192.168.2.7     k8s-master   <none>           <none>

All of them are running, which means that the startup is successful.

(7) Viewing the running status of nodes

kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   2d22h   v1.13.3
k8s-node1    Ready    <none>   41h     v1.13.3

If a Pod is not Running or a node is not Ready, you can troubleshoot via the kubelet log:

journalctl -f -u kubelet
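
It can also help to look at the events of a problematic Pod or node, for example (the Pod name below is a placeholder):

kubectl -n kube-system describe pod <pod-name>
kubectl describe node k8s-node1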

You can also look at some common problems at the following address

https://blog.csdn.net/qq_34857250/article/details/82562514

If kubeadm init or kubeadm join fails, run kubeadm reset to reset the node and try again.

(6) Token expiration and rejoining nodes to the cluster

(1) The token above is valid for 24 hours. You can list the cluster's existing tokens:

kubeadm token list

(2) Create a permanent token that never expires (not recommended, for security reasons):

kubeadm token create --ttl 0
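
Alternatively, kubeadm can print a complete, ready-to-paste join command (token included); this option should be available in 1.13:

kubeadm token create --print-join-command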

(3) View token values

kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
kgiide.777z4nmpgtamq2q8   <invalid>   2019-07-31T11:49:10+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
yu1ieo.lg1r6cpe5vlzgolg   <forever>   <never>                     authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token

(4) Obtain the SHA-256 hash of the CA certificate

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

(5) Combine the newly generated token with the CA certificate's SHA-256 hash to build a valid join command. Before rejoining, you can delete the previously joined node first.

kubectl delete node k8s-node1
kubeadm join 192.168.2.7:6443 --token yu1ieo.lg1r6cpe5vlzgolg --discovery-token-ca-cert-hash sha256:02e8fd59a30c53e792f5f822409762bfab5aef329fd24c48f994a20f752c5738
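
If the join fails because the node still has state left over from its previous join (pre-existing manifests or certificates), clean it up on the node side first and then repeat the kubeadm join command above:

kubeadm reset   # run on k8s-node1 to clear state from the previous join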

Reprinted from:
https://blog.csdn.net/qq_34857250/article/details/82562514
https://www.kubernetes.org.cn/4956.html
