Nginx-ingress-controller-0.32.0 Deployment Guide

Keywords: Docker Kubernetes Nginx


Introduction: the nginx ingress controller is an ingress controller for Kubernetes. Nginx acts as the reverse proxy and load balancer, while the nginx ingress controller is the control software that drives nginx. It is built around the Kubernetes Ingress resource and uses a ConfigMap to store the nginx configuration.

k8s cluster setup

Cluster deployment prerequisites

  • All machines in the cluster can reach each other over the network
  • All machines can access the Internet
  • The swap partition is disabled

Prepare environment

  • 10.7.10.233 k8s-master
  • 10.7.10.163 k8s-node

System environment configuration

The following operations are required on both the master and the node

  • Turn off the firewall

    systemctl stop firewalld
    
  • Disable selinux

    setenforce 0
    
  • Disable swap (a sketch for turning swap off immediately, without a reboot, follows at the end of this list)

    • Edit /etc/fstab and comment out the swap line
    vim /etc/fstab
    
    • The modified line looks like this:
    #/dev/mapper/uos-swap    none                    swap    defaults        0 0
    
  • Set host name

    hostnamectl set-hostname "hostname"
    
    • For example, on the master (use k8s-node on the node):
     hostnamectl set-hostname k8s-master
    
  • Configure /etc/hosts to simplify access between hosts on the LAN

    cat >> /etc/hosts << EOF
    10.7.10.233 k8s-master
    10.7.10.163 k8s-node
    EOF
    
  • Pass bridged IPv4 traffic to the iptables chains (a check that the module and settings took effect is sketched at the end of this list)

    cat > /etc/sysctl.d/k8s.conf << EOF
    net.ipv4.ip_forward=1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    
    • Load br_netfilter
    modprobe br_netfilter
    echo "modprobe br_netfilter" >> /etc/rc.local
    
    • Reload the configuration so it takes effect
    sysctl -p /etc/sysctl.d/k8s.conf
    
    • Set the iptables FORWARD policy to ACCEPT
    /sbin/iptables -P FORWARD ACCEPT
    echo "sleep 60 && /sbin/iptables -P FORWARD ACCEPT" >> /etc/rc.local
    
  • System time synchronization

    yum install ntp ntpdate -y
    ntpdate time.windows.com
    
  • Enable ntpd to start on boot

    systemctl enable ntpd
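
  • As referenced in the swap step above, commenting out the fstab entry only takes effect after a reboot; a sketch for turning swap off immediately and confirming it:

    # Turn off all active swap now (the fstab change covers future reboots)
    swapoff -a
    # The Swap line should now show 0 total
    free -m
    
  • As referenced in the bridged-traffic step above, a quick check that the module is loaded and the sysctl settings are in effect:

    # br_netfilter should appear in the loaded module list
    lsmod | grep br_netfilter
    # Both values should print as 1
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
    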
    

Docker installation and configuration

The following operations are required on both the master and the node

  • Install docker

    yum install docker -y
    
  • Confirm docker version

    docker --version
    
  • Configure the docker registry mirror addresses (a check for this configuration is sketched at the end of this list)

    vim /etc/docker/daemon.json
     
    {
      "registry-mirrors" : [
        "http://ovfftd6p.mirror.aliyuncs.com",
        "http://registry.docker-cn.com",
        "http://docker.mirrors.ustc.edu.cn",
        "http://hub-mirror.c.163.com"
      ],
      "insecure-registries" : [
        "registry.docker-cn.com",
        "docker.mirrors.ustc.edu.cn"
      ],
      "debug" : true,
      "experimental" : true
    }
    
  • Enable docker to start on boot and start the service

    systemctl enable docker && systemctl start docker
    
  • Confirm that the docker service started successfully

    systemctl status docker
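
  • Check that the registry mirrors from daemon.json were picked up (as referenced above; the exact output layout may vary between docker versions):

    # The configured mirrors should be listed under "Registry Mirrors"
    docker info | grep -A 4 "Registry Mirrors"
    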
    

k8s tool installation and configuration

  • Add the yum source for k8s

    cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-aarch64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
           https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg 
    EOF
    
    
    For the x86_64 architecture, replace baseurl with:
    https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    
  • Regenerate the yum metadata

    yum clean all
    yum makecache 
    
  • Install the k8s tools (version checks are sketched at the end of this list)

    yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
    
  • Enable kubelet to start on boot

    systemctl enable kubelet
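
  • Confirm the installed versions (as referenced above); all three should report 1.18.0:

    # Sanity check that the tools were installed at the pinned version
    kubeadm version
    kubectl version --client
    kubelet --version
    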
    

Deploying k8s with kubeadm

  • master node initialization

    kubeadm init \
      --apiserver-advertise-address=10.7.10.235 \
      --image-repository registry.aliyuncs.com/google_containers \
      --kubernetes-version v1.18.0 \
      --service-cidr=10.7.0.0/12 \
      --ignore-preflight-errors=all
    
    • If master initialization succeeds, the output is as follows:
      [Screenshot: successful kubeadm init output]
  • After the master node is initialized successfully, execute the following commands on the master node as prompted

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
  • After the master node initializes successfully, follow the prompts and execute the join command on the node (if the bootstrap token has expired, see the sketch at the end of this list for generating a new one)

    • Use the join command from your own initialization output; the command below is the one printed during this deployment
    kubeadm join 10.7.10.235:6443 --token hqld3z.uinikbik6vs09h3z \
      --discovery-token-ca-cert-hash sha256:90ac37425f63d0f8ef95ef23a16202c43dfff8433ea91285d1830bbd86052b9a
    
    • After the node joins the cluster successfully, its output is as follows:

      [Screenshot: kubeadm join output on the node]

  • Deploy the calico network plugin

    • Download the manifest
    wget https://docs.projectcalico.org/manifests/calico.yaml
    
  • Configure calico

    • Edit the calico configuration to set the IP address range that the pod network may use, matching the cluster; in this deployment it is set to 10.7.0.0/16
    vim calico.yaml
    
  • The example change is shown below (uncomment the CALICO_IPV4POOL_CIDR entry and set its value); adjust the IP range to your actual environment

                # The default IPv4 pool to create on startup if none exists. Pod IPs will be
                # chosen from this range. Changing this value after installation will have
                # no effect. This should fall within `--cluster-cidr`.
                - name: CALICO_IPV4POOL_CIDR
                  value: "10.7.0.0/16"
    
  • Apply the calico manifest

    kubectl apply -f calico.yaml
    
  • Verify cluster operation

    • View calico status
    kubectl get pods -n kube-system
    

[Screenshot: calico pods in the kube-system namespace all Running]

  • Once the calico pods are Running, check the node status

     kubectl get nodes
    
    • The output is as follows, with all nodes in the Ready state
      [Screenshot: kubectl get nodes showing all nodes Ready]
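
  • If a node needs to join after the original bootstrap token has expired (tokens expire after 24 hours by default), a fresh join command can be generated on the master, as referenced in the join step above:

    # Print a ready-to-use kubeadm join command with a new token
    kubeadm token create --print-join-command
    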

Deploy nginx ingress controller

  • Download the nginx-ingress-controller-0.32.0 source package, extract it, and enter the deployment manifest directory

    wget https://github.com/kubernetes/ingress-nginx/archive/refs/tags/controller-0.32.0.tar.gz
    
    tar xf controller-0.32.0.tar.gz
    cd ingress-nginx-controller-0.32.0/deploy/static/provider/cloud
    
  • Apply the configuration (a verification sketch follows below)

    kubectl apply -f deploy.yaml
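
  • Verify that the ingress controller is running (as referenced above). The upstream deploy.yaml creates its resources in the ingress-nginx namespace, so the controller pod and its Service should appear there:

    # The controller pod should reach the Running state
    kubectl get pods -n ingress-nginx
    # The Service exposing ports 80 and 443
    kubectl get svc -n ingress-nginx
    
  • Once the controller is running, an Ingress resource routes traffic to a backend Service. A minimal sketch, assuming a Service named demo-svc listening on port 80 already exists in the cluster (demo-svc and demo.example.com are placeholders, not part of this deployment):

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: demo-ingress
      annotations:
        # Tell the nginx ingress controller to handle this Ingress
        kubernetes.io/ingress.class: "nginx"
    spec:
      rules:
      - host: demo.example.com
        http:
          paths:
          - path: /
            backend:
              serviceName: demo-svc
              servicePort: 80
    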
    
