k8s: using kube-router to build a highly available and extensible ingress

Keywords: web server, curl, Nginx, network, DNS

Brief introduction

Use kube-router to implement the ingress function of a k8s cluster in a way that is highly available and easy to scale.

Environment description

This experiment builds on a k8s cluster that has already been installed and configured; for k8s installation, refer to other blog articles. lab4 acts as a router and forwards requests from lab5.

Experiment topology

lab1: master 11.11.11.111
lab2: node 11.11.11.112
lab3: node 11.11.11.113
lab4: router 11.11.11.114
lab5: client 11.11.11.115

Install

# In this experiment the cluster was recreated; reusing a cluster where other
# network plug-ins had been tested before did not work, possibly due to
# leftover interference, so take care during the experiment
# Create a kube-router directory and download the related files
mkdir kube-router && cd kube-router
rm -f generic-kuberouter-all-features-dsr.yaml
wget https://raw.githubusercontent.com/mgxian/kube-router/master/generic-kuberouter-all-features-dsr.yaml

# Enable all features: pod networking, network policy and service proxy
# CLUSTERCIDR is the kube-controller-manager startup parameter --cluster-cidr value
# APISERVER is the kube-apiserver startup parameter --advertise-address value
CLUSTERCIDR='10.244.0.0/16'
APISERVER='https://11.11.11.111:6443'
sed -i "s;%APISERVER%;$APISERVER;g" generic-kuberouter-all-features-dsr.yaml
sed -i "s;%CLUSTERCIDR%;$CLUSTERCIDR;g" generic-kuberouter-all-features-dsr.yaml

# Modify the configuration: the nodes run BGP in ASN 64512 and peer with the
# external router lab4 (11.11.11.114) in ASN 64513; --advertise-external-ip
# makes kube-router advertise service external IPs to that peer
containers:
- name: kube-router
  image: cloudnativelabs/kube-router
  imagePullPolicy: Always
  args:
  ...
  - --peer-router-ips=11.11.11.114
  - --peer-router-asns=64513
  - --cluster-asn=64512
  - --advertise-external-ip=true
  ...

# deploy
kubectl apply -f generic-kuberouter-all-features-dsr.yaml
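
# (A sketch) Verify the BGP peering from inside a kube-router pod; this assumes
# the kube-router image bundles the gobgp CLI, and the pod name is a placeholder
kubectl -n kube-system get pods -l k8s-app=kube-router
kubectl -n kube-system exec -it kube-router-xxxxx -- gobgp neighbor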

# Delete kube-proxy; kube-router's service proxy replaces it
kubectl -n kube-system delete ds kube-proxy

# Execute on each node
# For a binary installation, stop kube-proxy with the following command
systemctl stop kube-proxy

# Execute on each node
# Clean up the rules left behind by kube-proxy
docker run --privileged --net=host registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.10.2 kube-proxy --cleanup
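
# (Optional, a sketch) sanity check that the kube-proxy service chains are gone
iptables-save | grep KUBE-SVC || echo 'no kube-proxy service chains left'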

# Check the kube-router pods and services
kubectl get pods -n kube-system
kubectl get svc -n kube-system

Test

# Install and configure kube-dns or coredns before testing
# Create a deployment for testing
kubectl run nginx --replicas=2 --image=nginx:alpine --port=80
kubectl expose deployment nginx --type=NodePort --name=example-service-nodeport
kubectl expose deployment nginx --name=example-service

# Check the pods and services
kubectl get pods -o wide
kubectl get svc -o wide

# DNS and access testing
# The nslookup and curl commands below run inside the curl pod's shell
kubectl run curl --image=radial/busyboxplus:curl -i --tty
nslookup kubernetes
nslookup example-service
curl example-service

Configure Quagga on lab4

# Install
yum install -y quagga

# Configure bgpd: lab4 (ASN 64513) peers with the three k8s nodes (ASN 64512)
cat >/etc/quagga/bgpd.conf<<EOF
! -*- bgp -*-
!
! BGPd sample configuration file
!
! $Id: bgpd.conf.sample,v 1.1 2002/12/13 20:15:29 paul Exp $
!
hostname lab4
password password
!
router bgp 64513
 bgp router-id 11.11.11.114
 maximum-paths 4
 neighbor 11.11.11.111 remote-as 64512
 neighbor 11.11.11.112 remote-as 64512
 neighbor 11.11.11.113 remote-as 64512
log stdout
EOF

# Start and enable bgpd
systemctl start bgpd
systemctl status bgpd
systemctl enable bgpd

# View route information
ip route
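Once bgpd is running, the BGP sessions to the three nodes can be inspected with vtysh, a sketch assuming Quagga's stock vtysh setup on lab4:

vtysh -c 'show ip bgp summary'
vtysh -c 'show ip bgp'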

Test on lab4

# On lab1, edit example-service to configure an external IP
kubectl edit svc example-service
...
spec:
  clusterIP: 10.111.34.147
  externalIPs:
  - 11.11.111.111
...

# View the svc information on lab1
# You can see that example-service now has an external IP
kubectl get svc

# View the routes on lab4
# You can see routes for 11.11.111.111
ip route

# Access test on lab4
curl 11.11.111.111
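Because bgpd.conf sets maximum-paths 4, lab4 installs a single ECMP route with one next hop per node that advertises the external IP, so traffic is spread across the cluster and keeps flowing if a node fails. The route on lab4 should look roughly like this (illustrative; the device name depends on the environment):

11.11.111.111 proto zebra
    nexthop via 11.11.11.111 dev eth1 weight 1
    nexthop via 11.11.11.112 dev eth1 weight 1
    nexthop via 11.11.11.113 dev eth1 weight 1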

Test on lab5

# On lab5, add a route to the external IP via lab4
ip route add 11.11.111.111 via 11.11.11.114
ip route

# Access test on lab5
curl 11.11.111.111

# View ipvs on lab1
ipvsadm -L -n
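With the default service proxy settings, kube-router programs IPVS in masquerade mode, so the ipvsadm output on lab1 should look roughly like this (illustrative; pod IPs and counters will differ):

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  11.11.111.111:80 rr
  -> 10.244.1.2:80                Masq    1      0          0
  -> 10.244.2.3:80                Masq    1      0          0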

Using DSR

# Note: this DSR experiment failed; the environment is Vagrant with VirtualBox
# On lab1, set example-service to use DSR mode; in DSR mode the service's
# response is sent directly to the client without going back through LVS
kubectl annotate svc example-service "kube-router.io/service.dsr=tunnel"

# View ipvs on lab1
# You can see the Tunnel forwarding type
ipvsadm -L -n

# Access test on lab5
curl 11.11.111.111

# Capture and analyze packets on a cluster node
tcpdump -i kube-bridge proto 4
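In IPVS tunnel mode the director wraps packets to the real server in IPIP, which is IP protocol 4; that is what the proto 4 filter above captures. After the annotation, the Forward column in the ipvsadm output should switch from Masq to Tunnel, roughly (illustrative):

TCP  11.11.111.111:80 rr
  -> 10.244.1.2:80                Tunnel  1      0          0
  -> 10.244.2.3:80                Tunnel  1      0          0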

Clean up

# Clean up the test resources
kubectl delete svc example-service example-service-nodeport
kubectl delete deploy nginx curl

References

  • https://cloudnativelabs.github.io/post/2017-11-01-kube-high-available-ingress/
  • https://github.com/cloudnativelabs/kube-router/blob/master/docs/generic.md

This article is reposted from Juejin: k8s using kube-router to build highly available and extensible ingress

Posted by eirikol on Tue, 03 Dec 2019 05:27:35 -0800