Implement LVS scheduling and high availability with LVS+Keepalived

Keywords: Linux curl ssh yum network

1. Briefly describe the characteristics and usage scenarios of the four LVS cluster modes

LVS has four load balancing modes: VS/NAT (NAT mode), VS/DR (direct routing mode), VS/TUN (tunnel mode), and VS/FULLNAT (full NAT mode).
 1. NAT mode (VS-NAT)
Principle: the load balancer rewrites the destination IP in the header of each packet arriving from a client to the IP address of one of the RSs and forwards it there for processing. When the RS has finished, it hands the reply back to the load balancer, which rewrites the source IP to its own address and the destination to the client's IP. Both inbound and outbound traffic therefore pass through the load balancer.
 Advantages: the physical servers in the cluster can run any TCP/IP operating system, and only the load balancer needs a legitimate (public) IP address.
 Disadvantages: limited scalability. When the number of server nodes (ordinary PC servers) grows too large, the load balancer becomes the bottleneck of the whole system, because every request and every reply flows through it; with too many nodes, a large volume of packets converges on the load balancer and throughput drops.
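As a sketch of how NAT mode is configured in practice, the ipvsadm rules could look like the following. The addresses are hypothetical, and `-m` selects masquerading, i.e. NAT forwarding; the script is only written out and syntax-checked here, since actually applying it needs root and the IPVS kernel module.

```shell
# Hypothetical LVS-NAT setup (example addresses); saved and syntax-checked only.
cat > lvs-nat-sketch.sh <<'EOF'
#!/bin/bash
vip='192.168.10.100'   # public VIP held by the director
rs1='172.16.0.11'      # private RIPs; their default gateway must be the DIP
rs2='172.16.0.12'
echo 1 > /proc/sys/net/ipv4/ip_forward    # NAT mode needs IP forwarding
ipvsadm -A -t ${vip}:80 -s rr             # define the virtual service
ipvsadm -a -t ${vip}:80 -r ${rs1}:80 -m   # -m = masquerading (NAT)
ipvsadm -a -t ${vip}:80 -r ${rs2}:80 -m   # NAT also allows port mapping
EOF
bash -n lvs-nat-sketch.sh && echo "lvs-nat-sketch.sh: syntax OK"
```

Note that unlike DR mode, the RS port here may differ from the VIP port, since NAT supports port mapping.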
 2. IP tunneling mode (VS-TUN)
Principle: first of all, it helps to know that for most Internet services the request packets are very short while the reply packets are usually large. In tunnel mode the load balancer takes the packet from the client, encapsulates it with a new IP header (destination IP only) and sends it to an RS. The RS strips the outer header, restores the original packet, processes it, and returns the reply directly to the client without passing back through the load balancer. Note that because the RS must decapsulate the packets sent by the load balancer, it has to support the IP tunneling (IPIP) protocol, so IPTUNNEL support must be compiled into the RS kernel.
 Advantages: the load balancer only distributes request packets to the back-end node servers, and the RSs send reply packets directly to the users. This removes the bulk of the data flow from the load balancer, so it is no longer the bottleneck of the system and can handle a large number of requests. One load balancer can distribute for many RSs, which can even be spread across different regions over the public network.
Disadvantages: tunnel-mode RS nodes require legitimate IP addresses, and all servers must support the IP tunneling protocol, which may limit the choice to certain Linux systems.
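On the RS side, tunnel mode needs the IPIP module loaded and the VIP bound to the tunnel interface. A minimal sketch, reusing the VIP from this article and assuming a kernel with IP tunneling compiled in; the script is only written out and syntax-checked here:

```shell
# Hypothetical RS-side setup for LVS-TUN; saved and syntax-checked only.
cat > rs-tun-sketch.sh <<'EOF'
#!/bin/bash
vip=10.0.0.100
modprobe ipip                                    # kernel IPTUNNEL support
ifconfig tunl0 $vip netmask 255.255.255.255 up   # bind the VIP to the tunnel
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore  # stay silent to ARP for the VIP
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/tunl0/rp_filter # accept decapsulated packets
EOF
bash -n rs-tun-sketch.sh && echo "rs-tun-sketch.sh: syntax OK"
```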
 3. Direct routing mode (VS-DR)
Principle: the load balancer (DR) and the RSs all serve the same VIP, but only the DR responds to ARP requests for it; every RS stays silent to ARP requests for the VIP. In other words, gateways direct all requests for the service IP to the DR. When the DR receives a packet, it picks an RS according to the scheduling algorithm, rewrites the destination MAC address to that RS's MAC (the IP stays the same), and forwards the request. Because the RS also holds the VIP, it can process the packet and return the reply directly to the client, exactly as if it had received the request directly from the client.
 Since the load balancer only rewrites the layer-2 header, it follows that the load balancer and the RSs must be in the same broadcast domain, i.e. on the same switch.
Advantages: like TUN (tunnel mode), the load balancer only distributes requests, and reply packets return to clients via separate routes. VS-DR requires no tunneling structure, so most operating systems can be used on the physical servers.
Disadvantages: (not so much a disadvantage as a limitation) the load balancer's network card and the RS network cards must be on the same physical segment.
4. Full NAT mode (VS-FULLNAT)
 lvs-fullnat: forwards by rewriting both the source and the destination IP address of the request packet.
 (1) The VIP is a public address; RIP and DIP are private addresses and usually not on the same IP network, so the RS gateways generally do not point to the DIP.
 (2) The source address of the request packet an RS receives is the DIP, so it only needs to reply to the DIP; the director then forwards the reply to the client.
 (3) Both request and response packets pass through the director.
 (4) Port mapping is supported.
 Note: this mode is not supported by the stock kernel by default.

2. Describe how LVS-DR works and configure an implementation

Principle: the load balancer (DR) and the RSs all serve the same VIP, but only the DR responds to ARP requests for it; every RS stays silent to ARP requests for the VIP. Gateways therefore direct all requests for the service IP to the DR. When the DR receives a packet, it picks an RS according to the scheduling algorithm, rewrites the destination MAC address to that RS's MAC (the IP stays the same), and forwards the request. Because the RS also holds the VIP, it can process the packet and return the reply directly to the client, exactly as if it had received the request directly from the client.
 Since the load balancer only rewrites the layer-2 header, the load balancer and the RSs must be in the same broadcast domain, i.e. on the same switch.
Planning: c1, c2, c3 and c4 all run CentOS 7.6
 c1  client
 c2  vs
 c3  web1
 c4  web2
2.1 Install the web servers
[root@c3 ~]# yum install httpd -y
[root@c3 ~]# echo rs1 > /var/www/html/index.html
[root@c3 ~]# systemctl start httpd
[root@c3 ~]# curl c3
rs1

[root@c4 ~]# yum install httpd -y
[root@c4 ~]# echo rs2 > /var/www/html/index.html
[root@c4 ~]# systemctl start httpd
[root@c4 ~]# curl c4
rs2
2.2 Configure the RS servers
[root@c3 ~]# yum install net-tools -y   ###The net-tools package needs to be installed to use the ifconfig command
[root@c3 ~]# cat rs.sh 
#!/bin/bash
vip=10.0.0.100
mask='255.255.255.255'
dev=lo:1
case $1 in
start)
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig $dev $vip netmask $mask #broadcast $vip up
    #route add -host $vip dev $dev
    ;;
stop)
    ifconfig $dev down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
*)
    echo "Usage: $(basename $0) start|stop"
    exit 1
    ;;
esac
[root@c3 ~]# sh rs.sh start
[root@c3 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 10.0.0.100/32 scope global lo:1
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:f1:37:a8 brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.244/24 brd 10.1.1.255 scope global noprefixroute dynamic eth0
       valid_lft 14582sec preferred_lft 14582sec
    inet6 fe80::5025:c937:77d0:2b28/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

[root@c4 ~]# yum install net-tools -y
[root@c4 ~]# sh rs.sh start
[root@c4 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 10.0.0.100/32 scope global lo:1
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:05:32:f0 brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.245/24 brd 10.1.1.255 scope global noprefixroute dynamic eth0
       valid_lft 16671sec preferred_lft 16671sec
    inet6 fe80::96c3:3cc3:b39e:dee3/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
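Before moving on, the ARP kernel flags set by rs.sh can be double-checked on each RS straight from /proc; after `rs.sh start` they should read 1 and 2 respectively:

```shell
# On a configured RS these print 1 and 2; on an untouched host, the defaults 0 and 0.
cat /proc/sys/net/ipv4/conf/all/arp_ignore
cat /proc/sys/net/ipv4/conf/all/arp_announce
```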
2.3 Configure the VS (director)
[root@c2 ~]# yum install -y ipvsadm  ###ipvsadm is the userspace tool for managing LVS
[root@c2 ~]# cat vs.sh 
#!/bin/bash
vip='10.0.0.100'
iface='lo:1'
mask='255.255.255.255'
port='80'
rs1='10.1.1.244'
rs2='10.1.1.245'
scheduler='rr'      ###rr (round robin) is used so the test results are easy to observe
type='-g'
case $1 in
start)
    ifconfig $iface $vip netmask $mask #broadcast $vip up
    iptables -F
    ipvsadm -A -t ${vip}:${port} -s $scheduler
    ipvsadm -a -t ${vip}:${port} -r ${rs1} $type -w 1
    ipvsadm -a -t ${vip}:${port} -r ${rs2} $type -w 1
    ;;
stop)
    ipvsadm -C
    ifconfig $iface down
    ;;
*)
    echo "Usage: $(basename $0) start|stop"
    exit 1
    ;;
esac
[root@c2 ~]# sh vs.sh start
[root@c2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 10.0.0.100/32 scope global lo:1
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
[root@c2 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.100:80 rr
  -> 10.1.1.244:80                Route   1      0          0         
  -> 10.1.1.245:80                Route   1      0          0
2.4 Test:
[root@c1 ~]# route -n     ###There is no route to the 10.0.0.0 network yet
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.1.1.254      0.0.0.0         UG    100    0        0 eth0
10.1.1.0        0.0.0.0         255.255.255.0   U     100    0        0 eth0
[root@c1 ~]# curl 10.0.0.100
^C
[root@c1 ~]# route add -host 10.0.0.100 dev eth0  ###Add a host route to the VIP
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
rs1
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
rs1
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
rs1
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
rs1
[root@c1 ~]# curl 10.0.0.100
rs2

3. Implement high availability with LVS+Keepalived

Keepalived uses VRRP (the Virtual Router Redundancy Protocol) for redundancy. A router claims the master role through periodic multicast announcements, and the routers on the network compare priorities to elect the master and the backups. The master router provides the routing function; when it fails, the remaining routers compare priorities again and elect a new master to take over the service, while the rest stay as backups. This scenario builds on section 2: c2 is the master router and c5 is the backup.
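The periodic multicast announcements can be observed directly with tcpdump. A hypothetical helper follows; 224.0.100.100 matches the vrrp_mcast_group4 set in the keepalived.conf in this section, and IP protocol 112 is VRRP. The script is only written out and syntax-checked here, since capturing needs root and a live interface:

```shell
# Watch VRRP advertisements from the MASTER (needs root and tcpdump when run).
cat > watch-vrrp.sh <<'EOF'
#!/bin/bash
tcpdump -nn -i eth0 host 224.0.100.100 and ip proto 112
EOF
bash -n watch-vrrp.sh && echo "watch-vrrp.sh: syntax OK"
```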
3.1 Set up passwordless SSH login between c2 and c5
[root@c2 ~]# ssh-keygen -t rsa -P "" ###Generate Key
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:C1wDPspULjsjJQs/hSUjac50V4BLQXCQFkwxbViT/DM root@c2
The key's randomart image is:
+---[RSA 2048]----+
|+O#*=.=.         |
|.Oo&.= .         |
|B * O + o        |
| = O E o .       |
|  = * = S        |
|   o o . .       |
|        .        |
|                 |
|                 |
+----[SHA256]-----+
[root@c2 ~]# ssh-copy-id c5 ###Transfer Key
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c5 (10.1.1.246)' can't be established.
ECDSA key fingerprint is SHA256:ilZ46J85JC8Xhr2dVvYsUxMGyj17SDhD6/JrhmNy6GY.
ECDSA key fingerprint is MD5:2f:c5:a9:d6:d7:5f:5e:4e:c3:94:7c:92:3a:d2:55:63.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c5's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c5'"
and check to make sure that only the key(s) you wanted were added.

[root@c2 ~]# ssh c5   ###Test passwordless login
Last login: Mon May 25 21:40:03 2020 from 192.168.10.45
[root@c5 ~]# 

[root@c5 ~]# ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:abVEGoN7+mbpGU0aZY4VssjdndMC+cjLZYK5Icy+S/U root@c5
The key's randomart image is:
+---[RSA 2048]----+
|       .+ +.     |
|     ..o O.+ o   |
|     oo.++Bo= .  |
|      = =X+.+o   |
|     . +S++=     |
|      oo.*o      |
|      .oo.E      |
|     .. =o       |
|      .=o        |
+----[SHA256]-----+
[root@c5 ~]# ssh-copy-id c2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c2 (10.1.1.243)' can't be established.
ECDSA key fingerprint is SHA256:dldJTKtxApZyQT/FT6WKQsqKgtf4cPuAxBTiLMFdxSk.
ECDSA key fingerprint is MD5:1a:07:07:69:3f:0e:94:b3:f3:c5:04:dc:73:6b:ba:3e.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c2's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c2'"
and check to make sure that only the key(s) you wanted were added.

[root@c5 ~]# ssh c2
Last login: Mon May 25 21:40:01 2020 from 192.168.10.45
[root@c2 ~]# 
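With passwordless login in place, the keepalived configuration can later be pushed between the two directors. A hypothetical sync helper follows (the path and the restart step are assumptions; the script is only written out and syntax-checked here):

```shell
# Push the local keepalived.conf to the standby c5 and reload it there.
cat > sync-keepalived.sh <<'EOF'
#!/bin/bash
scp /etc/keepalived/keepalived.conf c5:/etc/keepalived/
ssh c5 systemctl restart keepalived
EOF
bash -n sync-keepalived.sh && echo "sync-keepalived.sh: syntax OK"
```

Remember that in this article the backup's config is not identical (state BACKUP, priority 80, interface eth0), so a blind copy must still be adjusted on c5.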
3.2 Install and configure keepalived
3.2.1 First clean up the ipvsadm rules on c2
[root@c2 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.100:80 rr
  -> 10.1.1.244:80                Route   1      0          0         
  -> 10.1.1.245:80                Route   1      0          0         
[root@c2 ~]# ls
anaconda-ks.cfg  original-ks.cfg  vs.sh
[root@c2 ~]# sh vs.sh stop
[root@c2 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

3.2.2 Install the keepalived service and the ipvsadm tool
[root@c2 ~]# yum install keepalived.x86_64 -y
[root@c5 ~]# yum install keepalived -y
[root@c5 ~]# yum install ipvsadm -y

[root@c2 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id node1
   vrrp_mcast_group4 224.0.100.100
}

vrrp_instance VI_1 {
    state MASTER
    interface bond0
    virtual_router_id 5
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        10.0.0.100/24 dev bond0 label bond0:0
    }
}

virtual_server 10.0.0.100 80 {
    delay_loop 1
    lb_algo rr
    lb_kind DR
    protocol TCP

    real_server 10.1.1.244 80 {
        weight 1
        HTTP_GET {   ###Backend health check
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.1.1.245 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@c2 keepalived]# systemctl start keepalived
[root@c2 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 00:0c:29:ba:03:94 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 00:0c:29:ba:03:9e brd ff:ff:ff:ff:ff:ff
7: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:0c:29:ba:03:94 brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.243/24 brd 10.1.1.255 scope global noprefixroute bond0
       valid_lft forever preferred_lft forever
    inet 10.0.0.100/24 scope global bond0:0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feba:394/64 scope link 
       valid_lft forever preferred_lft forever
[root@c5 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id node2
   vrrp_mcast_group4 224.0.100.100
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 5
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        10.0.0.100/24 dev eth0 label eth0:0
    }
}

virtual_server 10.0.0.100 80 {
    delay_loop 1
    lb_algo rr
    lb_kind DR
    protocol TCP

    real_server 10.1.1.244 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 10.1.1.245 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@c5 keepalived]# systemctl start keepalived.service
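After both directors are running, the VIP should be held only by the MASTER. A quick check can be run on either node (hypothetical helper; written out and syntax-checked here):

```shell
# The MASTER prints the VIP line; the BACKUP prints the fallback message.
cat > check-vip.sh <<'EOF'
#!/bin/bash
ip addr show | grep -F '10.0.0.100' || echo "VIP not held on this node (BACKUP)"
EOF
bash -n check-vip.sh && echo "check-vip.sh: syntax OK"
```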

3.3 Test
3.3.1 First test whether LVS works
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
rs1
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
rs1
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
rs1
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
rs1
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# 
3.3.2 Stop the keepalived service on c2, then test whether scheduling still works
[root@c2 keepalived]# systemctl stop keepalived

[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
rs1
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
rs1
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
rs1
[root@c1 ~]# curl 10.0.0.100
rs2
[root@c1 ~]# curl 10.0.0.100
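Scheduling still works because the backup director has taken over. To confirm, the VIP and the IPVS rules can be checked on c5 (hypothetical helper; the script is only written out and syntax-checked here):

```shell
# Run on c5 after stopping keepalived on c2: the VIP and both real
# servers should now show up here.
cat > failover-check.sh <<'EOF'
#!/bin/bash
ip addr show eth0 | grep -F '10.0.0.100'
ipvsadm -Ln
EOF
bash -n failover-check.sh && echo "failover-check.sh: syntax OK"
```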
3.3.3 Test the backend server health check
[root@c3 ~]# systemctl stop httpd
[root@c1 ~]# while true;do curl 10.0.0.100;sleep 1;done
rs2
rs1
rs2
rs1
rs2
curl: (7) Failed connect to 10.0.0.100:80; Connection refused
rs2
curl: (7) Failed connect to 10.0.0.100:80; Connection refused
rs2
rs2
rs2
rs2
rs2
rs2
rs2

[root@c3 ~]# systemctl start httpd
[root@c1 ~]# while true;do curl 10.0.0.100;sleep 1;done
rs2
rs2
rs2
rs2
rs1
rs2
rs1
rs2
rs1
rs2
rs1
rs2
rs1
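What happened above can also be seen in the IPVS table on the active director: while httpd on c3 is stopped, keepalived's HTTP_GET check removes 10.1.1.244 from the virtual service, and re-adds it once the check passes again. Listing the table shows this (guarded so the command is a no-op where ipvsadm is absent):

```shell
# On the active director, list the IPVS table while c3's httpd is down:
# 10.1.1.244 should be missing, and reappear after httpd starts again.
command -v ipvsadm >/dev/null && ipvsadm -Ln || echo "ipvsadm not installed here"
```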

Posted by mdemetri2 on Mon, 25 May 2020 13:17:36 -0700