LVS Dual-Machine Load Balancing Deployment Scheme Based on CentOS 7.3 (Kernel 3.10.0-514)

Keywords: yum CentOS network firewall

Master: 192.168.1.51
Standby: 192.168.1.52
LVS VIP: 192.168.1.50
0. Local CD-ROM yum source for the production system
Note: Upload the CentOS-7-x86_64-DVD-1611.iso file to the system so that the following installation steps can be completed without access to the public network. Step 0 can be skipped if the host has direct access to the public network.
Create the ISO mount directory:
mkdir /media/cdrom
Mount the ISO to /media/cdrom:
mount -t iso9660 /root/CentOS-7-x86_64-DVD-1611.iso /media/cdrom
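Note that this mount does not survive a reboot. If the ISO needs to stay available after a restart, one option is an fstab entry such as the following (illustrative, not part of the original scheme):
echo "/root/CentOS-7-x86_64-DVD-1611.iso /media/cdrom iso9660 loop,ro 0 0" >> /etc/fstab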
Edit the repo file:
vi /etc/yum.repos.d/CentOS-Media.repo
[c7-media]
name=CentOS-$releasever - Media
baseurl=file:///media/CentOS/
        file:///media/cdrom/
        file:///media/cdrecorder/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

Move all configuration files in the /etc/yum.repos.d directory except CentOS-Media.repo to another backup path.
Only the following file is retained:
ls  /etc/yum.repos.d
CentOS-Media.repo
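One possible way to move the other repo files aside (the backup path /root/yum.repos.d.bak below is only an example):
mkdir -p /root/yum.repos.d.bak
find /etc/yum.repos.d -maxdepth 1 -name '*.repo' ! -name 'CentOS-Media.repo' -exec mv {} /root/yum.repos.d.bak/ \;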
Update the yum cache:
yum clean all
yum makecache
Run any search command to verify that the repository is usable:
yum search ssh
Note: The following steps were validated using only the local yum repository from step 0.
1. Basic software packages
yum -y install gcc gcc-c++ make popt popt-devel libnl libnl-devel popt-static openssl-devel kernel-devel
Create a symbolic link to the kernel source directory as follows:
ln -s /usr/src/kernels/3.10.0-514.el7.x86_64 /usr/src/linux
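Before creating the link, it is worth confirming that the kernel source directory matches the running kernel; on this system both of the following are expected to show 3.10.0-514.el7.x86_64:
uname -r
ls /usr/src/kernels/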
2. Installation of LVS Software
# yum -y install ipvsadm
# ipvsadm -v
ipvsadm v1.27 2008/5/15 (compiled with popt and IPVS v1.2.1)
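To confirm that the ip_vs kernel module is available, the loaded modules can be checked (if nothing is listed, modprobe ip_vs loads it manually):
lsmod | grep ip_vs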
3. Install keepalived software
# yum install keepalived
# keepalived -v
Keepalived v1.2.13 (05/25,2017)
The following are the dependencies when installing keepalived:
Installed:
  keepalived.x86_64 0:1.2.13-9.el7_3
Dependency Installed:
  lm_sensors-libs.x86_64 0:3.4.0-4.20160601gitf9185e5.el7  net-snmp-agent-libs.x86_64 1:5.7.2-24.el7_3.2  net-snmp-libs.x86_64 1:5.7.2-24.el7_3.2

4. System Firewall Configuration
Adjust the firewall so the master accepts incoming VRRP traffic from the standby machine, and apply the same setting on the standby machine:
firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 --in-interface enp0s3 --destination 224.0.0.18 --protocol vrrp -j ACCEPT
firewall-cmd --reload
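To confirm that firewalld is running and the direct rule is active after the reload, the following can be used:
firewall-cmd --state
firewall-cmd --direct --get-all-rules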

5. Configure the /etc/keepalived/keepalived.conf file
Note: On the standby machine, only the router_id and priority values in this configuration file differ; all other values are identical.
The following is an example configuration on the master:
global_defs {
   notification_email {
     #system@hongshutech.com
   }
   notification_email_from lvs@baiwutong.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_51
}
vrrp_instance VI_1 {
    state BACKUP
    nopreempt
    interface enp0s3
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1207
    }
    virtual_ipaddress {
        192.168.1.50
    }
}
virtual_server 192.168.1.50 8888 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 2
    protocol TCP
    real_server 192.168.1.61 8888 {
        weight 100
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 8855
        }
    }
    real_server 192.168.1.62 8888 {
        weight 100
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 8855
        }
    }
}
virtual_server 192.168.1.50 8080 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 2
    protocol TCP
    real_server 192.168.1.61 8080 {
        weight 100
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 8080
        }
    }
    real_server 192.168.1.62 8080 {
        weight 100
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 8080
        }
    }
}
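For reference, the standby's keepalived.conf would typically differ only in these two lines (the values below are illustrative, not taken from the original scheme):
   router_id LVS_52
    priority 90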

Start the keepalived service
# systemctl start keepalived
View the keepalived service status
# systemctl status keepalived
Enable the service to start automatically at boot:
# systemctl enable keepalived
Check whether the VIP has taken effect on the host's network interface:
# ip a
Capture packets to check whether the master sends VRRP advertisements at regular intervals:
# tcpdump -p vrrp -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 65535 bytes
10:16:13.375399 IP 192.168.1.51 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 100, authtype simple, intvl 1s, length 20
10:16:14.376542 IP 192.168.1.51 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 100, authtype simple, intvl 1s, length 20
10:16:15.377596 IP 192.168.1.51 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 100, authtype simple, intvl 1s, length 20
10:16:16.378590 IP 192.168.1.51 > 224.0.0.18: VRRPv2, Advertisement, vrid 50, prio 100, authtype simple, intvl 1s, length 20

Test keepalived master/backup switchover and VIP drift by stopping and then restarting the service on the master:
# systemctl stop keepalived
# systemctl start keepalived
Note: To reduce the business impact of automatic preemption when a failed master recovers, the LVS service is configured to run in non-preemptive mode (state BACKUP plus nopreempt on both nodes).
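A quick way to confirm the drift is to check the interface addresses on both nodes, for example:
ip addr show enp0s3 | grep 192.168.1.50
After keepalived is stopped on the master, the VIP should appear on the standby; because the instance runs in non-preemptive mode, it remains there even after keepalived is started again on the original master.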
6. Configure the Back-end Application Hosts for Load Balancing
When using DR mode, upload the lvs_real_server.sh script to /usr/local/src on each application host providing back-end services, then run:
chmod 700 /usr/local/src/lvs_real_server.sh
echo "/usr/local/src/lvs_real_server.sh start" >> /etc/rc.d/rc.local
/usr/local/src/lvs_real_server.sh  start
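Note that on CentOS 7 /etc/rc.d/rc.local is not executable by default, so the entry added above only takes effect after making it executable:
chmod +x /etc/rc.d/rc.local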

The following is the content of lvs_real_server.sh. Remember to update the VIP variable to match your environment:
vi lvs_real_server.sh
#!/bin/bash
#written by Daniel on 2014/02/19
#version 1.0
VIP=192.168.1.50
. /etc/rc.d/init.d/functions
case "$1" in
start)
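        # DR mode: bind the VIP to lo:0 and suppress ARP replies/announcements
        # for it, so that only the LVS director answers ARP requests for the VIP.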
        ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP
        route add -host $VIP dev lo:0
        echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
        sysctl -p > /dev/null 2>&1
        echo "Real Server Start OK"
        ;;
stop)
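        # Remove the VIP from lo:0 and restore the default ARP behaviour.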
        ifconfig lo:0 down
        route del $VIP > /dev/null 2>&1
        echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
        echo "Real Server Stoped"
        ;;
*)
        echo "Usage: $0 {start|stop}"
        exit 1
esac
exit 0
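After running the script with the start argument, the VIP should be visible on the loopback interface of each real server:
ip addr show lo
The output is expected to include 192.168.1.50/32 labelled lo:0.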

7. Method of Testing the Back-end Load Balancing Service
After the LVS deployment is complete, the back-end service providers may not be online yet, so the LVS load-balancing function needs to be tested in advance. To do this, temporarily start an HTTP server on each back-end node with the following command, setting the service port to the port number of the corresponding LVS load-balanced service. Stop it when the test is finished.
# python -m SimpleHTTPServer 8888
Serving HTTP on 0.0.0.0 port 8888...

Execute the following command on the active LVS director to view load-balancing statistics:

# ipvsadm -L -n
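With a temporary HTTP server answering on each back-end node, requests to the VIP from a separate client machine can be used to drive traffic through the director; for example, against the 8080 virtual service (whose health check probes the same port):
curl http://192.168.1.50:8080/
Repeating the request and re-running ipvsadm -L -n on the director should show the connection counters increasing across both real servers.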



