Building a highly available keepalived setup with SaltStack

Keywords: vim Nginx saltstack Programming

For the saltstack-based nginx deployment referenced below, see my earlier blog post.
The experimental topology: server1 is the salt master and runs haproxy + keepalived (MASTER), server4 runs haproxy + keepalived (BACKUP), server2 runs apache, server3 runs nginx, and the VIP is 172.25.21.100. The state and pillar trees live under /srv/salt and /srv/pillar, as shown in the snippets below.

Write the keepalived installation script

 [root@server1 ~]# vim /srv/salt/keepalived/install.sls
 include:
   - pkgs.make   # Installs the usual build dependencies (a sketch of this file follows after this state)


 kp.install:     # Build keepalived from source
   file.managed:     # File management: push the source package to the minion
     - name: /mnt/keepalived-2.0.6.tar.gz
     - source: salt://keepalived/files/keepalived-2.0.6.tar.gz

   cmd.run:     # Extract, configure, compile and install
     - name: cd /mnt && tar zxf keepalived-2.0.6.tar.gz && cd keepalived-2.0.6 && ./configure --prefix=/usr/local/keepalived --with-init=SYSV &>/dev/null && make &>/dev/null && make install &>/dev/null
     - creates: /usr/local/keepalived
                # If this path already exists on the minion, the build is not repeated
 /etc/keepalived:    # Create the keepalived configuration directory
   file.directory:
     - mode: 755

 /etc/sysconfig/keepalived:    # Create a symbolic link to the installed sysconfig file
   file.symlink:
     - target: /usr/local/keepalived/etc/sysconfig/keepalived

 /sbin/keepalived:
   file.symlink:
     - target: /usr/local/keepalived/sbin/keepalived
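The pkgs.make state included above is not shown in this post. As a rough sketch (the state ID and package list are my assumption; adjust them to your distribution), it could simply install the common build dependencies:

 [root@server1 ~]# vim /srv/salt/pkgs/make.sls
 kp-build-deps:        # hypothetical state ID: common build dependencies for compiling keepalived
   pkg.installed:
     - pkgs:
       - gcc
       - make
       - openssl-devel
       - libnl-devel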

Write the keepalived service script

 [root@server1 ~]# vim /srv/salt/keepalived/service.sls
 include:
   - keepalived.install   # Pull in the install state defined above

 /etc/keepalived/keepalived.conf:     # Config file to push to the minion
   file.managed:          # File management module
     - source: salt://keepalived/files/keepalived.conf   # Source file on the master
     - template: jinja    # Render with the jinja template engine
     - context:
         STATE: {{ pillar['state'] }}
         VRID: {{ pillar['vrid'] }}
         PRIORITY: {{ pillar['priority'] }}

 kp-service:          
   file.managed:       # File management, managing keepalived startup scripts
     - name: /etc/init.d/keepalived
     - source: salt://keepalived/files/keepalived
     - mode: 755
   service.running:
     - name: keepalived
     - reload: True
     - watch:          # Reload the service when the keepalived config file changes
       - file: /etc/keepalived/keepalived.conf
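Before pushing anything, it can be useful to let one minion render this SLS and confirm that the jinja context and pillar values resolve without errors; state.show_sls is a standard salt function for that:

 [root@server1 ~]# salt 'server1' state.show_sls keepalived.service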

Write the keepalived configuration file template

 [root@server1 ~]# vim /srv/salt/keepalived/files/keepalived.conf
 ! Configuration File for keepalived

  global_defs {
    notification_email {
         root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
 #   vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
 }

 vrrp_instance VI_1 {
     state {{ STATE }}     # Variable passed in from keepalived/service.sls
     interface eth0
     virtual_router_id {{ VRID }}  # Variable passed in from keepalived/service.sls
     priority {{ PRIORITY }}   # Variable passed in from keepalived/service.sls
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass 1111
     }
     virtual_ipaddress {
         172.25.21.100     # Set vip
     }
 }
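For reference, with the pillar values assigned to server1 below (MASTER, vrid 21, priority 100), the vrrp_instance section should render roughly as:

 vrrp_instance VI_1 {
     state MASTER
     interface eth0
     virtual_router_id 21
     priority 100
     ...
 }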

Write the pillar data used by the service script

 [root@server1 ~]# vim /srv/pillar/keepalived/install.sls
 {% if grains['fqdn'] == 'server1' %}
 webserver: keepalived
 state: MASTER      # Define the state of server1
 vrid: 21           # Virtual id
 priority: 100      # priority
 {% elif grains['fqdn'] == 'server4' %}
 webserver: keepalived
 state: BACKUP
 vrid: 21
 priority: 50
 {% endif %}
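After editing pillar data, have the minions refresh it and spot-check the values each host receives (both are standard salt functions):

 [root@server1 ~]# salt '*' saltutil.refresh_pillar
 [root@server1 ~]# salt '*' pillar.item state vrid priority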

Write the pillar top file

 [root@server1 ~]# vim /srv/pillar/top.sls
 base:
   '*':
     - web.install
     - keepalived.install   # Declare globally

Write the salt top file

 [root@server1 ~]# vim /srv/salt/top.sls
 base:
   'server1':                   # Declaration in top file
     - haproxy.install
     - keepalived.service       
   'server4':
     - haproxy.install
     - keepalived.service
   'server2':
     - apache.service
   'server3':
     - nginx.service
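To see which states each minion will actually receive from this top file before running a full highstate, the resolved top data can be listed:

 [root@server1 ~]# salt '*' state.show_top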

After editing, push the states to all hosts:

 [root@server1 ~]# salt '*' state.highstate

With that, a simple highly available, load-balanced HTTP cluster based on keepalived + haproxy has been built automatically. To expand the service, just add minion hosts and adjust the corresponding state files.
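A quick failover check (interface and VIP as defined in the template above): confirm the VIP sits on the MASTER, stop keepalived there, and confirm the VIP moves to the BACKUP.

 [root@server1 ~]# ip addr show eth0 | grep 172.25.21.100    # VIP should be on the MASTER
 [root@server1 ~]# /etc/init.d/keepalived stop               # simulate a failure on server1
 [root@server4 ~]# ip addr show eth0 | grep 172.25.21.100    # VIP should now be on the BACKUP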

High availability for haproxy

The cluster built above is flawed: there is no health check for haproxy. If haproxy goes down while the keepalived service keeps running normally, the VIP stays on the failed node and load balancing for the back-end web servers is lost.

To solve this problem, we can add a haproxy monitoring script for keepalived to run.
The content of the (admittedly rough) script is as follows:

 [root@server1 files]# vim check_haproxy.sh
 #!/bin/bash

 # Check the status of haproxy; if it is not running, try to restart it
 /etc/init.d/haproxy status &> /dev/null || /etc/init.d/haproxy restart &> /dev/null
 # If the restart also failed (return value is not 0), stop keepalived so the VIP can fail over
 if [ $? -ne 0 ]; then
     /etc/init.d/keepalived stop &> /dev/null
 fi
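The post does not show how check_haproxy.sh reaches the minions. One way, assuming the script is saved under /srv/salt/keepalived/files/ and a state like the following is added to keepalived/service.sls, would be:

 /etc/keepalived/check_haproxy.sh:   # hypothetical state; deploy the health-check script to the minion
   file.managed:
     - source: salt://keepalived/files/check_haproxy.sh
     - mode: 755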

Edit the keepalived template file and push the content to each keepalived host

 # At the top of keepalived.conf, add the monitoring script definition
 vrrp_script check_haproxy {
         script "/etc/keepalived/check_haproxy.sh"
         # Script absolute path
         interval 2
         # Run the script every 2 seconds
         weight 2
 }
 ......omitted......
     virtual_ipaddress {
         172.25.21.100
     }
     track_script {
         check_haproxy   # Reference the vrrp_script defined above
     }

Push this script to each keepalived host

[root@server1 keepalived]# salt '*' state.highstate
server2:
----------
......omitted......
Summary for server2
------------
Succeeded: 2            # server2 push succeeded
Failed:    0
------------
Total states run:     2
Total run time: 495.607 ms
server3:
----------
......omitted......
Summary for server3
------------
Succeeded: 9            # server3 push succeeded
Failed:    0
------------
Total states run:     9
Total run time:   1.373 s
server4:
----------
......omitted......
Summary for server4     # server4 push succeeded
-------------
Succeeded: 13 (changed=4)
Failed:     0
-------------
Total states run:     13
Total run time:   10.100 s
server1:
----------

......omitted......
Summary for server1
-------------
Succeeded: 13 (changed=3)   # server1 push succeeded
Failed:     0
-------------
Total states run:     13
Total run time:   10.529 s
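To confirm the health check works, stop haproxy by hand on the MASTER; within the 2-second script interval it should be restarted by check_haproxy.sh, and only if the restart fails will keepalived be stopped and the VIP move to server4.

 [root@server1 ~]# /etc/init.d/haproxy stop
 [root@server1 ~]# sleep 3; /etc/init.d/haproxy status    # should report running again, restarted by the check script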
