CentOS 6.9 Load Balancing Scheme: Complete Configuration (LVS + keepalived + PXC + NFS + business system)

Keywords: Linux MySQL SELinux yum Database

Preliminary preparation:

NFS server: hostname nfsserver, IP address 192.168.1.103, used to store the business system's data.
Node1: hostname PXC01, IP address 192.168.1.105, runs PXC and the business system.
Node2: hostname PXC02, IP address 192.168.1.106, runs PXC and the business system.
Node3: hostname PXC03, IP address 192.168.1.107, runs PXC and the business system.
LVS server: hostname lvsserver, IP address 192.168.1.121, VIP 192.168.1.100, runs LVS for load balancing.
Operating system on all machines: CentOS 6.9 64-bit

Note: Section V covers using two LVS servers with keepalived for high availability and health checks.




I. Install the business system and configure MySQL load balancing (PXC scheme)

Preface--------------------------------------------------------------------------------------------------------------------------------------------------------------------
The following objectives need to be achieved:
1. All MySQL nodes hold the same data.
 2. If any MySQL node goes down, service is not affected.
 3. Every node provides service and handles both reads and writes at the same time.
 4. Future extensibility is considered, such as easily adding nodes.
 5. Failure scenarios are handled fully automatically, without manual intervention.
 After considering and trying many schemes, PXC was finally chosen. The reasons the other schemes are not ideal are as follows:
(1) Master-slave replication: does not meet the requirements, because anything written on a slave node is not synchronized back to the master.
(2) Master-master replication: not suitable for production, and adding a third node is troublesome.
(3) Master-slave replication extensions (MMM, MHA): do not meet the requirements, because although reads are load balanced, only one write server provides service.
(4) MySQL on shared storage or DRBD: not feasible, the nodes cannot provide service at the same time; this is an HA-level solution.
(5) MySQL Cluster: in a sense it only supports the NDB storage engine (tables must be converted to NDB), does not support foreign keys, and has large disk and memory usage. Loading data into memory after a restart takes a long time, and deployment and management are complex.
(6) MySQL Fabric: has two main functions, MySQL HA and sharding (i.e. splitting a multi-TB table and storing a part of it on each server), which does not fit the environment I need.
Preface--------------------------------------------------------------------------------------------------------------------------------------------------------------------
1. Environment preparation (node1, node2, node3)
node1 192.168.1.105 PXC01 centos 6.9 mini
node2 192.168.1.106 PXC02 centos 6.9 mini
node3 192.168.1.107 PXC03 centos 6.9 mini


2. Disable the firewall and SELinux (node1, node2, node3)
[root@localhost ~]# /etc/init.d/iptables stop
[root@localhost ~]# chkconfig iptables off
[root@localhost ~]# setenforce 0
[root@localhost ~]# cat /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
#       targeted - Only targeted network daemons are protected.
#       strict - Full SELinux protection.
SELINUXTYPE=targeted
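//To make the SELinux change persistent without editing the file by hand, a one-line sketch (assumes SELINUX was previously set to enforcing):
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux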


3. Install the business system and upgrade it to the latest version (node1, node2, node3)


4. Make sure every table in the business system has a primary key (node1)
//Query to find tables without a primary key:
select t1.table_schema,t1.table_name from information_schema.tables t1   
left outer join  
information_schema.TABLE_CONSTRAINTS t2     
on t1.table_schema = t2.TABLE_SCHEMA  and t1.table_name = t2.TABLE_NAME  and t2.CONSTRAINT_NAME in  
('PRIMARY')   
where t2.table_name is null and t1.TABLE_SCHEMA not in ('information_schema','performance_schema','test','mysql', 'sys');

//Query to find tables that do have a primary key:
select t1.table_schema,t1.table_name from information_schema.tables t1   
left outer join  
information_schema.TABLE_CONSTRAINTS t2     
on t1.table_schema = t2.TABLE_SCHEMA  and t1.table_name = t2.TABLE_NAME  and t2.CONSTRAINT_NAME in  
('PRIMARY')   
where t2.table_name is not null and t1.TABLE_SCHEMA not in ('information_schema','performance_schema','test','mysql', 'sys');

//Add a primary key to each table found, following this template (node1); a helper sketch that automates this follows:
ALTER TABLE `Table Name` ADD `id` int(11) NOT NULL auto_increment FIRST, ADD primary key(id);
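//A hedged helper sketch combining the query and the template above: it lists every table without a primary key and adds a surrogate id column. <password> is a placeholder; review the table list first, since tables that already contain an auto_increment column (or views) will fail.
mysql -uroot -p<password> -N -e "select concat(t1.table_schema,'.',t1.table_name) from information_schema.tables t1 left outer join information_schema.TABLE_CONSTRAINTS t2 on t1.table_schema=t2.TABLE_SCHEMA and t1.table_name=t2.TABLE_NAME and t2.CONSTRAINT_NAME='PRIMARY' where t2.table_name is null and t1.TABLE_SCHEMA not in ('information_schema','performance_schema','test','mysql','sys');" | while read tbl
do
  echo "Adding surrogate primary key to $tbl"
  mysql -uroot -p<password> -e "ALTER TABLE $tbl ADD id int(11) NOT NULL auto_increment FIRST, ADD primary key(id);"
done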


5. Convert the business system's non-InnoDB tables to the InnoDB engine (node1)
//See which tables use the MyISAM engine:
[root@PXC01 ~]# mysql -u oa -p<password> -e "show table status from OA where Engine='MyISAM';"
[root@PXC01 ~]# mysql -u oa -p<password> -e "show table status from OA where Engine='MyISAM';" | awk '{print $1}' | sed 1d > mysqlchange

//Stop the other services that use MySQL (service names are placeholders):
[root@PXC01 ~]# for i in service1 service2; do /etc/init.d/$i stop; done

//Run the script that converts those tables to the InnoDB engine:
[root@PXC01 ~]# cat mysqlchange_innodb.sh
#!/bin/bash
# Read the table list generated above and convert each table to InnoDB.
# The password is read from the business system's configuration file; <oa-config-file> is a placeholder for that path.
cat mysqlchange | while read LINE
do
    tablename=$(echo $LINE | awk '{print $1}')
    echo "Now changing the engine of $tablename to InnoDB"
    mysql -u oa -p"$(grep -w pass <oa-config-file> | awk -F'=' '{print $NF}')" oa -e "alter table $tablename engine=innodb;"
done

//Verification:
[root@PXC01 ~]# mysql -u oa -p"$(grep -w pass <oa-config-file> | awk -F'=' '{print $NF}')" -e "show table status from OA where Engine='MyISAM';"
[root@PXC01 ~]# mysql -u oa -p"$(grep -w pass <oa-config-file> | awk -F'=' '{print $NF}')" -e "show table status from OA where Engine='InnoDB';"


6. Back up the database (node1)
[root@PXC01 ~]# mysqldump -u oa -p"$(grep -w pass <oa-config-file> | awk -F'=' '{print $NF}')" --databases OA | gzip > 20180524.sql.gz
[root@PXC01 ~]# ll
//total 44
-rw-r--r--  1 root root 24423 May 22 16:55 20180524.sql.gz
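//An optional sanity check of the dump before relying on it (not in the original steps):
gunzip -t 20180524.sql.gz && echo "archive OK"
zcat 20180524.sql.gz | head -n 5     # should show the mysqldump header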


7. Change the port of the MySQL instance bundled with the business system to another port, and stop that service (node1, node2, node3)


8. Install and configure PXC with yum
//Infrastructure Installation (node1, node2, node3)
[root@percona1 ~]# yum -y groupinstall Base Compatibility libraries Debugging Tools Dial-up Networking suppport Hardware monitoring utilities Performance Tools Development tools
   
//Component installation (node1, node2, node3)
[root@percona1 ~]# yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm -y
[root@percona1 ~]# yum localinstall http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@percona1 ~]# yum install socat libev -y
[root@percona1 ~]# yum install Percona-XtraDB-Cluster-55 -y

node1 configuration:
[root@PXC01 ~]# vi /etc/my.cnf
# My business system uses port 6033
[client]
port=6033
# My business system uses port 6033
[mysqld]
datadir=/var/lib/mysql
user=mysql
port=6033
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://192.168.1.105,192.168.1.106,192.168.1.107
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #1 address
wsrep_node_address=192.168.1.105
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=my_centos_cluster
# Authentication for SST method
wsrep_sst_auth="sstuser:s3cret"

Start MySQL on node1 (bootstrap the first node of the cluster) with the following command:
[root@PXC01 mysql]# /etc/init.d/mysql bootstrap-pxc
Bootstrapping PXC (Percona XtraDB Cluster) ERROR! MySQL (Percona XtraDB Cluster) is not running, but lock file (/var/lock/subsys/mysql) exists
Starting MySQL (Percona XtraDB Cluster)... SUCCESS!

//On CentOS 7 the startup command would be:
[root@percona1 ~]# systemctl start mysql@bootstrap.service

//If the node was rebooted, kill the mysqld processes first, delete the pid file, and then run the startup command above.

Check the service status on node1:
[root@PXC01 ~]# /etc/init.d/mysql status
SUCCESS! MySQL (Percona XtraDB Cluster) running (7712)
//Add /etc/init.d/mysql bootstrap-pxc to rc.local so it runs at boot.
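//A minimal way to do that (only on the nominal master, node1):
grep -q 'bootstrap-pxc' /etc/rc.local || echo '/etc/init.d/mysql bootstrap-pxc' >> /etc/rc.local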

Enter the MySQL console on node1:
[root@PXC01 ~]# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.5.41-37.0-55 Percona XtraDB Cluster (GPL), Release rel37.0, Revision 855, WSREP version 25.12, wsrep_25.12.r4027
Copyright (c) 2009-2014 Percona LLC and/or its affiliates
Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | 1ab083fc-5c46-11e8-a7b7-76a002f2b5c8 |
| wsrep_protocol_version     | 4                                    |
| wsrep_last_committed       | 0                                    |
| wsrep_replicated           | 0                                    |
| wsrep_replicated_bytes     | 0                                    |
| wsrep_received             | 2                                    |
| wsrep_received_bytes       | 134                                  |
| wsrep_local_commits        | 0                                    |
| wsrep_local_cert_failures  | 0                                    |
| wsrep_local_replays        | 0                                    |
| wsrep_local_send_queue     | 0                                    |
| wsrep_local_send_queue_avg | 0.000000                             |
| wsrep_local_recv_queue     | 0                                    |
| wsrep_local_recv_queue_avg | 0.000000                             |
| wsrep_flow_control_paused  | 0.000000                             |
| wsrep_flow_control_sent    | 0                                    |
| wsrep_flow_control_recv    | 0                                    |
| wsrep_cert_deps_distance   | 0.000000                             |
| wsrep_apply_oooe           | 0.000000                             |
| wsrep_apply_oool           | 0.000000                             |
| wsrep_apply_window         | 0.000000                             |
| wsrep_commit_oooe          | 0.000000                             |
| wsrep_commit_oool          | 0.000000                             |
| wsrep_commit_window        | 0.000000                             |
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
| wsrep_cert_index_size      | 0                                    |
| wsrep_causal_reads         | 0                                    |
| wsrep_incoming_addresses   | 192.168.1.105:3306                   |
| wsrep_cluster_conf_id      | 1                                    |
| wsrep_cluster_size         | 1                                    |
| wsrep_cluster_state_uuid   | 1ab083fc-5c46-11e8-a7b7-76a002f2b5c8 |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
| wsrep_local_bf_aborts      | 0                                    |
| wsrep_local_index          | 0                                    |
| wsrep_provider_name        | Galera                               |
| wsrep_provider_vendor      | Codership Oy <info@codership.com>    |
| wsrep_provider_version     | 2.12(r318911d)                       |
| wsrep_ready                | ON                                   |
| wsrep_thread_count         | 2                                    |
+----------------------------+--------------------------------------+
41 rows in set (0.00 sec)

Set the database root password on node1:
mysql> UPDATE mysql.user SET password=PASSWORD("Passw0rd") where user='root';
   
Create and authorize the SST (state snapshot transfer) account on node1:
mysql> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 's3cret';
mysql> GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
mysql> FLUSH PRIVILEGES;
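//An optional quick check that the SST account works with the credentials above:
mysql -usstuser -ps3cret -e "select 1;"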

View the cluster address configured on node1:
mysql> SHOW VARIABLES LIKE 'wsrep_cluster_address';
+-----------------------+---------------------------------------------------+
| Variable_name         | Value                                             |
+-----------------------+---------------------------------------------------+
| wsrep_cluster_address | gcomm://192.168.1.105,192.168.1.106,192.168.1.107 |
+-----------------------+---------------------------------------------------+
1 row in set (0.00 sec)

Check on node1 whether wsrep is ready:
mysql> show status like 'wsrep_ready';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wsrep_ready   | ON    |
+---------------+-------+
1 row in set (0.00 sec)

View the number of cluster members on node1:
mysql> show status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 1     |
+--------------------+-------+
1 row in set (0.00 sec)

View the wsrep status variables on node1:
mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | 1ab083fc-5c46-11e8-a7b7-76a002f2b5c8 |
| wsrep_protocol_version     | 4                                    |
| wsrep_last_committed       | 2                                    |
| wsrep_replicated           | 2                                    |
| wsrep_replicated_bytes     | 405                                  |
| wsrep_received             | 2                                    |
| wsrep_received_bytes       | 134                                  |
| wsrep_local_commits        | 0                                    |
| wsrep_local_cert_failures  | 0                                    |
| wsrep_local_replays        | 0                                    |
| wsrep_local_send_queue     | 0                                    |
| wsrep_local_send_queue_avg | 0.000000                             |
| wsrep_local_recv_queue     | 0                                    |
| wsrep_local_recv_queue_avg | 0.000000                             |
| wsrep_flow_control_paused  | 0.000000                             |
| wsrep_flow_control_sent    | 0                                    |
| wsrep_flow_control_recv    | 0                                    |
| wsrep_cert_deps_distance   | 1.000000                             |
| wsrep_apply_oooe           | 0.000000                             |
| wsrep_apply_oool           | 0.000000                             |
| wsrep_apply_window         | 0.000000                             |
| wsrep_commit_oooe          | 0.000000                             |
| wsrep_commit_oool          | 0.000000                             |
| wsrep_commit_window        | 0.000000                             |
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
| wsrep_cert_index_size      | 2                                    |
| wsrep_causal_reads         | 0                                    |
| wsrep_incoming_addresses   | 192.168.1.105:3306                   |
| wsrep_cluster_conf_id      | 1                                    |
| wsrep_cluster_size         | 1                                    |
| wsrep_cluster_state_uuid   | 1ab083fc-5c46-11e8-a7b7-76a002f2b5c8 |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
| wsrep_local_bf_aborts      | 0                                    |
| wsrep_local_index          | 0                                    |
| wsrep_provider_name        | Galera                               |
| wsrep_provider_vendor      | Codership Oy <info@codership.com>    |
| wsrep_provider_version     | 2.12(r318911d)                       |
| wsrep_ready                | ON                                   |
| wsrep_thread_count         | 2                                    |
+----------------------------+--------------------------------------+
41 rows in set (0.00 sec)


node2 configuration:
[root@PXC02 ~]# vi /etc/my.cnf

# My business system uses port 6033
[client]
port=6033

# My business system uses port 6033
[mysqld]
datadir=/var/lib/mysql
user=mysql
port=6033
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://192.168.1.105,192.168.1.106,192.168.1.107
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #2 address
wsrep_node_address=192.168.1.106
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=my_centos_cluster
# Authentication for SST method
wsrep_sst_auth="sstuser:s3cret"

Start the service on node2:
[root@PXC02 ~]# /etc/init.d/mysql start
ERROR! MySQL (Percona XtraDB Cluster) is not running, but lock file (/var/lock/subsys/mysql) exists
Starting MySQL (Percona XtraDB Cluster).....State transfer in progress, setting sleep higher
... SUCCESS!

Check the service status on node2:
[root@PXC02 ~]# /etc/init.d/mysql status
SUCCESS! MySQL (Percona XtraDB Cluster) running (9071)


node3 configuration:
[root@PXC03 ~]# vi /etc/my.cnf

# My business system uses port 6033
[client]
port=6033

# My business system uses port 6033
[mysqld]
datadir=/var/lib/mysql
user=mysql
port=6033
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://192.168.1.105,192.168.1.106,192.168.1.107
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #3 address
wsrep_node_address=192.168.1.107
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=my_centos_cluster
# Authentication for SST method
wsrep_sst_auth="sstuser:s3cret"

Start the service on node3:
[root@PXC03 ~]# /etc/init.d/mysql start
ERROR! MySQL (Percona XtraDB Cluster) is not running, but lock file (/var/lock/subsys/mysql) exists
Starting MySQL (Percona XtraDB Cluster)......State transfer in progress, setting sleep higher
.... SUCCESS!

Check the service status on node3:
[root@PXC03 ~]# /etc/init.d/mysql status
SUCCESS! MySQL (Percona XtraDB Cluster) running (9071)
..................................Note................................
-> Except for the nominal master, the other nodes only need to start MySQL normally.
-> The database usernames and passwords on every node are identical to those on the master node and are synchronized automatically, so they do not need to be set again on the other nodes.
   //In other words, in the setup above you only need to grant permissions on the nominal master node (node1 above). Once the other nodes have /etc/my.cnf configured, just start MySQL and the privileges synchronize automatically.
   //For node2 and node3 above, log in to MySQL with the same credentials as node1 (i.e. the credentials set on node1).
.....................................................................
//If node2 or node3 fails to start MySQL, for example with an error like the following in the err log under /var/lib/mysql:
[ERROR] WSREP: gcs/src/gcs_group.cpp:long int gcs_group_handle_join_msg(gcs_
//Solution (see the diagnostic sketch after these notes):
-> Check whether the iptables firewall is stopped, and whether port 4567 on the nominal master node is reachable (e.g. with telnet).
-> Check whether SELinux is disabled.
-> Delete grastate.dat on the nominal master node and restart its database; also delete grastate.dat on the current node and restart its database.
.....................................................................
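//A minimal diagnostic sketch, run on the node that fails to join (192.168.1.105 is the nominal master here):
/etc/init.d/iptables status          # the firewall should be stopped
getenforce                           # should print Disabled or Permissive
telnet 192.168.1.105 4567            # the Galera group-communication port on the nominal master must be reachable
# last resort, as described above: remove grastate.dat and restart (on the nominal master first, then here)
# rm -f /var/lib/mysql/grastate.dat && /etc/init.d/mysql restart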
   



9. Finally, test the cluster.
//On any node, inserts, updates, and deletes are synchronized to the other servers; this is a multi-master setup. Note that the table engine must be InnoDB, because Galera currently only supports InnoDB tables.
mysql> show status like 'wsrep%';
+----------------------------+----------------------------------------------------------+
| Variable_name              | Value                                                    |
+----------------------------+----------------------------------------------------------+
| wsrep_local_state_uuid     | 1ab083fc-5c46-11e8-a7b7-76a002f2b5c8                     |
| wsrep_protocol_version     | 4                                                        |
| wsrep_last_committed       | 2                                                        |
| wsrep_replicated           | 2                                                        |
| wsrep_replicated_bytes     | 405                                                      |
| wsrep_received             | 10                                                       |
| wsrep_received_bytes       | 728                                                      |
| wsrep_local_commits        | 0                                                        |
| wsrep_local_cert_failures  | 0                                                        |
| wsrep_local_replays        | 0                                                        |
| wsrep_local_send_queue     | 0                                                        |
| wsrep_local_send_queue_avg | 0.000000                                                 |
| wsrep_local_recv_queue     | 0                                                        |
| wsrep_local_recv_queue_avg | 0.000000                                                 |
| wsrep_flow_control_paused  | 0.000000                                                 |
| wsrep_flow_control_sent    | 0                                                        |
| wsrep_flow_control_recv    | 0                                                        |
| wsrep_cert_deps_distance   | 0.000000                                                 |
| wsrep_apply_oooe           | 0.000000                                                 |
| wsrep_apply_oool           | 0.000000                                                 |
| wsrep_apply_window         | 0.000000                                                 |
| wsrep_commit_oooe          | 0.000000                                                 |
| wsrep_commit_oool          | 0.000000                                                 |
| wsrep_commit_window        | 0.000000                                                 |
| wsrep_local_state          | 4                                                        |
| wsrep_local_state_comment  | Synced                                                   |
| wsrep_cert_index_size      | 0                                                        |
| wsrep_causal_reads         | 0                                                        |
| wsrep_incoming_addresses   | 192.168.1.105:6033,192.168.1.106:6033,192.168.1.107:6033 |
| wsrep_cluster_conf_id      | 3                                                        |
| wsrep_cluster_size         | 3                                                        |
| wsrep_cluster_state_uuid   | 1ab083fc-5c46-11e8-a7b7-76a002f2b5c8                     |
| wsrep_cluster_status       | Primary                                                  |
| wsrep_connected            | ON                                                       |
| wsrep_local_bf_aborts      | 0                                                        |
| wsrep_local_index          | 0                                                        |
| wsrep_provider_name        | Galera                                                   |
| wsrep_provider_vendor      | Codership Oy <info@codership.com>                        |
| wsrep_provider_version     | 2.12(r318911d)                                           |
| wsrep_ready                | ON                                                       |
| wsrep_thread_count         | 2                                                        |
+----------------------------+----------------------------------------------------------+
41 rows in set (0.00 sec)           
//Create a database on node3
mysql> create database wangshibo;
Query OK, 1 row affected (0.02 sec)

//Then check on node1 and node2; it has been synchronized automatically.
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
| wangshibo          |
+--------------------+
5 rows in set (0.00 sec)

//Create a table and insert data in the wangshibo database on node1
mysql> use wangshibo;
Database changed
mysql> create table test(id int(5));
Query OK, 0 rows affected (0.11 sec)
mysql> insert into test values(1);
Query OK, 1 row affected (0.01 sec)
mysql> insert into test values(2);
Query OK, 1 row affected (0.02 sec)

//Likewise, query on the other nodes; the data synchronizes automatically.
mysql> select * from wangshibo.test;
+------+
| id   |
+------+
|    1 |
|    2 |
+------+
2 rows in set (0.00 sec)
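//An optional consistency check (not in the original steps): run locally on each of node1/node2/node3 with the root password set above; the counts should match.
mysql -uroot -pPassw0rd -e "select count(*) from wangshibo.test;"   # expect 2 on every node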


10. Restore the database (node1)
mysql> create database oa;
gunzip 20180524.sql.gz
/usr/bin/mysql -u root -p<password> oa < 20180524.sql

for i in service1 service2; do /etc/init.d/$i start; done


11. Then verify that node2 and node3 have synchronized the database and all of its tables from node1.



II. NFS Server Configuration

Preface--------------------------------------------------------------------------------------------------------------------------------------------------------------------
This step is really about where the data is stored.
To let the three node servers use the same data, the common approaches are:
(1) NFS: all data is stored on an NFS server; NFS handles file locking on the server side, so concurrent reads and writes do not corrupt files.
(2) Storage devices that support concurrent reads and writes of the same file: such devices are expensive.
(3) Distributed file systems, such as FastDFS and HDFS: not studied in depth, and they might require changes to the business code.
Preface--------------------------------------------------------------------------------------------------------------------------------------------------------------------
1. Installing nfs related packages
[root@nfsserver ~]# yum -y install nfs-utils rpcbind


2. Create the shared directory and user, and set the directory's owner, group, and permissions (which user and group depends on the business system's requirements).
[root@nfsserver ~]# groupadd -g 9005 oa
[root@nfsserver ~]# useradd -u 9005 -g 9005 oa -d /home/oa -s /sbin/nologin
[root@nfsserver ~]# mkdir /data
[root@nfsserver ~]# chown -hR oa.oa /data
[root@nfsserver ~]# chmod -R 777 /data    (or 755)


3. Create the exports file; rw means read-write.
[root@nfsserver /]# cat /etc/exports
/data 192.168.1.105(rw,sync,all_squash,anonuid=9005,anongid=9005)
/data 192.168.1.106(rw,sync,all_squash,anonuid=9005,anongid=9005)
/data 192.168.1.107(rw,sync,all_squash,anonuid=9005,anongid=9005)
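//Optional: after editing /etc/exports, the share list can also be (re)applied without a full restart:
exportfs -ra        # re-export everything in /etc/exports
exportfs -v         # show what is currently exported and with which options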


4. Restart the services and enable them at boot; rpcbind must be restarted before nfs.
[root@nfsserver /]# /etc/init.d/rpcbind start
Starting rpcbind:                                          [  OK  ]
[root@nfsserver /]# rpcinfo -p localhost
[root@nfsserver /]# netstat -lnt
[root@nfsserver /]# chkconfig rpcbind on
[root@nfsserver /]# chkconfig --list | grep rpcbind
[root@nfsserver /]# /etc/init.d/nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
[root@nfsserver /]# rpcinfo -p localhost    # you will now see many more ports registered
[root@nfsserver /]# chkconfig nfs on
[root@nfsserver /]# chkconfig --list | grep nfs


5. Close the firewall and selinux.
[root@nfsserver /]# service iptables stop
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@nfsserver /]# chkconfig iptables off
//Disable SELinux in the same way as in section I, step 2 (setenforce 0 and set SELINUX=disabled in /etc/sysconfig/selinux).



III. NFS Client Configuration (node1, node2, node3)

1. View the NFS server's shares (node1, node2, node3)
[root@oaserver1 ~]# yum -y install nfs-utils rpcbind    # provides the showmount command
[root@oaserver1 /]# /etc/init.d/rpcbind start
[root@oaserver1 /]# rpcinfo -p localhost
[root@oaserver1 /]# netstat -lnt
[root@oaserver1 /]# chkconfig rpcbind on
[root@oaserver1 /]# chkconfig --list | grep rpcbind
[root@PXC01 ~]# showmount -e 192.168.1.103
Export list for 192.168.1.103:
/data    192.168.1.107,192.168.1.106,192.168.1.105


2. Create the mount point and mount the share (node1, node2, node3)
[root@oaserver1 ~]# mkdir /data
[root@oaserver1 ~]# mount -t nfs 192.168.1.103:/data /data
[root@oaserver1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                       18G  3.6G   13G  22% /
tmpfs                 491M  4.0K  491M   1% /dev/shm
/dev/sda1             477M   28M  425M   7% /boot
192.168.1.103:/data
                       14G  2.1G   11G  16% /data
[root@oaserver1 ~]# cd /
[root@oaserver1 /]# ls -lhd data*
drwxrwxrwx 2 oa        oa        4.0K May 24  2018 data


3. Configure automatic mounting at boot (node1, node2, node3)
[root@oaserver1 data]# vi /etc/fstab
192.168.1.103:/data /data               nfs     defaults        0 0
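//A quick check that the fstab entry works without rebooting (make sure nothing is using /data first):
umount /data && mount -a && df -h /data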


4. Move the data directory to /data
//The main steps: stop the relevant services, move the data into the /data directory, and then create a symbolic link at the original location (a sketch follows).
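//A hedged sketch of that procedure; /usr/local/oa/data and the service names are placeholders for the real paths and services:
for i in service1 service2; do /etc/init.d/$i stop; done
mv /usr/local/oa/data /data/oadata            # move the business data onto the NFS share
ln -s /data/oadata /usr/local/oa/data         # leave a symlink at the original location
chown -h oa.oa /usr/local/oa/data
for i in service1 service2; do /etc/init.d/$i start; done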


5. Verify after restarting the servers



IV. Load Balancing LVS Configuration (single load balancing server)

1. DS (Director Server) configuration
//Install the required dependency packages
yum install -y wget make kernel-devel gcc gcc-c++ libnl* libpopt* popt-static

//Create a soft link so that the later compilation and installation of ipvsadm can find the kernel sources
ln -s /usr/src/kernels/2.6.32-696.30.1.el6.x86_64/ /usr/src/linux

//Download and install ipvsadm
wget http://www.linuxvirtualserver.org/software/kernel-2.6/ipvsadm-1.26.tar.gz
tar zxvf ipvsadm-1.26.tar.gz
cd ipvsadm-1.26
make && make install
ipvsadm

//Create the file /etc/init.d/lvsdr and make it executable:
#!/bin/sh
VIP=192.168.1.100
RIP1=192.168.1.105
RIP2=192.168.1.106
RIP3=192.168.1.107
. /etc/rc.d/init.d/functions
case "$1" in
start)
  echo " start LVS  of DirectorServer"
  # set the Virtual  IP Address
   ifconfig eth0:0 $VIP/24
   #/sbin/route add -host $VIP dev eth0:0
   #Clear IPVS table
   /sbin/ipvsadm -C
  #set LVS
  /sbin/ipvsadm -A -t $VIP:80 -s sh
  /sbin/ipvsadm -a -t $VIP:80 -r $RIP1:80 -g
  /sbin/ipvsadm -a -t $VIP:80 -r $RIP2:80 -g
  /sbin/ipvsadm -a -t $VIP:80 -r $RIP3:80 -g
  /sbin/ipvsadm -A -t $VIP:25 -s sh
  /sbin/ipvsadm -a -t $VIP:25 -r $RIP1:25 -g
  /sbin/ipvsadm -a -t $VIP:25 -r $RIP2:25 -g
  /sbin/ipvsadm -a -t $VIP:25 -r $RIP3:25 -g
  /sbin/ipvsadm -A -t $VIP:110 -s sh
  /sbin/ipvsadm -a -t $VIP:110 -r $RIP1:110 -g
  /sbin/ipvsadm -a -t $VIP:110 -r $RIP2:110 -g
  /sbin/ipvsadm -a -t $VIP:110 -r $RIP3:110 -g
  /sbin/ipvsadm -A -t $VIP:143 -s sh
  /sbin/ipvsadm -a -t $VIP:143 -r $RIP1:143 -g
  /sbin/ipvsadm -a -t $VIP:143 -r $RIP2:143 -g
  /sbin/ipvsadm -a -t $VIP:143 -r $RIP3:143 -g
  #/sbin/ipvsadm -a -t $VIP:80 -r $RIP3:80 -g
  #Run LVS
  /sbin/ipvsadm
  #end
;;
stop)
echo "close LVS Directorserver"
/sbin/ipvsadm -C
;;
*)
echo "Usage: $0 {start|stop}"
exit 1
esac

[root@lvsserver ~]# chmod +x /etc/init.d/lvsdr
[root@lvsserver ~]# /etc/init.d/lvsdr start
[root@lvsserver ~]# vi /etc/rc.local    # add the following line so it runs at boot
/etc/init.d/lvsdr start

[root@localhost ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:50:56:8D:19:13  
          inet addr:192.168.1.121  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::250:56ff:fe8d:1913/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:10420500 errors:0 dropped:0 overruns:0 frame:0
          TX packets:421628 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1046805128 (998.3 MiB)  TX bytes:101152496 (96.4 MiB)
eth0:0    Link encap:Ethernet  HWaddr 00:50:56:8D:19:13  
          inet addr:192.168.1.100  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:164717347 errors:0 dropped:0 overruns:0 frame:0
          TX packets:164717347 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:28297589130 (26.3 GiB)  TX bytes:28297589130 (26.3 GiB)
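//After starting the script, the virtual service table can also be verified with:
ipvsadm -Ln          # lists each VIP:port and its real servers (Route = DR mode)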
          

2. RS (Real Server) configuration (node1, node2, node3)
[root@oaserver1 ~]# vi /etc/init.d/realserver
#!/bin/sh
VIP=192.168.1.100
. /etc/rc.d/init.d/functions
case "$1" in
start)
  echo " start LVS  of RealServer"
  echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
  echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
  echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
  echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
  service network restart
  ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP
  route add -host $VIP dev lo:0
  #end
;;
stop)
echo "close LVS Realserver"
service network restart
;;
*)
echo "Usage: $0 {start|stop}"
exit 1
esac

[root@oaserver1 ~]# chmod +x /etc/init.d/realserver
[root@oaserver1 ~]# /etc/init.d/realserver start
[root@oaserver1 ~]# vi /etc/rc.local
/etc/init.d/realserver start

[root@oaserver1 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:DC:B1:39  
          inet addr:192.168.1.105  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fedc:b139/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:816173 errors:0 dropped:0 overruns:0 frame:0
          TX packets:399007 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:534582215 (509.8 MiB)  TX bytes:98167814 (93.6 MiB)
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:43283 errors:0 dropped:0 overruns:0 frame:0
          TX packets:43283 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:8895319 (8.4 MiB)  TX bytes:8895319 (8.4 MiB)
lo:0      Link encap:Local Loopback  
          inet addr:192.168.1.100  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
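//Optionally, the ARP settings can be made persistent across reboots instead of relying only on the script (a sketch):
cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
EOF
sysctl -p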

          
3. Test: I restarted one of the servers for testing; change the logo and refresh, and if the logo changes the forwarding works. You can also check with the ipvsadm command (see the sketch below).
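//A hedged test sketch: send a request from a client, then inspect the scheduling on the director with ipvsadm:
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.1.100/   # expect 200 from the business system
ipvsadm -Ln --stats      # on the director: per-real-server packet/byte counters
ipvsadm -Lnc             # connection table: shows which real server each client was mapped to
# note: with the sh (source hashing) scheduler every request from one client IP goes to the same
# real server, which is why the test above restarts a node and checks whether the logo changes.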




V. Load Balancing Configuration (two LVS servers made highly available with keepalived. This is relatively simple because keepalived performs health checks and provides failover; without keepalived, if one OA node went down, LVS would still forward requests to it and the service would become unreachable.)


Note: the servers and IP addresses used in this chapter are different from the previous sections.

1. Environment
    Keepalived1 + lvs1(Director1): 10.22.10.83
    Keepalived2 + lvs2(Director2): 10.22.10.84
    Real server1: 10.22.10.80
    Real server2: 10.22.10.81
    Real server3: 10.22.10.82
    VIP: 10.22.10.86

    
2. Install the required dependency packages
yum install -y wget make kernel-devel gcc gcc-c++ libnl* libpopt* popt-static
//Create a soft link so that the later compilation and installation of ipvsadm can find the kernel sources
ln -s /usr/src/kernels/2.6.32-696.30.1.el6.x86_64/ /usr/src/linux


3. Install LVS + keepalived on both nodes
yum install ipvsadm keepalived -y
//You can also compile and install ipvsadm (not tried, not recommended)
wget http://www.linuxvirtualserver.org/software/kernel-2.6/ipvsadm-1.26.tar.gz
tar zxvf ipvsadm-1.26.tar.gz
cd ipvsadm-1.26
make && make install
ipvsadm


4. Real server configuration script on the three nodes (node1, node2, node3):
[root@oaserver1 ~]# vi /etc/init.d/realserver
#!/bin/sh
VIP=10.22.10.86
. /etc/rc.d/init.d/functions
case "$1" in
start)
  echo " start LVS  of RealServer"
  echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
  echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
  echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
  echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
  service network restart
  ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP
  route add -host $VIP dev lo:0
  #end
;;
stop)
echo "close LVS Realserver"
service network restart
;;
*)
echo "Usage: $0 {start|stop}"
exit 1
esac
[root@oaserver1 ~]# chmod +x /etc/init.d/realserver
[root@oaserver1 ~]# /etc/init.d/realserver start
[root@oaserver1 ~]# vi /etc/rc.local
/etc/init.d/realserver start
[root@oaserver1 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:07:D5:96  
          inet addr:10.22.10.80  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe07:d596/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1390 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1459 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:334419 (326.5 KiB)  TX bytes:537109 (524.5 KiB)
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:2633 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2633 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:539131 (526.4 KiB)  TX bytes:539131 (526.4 KiB)
lo:0      Link encap:Local Loopback  
          inet addr:10.22.10.86  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          
          
5. LVS + keepalived node configuration. Reportedly this can also be set up to send an email notification when a node has a problem.
//Master node (MASTER) configuration file
vi /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.22.10.86
    }
}
virtual_server 10.22.10.86 80 {
    delay_loop 6
    lb_algo sh
    lb_kind DR
    persistence_timeout 0
    protocol TCP
    real_server 10.22.10.80 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 10.22.10.81 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 10.22.10.82 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
virtual_server 10.22.10.86 25 {
    delay_loop 6
    lb_algo sh
    lb_kind DR
    persistence_timeout 0
    protocol TCP
    real_server 10.22.10.80 25 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 25
        }
    }
    real_server 10.22.10.81 25 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 25
        }
    }
    real_server 10.22.10.82 25 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 25
        }
    }
}
virtual_server 10.22.10.86 110 {
    delay_loop 6
    lb_algo sh
    lb_kind DR
    persistence_timeout 0
    protocol TCP
    real_server 10.22.10.80 110 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 110
        }
    }
    real_server 10.22.10.81 110 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 110
        }
    }
    real_server 10.22.10.82 110 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 110
        }
    }
}
virtual_server 10.22.10.86 143 {
    delay_loop 6
    lb_algo sh
    lb_kind DR
    persistence_timeout 0
    protocol TCP
    real_server 10.22.10.80 143 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 143
        }
    }
    real_server 10.22.10.81 143 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 143
        }
    }
    real_server 10.22.10.82 143 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 143
        }
    }
}
//Backup node (BACKUP) configuration file
//Copy the master node's keepalived.conf, then modify the following:
state MASTER -> state BACKUP
priority 100 -> priority 90
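//A minimal sketch of those two changes, assuming the master's file was copied verbatim to the backup node:
sed -i 's/state MASTER/state BACKUP/' /etc/keepalived/keepalived.conf
sed -i 's/priority 100/priority 90/'  /etc/keepalived/keepalived.conf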
//On both keepalived nodes, run the following command to enable IP forwarding:
# echo 1 > /proc/sys/net/ipv4/ip_forward
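//To persist it across reboots (assumes the default CentOS 6 /etc/sysctl.conf entry net.ipv4.ip_forward = 0):
sed -i 's/^net.ipv4.ip_forward = 0/net.ipv4.ip_forward = 1/' /etc/sysctl.conf
sysctl -p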


6. Disable the firewall on both nodes
/etc/init.d/iptables stop
chkconfig iptables off


7. Start keepalived on both nodes in order
//Start keepalived on the master node first, then on the backup
service keepalived start
chkconfig keepalived on


8. Test
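//A hedged failover test sketch (keepalived adds the VIP with iproute2, so check it with ip addr rather than ifconfig):
ip addr show eth0            # on the MASTER: 10.22.10.86 should be listed on eth0
service keepalived stop      # simulate a failure on the MASTER
ip addr show eth0            # on the BACKUP: the VIP should appear within a few seconds
tail /var/log/messages       # both nodes log the VRRP state transitions (see the appendix for examples)
service keepalived start     # on the MASTER: the VIP should move back (higher priority preempts)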



VI. Maintenance Notes

1. During troubleshooting, start-up, or maintenance, be sure to follow this start-up sequence (with an interval of about 2-3 minutes between steps):
(1) 85 NFS server
 (2) 80 business server
 (3) 81 and 82 business servers
 (4) 83 load balancing server
 (5) 84 load balancing server
 Note: if 85 is not started first, the OA servers cannot mount the storage partition; and if the 80 OA server is not started first, the MySQL services on 81 and 82 cannot start. (A sketch that waits for the NFS mount before starting the business services follows these notes.)
2. Normal shutdown or restart sequence:
 (1) 83 and 84 load balancing servers
 (2) 81 and 82 business servers
 (3) 80 business server
 (4) 85 NFS server
 3. If the three business servers hang for a long time during a normal shutdown or reboot, they can temporarily be powered off directly.
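//A hedged sketch for the business servers that enforces point 1: wait until the NFS share is mounted before starting the business services (service names are placeholders):
until mountpoint -q /data; do
  mount -a
  sleep 10
done
for i in service1 service2; do /etc/init.d/$i start; done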





--------------------------------------------------------------------------------
Attachment:

The scheduling algorithms of LVS fall into static and dynamic categories.
1. Static algorithms (4): scheduling is decided purely by the algorithm, without considering the back-end servers' actual connections and load.
(1) RR: Round Robin
 The scheduler distributes external requests to the real servers in the cluster in turn, treating every server equally regardless of its actual number of connections and system load.
(2) WRR: Weighted Round Robin
 The scheduler distributes requests according to the different processing capacities of the real servers, so that servers with higher capacity handle more traffic. The scheduler can automatically query the real servers' load and adjust their weights dynamically.
(3) DH: Destination Hashing
 Uses the request's destination IP address as a hash key to look up the server in a statically allocated hash table. If that server is available and not overloaded, the request is sent to it; otherwise nothing is returned.
(4) SH: Source Hashing
 Uses the request's source IP address as a hash key to look up the server in a statically allocated hash table. If that server is available and not overloaded, the request is sent to it; otherwise nothing is returned.
    
2. Dynamic algorithms (6): the front-end scheduler assigns requests according to the back-end real servers' actual connections.
(1) LC: Least Connections
 The scheduler dynamically directs network requests to the server with the fewest established connections. If the real servers in the cluster have similar performance, this algorithm balances the load well.
(2) WLC: Weighted Least Connections
 When server performance differs significantly, the scheduler uses weighted least connections to optimize load balancing: servers with higher weights bear a larger share of the active connections. The scheduler can automatically query the real servers' load and adjust their weights dynamically.
(3) SED: Shortest Expected Delay
 Based on WLC, with Overhead = (active connections + 1) * 256 / weight; inactive connections are no longer considered, and the server with the smallest overhead receives the next request. For example, with weights 1 and 3 and no active connections, the overheads are 256 and about 85, so the higher-weight server is chosen. Drawback: when one weight is much larger than the others, a low-weight idle server may never receive a connection.
(4) NQ: Never Queue
 No queuing: if a real server's connection count is 0, the request is assigned to it directly without the SED calculation, which guarantees that no host stays idle. Built on SED; inactive connections are not considered. For UDP services such as DNS there is no need to consider inactive connections, whereas HTTP services that hold connections open must take the pressure of inactive connections on the server into account.
(5) LBLC: Locality-Based Least Connections
 A load balancing algorithm keyed on the destination IP address, currently used mainly in cache cluster systems. Based on the request's destination IP, it finds the server that address was most recently assigned to; if that server is available and not overloaded, the request is sent to it. If the server does not exist, or it is overloaded while another server is working at half its load, a server is selected by the "least connections" principle and the request is sent there.
(6) LBLCR: Locality-Based Least Connections with Replication
 Also keyed on the destination IP address and mainly used in cache cluster systems. It differs from LBLC in that it maintains a mapping from a destination IP address to a group of servers, while LBLC maintains a mapping to a single server. It finds the server group for the destination IP and picks one server from the group by "least connections"; if that server is not overloaded, the request is sent to it. If it is overloaded, a server is picked from the whole cluster by "least connections", added to the group, and the request is sent to it. Meanwhile, if the group has not been modified for some time, the busiest server is removed from the group to reduce the degree of replication.

3. Normal keepalived failover logs (/var/log/messages)
 Below, the other server has gone down and this server automatically takes over:
Aug  4 15:15:47 localhost Keepalived_vrrp[1306]: VRRP_Instance(VI_1) Transition to MASTER STATE
Aug  4 15:15:48 localhost Keepalived_vrrp[1306]: VRRP_Instance(VI_1) Entering MASTER STATE
Aug  4 15:15:48 localhost Keepalived_vrrp[1306]: VRRP_Instance(VI_1) setting protocol VIPs.
Aug  4 15:15:48 localhost Keepalived_healthcheckers[1303]: Netlink reflector reports IP 192.168.1.100 added
Aug  4 15:15:48 localhost Keepalived_vrrp[1306]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.100
Aug  4 15:15:53 localhost Keepalived_vrrp[1306]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.100

The following shows resources being released automatically after the other server recovers:
Aug  4 15:17:25 localhost Keepalived_vrrp[1306]: VRRP_Instance(VI_1) Received higher prio advert
Aug  4 15:17:25 localhost Keepalived_vrrp[1306]: VRRP_Instance(VI_1) Entering BACKUP STATE
Aug  4 15:17:25 localhost Keepalived_vrrp[1306]: VRRP_Instance(VI_1) removing protocol VIPs.
Aug  4 15:17:25 localhost Keepalived_healthcheckers[1303]: Netlink reflector reports IP 192.168.1.100 removed
Aug  4 15:17:34 localhost Keepalived_healthcheckers[1303]: TCP connection to [192.168.1.105]:110 success.
Aug  4 15:17:34 localhost Keepalived_healthcheckers[1303]: Adding service [192.168.1.105]:110 to VS [192.168.1.100]:110
Aug  4 15:17:35 localhost Keepalived_healthcheckers[1303]: TCP connection to [192.168.1.105]:143 success.
Aug  4 15:17:35 localhost Keepalived_healthcheckers[1303]: Adding service [192.168.1.105]:143 to VS [192.168.1.100]:143
Aug  4 15:18:04 localhost Keepalived_healthcheckers[1303]: TCP connection to [192.168.1.107]:110 success.
Aug  4 15:18:04 localhost Keepalived_healthcheckers[1303]: Adding service [192.168.1.107]:110 to VS [192.168.1.100]:110
Aug  4 15:18:05 localhost Keepalived_healthcheckers[1303]: TCP connection to [192.168.1.107]:143 success.
Aug  4 15:18:05 localhost Keepalived_healthcheckers[1303]: Adding service [192.168.1.107]:143 to VS [192.168.1.100]:143

