MHA introduction
(1) Introduction
MHA (Master High Availability) is currently a relatively mature solution for MySQL high availability. It was developed by Yoshinori Matsunobu at the Japanese company DeNA (he later joined Facebook) and is an excellent piece of high availability software for failover and slave promotion in MySQL environments. During a MySQL failover, MHA can complete the switchover automatically within 0-30 seconds, and while doing so it preserves data consistency to the greatest extent possible, achieving true high availability.
(2) The software consists of two parts
MHA Manager and MHA Node. MHA Manager can be deployed on a dedicated machine to manage multiple master-slave clusters, or it can run on one of the slave nodes. MHA Node runs on every MySQL server. The MHA Manager periodically probes the master node of the cluster; when the master fails, it automatically promotes the slave with the most recent data to be the new master and repoints all the other slaves to it. The entire failover process is completely transparent to the application.
(3) How it works
1. During automatic failover, MHA tries to save the binary logs from the downed master so as to lose as little data as possible, but this is not always feasible: if, for example, the master's hardware has failed or it cannot be reached over SSH, MHA cannot save the binary logs and fails over anyway, losing the most recent transactions. Combining MHA with the semi-synchronous replication introduced in MySQL 5.5 greatly reduces this risk of data loss: as long as at least one slave has received the latest binary log events, MHA can apply them to all the other slaves, ensuring data consistency across all nodes.
2. Sequence:
① Save the binary log events from the crashed master;
② Identify the slave with the most recent data;
③ Apply the differential relay logs to the other slaves;
④ Apply the binary log events saved from the master;
⑤ Promote one slave to be the new master;
⑥ Point the other slaves at the new master and resume replication.
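Semi-synchronous replication itself is not configured in this walkthrough, but as noted above it greatly reduces the risk of losing the latest transactions. A minimal my.cnf sketch for MySQL 5.5+, assuming the semisync plugins shipped with the server (`semisync_master.so` / `semisync_slave.so`) are available in the plugin directory:

```ini
# On the master (illustrative fragment, not part of the setup below)
plugin-load = "rpl_semi_sync_master=semisync_master.so"
rpl_semi_sync_master_enabled = 1
# fall back to asynchronous replication after 1000 ms without a slave ACK
rpl_semi_sync_master_timeout = 1000

# On each slave
# plugin-load = "rpl_semi_sync_slave=semisync_slave.so"
# rpl_semi_sync_slave_enabled = 1
```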
Experimental environment
| Role | Required packages |
|---|---|
| master (192.168.52.129) | mha4mysql-node |
| slave1 (192.168.52.130) | mha4mysql-node |
| slave2 (192.168.52.131) | mha4mysql-node |
| manager (192.168.52.128) | mha4mysql-manager, mha4mysql-node |
(1) Requirements:
In this case, MHA is required to monitor the MySQL databases and switch over automatically in case of failure, without affecting the business.
(2) Ideas:
Install the MySQL databases; configure MySQL as one master with two slaves; install the MHA software; configure passwordless authentication; configure MySQL and MHA for high availability; simulate a master failure and failover.
(3) Operating system:
CentOS 7.6, MHA 0.57
1. Install the MySQL database on the three master/slave servers (the operation is identical on all three; only one is shown)
```shell
# Install build dependencies
[root@master ~]# yum -y install gcc gcc-c++ ncurses ncurses-devel bison perl-Module-Install cmake
[root@master ~]# mount.cifs //192.168.100.100/tools /mnt/tools/   ## mount the toolkit share
Password for root@//192.168.100.100/tools:
[root@master ~]# cd /mnt/tools/MHA/
[root@master MHA]# tar xf cmake-2.8.6.tar.gz -C /opt/             ## unpack cmake
[root@master MHA]# cd /opt/cmake-2.8.6/
[root@master cmake-2.8.6]# ./configure                            ## configure
[root@master cmake-2.8.6]# gmake && gmake install                 ## compile and install

# Install the MySQL database
[root@master cmake-2.8.6]# cd /mnt/tools/MHA/
[root@master MHA]# tar xf mysql-5.6.36.tar.gz -C /opt/            ## unpack MySQL

# Compile MySQL
[root@master MHA]# cd /opt/mysql-5.6.36/
[root@master mysql-5.6.36]# cmake -DCMAKE_INSTALL_PREFIX=/usr/local/mysql \
-DDEFAULT_CHARSET=utf8 \               ## default character set
-DDEFAULT_COLLATION=utf8_general_ci \  ## default collation
-DWITH_EXTRA_CHARSETS=all \            ## include all extra character sets
-DSYSCONFDIR=/etc                      ## configuration file directory

# Install
[root@master mysql-5.6.36]# make && make install                  ## compile and install

# Set environment variables
[root@master mysql-5.6.36]# cp support-files/my-default.cnf /etc/my.cnf            ## copy the configuration file
[root@master mysql-5.6.36]# cp support-files/mysql.server /etc/rc.d/init.d/mysqld  ## copy the init script
[root@master mysql-5.6.36]# chmod +x /etc/rc.d/init.d/mysqld                       ## make it executable
[root@master mysql-5.6.36]# chkconfig --add mysqld                                 ## register the service
[root@master mysql-5.6.36]# echo "PATH=$PATH:/usr/local/mysql/bin" >> /etc/profile ## extend PATH
[root@master mysql-5.6.36]# source /etc/profile                                    ## reload environment variables

# Create the mysql user and group and grant ownership
[root@master mysql-5.6.36]# groupadd mysql                                 ## create the group
[root@master mysql-5.6.36]# useradd -M -s /sbin/nologin mysql -g mysql     ## create the system user
[root@master mysql-5.6.36]# chown -R mysql.mysql /usr/local/mysql          ## change the owner
[root@master mysql-5.6.36]# mkdir -p /data/mysql                           ## create the data directory

# Initialize the database
[root@master mysql-5.6.36]# /usr/local/mysql/scripts/mysql_install_db \
--basedir=/usr/local/mysql \       ## installation directory
--datadir=/usr/local/mysql/data \  ## data directory
--user=mysql                       ## run as user mysql
```
2. Modify MySQL's main configuration file /etc/my.cnf. Note that the server-id values of the three servers must all be different
```shell
## Configure the master:
[root@master mysql-5.6.36]# vim /etc/my.cnf
[mysqld]
server-id = 1
# enable the binary log
log_bin = master-bin
# let the slaves log replicated updates
log-slave-updates = true

## Configure slave1:
[root@slave1 mysql-5.6.36]# vim /etc/my.cnf
[mysqld]
server-id = 2
# enable the binary log
log_bin = master-bin
# use relay logs for synchronization
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index

## Configure slave2:
[root@slave2 mysql-5.6.36]# vim /etc/my.cnf
[mysqld]
server-id = 3
log_bin = master-bin
relay-log = relay-log-bin
relay-log-index = slave-relay-bin.index
```
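Each node must end up with a distinct server-id, or replication will silently misbehave. A quick sanity-check sketch (runnable anywhere): it collects the server-id from each node's my.cnf and confirms they are all distinct. The three files below are throwaway stand-ins for the real /etc/my.cnf on master, slave1, and slave2.

```shell
# Create stand-in config files (on the real hosts, point grep at /etc/my.cnf instead)
tmpdir=$(mktemp -d)
printf '[mysqld]\nserver-id = 1\n' > "$tmpdir/master.cnf"
printf '[mysqld]\nserver-id = 2\n' > "$tmpdir/slave1.cnf"
printf '[mysqld]\nserver-id = 3\n' > "$tmpdir/slave2.cnf"

# Extract the server-id values and compare total vs. unique counts
ids=$(grep -h '^server-id' "$tmpdir"/*.cnf | awk -F'=' '{gsub(/ /, "", $2); print $2}')
total=$(printf '%s\n' "$ids" | wc -l)
unique=$(printf '%s\n' "$ids" | sort -u | wc -l)

if [ "$unique" -eq "$total" ]; then
  echo "server-id values are unique"
else
  echo "DUPLICATE server-id detected" >&2
fi
rm -rf "$tmpdir"
```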
3. Start the MySQL service on all three servers
```shell
# Create these two soft links on all three servers
[root@master mysql-5.6.36]# ln -s /usr/local/mysql/bin/mysql /usr/sbin/
[root@master mysql-5.6.36]# ln -s /usr/local/mysql/bin/mysqlbinlog /usr/sbin/

# Start MySQL
[root@master mysql-5.6.36]# /usr/local/mysql/bin/mysqld_safe --user=mysql &

# Disable the firewall and SELinux enforcement
[root@master mysql-5.6.36]# systemctl stop firewalld.service
[root@master mysql-5.6.36]# setenforce 0
```
4. Configure MySQL master-slave replication (one master, two slaves): authorize the two users on all database nodes
```shell
[root@master mysql-5.6.36]# mysql -u root -p   ## enter the database
mysql> grant replication slave on *.* to 'myslave'@'192.168.52.%' identified by '123';  ## myslave is the replication user for the slaves
mysql> grant all privileges on *.* to 'mha'@'192.168.52.%' identified by 'manager';     ## mha is the monitoring user for the manager
mysql> flush privileges;   ## reload the privilege tables

# Add the following grants by hostname as well (not required in theory,
# but MHA checks the nodes by hostname):
mysql> grant all privileges on *.* to 'mha'@'master' identified by 'manager';
mysql> grant all privileges on *.* to 'mha'@'slave1' identified by 'manager';
mysql> grant all privileges on *.* to 'mha'@'slave2' identified by 'manager';
```
5. View the binary log file and synchronization position on the master server
```shell
mysql> show master status;
+-------------------+----------+--------------+------------------+-------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-------------------+----------+--------------+------------------+-------------------+
| master-bin.000001 |     1213 |              |                  |                   |
+-------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
```
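The File and Position values above are exactly what the slaves need in the next step. A small shell sketch of pulling them out and assembling the CHANGE MASTER statement; it operates on a pasted sample of the output rather than a live server (on a live system something like `mysql -e 'show master status' | awk 'NR==2'` would produce the data row):

```shell
# Data row pasted from the `show master status` output above
sample='master-bin.000001 1213'

log_file=$(echo "$sample" | awk '{print $1}')   # binary log file name
log_pos=$(echo "$sample" | awk '{print $2}')    # synchronization position

# Build the statement to run on each slave
stmt="change master to master_host='192.168.52.129',master_user='myslave',master_password='123',master_log_file='$log_file',master_log_pos=$log_pos;"
echo "$stmt"
```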
6. Set up synchronization on two slave servers
```shell
# Run the following on both slave servers to replicate from the master's log
mysql> change master to master_host='192.168.52.129',master_user='myslave',master_password='123',master_log_file='master-bin.000001',master_log_pos=1213;
mysql> start slave;          ## start replication
mysql> show slave status\G   ## check the slave threads
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
mysql> set global read_only=1;
mysql> flush privileges;     ## reload the privilege tables
```
7. Install the MHA dependency environment on all servers; install the epel repository first
```shell
[root@master mysql-5.6.36]# yum install epel-release --nogpgcheck -y   ## install the epel repository
[root@master mysql-5.6.36]# yum install -y perl-DBD-MySQL \   ## MySQL driver
perl-Config-Tiny \            ## configuration files
perl-Log-Dispatch \           ## logging
perl-Parallel-ForkManager \   ## multi-process management
perl-ExtUtils-CBuilder \      ## build tools
perl-ExtUtils-MakeMaker \
perl-CPAN                     ## Perl library
```
8. Install the node component on all servers
```shell
# Unpack and install the node component
[root@manager ~]# cd ~
[root@manager ~]# tar zxvf /mnt/mha4mysql-node-0.57.tar.gz
[root@manager ~]# cd mha4mysql-node-0.57/
[root@manager mha4mysql-node-0.57]# perl Makefile.PL      ## generate the Makefile with perl
[root@manager mha4mysql-node-0.57]# make && make install  ## compile and install
```
9. Install the manager component on the manager server (note: the node component must be installed before the manager component)
```shell
# Disable the firewall
[root@manager ~]# systemctl stop firewalld.service
[root@manager ~]# setenforce 0

# Unpack and install the manager component
[root@manager ~]# cd ~
[root@manager ~]# tar zxvf /mnt/mha4mysql-manager-0.57.tar.gz
[root@manager ~]# cd mha4mysql-manager-0.57/
[root@manager mha4mysql-manager-0.57]# perl Makefile.PL      ## generate the Makefile with perl
[root@manager mha4mysql-manager-0.57]# make && make install  ## compile and install
```
After installation, the manager generates several tools under /usr/local/bin:
- masterha_check_repl: check the MySQL replication status
- masterha_master_monitor: check whether the master is down
- masterha_check_ssh: check MHA's SSH configuration
- masterha_master_switch: control failover
- masterha_check_status: check the current MHA running state
- masterha_conf_host: add or remove configured server information
- masterha_stop: stop the manager
- masterha_manager: the script that starts the manager

After the node component is installed, the following scripts are generated under /usr/local/bin (normally invoked by the MHA Manager; no manual operation is required):
- apply_diff_relay_logs: identify differential relay log events and apply them to the other slaves
- save_binary_logs: save and copy the master's binary logs
- filter_mysqlbinlog: remove unnecessary ROLLBACK events (MHA no longer uses this tool)
- purge_relay_logs: clear relay logs (without blocking the SQL thread)
10. Configure passwordless authentication
```shell
## On the manager, configure passwordless authentication to all database nodes
[root@manager ~]# ssh-keygen -t rsa                        ## generate the key pair
Enter file in which to save the key (/root/.ssh/id_rsa):   ## press Enter
Enter passphrase (empty for no passphrase):                ## press Enter
Enter same passphrase again:                               ## press Enter
[root@manager ~]# ssh-copy-id 192.168.52.129               ## upload the key to the other servers
Are you sure you want to continue connecting (yes/no)? yes
root@192.168.52.129's password:                            ## enter the password of the .129 server
[root@manager ~]# ssh-copy-id 192.168.52.130
[root@manager ~]# ssh-copy-id 192.168.52.131

## On the master, configure passwordless authentication to slave1 and slave2
[root@master ~]# ssh-keygen -t rsa
[root@master ~]# ssh-copy-id 192.168.52.130
[root@master ~]# ssh-copy-id 192.168.52.131

## On slave1, configure passwordless authentication to master and slave2
[root@slave1 ~]# ssh-keygen -t rsa
[root@slave1 ~]# ssh-copy-id 192.168.52.129
[root@slave1 ~]# ssh-copy-id 192.168.52.131

## On slave2, configure passwordless authentication to master and slave1
[root@slave2 ~]# ssh-keygen -t rsa
[root@slave2 ~]# ssh-copy-id 192.168.52.129
[root@slave2 ~]# ssh-copy-id 192.168.52.130
```
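Distributing keys to every peer means running ssh-copy-id repeatedly on each host. A tiny loop sketch that assembles the commands for one host; with DRY_RUN=1 it only prints them (useful for review), while DRY_RUN=0 would execute them on a real host:

```shell
DRY_RUN=1   # set to 0 on a real host to actually run the commands
NODES="192.168.52.129 192.168.52.130 192.168.52.131"

# Assemble one ssh-copy-id invocation per node
cmds=""
for node in $NODES; do
  cmds="$cmds ssh-copy-id $node;"
done

if [ "$DRY_RUN" = "1" ]; then
  echo "would run:$cmds"
else
  eval "$cmds"
fi
```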
11. Configure MHA: on the manager node, copy the related scripts to the /usr/local/bin directory and configure them
```shell
[root@manager ~]# cp -ra /root/mha4mysql-manager-0.57/samples/scripts/ /usr/local/bin/   ## copy the scripts to /usr/local/bin
[root@manager ~]# ls mha4mysql-manager-0.57/samples/scripts/
```
This brings over four executable scripts:
- master_ip_failover: manages the VIP during automatic failover
- master_ip_online_change: manages the VIP during an online (manual) switch
- power_manager: shuts down the host after a failure
- send_report: sends an alert after failover

```shell
## Copy the VIP-management script used during automatic failover into /usr/local/bin/
[root@manager ~]# cp /usr/local/bin/scripts/master_ip_failover /usr/local/bin/
[root@manager ~]# vim /usr/local/bin/master_ip_failover   ## delete the original content and rewrite the script
```

```perl
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command,          $ssh_user,        $orig_master_host,
    $orig_master_ip,   $orig_master_port,
    $new_master_host,  $new_master_ip,   $new_master_port
);

my $vip = '192.168.52.100';
my $brdc = '192.168.52.255';
my $ifdev = 'ens33';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig ens33:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig ens33:$key down";
my $exit_code = 0;
#my $ssh_start_vip = "/usr/sbin/ip addr add $vip/24 brd $brdc dev $ifdev label $ifdev:$key;/usr/sbin/arping -q -A -c 1 -I $ifdev $vip;iptables -F;";
#my $ssh_stop_vip = "/usr/sbin/ip addr del $vip/24 dev $ifdev label $ifdev:$key";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
```
12. Create the MHA software directory on the manager node and copy the configuration file
```shell
[root@manager ~]# mkdir /etc/masterha
[root@manager ~]# cp /root/mha4mysql-manager-0.57/samples/conf/app1.cnf /etc/masterha/

# Edit the configuration file
[root@manager ~]# vim /etc/masterha/app1.cnf
[server default]
# manager log file
manager_log=/var/log/masterha/app1/manager.log
# manager working directory
manager_workdir=/var/log/masterha/app1
# location where the master stores the binlog; this path must match the binlog path configured on the master
master_binlog_dir=/usr/local/mysql/data
# script used to switch the VIP during automatic failover (the script above)
master_ip_failover_script=/usr/local/bin/master_ip_failover
# script used to switch the VIP during a manual (online) switch
master_ip_online_change_script=/usr/local/bin/master_ip_online_change
# password of the monitoring user created earlier
password=manager
remote_workdir=/tmp
# password of the replication user
repl_password=123
# replication user
repl_user=myslave
# secondary check script, used to double-check the master through the slaves
secondary_check_script=/usr/local/bin/masterha_secondary_check -s 192.168.52.130 -s 192.168.52.131
# script to power off the failed host (left empty here)
shutdown_script=""
# SSH login user
ssh_user=root
# monitoring user
user=mha

[server1]
hostname=192.168.52.129
port=3306

[server2]
# candidate master: with this set, this slave will be promoted to master after a master-slave switch
candidate_master=1
hostname=192.168.52.130
check_repl_delay=0
port=3306

[server3]
hostname=192.168.52.131
port=3306
```
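Before running the checks in the next step, it can be handy to verify that app1.cnf actually defines every key the manager needs. A small sketch: it writes a throwaway copy of part of the [server default] section shown above rather than reading the real /etc/masterha/app1.cnf.

```shell
# Stand-in config file (on the manager, point the check at /etc/masterha/app1.cnf)
cnf=$(mktemp)
cat > "$cnf" <<'EOF'
[server default]
manager_log=/var/log/masterha/app1/manager.log
manager_workdir=/var/log/masterha/app1
master_binlog_dir=/usr/local/mysql/data
master_ip_failover_script=/usr/local/bin/master_ip_failover
password=manager
repl_password=123
repl_user=myslave
ssh_user=root
user=mha
EOF

# Verify each required key appears at the start of a line
missing=0
for k in manager_log manager_workdir master_binlog_dir password repl_password repl_user ssh_user user; do
  grep -q "^$k=" "$cnf" || { echo "missing key: $k"; missing=1; }
done
if [ "$missing" -eq 0 ]; then
  echo "all required keys present"
fi
rm -f "$cnf"
```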
13. Test passwordless SSH authentication (if everything is normal, the output ends with "successfully"), then check the replication health status
```shell
[root@manager ~]# masterha_check_ssh -conf=/etc/masterha/app1.cnf
........   ## output omitted
[root@manager ~]# masterha_check_repl -conf=/etc/masterha/app1.cnf
```
14. Configure virtual ip on the master
```shell
[root@master mha4mysql-node-0.57]# /sbin/ifconfig ens33:1 192.168.52.100/24
```
15. Start mha on the manager server
```shell
[root@manager scripts]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &

## Check the MHA status; the current master is the master node (192.168.52.129)
[root@manager scripts]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:43036) is running(0:PING_OK), master:192.168.52.129
```
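The status line is easy to parse in scripts, for example in a monitoring job that records which node currently holds the master role. A sketch operating on the sample line above rather than invoking masterha_check_status itself:

```shell
# Sample status line pasted from the masterha_check_status output above
status='app1 (pid:43036) is running(0:PING_OK), master:192.168.52.129'

# Extract the health state and the current master address
state=$(echo "$status" | sed -n 's/.*is running(\([^)]*\)).*/\1/p')
current_master=$(echo "$status" | sed -n 's/.*master:\([0-9.]*\).*/\1/p')

echo "state=$state current_master=$current_master"
```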
16. Fault simulation
```shell
[root@manager scripts]# tailf /var/log/masterha/app1/manager.log   ## watch the log during failover

## Kill the master's MySQL server
[root@master mha4mysql-node-0.57]# pkill -9 mysql

# Watch the slaves: the VIP switches to one of them
[root@slave1 mha4mysql-node-0.57]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.52.130  netmask 255.255.255.0  broadcast 192.168.52.255
ens33:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.52.100  netmask 255.255.255.0  broadcast 192.168.52.255
        ether 00:0c:29:af:94:06  txqueuelen 1000  (Ethernet)

# With a mysql client installed on the manager, clients can also reach the
# database through the virtual IP:
## Grant privileges on the database server that now holds the VIP
mysql> grant all on *.* to 'root'@'%' identified by 'abc123';
Query OK, 0 rows affected (0.00 sec)

## Log in through the virtual IP from the client
[root@manager ~]# mysql -uroot -h 192.168.52.100 -p   ## specify the virtual IP
Enter password:    ## enter the password
MySQL [(none)]>
```
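After the failover, confirming that the VIP actually landed on the new master can be scripted. A sketch against a pasted sample of the ifconfig output above; on a real node you would substitute the output of `ifconfig ens33:1` (or `ip addr show ens33`):

```shell
vip='192.168.52.100'
# Sample interface output captured after failover (from the step above)
sample='ens33:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.52.100 netmask 255.255.255.0 broadcast 192.168.52.255'

# The trailing space after $vip avoids matching a longer address by prefix
if echo "$sample" | grep -q "inet $vip "; then
  echo "VIP $vip is up on this node"
else
  echo "VIP $vip not found on this node"
fi
```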