ProxySQL cluster configuration

Keywords: Database MySQL SQL Oracle

ProxySQL cluster

Experimental environment information
MySQL master-slave information

IP            Role
188.188.0.68  Node1
188.188.0.69  Node2
188.188.0.70  Node3

ProxySQL service node

IP            Role
188.188.0.68  Node 68
188.188.0.71  Node 71

ProxySQL version: 2.0.3-29-g00f26d5

1, ProxySQL service installation

Install the same version of ProxySQL on every server, then start the service on each node.
For the installation procedure, refer to the official website or to <01 proxysql installation.txt>.
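
As a quick sketch only (the release version, package name, and platform below are assumptions; adjust them to your OS and the version you actually deploy), an RPM-based install might look like:

```shell
# Hypothetical example: install ProxySQL 2.0.3 from the GitHub release RPM
# on CentOS 7; adjust the version and platform to your environment.
wget https://github.com/sysown/proxysql/releases/download/v2.0.3/proxysql-2.0.3-1-centos7.x86_64.rpm
yum localinstall -y proxysql-2.0.3-1-centos7.x86_64.rpm
systemctl enable proxysql
systemctl start proxysql
```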

2, Set node information in the cluster

!!! Perform the following operations on every node !!!

1) Log in to the ProxySQL admin interface
Log in to each node with the same admin username and password:

mysql -u admin -padmin -h 127.0.0.1 -P 6032 --prompt 'Admin> '

2) Set server information

Admin> SELECT * FROM proxysql_servers;
Empty set (0.00 sec)

Admin> INSERT INTO proxysql_servers(hostname,port,weight,comment) VALUES ('188.188.0.68',6032,1,'Node 68'),('188.188.0.71',6032,1,'Node 71');
Query OK, 2 rows affected (0.00 sec)

Admin> SELECT * FROM proxysql_servers;
+--------------+------+--------+----------+
| hostname     | port | weight | comment  |
+--------------+------+--------+----------+
| 188.188.0.68 | 6032 | 1      | Node 68  |
| 188.188.0.71 | 6032 | 1      | Node 71  |
+--------------+------+--------+----------+
2 rows in set (0.00 sec)

3) Make the configuration take effect

Admin> LOAD PROXYSQL SERVERS TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)

Admin> SELECT * FROM runtime_proxysql_servers;
+--------------+------+--------+----------+
| hostname     | port | weight | comment  |
+--------------+------+--------+----------+
| 188.188.0.71 | 6032 | 1      | Node 71  |
| 188.188.0.68 | 6032 | 1      | Node 68  |
+--------------+------+--------+----------+
2 rows in set (0.00 sec)

3, Set data synchronization users between cluster nodes

!!! Perform the following operations on every node !!!

1) View original values

Admin> SHOW VARIABLES LIKE 'admin-cluster_username';
+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| admin-cluster_username |       |
+------------------------+-------+
1 row in set (0.00 sec)

Admin> SHOW VARIABLES LIKE 'admin-cluster_password';
+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| admin-cluster_password |       |
+------------------------+-------+
1 row in set (0.00 sec)

Admin> SHOW VARIABLES LIKE 'admin-admin_credentials';
+-------------------------+-------------+
| Variable_name           | Value       |
+-------------------------+-------------+
| admin-admin_credentials | admin:admin |
+-------------------------+-------------+
1 row in set (0.00 sec)

Note: the username/password used for synchronization must also appear in admin-admin_credentials; otherwise connections between the nodes will fail.

2) Set the values

Admin> SET admin-cluster_username = 'cluser';
Query OK, 1 row affected (0.00 sec)

Admin> SET admin-cluster_password = '654987';
Query OK, 1 row affected (0.00 sec)

Admin> UPDATE global_variables SET variable_value='admin:admin;cluser:654987' WHERE variable_name='admin-admin_credentials';
Query OK, 1 row affected (0.01 sec)

Note: separate multiple user entries with semicolons (;). The cluster account does not need to be created in advance; it can be used directly.

3) View results

Admin> SHOW VARIABLES LIKE 'admin-cluster_username';
+------------------------+--------+
| Variable_name          | Value  |
+------------------------+--------+
| admin-cluster_username | cluser |
+------------------------+--------+
1 row in set (0.00 sec)

Admin> SHOW VARIABLES LIKE 'admin-cluster_password';
+------------------------+--------+
| Variable_name          | Value  |
+------------------------+--------+
| admin-cluster_password | 654987 |
+------------------------+--------+
1 row in set (0.00 sec)

Admin> SHOW VARIABLES LIKE 'admin-admin_credentials';
+-------------------------+---------------------------+
| Variable_name           | Value                     |
+-------------------------+---------------------------+
| admin-admin_credentials | admin:admin;cluser:654987 |
+-------------------------+---------------------------+
1 row in set (0.00 sec)

4) Make the configuration take effect

Admin> LOAD ADMIN VARIABLES TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)
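
Note that LOAD ... TO RUNTIME only changes the running configuration. To keep the cluster settings across a ProxySQL restart, also persist them to disk:

```sql
Admin> SAVE PROXYSQL SERVERS TO DISK;
Admin> SAVE ADMIN VARIABLES TO DISK;
```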

4, View the cluster status on each node

!!! Perform the following operations on every node !!!

1) View ProxySQL cluster node checksums

Admin> SELECT * FROM stats.stats_proxysql_servers_checksums ;
+--------------+------+-------------------+---------+------------+--------------------+------------+------------+------------+
| hostname     | port | name              | version | epoch      | checksum           | changed_at | updated_at | diff_check |
+--------------+------+-------------------+---------+------------+--------------------+------------+------------+------------+
| 188.188.0.71 | 6032 | admin_variables   | 0       | 0          |                    | 0          | 1558360976 | 0          |
| 188.188.0.71 | 6032 | mysql_query_rules | 9       | 1558251408 | 0x0000000000000000 | 1558360877 | 1558360976 | 0          |
| 188.188.0.71 | 6032 | mysql_servers     | 671     | 1558360886 | 0xA98C8BA3A59F15B6 | 1558360877 | 1558360976 | 100        |
| 188.188.0.71 | 6032 | mysql_users       | 8       | 1558251284 | 0xFE7A30305853A000 | 1558360877 | 1558360976 | 0          |
| 188.188.0.71 | 6032 | mysql_variables   | 0       | 0          |                    | 0          | 1558360976 | 0          |
| 188.188.0.71 | 6032 | proxysql_servers  | 2       | 1558359750 | 0x93A30F7C7F6EE496 | 1558360877 | 1558360976 | 0          |
| 188.188.0.68 | 6032 | admin_variables   | 0       | 0          |                    | 0          | 1558360975 | 0          |
| 188.188.0.68 | 6032 | mysql_query_rules | 1       | 1558357984 | 0x0000000000000000 | 1558359728 | 1558360975 | 0          |
| 188.188.0.68 | 6032 | mysql_servers     | 3       | 1558360886 | 0x624AF2D0247C16D3 | 1558360880 | 1558360975 | 0          |
| 188.188.0.68 | 6032 | mysql_users       | 2       | 1558360880 | 0xFE7A30305853A000 | 1558360880 | 1558360975 | 0          |
| 188.188.0.68 | 6032 | mysql_variables   | 0       | 0          |                    | 0          | 1558360975 | 0          |
| 188.188.0.68 | 6032 | proxysql_servers  | 2       | 1558359728 | 0x93A30F7C7F6EE496 | 1558359728 | 1558360975 | 0          |
+--------------+------+-------------------+---------+------------+--------------------+------------+------------+------------+
12 rows in set (0.00 sec)

Note:
A) If the records in the table are all 0, synchronization most likely failed; check the error log to determine the specific problem.
B) The admin variables and mysql variables configurations are not synchronized; they must be set manually on each node.
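
For context: diff_check counts consecutive checks in which a peer's checksum differed from the local one. Once it reaches the cluster_*_diffs_before_sync threshold (default 3), the node is expected to sync, so a large and still-growing value (such as 100 for mysql_servers in the output above) suggests that sync is failing. As a sketch, a quick filter for such modules:

```sql
Admin> SELECT hostname, name, version, diff_check
FROM stats.stats_proxysql_servers_checksums
WHERE diff_check > 3;
```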

2) View the global status metrics synchronized between ProxySQL cluster nodes

Admin> SELECT * FROM stats.stats_proxysql_servers_metrics ;
+--------------+------+--------+----------+------------------+----------+---------------+---------+------------------------------+----------------------------+
| hostname     | port | weight | comment  | response_time_ms | Uptime_s | last_check_ms | Queries | Client_Connections_connected | Client_Connections_created |
+--------------+------+--------+----------+------------------+----------+---------------+---------+------------------------------+----------------------------+
| 188.188.0.71 | 6032 | 1      | Node 71  | 3                | 641614   | 4021          | 172     | 0                            | 87                         |
| 188.188.0.68 | 6032 | 1      | Node 68  | 3                | 3023     | 2442          | 0       | 0                            | 0                          |
+--------------+------+--------+----------+------------------+----------+---------------+---------+------------------------------+----------------------------+
2 rows in set (0.00 sec)

At this point the basic cluster setup is complete. The cluster parameters can be tuned further on this basis, and the back-end MySQL servers can now be used.

  • >>>>Stress test<<<<

    1, The back end is in MGR multi-primary mode

    Multi-primary stress tests can be run against MGR, but note the following:
    ProxySQL monitors the replication state by querying the sys.gr_member_routing_candidate_status view on the back-end MySQL. This view aggregates the query results of several system tables, so reading it while MySQL is busy can make the monitor's query time out. The monitor then misjudges the host state and modifies runtime_mysql_servers, which disconnects the threads connected to the back-end MGR. It is therefore better to replace this view with a table of the same name that holds the server status: the monitor then never times out while reading it, and the back-end connection threads are not killed.
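
One way to apply that replacement, sketched under the assumption that the view follows the widely used addition_to_sys.sql definition (verify the column list against your own view before dropping it):

```sql
-- Sketch only: swap the view for a static table of the same name.
-- Column names are assumed from the common addition_to_sys.sql
-- definition; check them against your actual view first.
USE sys;
DROP VIEW IF EXISTS gr_member_routing_candidate_status;
CREATE TABLE gr_member_routing_candidate_status (
  viable_candidate     VARCHAR(3) NOT NULL DEFAULT 'YES',
  read_only            VARCHAR(3) NOT NULL DEFAULT 'NO',
  transactions_behind  INT        NOT NULL DEFAULT 0,
  transactions_to_cert INT        NOT NULL DEFAULT 0
);
INSERT INTO gr_member_routing_candidate_status VALUES ('YES', 'NO', 0, 0);
```

Keep the inserted values in line with each node's real role; on a read-only secondary, for example, read_only would be 'YES'.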

For ProxySQL with an MGR multi-primary back end, the following approach can be used:
Because of MGR's limitations in supporting large transactions and detecting transaction conflicts, MGR cannot be stress-tested directly in parallel. Oracle's official multi-primary tests run a different test database on each node; this works around the tools' inability to run in parallel and also reduces the chance of conflicts. Remember: in multi-primary mode, hot-spot data conflicts must be avoided.

For example, with three MGR nodes A, B, and C, create at least three databases for the stress test. Each tpcc instance then tests one database with a single thread, and several tpcc instances run at the same time.
You can test it like this:
Node A:

./tpcc_start -h188.188.0.68 -P3306 -uroot -p '123456' -d tpcc1 -w 5 -c 1 -r 10 -l 30 -i 10 -f tpcc_mysql.log -t tpcc_mysql.rtx
.....

Node B:

./tpcc_start -h188.188.0.69 -P3306 -uroot -p '123456' -d tpcc2 -w 5 -c 1 -r 10 -l 30 -i 10 -f tpcc_mysql.log -t tpcc_mysql.rtx
.....

Node C:

./tpcc_start -h188.188.0.70 -P3306 -uroot -p '123456' -d tpcc3 -w 5 -c 1 -r 10 -l 30 -i 10 -f tpcc_mysql.log -t tpcc_mysql.rtx
.....

However, a multi-primary stress test will not produce good results: the hot spots are too concentrated, which causes many commit failures and may degrade performance.

If sysbench is used for the stress test, the same idea applies: multiple databases, a single thread per instance, and multiple test clients.
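
Following that idea, a sysbench run against one node and one database might look like this (the database name, table counts, and duration are placeholders; adjust them to your setup):

```shell
# Hypothetical sysbench 1.0 invocation: one database per node,
# a single thread per client, several clients started in parallel.
sysbench oltp_read_write \
  --mysql-host=188.188.0.68 --mysql-port=3306 \
  --mysql-user=root --mysql-password=123456 \
  --mysql-db=sbtest1 --tables=5 --table-size=100000 \
  --threads=1 --time=300 run
```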

2, The back end is in MGR single-primary mode

You can stress-test ProxySQL directly with multiple threads, just as you would an ordinary MySQL server. Note: for the sys.gr_member_routing_candidate_status view on the back-end MySQL, refer to the multi-primary discussion above.

Complete!

Posted by rbarnett on Wed, 29 Apr 2020 03:26:43 -0700