Running XtraDB Cluster in Multi-Host Docker Network

Keywords: Linux Docker network curl MySQL

Translator's Preface

Percona Server, maintained by Percona, is a fork of MySQL. It uses the XtraDB storage engine, which offers better performance than InnoDB. XtraDB Cluster is Percona's clustering solution built on top of it; you can search online for the details of the scheme. The recent release of XtraDB Cluster 5.7 keeps pace with the MySQL main line and is attracting more MySQL enthusiasts to Percona.

Main Text

Next, I'll explain how to run Percona XtraDB Cluster in a multi-host Docker network.

Along with the Percona XtraDB Cluster 5.7 beta, we also decided to provide release versions: Docker image files for Percona XtraDB Cluster 5.6 and Percona XtraDB Cluster 5.7.

Starting a single Percona XtraDB Cluster node is easy, and basically the same as with the Percona Server image. The only difference is that you need to specify the CLUSTER_NAME environment variable. The command to start the container is as follows:

docker run -d -p 3306:3306 \
    -e MYSQL_ROOT_PASSWORD=Theistareyk \
    -e CLUSTER_NAME=Theistareykjarbunga \
    -e XTRABACKUP_PASSWORD=Theistare \
    percona/percona-xtradb-cluster

You will notice the optional parameter XTRABACKUP_PASSWORD: it is the password of the xtrabackup@localhost user, which is used to perform xtrabackup-based SST (State Snapshot Transfer) synchronization.

Running a single node of Percona XtraDB Cluster only needs the CLUSTER_NAME parameter required for cluster startup, and a single node on its own is not very interesting. In the image we provide, the following tasks need to be addressed:

1. Running in multi-host environments (Docker Swarm and Kubernetes are the common multi-host setups)

2. Starting as many nodes in the cluster as we need

3. Registering all nodes with a discovery service, so that all clients can know how many nodes there are and how they are running

4. Integrating ProxySQL

Let's look at it one by one.

With the improvement of Docker networking, Percona XtraDB Cluster can be deployed in a multi-host environment. The latest version of Docker brings an overlay network driver, which we will use to build a virtual network. Installing and launching Docker's overlay network is beyond the scope of this post; the original article links to very good introductory material for those who want to learn how this virtual network works.

Once your overlay network driver is installed, we can create a virtual network on top of it:

docker network create -d overlay cluster1_net

Then we can start the container like this:

docker run -d -p 3306 --net=cluster1_net \
 -e MYSQL_ROOT_PASSWORD=Theistareyk \
 -e CLUSTER_NAME=cluster1 \
 ... \
 -e XTRABACKUP_PASSWORD=Theistare \
 percona/percona-xtradb-cluster

The cool part is that you can start these nodes on any server, and they will discover each other automatically as long as they use the same CLUSTER_NAME on the same network.

If you are on a single Docker host, for testing for example, you can create a bridge network instead and use it the same way.

The script above basically works. The problem is that each new node needs to know the address of the already-running cluster.

To give the instance this address, we can use the variable CLUSTER_JOIN, whose value is the IP address of a running node (or empty if this is a new cluster).

In this case, the script should look like this:

docker run -d -p 3306 --net=cluster1_net \
 -e MYSQL_ROOT_PASSWORD=Theistareyk \
 -e CLUSTER_NAME=cluster1 \
 -e CLUSTER_JOIN=10.0.5.5 \
 -e XTRABACKUP_PASSWORD=Theistare \
 percona/percona-xtradb-cluster
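
To make the bootstrap-vs-join distinction concrete, here is a small sketch. The helper name cluster_join_opt is mine, not part of the image: it builds the -e CLUSTER_JOIN=... option for docker run, empty when bootstrapping a new cluster and set to a running node's IP otherwise.

```shell
#!/bin/sh
# Hypothetical helper (not part of the image): build the CLUSTER_JOIN
# option for "docker run". Pass the IP of any running node, or nothing
# when bootstrapping a brand-new cluster.
cluster_join_opt() {
    if [ -z "$1" ]; then
        echo "-e CLUSTER_JOIN="      # first node: bootstrap a new cluster
    else
        echo "-e CLUSTER_JOIN=$1"    # later nodes: join via a running node
    fi
}

cluster_join_opt             # -e CLUSTER_JOIN=
cluster_join_opt 10.0.5.5    # -e CLUSTER_JOIN=10.0.5.5
```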

Manually tracking IP addresses seems like extra work to me, especially when nodes start and stop in a dynamic environment. So we decided to use a discovery service. Currently we use the etcd discovery service, but there is no problem using other discovery services, such as Consul.

For example, when you run the discovery service on the host 10.20.2.4:2379, you can start a node as follows:

docker run -d -p 3306 --net=cluster1_net \
 -e MYSQL_ROOT_PASSWORD=Theistareyk \
 -e CLUSTER_NAME=cluster1 \
 -e DISCOVERY_SERVICE=10.20.2.4:2379 \
 -e XTRABACKUP_PASSWORD=Theistare \
 percona/percona-xtradb-cluster

This node will register itself in the discovery service and join the cluster named $CLUSTER_NAME.
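
Judging from the key layout in the etcd listing below, the registration presumably boils down to PUT requests against etcd's v2 keys API. The following is my reconstruction, not the image's actual code; the addresses are examples:

```shell
#!/bin/sh
# Reconstruction of the registration (not the image's actual code); the
# key path matches the etcd listing below, the addresses are examples.
DISCOVERY_SERVICE=10.20.2.4:2379
CLUSTER_NAME=cluster1
NODE_IP=10.0.5.2

KEY_URL="http://$DISCOVERY_SERVICE/v2/keys/pxc-cluster/$CLUSTER_NAME/$NODE_IP/ipaddr"
echo "$KEY_URL"
# Against a live etcd, the node would register roughly like this:
#   curl -s -X PUT "$KEY_URL" -d value="$NODE_IP"
```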

Here is a simple way to see which nodes are registered under a given $CLUSTER_NAME:

curl http://$ETCD_HOST/v2/keys/pxc-cluster/$CLUSTER_NAME/?recursive=true | jq
{
  "action": "get",
  "node": {
    "key": "/pxc-cluster/cluster4",
    "dir": true,
    "nodes": [
      {
        "key": "/pxc-cluster/cluster4/10.0.5.2",
        "dir": true,
        "nodes": [
          {
            "key": "/pxc-cluster/cluster4/10.0.5.2/ipaddr",
            "value": "10.0.5.2",
            "modifiedIndex": 19600,
            "createdIndex": 19600
          },
          {
            "key": "/pxc-cluster/cluster4/10.0.5.2/hostname",
            "value": "2af0a75ce0cb",
            "modifiedIndex": 19601,
            "createdIndex": 19601
          }
        ],
        "modifiedIndex": 19600,
        "createdIndex": 19600
      },
      {
        "key": "/pxc-cluster/cluster4/10.0.5.3",
        "dir": true,
        "nodes": [
          {
            "key": "/pxc-cluster/cluster4/10.0.5.3/ipaddr",
            "value": "10.0.5.3",
            "modifiedIndex": 26420,
            "createdIndex": 26420
          },
          {
            "key": "/pxc-cluster/cluster4/10.0.5.3/hostname",
            "value": "cfb29833f1d6",
            "modifiedIndex": 26421,
            "createdIndex": 26421
          }
        ],
        "modifiedIndex": 26420,
        "createdIndex": 26420
      }
    ],
    "modifiedIndex": 19600,
    "createdIndex": 19600
  }
}
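
If you only want the bare list of node IPs rather than the full JSON tree, a jq + awk pipeline over the same response does the trick. Below it is demonstrated on a trimmed copy of the response above (assuming jq is installed); against a live etcd you would pipe curl's output through the same filter.

```shell
#!/bin/sh
# Extract just the node IPs from the etcd response. RESPONSE is a trimmed
# copy of the listing above.
RESPONSE='{"node":{"nodes":[{"key":"/pxc-cluster/cluster4/10.0.5.2"},{"key":"/pxc-cluster/cluster4/10.0.5.3"}]}}'

echo "$RESPONSE" | jq -r '.node.nodes[]?.key' | awk -F'/' '{print $NF}'
# prints:
# 10.0.5.2
# 10.0.5.3
```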

With this method, you can start any number of database nodes on any Docker host. Now we can put ProxySQL in front of the database cluster, which will be discussed next time.

Translator's Note

When I deployed XtraDB Cluster myself, I found a problem with the script; some readers reported the same problem in the comments under the original article. Here I describe the bug and give the solution.

The commenter is Roma Cherepanov.

When he started a node, he hit some errors (I ran into the same ones, which kept the container from starting). After debugging, he solved the problem and posted a reply under the original article.

kevin:

I had the same problem running this image.

There are some problems with the pxc-entry.sh script

line 125 (it should be 125; the line number given in the original post was wrong):

i=$(curl http://$DISCOVERY_SERVICE/v2/keys/pxc-cluster/queue/$CLUSTER_NAME | jq -r '.node.nodes[].value')

Should be

i=($(curl http://$DISCOVERY_SERVICE/v2/keys/pxc-cluster/queue/$CLUSTER_NAME | jq -r '.node.nodes[].value'))

line 139:

i=$(curl http://$DISCOVERY_SERVICE/v2/keys/pxc-cluster/$CLUSTER_NAME/?quorum=true | jq -r '.node.nodes[]?.key' | awk -F'/' '{print $(NF)}')

Should be

i=($(curl http://$DISCOVERY_SERVICE/v2/keys/pxc-cluster/$CLUSTER_NAME/?quorum=true | jq -r '.node.nodes[]?.key' | awk -F'/' '{print $(NF)}'))

The point is that the script later treats i as an array. In shell, i=$(command) captures the command's whole output as a single string, while i=($(command)) word-splits the output into array elements, one per line or word. So the array form must be used here. OK after the change!
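
The difference is easy to demonstrate in plain bash, with printf standing in for the real curl | jq pipeline (which emits one value per line):

```shell
#!/bin/bash
# printf stands in for the real "curl ... | jq" pipeline, which emits
# one value per line.
nodes() { printf '10.0.5.2\n10.0.5.3\n'; }

s=$(nodes)     # a single string: both lines joined by a newline
a=($(nodes))   # an array: one element per whitespace-separated word

echo "${#a[@]}"   # prints: 2
echo "${a[0]}"    # prints: 10.0.5.2
echo "${a[1]}"    # prints: 10.0.5.3
```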

Author information
Author: Vadim Tkachenko
Original article: https://www.percona.com/blog/...
Translator: Kevin, MaxLeap Team (Service & Infra)
First published at: https://blog.maxleap.cn/archi...

Previous articles by the author
Rapid Deployment of Test-Driven Development/Debug Environment
The thing between Amazon and Mysql

Welcome to follow the WeChat public account: MaxLeap_yidongyanfa

Posted by B0b on Mon, 08 Apr 2019 16:24:31 -0700