Building a Redis cluster with redis-trib
1. Start all nodes
Copy redis-trib.rb into a directory on the PATH:
cp /opt/module/redis-4.0.11/src/redis-trib.rb /usr/local/bin
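redis-trib.rb is a Ruby script, so the machine also needs a Ruby runtime and the redis gem. A quick sanity check, assuming gem can reach a gem source:
ruby --version
gem install redis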
Configure each of the six nodes (ports 8000-8005) with a file like the following, shown here for port 8000:
# Redis basic configuration
port 8000
daemonize yes
pidfile /var/run/redis-8000.pid
dir /opt/module/redis-4.0.11/data
logfile "8000.log"
dbfilename "dump-8000.rdb"
# Cluster switch. Cluster mode is disabled by default.
cluster-enabled yes
# Name of the cluster configuration file. Each node keeps its own cluster configuration file to persist cluster state.
# The file is generated and updated by Redis itself and never needs to be edited by hand.
# Each cluster node needs a separate file; make sure the name does not conflict with other instances running on the same system.
cluster-config-file nodes-8000.conf
# By default, a node only serves requests when every slot in the cluster is assigned to a node and the cluster state is ok.
# Set to no to keep serving requests while some slots are unassigned. Enabling partial coverage is generally discouraged,
# because the master of a small partition keeps accepting writes, leaving the data inconsistent for a long time.
# Here it is set to no so the cluster stays usable when a single node is down.
cluster-require-full-coverage no
Copy this configuration for the other five nodes (8001-8005), replacing every occurrence of 8000 with the node's own port, as sketched below.
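A minimal way to produce the remaining five files is a shell loop that rewrites the port everywhere; this sketch assumes the port-8000 file is saved as redis-8000.conf and that 8000 appears in it only where the port belongs (port, pidfile, logfile, dbfilename, cluster-config-file):
for port in 8001 8002 8003 8004 8005; do
    sed "s/8000/${port}/g" redis-8000.conf > redis-${port}.conf
done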
Master-slave relationship:
master 8000 -> slave 8003; master 8001 -> slave 8004; master 8002 -> slave 8005
*The value after replicates in the output below is the master node's ID, which can be queried with cluster nodes.
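For example, the node IDs are the first column of cluster nodes, run against any live node (the IDs differ on every deployment):
redis-cli -p 8000 cluster nodes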
Start all six nodes:
redis-server redis-8000.conf
redis-server redis-8001.conf
redis-server redis-8002.conf
redis-server redis-8003.conf
redis-server redis-8004.conf
redis-server redis-8005.conf
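Equivalently, a small loop starts them all; this assumes the six configuration files sit in the current directory:
for port in 8000 8001 8002 8003 8004 8005; do
    redis-server redis-${port}.conf
done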
Check that the processes started:
ps -ef | grep redis-server
Result:
[root@data2 config]# ps -ef |grep redis-server
root 3425 1 1 09:00 ? 00:00:00 redis-server *:8000 [cluster]
root 3427 1 1 09:00 ? 00:00:00 redis-server *:8001 [cluster]
root 3429 1 1 09:00 ? 00:00:00 redis-server *:8002 [cluster]
root 3431 1 1 09:00 ? 00:00:00 redis-server *:8003 [cluster]
root 3442 1 0 09:00 ? 00:00:00 redis-server *:8004 [cluster]
root 3450 1 0 09:00 ? 00:00:00 redis-server *:8005 [cluster]
root 3472 3220 0 09:01 pts/0 00:00:00 grep redis-server
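Besides ps, you can confirm that every instance answers commands; each healthy node should reply PONG:
for port in 8000 8001 8002 8003 8004 8005; do
    redis-cli -p ${port} ping
done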
2. Using redis-trib
Enter the directory:
cd /opt/module/redis-4.0.11/src/
Running it without arguments prints the usage:
[root@data2 src]# ./redis-trib.rb
Usage: redis-trib <command> <options> <arguments ...>

  rebalance       host:port
                  --auto-weights
                  --threshold <arg>
                  --simulate
                  --weight <arg>
                  --use-empty-masters
                  --pipeline <arg>
                  --timeout <arg>
  import          host:port
                  --from <arg>
                  --replace
                  --copy
  reshard         host:port
                  --slots <arg>
                  --to <arg>
                  --from <arg>
                  --yes
                  --pipeline <arg>
                  --timeout <arg>
  set-timeout     host:port milliseconds
  info            host:port
  fix             host:port
                  --timeout <arg>
  create          host1:port1 ... hostN:portN
                  --replicas <arg>
  help            (show this help)
  call            host:port command arg arg .. arg
  del-node        host:port node_id
  add-node        new_host:new_port existing_host:existing_port
                  --slave
                  --master-id <arg>
  check           host:port

For check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.
Use the create command to build the cluster (--replicas 1 means one slave is created for each master):
./redis-trib.rb create --replicas 1 127.0.0.1:8000 127.0.0.1:8001 127.0.0.1:8002 127.0.0.1:8003 127.0.0.1:8004 127.0.0.1:8005
Execution result:
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
127.0.0.1:8000
127.0.0.1:8001
127.0.0.1:8002
Adding replica 127.0.0.1:8004 to 127.0.0.1:8000
Adding replica 127.0.0.1:8005 to 127.0.0.1:8001
Adding replica 127.0.0.1:8003 to 127.0.0.1:8002
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: a2d034b74c9cec4cd8398e8b23ae2d2ab124d49d 127.0.0.1:8000
slots:0-5460 (5461 slots) master
M: 1f0d813205ea17d9b254488215bd48612c0ff2b5 127.0.0.1:8001
slots:5461-10922 (5462 slots) master
M: f5960002d40cf3711eca7146719c4f568115f76c 127.0.0.1:8002
slots:10923-16383 (5461 slots) master
S: 9d1cb9fea95f1b205fc7a5ecb20125259765d6ca 127.0.0.1:8003
replicates a2d034b74c9cec4cd8398e8b23ae2d2ab124d49d
S: 93ae6d9a42bf485a955d2387b0a0d659e2c12902 127.0.0.1:8004
replicates 1f0d813205ea17d9b254488215bd48612c0ff2b5
S: fd17cbac4a5ca1942490d3734a73387341e80fb7 127.0.0.1:8005
replicates f5960002d40cf3711eca7146719c4f568115f76c
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join....
>>> Performing Cluster Check (using node 127.0.0.1:8000)
M: a2d034b74c9cec4cd8398e8b23ae2d2ab124d49d 127.0.0.1:8000
slots:0-5460 (5461 slots) master
1 additional replica(s)
M: f5960002d40cf3711eca7146719c4f568115f76c 127.0.0.1:8002
slots:10923-16383 (5461 slots) master
1 additional replica(s)
M: 1f0d813205ea17d9b254488215bd48612c0ff2b5 127.0.0.1:8001
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: 93ae6d9a42bf485a955d2387b0a0d659e2c12902 127.0.0.1:8004
slots: (0 slots) slave
replicates 1f0d813205ea17d9b254488215bd48612c0ff2b5
S: fd17cbac4a5ca1942490d3734a73387341e80fb7 127.0.0.1:8005
slots: (0 slots) slave
replicates f5960002d40cf3711eca7146719c4f568115f76c
S: 9d1cb9fea95f1b205fc7a5ecb20125259765d6ca 127.0.0.1:8003
slots: (0 slots) slave
replicates a2d034b74c9cec4cd8398e8b23ae2d2ab124d49d
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
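To verify that the cluster actually routes keys, connect with redis-cli in cluster mode (-c) and write a key. The key name below is arbitrary; the slot number and the node it redirects to depend on the key's hash slot:
redis-cli -c -p 8000
127.0.0.1:8000> set foo bar
-> Redirected to slot [12182] located at 127.0.0.1:8002
OK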