RabbitMQ Metadata
- RabbitMQ Metadata Type
RabbitMQ always records the following types of internal metadata:
- Queue metadata - queue names and their properties (e.g. whether durable or auto-delete)
- Exchange metadata - exchange names, types, and properties (durable, etc.)
- Binding metadata - a table describing how messages are routed to queues
- vhost metadata - namespace and security attributes for the queues, exchanges, and bindings inside a vhost
- User metadata - users and their permissions
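Each of these metadata types can be inspected from the command line. A quick sketch, assuming a running node and the default `/` vhost:

```shell
# Queue metadata: names plus durable/auto-delete flags
./rabbitmqctl list_queues name durable auto_delete
# Exchange metadata: name, type, durability
./rabbitmqctl list_exchanges name type durable
# Binding metadata: the routing table
./rabbitmqctl list_bindings
# vhost, user, and permission metadata
./rabbitmqctl list_vhosts
./rabbitmqctl list_users
./rabbitmqctl list_permissions -p /
```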
- Single node metadata storage
RabbitMQ keeps all of this metadata in memory, and additionally writes queues and exchanges marked as durable to disk, so that they can be rebuilt when the service restarts.
- Cluster Environment Meta Storage
A cluster environment introduces new metadata: the locations of the cluster nodes, and each node's relationship to the other metadata types already recorded. A cluster also offers a choice of storing metadata on disk (the default for a standalone node) or only in RAM.
Every node running in a cluster holds a full copy of the cluster's metadata.
When a queue, exchange, or binding is declared in a cluster, the operation does not return until every cluster node has successfully committed the metadata change.
Memory and disk nodes
Nodes in RabbitMQ fall into two categories by storage type:
- Memory node: all metadata definitions are kept in memory only (this does not include message contents, the message store index, queue indices, or other node state, which still live on disk). When queues, exchanges, and bindings change frequently, memory nodes can improve performance.
- Disk node: all metadata is stored on disk.
Be careful:
- A single-node system must be a disk node; a cluster can contain both node types at the same time.
- RabbitMQ requires at least one disk node in a cluster.
- When a node joins or leaves, the change must be reported to at least one disk node, and all disk nodes must be online at that time.
- If all disk nodes crash, the cluster no longer allows any metadata to be modified.
- When a memory node starts, it connects to a disk node to download the cluster metadata; as long as a memory node can reach one disk node, it can join the cluster.
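Node types show up in the `{disc,[...]}` and `{ram,[...]}` lists reported by `rabbitmqctl cluster_status`. As a small self-contained sketch (the captured status string below is a hypothetical sample), the disc-node list can be pulled out with standard shell tools, e.g. to check the at-least-one-disk-node rule:

```shell
# A captured cluster_status term (hypothetical sample, mirroring the
# transcripts below). In practice: status=$(./rabbitmqctl cluster_status)
status='[{nodes,[{disc,[rabbit@rabbitmq1,rabbit@rabbitmq2]},{ram,[rabbit@rabbitmq3]}]}]'

# Pull out the comma-separated disc-node list between "{disc,[" and "]"
disc=$(printf '%s' "$status" | sed -n 's/.*{disc,\[\([^]]*\)\].*/\1/p')
echo "disc nodes: $disc"

# Count disc nodes -- the cluster needs at least one to accept metadata changes
count=$(printf '%s' "$disc" | tr ',' '\n' | grep -c 'rabbit@')
echo "disc node count: $count"
```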
RabbitMQ cluster deployment
Note: clustering is meant to be used within a LAN; running clusters that span a WAN is not recommended.
Environmental preparation
1. Configure the hosts file on all three machines with the hostnames:
192.168.32.61 rabbitmq1
192.168.32.62 rabbitmq2
192.168.32.63 rabbitmq3
- Install Erlang and unpack the RabbitMQ archive on each machine (same as a single-node installation)
- Synchronize the Erlang cookie so that all three machines share the same cookie:
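One way to synchronize the cookie, assuming root SSH access between the machines (the cookie path depends on which user runs RabbitMQ):

```shell
# Copy the cookie from rabbitmq1 to the other two nodes
scp ~/.erlang.cookie root@rabbitmq2:~/.erlang.cookie
scp ~/.erlang.cookie root@rabbitmq3:~/.erlang.cookie
# Erlang refuses to use a cookie with open permissions,
# so restrict it to the owner on every node
chmod 400 ~/.erlang.cookie
```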
[root@rabbitmq1 ~]# cat ~/.erlang.cookie
AIKMCIGHSUKFZUODBIUD
Start RabbitMQ on all three nodes; at this point none of them are aware of the others
[root@rabbitmq1 sbin]# ./rabbitmq-server -detached
[root@rabbitmq2 sbin]# ./rabbitmq-server -detached
[root@rabbitmq3 sbin]# ./rabbitmq-server -detached
Join rabbitmq2 and rabbitmq3 to the cluster where rabbitmq1 resides (currently a single-node cluster)
Note: the join_cluster operation resets the current node first, which deletes all existing data on that RabbitMQ instance
- Operation on rabbitmq2
# Stop the RabbitMQ application
[root@rabbitmq2 sbin]# ./rabbitmqctl stop_app
Stopping rabbit application on node rabbit@rabbitmq2 ...
# Join the cluster
[root@rabbitmq2 sbin]# ./rabbitmqctl join_cluster rabbit@rabbitmq1
Clustering node rabbit@rabbitmq2 with rabbit@rabbitmq1
# Start the RabbitMQ application
[root@rabbitmq2 sbin]# ./rabbitmqctl start_app
Starting node rabbit@rabbitmq2 ...
completed with 0 plugins.
# View cluster status
[root@rabbitmq2 sbin]# ./rabbitmqctl cluster_status
Cluster status of node rabbit@rabbitmq2 ...
[{nodes,[{disc,[rabbit@rabbitmq1,rabbit@rabbitmq2]}]},
 {running_nodes,[rabbit@rabbitmq1,rabbit@rabbitmq2]},
 {cluster_name,<<"rabbit@rabbitmq1">>},
 {partitions,[]},
 {alarms,[{rabbit@rabbitmq1,[]},{rabbit@rabbitmq2,[]}]}]
- Do the same on rabbitmq3
[root@rabbitmq3 sbin]# ./rabbitmqctl cluster_status
Cluster status of node rabbit@rabbitmq3 ...
[{nodes,[{disc,[rabbit@rabbitmq3]}]},
 {running_nodes,[rabbit@rabbitmq3]},
 {cluster_name,<<"rabbit@rabbitmq3">>},
 {partitions,[]},
 {alarms,[{rabbit@rabbitmq3,[]}]}]
[root@rabbitmq3 sbin]# ./rabbitmqctl stop_app
Stopping rabbit application on node rabbit@rabbitmq3 ...
[root@rabbitmq3 sbin]# ./rabbitmqctl join_cluster rabbit@rabbitmq1
Clustering node rabbit@rabbitmq3 with rabbit@rabbitmq1
[root@rabbitmq3 sbin]# ./rabbitmqctl start_app
Starting node rabbit@rabbitmq3 ...
completed with 0 plugins.
[root@rabbitmq3 sbin]# ./rabbitmqctl cluster_status
Cluster status of node rabbit@rabbitmq3 ...
[{nodes,[{disc,[rabbit@rabbitmq1,rabbit@rabbitmq2,rabbit@rabbitmq3]}]},
 {running_nodes,[rabbit@rabbitmq1,rabbit@rabbitmq2,rabbit@rabbitmq3]},
 {cluster_name,<<"rabbit@rabbitmq1">>},
 {partitions,[]},
 {alarms,[{rabbit@rabbitmq1,[]},{rabbit@rabbitmq2,[]},{rabbit@rabbitmq3,[]}]}]
Cluster deployment complete
Cluster restart considerations:
When the entire cluster is brought down, the last node to go down must be the first node brought back online. If this does not happen, the nodes will wait 30 seconds for the last disc node to come back online, and fail afterwards. If the last node to go offline cannot be brought back up, it can be removed from the cluster using the forget_cluster_node command (consult the rabbitmqctl man page for more information). If all cluster nodes stop in a simultaneous and uncontrolled manner (for example, with a power cut), you can be left with a situation in which every node thinks that some other node stopped after it. In this case you can use the force_boot command on one node to make it bootable again (again, see the rabbitmqctl man page).
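The two recovery commands mentioned above can be sketched as follows. Both are authoritative operations, so consult the rabbitmqctl man page before using them on a real cluster:

```shell
# On the node you want to bring up first, while its rabbit application
# is stopped: allow it to boot without waiting for its peers
./rabbitmqctl force_boot
./rabbitmq-server -detached

# Or, from a surviving node: drop a member that cannot come back
./rabbitmqctl forget_cluster_node rabbit@rabbitmq3
```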
Remove nodes from the cluster, method 1:
- Remove rabbitmq3 from cluster
[root@rabbitmq3 sbin]# ./rabbitmqctl stop_app
Stopping rabbit application on node rabbit@rabbitmq3 ...
# Reset the node, which removes it from the cluster
[root@rabbitmq3 sbin]# ./rabbitmqctl reset
Resetting node rabbit@rabbitmq3 ...
[root@rabbitmq3 sbin]# ./rabbitmqctl start_app
Starting node rabbit@rabbitmq3 ...
completed with 0 plugins.
- View current node cluster status
[root@rabbitmq3 sbin]# ./rabbitmqctl cluster_status
Cluster status of node rabbit@rabbitmq3 ...
[{nodes,[{disc,[rabbit@rabbitmq3]}]},
 {running_nodes,[rabbit@rabbitmq3]},
 {cluster_name,<<"rabbit@rabbitmq3">>},
 {partitions,[]},
 {alarms,[{rabbit@rabbitmq3,[]}]}]
- View cluster status on other nodes
[root@rabbitmq1 sbin]# ./rabbitmqctl cluster_status
Cluster status of node rabbit@rabbitmq1 ...
[{nodes,[{disc,[rabbit@rabbitmq1,rabbit@rabbitmq2]}]},
 {running_nodes,[rabbit@rabbitmq2,rabbit@rabbitmq1]},
 {cluster_name,<<"rabbit@rabbitmq1">>},
 {partitions,[]},
 {alarms,[{rabbit@rabbitmq2,[]},{rabbit@rabbitmq1,[]}]}]
Remove nodes from the cluster, method 2:
First re-join rabbitmq3 to the cluster as before.
- Stop the RabbitMQ application on rabbitmq3
[root@rabbitmq3 sbin]# ./rabbitmqctl stop_app
Stopping rabbit application on node rabbit@rabbitmq3 ...
- Current cluster state (note running nodes)
[root@rabbitmq1 sbin]# ./rabbitmqctl cluster_status
Cluster status of node rabbit@rabbitmq1 ...
[{nodes,[{disc,[rabbit@rabbitmq1,rabbit@rabbitmq2,rabbit@rabbitmq3]}]},
 {running_nodes,[rabbit@rabbitmq2,rabbit@rabbitmq1]},
 {cluster_name,<<"rabbit@rabbitmq1">>},
 {partitions,[]},
 {alarms,[{rabbit@rabbitmq2,[]},{rabbit@rabbitmq1,[]}]}]
- Remove rabbitmq3 from cluster
[root@rabbitmq1 sbin]# ./rabbitmqctl forget_cluster_node rabbit@rabbitmq3
Removing node rabbit@rabbitmq3 from the cluster
- Operation Result
[root@rabbitmq1 sbin]# ./rabbitmqctl cluster_status
Cluster status of node rabbit@rabbitmq1 ...
[{nodes,[{disc,[rabbit@rabbitmq1,rabbit@rabbitmq2]}]},
 {running_nodes,[rabbit@rabbitmq2,rabbit@rabbitmq1]},
 {cluster_name,<<"rabbit@rabbitmq1">>},
 {partitions,[]},
 {alarms,[{rabbit@rabbitmq2,[]},{rabbit@rabbitmq1,[]}]}]
Note: although rabbitmq3 has been removed from the cluster, the rabbitmq3 node itself does not know this, so it will fail on startup; a reset is required before it can start properly.
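A minimal recovery sketch for the forgotten node. Its Erlang VM is still running here (only the rabbit application was stopped), so it can be reset in place:

```shell
# Run on rabbitmq3 after it has been forgotten by the cluster.
# reset clears the node's now-stale cluster metadata;
# start_app then boots it as a fresh standalone node.
./rabbitmqctl reset
./rabbitmqctl start_app
```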
Memory Node Operation
Current state: the three nodes are unaware of each other and are back in the initial, unclustered state
- Start rabbitmq1
[root@rabbitmq1 sbin]# ./rabbitmq-server -detached
Warning: PID file not written; -detached was passed.
# Current state of the cluster
[root@rabbitmq1 sbin]# ./rabbitmqctl cluster_status
Cluster status of node rabbit@rabbitmq1 ...
[{nodes,[{disc,[rabbit@rabbitmq1]}]},
 {running_nodes,[rabbit@rabbitmq1]},
 {cluster_name,<<"rabbit@rabbitmq1">>},
 {partitions,[]},
 {alarms,[{rabbit@rabbitmq1,[]}]}]
- rabbitmq2 joined as a memory node
[root@rabbitmq2 sbin]# ./rabbitmqctl stop_app
Stopping rabbit application on node rabbit@rabbitmq2 ...
# --ram marks the joining node as a memory node
[root@rabbitmq2 sbin]# ./rabbitmqctl join_cluster rabbit@rabbitmq1 --ram
Clustering node rabbit@rabbitmq2 with rabbit@rabbitmq1
- View results
[root@rabbitmq2 sbin]# ./rabbitmqctl start_app
Starting node rabbit@rabbitmq2 ...
completed with 0 plugins.
[root@rabbitmq2 sbin]# ./rabbitmqctl cluster_status
Cluster status of node rabbit@rabbitmq2 ...
[{nodes,[{disc,[rabbit@rabbitmq1]},{ram,[rabbit@rabbitmq2]}]},
 {running_nodes,[rabbit@rabbitmq1,rabbit@rabbitmq2]},
 {cluster_name,<<"rabbit@rabbitmq1">>},
 {partitions,[]},
 {alarms,[{rabbit@rabbitmq1,[]},{rabbit@rabbitmq2,[]}]}]
- rabbitmq3 joined as a memory node
(same steps as rabbitmq2, omitted)
- View cluster status
[root@rabbitmq3 sbin]# ./rabbitmqctl start_app
Starting node rabbit@rabbitmq3 ...
completed with 0 plugins.
[root@rabbitmq3 sbin]# ./rabbitmqctl cluster_status
Cluster status of node rabbit@rabbitmq3 ...
[{nodes,[{disc,[rabbit@rabbitmq1]},{ram,[rabbit@rabbitmq3,rabbit@rabbitmq2]}]},
 {running_nodes,[rabbit@rabbitmq2,rabbit@rabbitmq1,rabbit@rabbitmq3]},
 {cluster_name,<<"rabbit@rabbitmq1">>},
 {partitions,[]},
 {alarms,[{rabbit@rabbitmq2,[]},{rabbit@rabbitmq1,[]},{rabbit@rabbitmq3,[]}]}]
- Modify rabbitmq2 as disk node
[root@rabbitmq2 sbin]# ./rabbitmqctl stop_app
Stopping rabbit application on node rabbit@rabbitmq2 ...
[root@rabbitmq2 sbin]# ./rabbitmqctl change_cluster_node_type disc
Turning rabbit@rabbitmq2 into a disc node
[root@rabbitmq2 sbin]# ./rabbitmqctl start_app
Starting node rabbit@rabbitmq2 ...
completed with 0 plugins.
[root@rabbitmq2 sbin]# ./rabbitmqctl cluster_status
Cluster status of node rabbit@rabbitmq2 ...
[{nodes,[{disc,[rabbit@rabbitmq1,rabbit@rabbitmq2]},{ram,[rabbit@rabbitmq3]}]},
 {running_nodes,[rabbit@rabbitmq1,rabbit@rabbitmq3,rabbit@rabbitmq2]},
 {cluster_name,<<"rabbit@rabbitmq1">>},
 {partitions,[]},
 {alarms,[{rabbit@rabbitmq1,[]},{rabbit@rabbitmq3,[]},{rabbit@rabbitmq2,[]}]}]
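Converting in the other direction is symmetric. A sketch (note that the cluster must keep at least one disk node, so this fails if run on the only remaining disc node):

```shell
# Run on the node whose type should change
./rabbitmqctl stop_app
./rabbitmqctl change_cluster_node_type ram
./rabbitmqctl start_app
```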