RocketMQ series environment construction

Keywords: CentOS Java JDK vim

The previous article introduced the basic concepts of RocketMQ. In this one we will set up the environment. The most fundamental component of RocketMQ is the NameServer, so let's build that first.

NameServer

RocketMQ requires JDK 8 or later. Let's check the Java environment first:

[root@centOS-1 ~]# java -version
openjdk version "11.0.3" 2019-04-16 LTS
OpenJDK Runtime Environment 18.9 (build 11.0.3+7-LTS)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.3+7-LTS, mixed mode, sharing)

My machine deliberately does not have JDK 8 installed; it runs OpenJDK 11, which also works. Next, download the latest release from the RocketMQ website and upload it to the /opt directory:

[root@centOS-1 opt]# ll
-rw-r--r--.  1 root  root 13838456 Jun  3 08:49 rocketmq-all-4.7.0-bin-release.zip

Then we unzip the zip package,

[root@centOS-1 opt]# unzip rocketmq-all-4.7.0-bin-release.zip

The unzip command is used here. If your machine does not have it, install it with yum install unzip. After extraction, enter the RocketMQ main directory and start the NameServer:

[root@centOS-1 opt]# cd rocketmq-all-4.7.0-bin-release
[root@centOS-1 rocketmq-all-4.7.0-bin-release]# ./bin/mqnamesrv
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Unrecognized VM option 'UseCMSCompactAtFullCollection'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.

Here we get an error: "Could not create the Java Virtual Machine." The RocketMQ startup scripts are written for JDK 8, and several of the JVM options they pass were removed in later JDK releases, so OpenJDK 11 rejects them. If you are running JDK 8, the NameServer starts without any changes.

So we edit RocketMQ's startup script:

[root@centOS-1 rocketmq-all-4.7.0-bin-release]# vim bin/runserver.sh 
export JAVA_HOME
export JAVA="$JAVA_HOME/bin/java"
export BASE_DIR=$(dirname $0)/..
#Add the lib directory of RocketMQ in CLASSPATH
#export CLASSPATH=.:${BASE_DIR}/conf:${CLASSPATH}
export CLASSPATH=.:${BASE_DIR}/lib/*:${BASE_DIR}/conf:${CLASSPATH}

At the end of this file, comment out the JVM options that newer JDKs no longer support:

JAVA_OPT="${JAVA_OPT} -server -Xms4g -Xmx4g -Xmn2g -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m"
#JAVA_OPT="${JAVA_OPT} -XX:+UseConcMarkSweepGC -XX:+UseCMSCompactAtFullCollection -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+CMSClassUnloadingEnabled -XX:SurvivorRatio=8  -XX:-UseParNewGC"
JAVA_OPT="${JAVA_OPT} -verbose:gc -Xloggc:${GC_LOG_DIR}/rmq_srv_gc_%p_%t.log -XX:+PrintGCDetails"
#JAVA_OPT="${JAVA_OPT} -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=30m"
JAVA_OPT="${JAVA_OPT} -XX:-OmitStackTraceInFastThrow"
JAVA_OPT="${JAVA_OPT} -XX:-UseLargePages"
#JAVA_OPT="${JAVA_OPT} -Djava.ext.dirs=${JAVA_HOME}/jre/lib/ext:${BASE_DIR}/lib"
#JAVA_OPT="${JAVA_OPT} -Xdebug -Xrunjdwp:transport=dt_socket,address=9555,server=y,suspend=n"
JAVA_OPT="${JAVA_OPT} ${JAVA_OPT_EXT}"
JAVA_OPT="${JAVA_OPT} -cp ${CLASSPATH}"
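
The CLASSPATH change near the top of the file is needed because JDK 9 removed the extension mechanism that the commented-out -Djava.ext.dirs line relied on, so the jars in lib/ must go on the class path explicitly. A standalone illustration of what the new line produces (BASE_DIR below is a stand-in for the real install directory):

```shell
# Reproduce the edited CLASSPATH line from runserver.sh in isolation.
BASE_DIR=/opt/rocketmq-all-4.7.0-bin-release
# Note: the glob is not expanded here; the JVM itself expands lib/* at startup.
CLASSPATH=.:${BASE_DIR}/lib/*:${BASE_DIR}/conf:${CLASSPATH}
echo "$CLASSPATH"
```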

After saving the changes, we start again, this time running the NameServer in the background:

[root@centOS-1 rocketmq-all-4.7.0-bin-release]# nohup ./bin/mqnamesrv &

[root@centOS-1 rocketmq-all-4.7.0-bin-release]# tail -500f ~/logs/rocketmqlogs/namesrv.log 

Then check the log. The line "main - The Name Server boot success. serializeType=JSON" indicates that the NameServer started successfully.

A single NameServer cannot meet our requirements, so how do we build a cluster? The NameServer is stateless and its nodes exchange no data, so a NameServer cluster needs no extra configuration: just start a NameServer service on multiple machines. This is unlike a ZooKeeper cluster, where every node must be configured. Here we start three NameServer nodes, one on each of our three machines: 192.168.73.130, 192.168.73.131, and 192.168.73.132.
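
Since every node runs the same command, the "cluster" is just a repeated start. The loop below only prints what would be run on each host (the install path is the one used in this article):

```shell
# Each NameServer host gets the identical start command; no per-node config.
HOSTS="192.168.73.130 192.168.73.131 192.168.73.132"
for h in $HOSTS; do
  echo "on $h: cd /opt/rocketmq-all-4.7.0-bin-release && nohup ./bin/mqnamesrv &"
done
```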

Broker

After the NameServer cluster is set up, we will set up the Brokers. We will build a two-master, two-slave layout, using asynchronous replication between master and slave and asynchronous disk flushing. If you are unsure what replication and disk flushing mean, take a look at the previous section. RocketMQ already ships with example configuration files for an asynchronous two-master, two-slave layout, so let's start from those.

[root@centOS-1 rocketmq-all-4.7.0-bin-release]# vim conf/2m-2s-async/broker-a.properties 

This configuration file is the "master" configuration file of broker-a,

brokerClusterName=RocketMQ-Cluster
brokerName=broker-a
brokerId=0
deleteWhen=04
fileReservedTime=48
brokerRole=ASYNC_MASTER
flushDiskType=ASYNC_FLUSH

Among them,

  • brokerClusterName is the name of the MQ cluster; we changed it to RocketMQ-Cluster.
  • brokerName is the name of the broker, configured as broker-a.
  • brokerId is the id of the broker: 0 means "master", any other positive integer means "slave".
  • deleteWhen=04 means expired commitLog files are deleted at 4 a.m.
  • fileReservedTime is the commitLog retention time, in hours; here it is 48 hours.
  • brokerRole is the role of the broker; ASYNC_MASTER is a master with asynchronous replication.
  • flushDiskType is the disk flush strategy; ASYNC_FLUSH means asynchronous flushing.

Take a look at the slave configuration of broker-a,

brokerClusterName=RocketMQ-Cluster
brokerName=broker-a
brokerId=1
deleteWhen=04
fileReservedTime=48
brokerRole=SLAVE
flushDiskType=ASYNC_FLUSH

Here the cluster name and broker name are the same, but brokerId and brokerRole differ: this configuration marks the instance as the "slave" of broker-a. The configuration of broker-b is the same as broker-a's except for the brokerName, so it is not posted here.
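
For completeness, a sketch of what the broker-b master file would contain, assuming it follows the broker-a master file exactly with only brokerName changed:

```properties
# conf/2m-2s-async/broker-b.properties (illustrative; mirrors broker-a's master config)
brokerClusterName=RocketMQ-Cluster
brokerName=broker-b
brokerId=0
deleteWhen=04
fileReservedTime=48
brokerRole=ASYNC_MASTER
flushDiskType=ASYNC_FLUSH
```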

Both the master and slave configuration files are ready, so let's plan the deployment. Our three NameServers run on 192.168.73.130, 192.168.73.131, and 192.168.73.132. The brokers are deployed as follows:

  • broker-a (master): 192.168.73.130
  • broker-a (slave): 192.168.73.131
  • broker-b (master): 192.168.73.131
  • broker-b (slave): 192.168.73.130

Next we start the brokers; broker-a (master) and broker-b (slave) run on 192.168.73.130. As with the NameServer, we need to modify the startup script, runbroker.sh, or an error will be reported. The changes are the same as before, so they are not repeated here. The startup script configures an 8g heap; if your machine has less memory, reduce it appropriately.
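
For example, the heap line in runbroker.sh can be shrunk on a small test machine (1g/512m below is an illustrative choice, not an official recommendation):

```shell
# Reduced version of the default heap settings line in runbroker.sh.
JAVA_OPT="${JAVA_OPT} -server -Xms1g -Xmx1g -Xmn512m"
echo "$JAVA_OPT"
```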

One more note: since we start two broker instances on one machine, the listening port and the log storage path would conflict. So in the broker-b (slave) configuration file on 192.168.73.130, add the following settings:

brokerClusterName=RocketMQ-Cluster
brokerName=broker-b
brokerId=1
deleteWhen=04
fileReservedTime=48
brokerRole=SLAVE
flushDiskType=ASYNC_FLUSH

listenPort=11911
storePathRootDir=~/store-b                       

The listening port of broker-b (slave) is changed to 11911 instead of the default 10911, and storePathRootDir is changed to ~/store-b instead of the default ~/store.

The broker-a (slave) configuration on 192.168.73.131 gets the same treatment:

brokerClusterName=RocketMQ-Cluster
brokerName=broker-a
brokerId=1
deleteWhen=04
fileReservedTime=48
brokerRole=SLAVE
flushDiskType=ASYNC_FLUSH

listenPort=11911
storePathRootDir=~/store-a

Then we start the two instances on 192.168.73.130, as follows:

nohup ./bin/mqbroker -c conf/2m-2s-async/broker-a.properties -n '192.168.73.130:9876;192.168.73.131:9876;192.168.73.132:9876' &

nohup ./bin/mqbroker -c conf/2m-2s-async/broker-b-s.properties -n '192.168.73.130:9876;192.168.73.131:9876;192.168.73.132:9876' &

  • -c specifies the configuration file: broker-a (master) and broker-b (slave), respectively.
  • -n specifies the NameServer addresses; three are given, separated by semicolons.

Start on 192.168.73.131 in the same way:

nohup ./bin/mqbroker -c conf/2m-2s-async/broker-b.properties -n '192.168.73.130:9876;192.168.73.131:9876;192.168.73.132:9876' &

nohup ./bin/mqbroker -c conf/2m-2s-async/broker-a-s.properties -n '192.168.73.130:9876;192.168.73.131:9876;192.168.73.132:9876' &

OK, if no error is reported, the cluster is up. There is a small pitfall here: the address list after -n must be enclosed in single quotes, with a semicolon between addresses; otherwise the brokers will not appear when we view the cluster list.

mqadmin

The cluster has been set up. We can check its status with mqadmin, using the following command:

./bin/mqadmin clusterlist -n '192.168.73.130:9876;192.168.73.131:9876;192.168.73.132:9876'

  • clusterlist is the subcommand that lists the clusters
  • -n is followed by the NameServer addresses; again, enclose them in single quotes and separate them with semicolons

The results are as follows:

#Cluster Name     #Broker Name            #BID  #Addr                  #Version                #InTPS(LOAD)       #OutTPS(LOAD) #PCWait(ms) #Hour #SPACE
RocketMQ-Cluster  broker-a                0     192.168.73.130:10911   V4_7_0                   0.00(0,0ms)         0.00(0,0ms)          0 442039.47 -1.0000
RocketMQ-Cluster  broker-a                1     192.168.73.131:11911   V4_7_0                   0.00(0,0ms)         0.00(0,0ms)          0 442039.47 0.2956
RocketMQ-Cluster  broker-b                0     192.168.73.131:10911   V4_7_0                   0.00(0,0ms)         0.00(0,0ms)          0 442039.47 0.2956
RocketMQ-Cluster  broker-b                1     192.168.73.130:11911   V4_7_0                   0.00(0,0ms)         0.00(0,0ms)          0 442039.47 -1.0000

We can see that the NameServers know about a single broker cluster, RocketMQ-Cluster, containing two brokers, broker-a and broker-b, each with one master and one slave; the IP address and port of each instance are also shown.

OK, that completes the RocketMQ cluster setup. If you run into problems, leave a message in the comments.

Posted by manny on Sat, 06 Jun 2020 01:14:45 -0700