Kafka + ZooKeeper cluster setup

Keywords: ZooKeeper, Kafka, JDK, Apache

Environment preparation

Installation steps

I. Configure the JDK and environment variables

Open the current user environment variable file
vi ~/.bash_profile // Take the current user as an example
Configure current user environment variables
export JAVA_HOME=/home/work/services/jdk1.8.0_131
export PATH=${JAVA_HOME}/bin:${PATH}
Reload the current user environment variable file
source ~/.bash_profile
Verification
java -version
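If the JDK is configured correctly, the output should look roughly like the following (the exact build string depends on the JDK download):
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)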

II. Build the ZooKeeper cluster

Take machine A as an example

Extract the package
cd /home/work/dcs // Switch to the download directory
tar -zxvf zookeeper-3.4.13.tar.gz // Extract the archive
Create the log and data directories
mkdir ./zookeeper-3.4.13/logs // Log storage directory
mkdir ./zookeeper-3.4.13/data // Data storage directory
Open the configuration file
cd zookeeper-3.4.13/conf // Switch to the configuration directory
cp zoo_sample.cfg zoo.cfg // Copy the sample file to create zoo.cfg
vi zoo.cfg
Modify the configuration file
// Modified part
dataDir=/home/work/dcs/zookeeper-3.4.13/data // Change the data directory
// Added part
dataLogDir=/home/work/dcs/zookeeper-3.4.13/logs // Add the log directory setting
server.0=192.168.1.1:2888:3888
server.1=192.168.1.2:2888:3888
server.2=192.168.1.3:2888:3888
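For reference, after these edits the complete zoo.cfg on machine A should look roughly like the sketch below; tickTime, initLimit, syncLimit and clientPort are the defaults carried over from zoo_sample.cfg.
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/home/work/dcs/zookeeper-3.4.13/data
dataLogDir=/home/work/dcs/zookeeper-3.4.13/logs
server.0=192.168.1.1:2888:3888
server.1=192.168.1.2:2888:3888
server.2=192.168.1.3:2888:3888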
Create the myid file
cd /home/work/dcs/zookeeper-3.4.13/data // Switch to the data directory
echo "0" > myid // The id must match the N in the server.N entries above (0 for machine A)
Startup and shutdown
cd /home/work/dcs/zookeeper-3.4.13 // Switch to the ZooKeeper root directory
// Start
./bin/zkServer.sh start
// Stop
./bin/zkServer.sh stop
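Once all three nodes are started, the role of each node can be checked; one node should report Mode: leader and the other two Mode: follower.
./bin/zkServer.sh status // Check the node role (leader/follower)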

Machines B and C are configured the same way as A, except that each machine's myid must match its own server.N entry, as shown below
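A minimal sketch of the only difference, the myid value on each machine:
echo "1" > /home/work/dcs/zookeeper-3.4.13/data/myid // On machine B (matches server.1)
echo "2" > /home/work/dcs/zookeeper-3.4.13/data/myid // On machine C (matches server.2)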

III. Build the Kafka cluster

Take machine A as an example

Extract the package
cd /home/work/dcs // Switch to the download directory
tar -zxvf kafka_2.10-0.10.2.0.tgz // Extract the archive
Create the message log (data) directory
mkdir ./kafka_2.10-0.10.2.0/data // Message log storage directory
Open the configuration file
cd kafka_2.10-0.10.2.0/config // Switch to the configuration directory
vi server.properties // Open the configuration file
Modify the configuration file
// Modified part (A: 192.168.1.1)
log.dirs=/home/work/dcs/kafka_2.10-0.10.2.0/data // Modify data log directory
broker.id=0 // Ensure uniqueness in the cluster
listeners=PLAINTEXT://192.168.1.1:9092
advertised.listeners=PLAINTEXT://192.168.1.1:9092
zookeeper.connect=192.168.1.1:2181,192.168.1.2:2181,192.168.1.3:2181 // Address of zookeeper
// Modified part (B: 192.168.1.2)
log.dirs=/home/work/dcs/kafka_2.10-0.10.2.0/data // Modify data log directory
broker.id=1 // Ensure uniqueness in the cluster
listeners=PLAINTEXT://192.168.1.2:9092
advertised.listeners=PLAINTEXT://192.168.1.2:9092
zookeeper.connect=192.168.1.1:2181,192.168.1.2:2181,192.168.1.3:2181 // Address of zookeeper
// Modified part (C: 192.168.1.3)
log.dirs=/home/work/dcs/kafka_2.10-0.10.2.0/data // Modify data log directory
broker.id=2 // Ensure uniqueness in the cluster
listeners=PLAINTEXT://192.168.1.3:9092
advertised.listeners=PLAINTEXT://192.168.1.3:9092
zookeeper.connect=192.168.1.1:2181,192.168.1.2:2181,192.168.1.3:2181 // Address of zookeeper
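Besides the lines above, the remaining defaults in server.properties can usually be left as-is; two settings that are often worth reviewing are shown below (example values, adjust to your needs):
num.partitions=3 // Default partition count for auto-created topics
log.retention.hours=168 // How long message logs are retained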
Startup and shutdown
cd /home/work/dcs/kafka_2.10-0.10.2.0 // Switch to the Kafka root directory
// Start - foreground mode
./bin/kafka-server-start.sh config/server.properties
// Start - daemon mode
nohup ./bin/kafka-server-start.sh config/server.properties > /dev/null 2>&1 &
// Stop
./bin/kafka-server-stop.sh
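To confirm the broker came up cleanly, the server log can be tailed; on a successful start it should contain a line similar to "[Kafka Server 0], started".
tail -f logs/server.log // Watch the broker log for startup errors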

Machines B and C are configured the same way as A, using the broker.id and listener values shown above

Verification
jps // Verify that the ZooKeeper and Kafka processes are running
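On each machine, jps should list both the ZooKeeper and Kafka processes, roughly as follows (the PIDs will differ):
12001 QuorumPeerMain // ZooKeeper
12002 Kafka // Kafka broker
12003 Jps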

IV. Testing

// Create a topic
./bin/kafka-topics.sh --create --zookeeper 192.168.1.1:2181,192.168.1.2:2181,192.168.1.3:2181 --replication-factor 1 --partitions 3 --topic test

// List topics
./bin/kafka-topics.sh --zookeeper 192.168.1.1:2181,192.168.1.2:2181,192.168.1.3:2181 --list

// Query the details of a topic
./bin/kafka-topics.sh --zookeeper 192.168.1.1:2181,192.168.1.2:2181,192.168.1.3:2181 --describe --topic test
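The describe output should look roughly like the following; the exact leader and replica assignment will vary:
Topic:test   PartitionCount:3   ReplicationFactor:1   Configs:
    Topic: test   Partition: 0   Leader: 0   Replicas: 0   Isr: 0
    Topic: test   Partition: 1   Leader: 1   Replicas: 1   Isr: 1
    Topic: test   Partition: 2   Leader: 2   Replicas: 2   Isr: 2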

// Simulate a producer client (--broker-list must point at the Kafka brokers on port 9092, not at ZooKeeper)
./bin/kafka-console-producer.sh --broker-list 192.168.1.1:9092,192.168.1.2:9092,192.168.1.3:9092 --topic test

// Simulate a consumer client
./bin/kafka-console-consumer.sh --zookeeper 192.168.1.1:2181,192.168.1.2:2181,192.168.1.3:2181 --topic test --from-beginning
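With the producer and consumer running in two terminals, every line typed into the producer is sent as one message and should be echoed by the consumer, for example:
hello kafka
this is a test message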

// Delete a topic
./bin/kafka-topics.sh --delete --zookeeper 192.168.1.1:2181,192.168.1.2:2181,192.168.1.3:2181 --topic test
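Note that in this Kafka version the delete command only marks the topic for deletion unless topic deletion is enabled in server.properties:
delete.topic.enable=true // Allow topics to actually be deleted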

V. Precautions

  • Before starting ZooKeeper, make sure the JDK is installed on the machine
  • All ZooKeeper nodes must be started before Kafka is started
  • When starting the Kafka cluster, prefer daemon mode; otherwise the broker shuts down when the terminal exits
