Syncing data from a self-built Kafka cluster to Alibaba Cloud Kafka (Standard Edition) with MirrorMaker

Keywords: big data, Kafka, SSL, VPC network

Notes:
1. Only two topics are synchronized this time; follow-up optimizations will be posted in later updates...
2. Self-built cluster: CDH 5.8 with Kafka 2.1.0; Alibaba Cloud cluster: Standard Edition, Kafka 0.10.x
Pitfalls:
1. Adding a Kafka MirrorMaker role instance in CDH, which apparently does not support SSL connections.
2. VPC network access: I did not realize the purchased Alibaba Cloud instance has a VPC endpoint, which connects without SSL encryption.
3. The MirrorMaker shipped with Kafka 0.10.2 cannot connect to the self-built cluster.
4. The Alibaba Cloud console labels the endpoint "SSL", but the authentication method actually required is SASL_SSL.
5. Not knowing Java, I did not know where to put export KAFKA_OPTS="-Djava.security.auth.login.config=kafka_client_jaas.conf".
6. ssl.truststore.location=kafka.client.truststore.jks and ssl.truststore.password=KafkaOnsClient: the certificate must be given as a full path, and the password is fixed; I had mistakenly used a different one.
Preparation:
1. Download kafka_2.12-2.2.1.tgz, a minor version higher than the one recommended by Alibaba Cloud.
2. Download kafka.client.truststore.jks; ask Alibaba Cloud for it, or use the download link in the documentation Alibaba Cloud provides.
3. Manually create the kafka_client_jaas.conf file; its content is pasted below.
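Preparation step 3 can be sketched as a short shell session. This is a minimal sketch: the /tmp/kafka-demo directory is a placeholder for illustration, and the xxxxxx username/password must be replaced with the values from the Alibaba Cloud console.

```shell
# Create kafka_client_jaas.conf with the content shown later in this post.
# /tmp/kafka-demo stands in for Kafka's real extraction directory.
mkdir -p /tmp/kafka-demo/config
cat > /tmp/kafka-demo/config/kafka_client_jaas.conf <<'EOF'
KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="xxxxxx"
    password="xxxxxx";
};
EOF

# Point the JVM at the JAAS file -- the same export that /etc/profile
# will carry in deployment step 5.
export KAFKA_OPTS="-Djava.security.auth.login.config=/tmp/kafka-demo/config/kafka_client_jaas.conf"
grep -c "KafkaClient" /tmp/kafka-demo/config/kafka_client_jaas.conf   # prints 1
```

The heredoc avoids any quoting surprises around the embedded double quotes in the JAAS entry.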
Deployment:
1. The server must be able to reach port 9092 on the self-built cluster and port 9093 on the Alibaba Cloud cluster.
2. Upload and extract kafka_2.12-2.2.1.tgz (no need to configure ZooKeeper or start Kafka here).
3. Create a new kafka_client_jaas.conf file in the config directory (under Kafka's extraction directory).
4. Create a new directory cert and upload the kafka.client.truststore.jks certificate (under Kafka's extraction directory).
5. With vim, add export KAFKA_OPTS="-Djava.security.auth.login.config=xxxxxx/kafka_client_jaas.conf" at the bottom of /etc/profile (substitute the actual directory).
6. Edit kafka_client_jaas.conf, consumer.properties, and producer.properties.
7. Start MirrorMaker in the background: nohup bin/kafka-mirror-maker.sh --consumer.config config/consumer.properties --producer.config config/producer.properties --whitelist "ais_aismysql,ais_fwp" &
8. Check whether messages arrive in the target topics.
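Step 8 can be done with the console consumer that ships in the same Kafka distribution. This is only a sketch: the topic name is one of the two assumed whitelist topics, the IP is a placeholder, and it needs live access to the Alibaba Cloud cluster plus the KAFKA_OPTS export from step 5 active in the shell. Reusing producer.properties for the consumer is an assumption that works because it carries the SASL_SSL/truststore settings; producer-only keys in it only trigger warnings.

```shell
# Read one message from the target topic on the Alibaba Cloud cluster
# to confirm MirrorMaker is actually replicating.
bin/kafka-console-consumer.sh \
  --bootstrap-server <alicloud-ip>:9093 \
  --consumer.config config/producer.properties \
  --topic ais_aismysql \
  --from-beginning \
  --max-messages 1
```

If the command prints a message and exits, replication is working end to end.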
Configuration file contents:
1. kafka_client_jaas.conf

 #Users and passwords here are obtained from Alibaba cloud console
 KafkaClient {
     org.apache.kafka.common.security.plain.PlainLoginModule required
     username="xxxxxx"
     password="xxxxxx";
 };

2. consumer.properties

  # list of brokers used for bootstrapping knowledge about the rest of the cluster
  #format: host1:port1,host2:port2 ...
  bootstrap.servers=<self-built cluster IP>:9092
  #consumer group id
  group.id=test-consumer-group
  #Consumer partition allocation strategy
  partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor
  #What to do when there is no initial offset in Kafka or if the current
  #offset does not exist any more on the server: latest, earliest, none
  #auto.offset.reset=

3. producer.properties

############################# Producer Basics #############################

# list of brokers used for bootstrapping knowledge about the rest of the cluster
# format: host1:port1,host2:port2 ...
bootstrap.servers=<Alibaba Cloud cluster IP>:9093

# specify the compression codec for all data generated: none, gzip, snappy, lz4, zstd
compression.type=none

# name of the partitioner class for partitioning events; default partition spreads data randomly
#partitioner.class=

# the maximum amount of time the client will wait for the response of a request
#request.timeout.ms=

# how long `KafkaProducer.send` and `KafkaProducer.partitionsFor` will block for
#max.block.ms=

# the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together
#linger.ms=

# the maximum size of a request in bytes
#max.request.size=

# the default batch size in bytes when batching multiple records sent to a partition
#batch.size=

# the total bytes of memory the producer can use to buffer records waiting to be sent to the server
#buffer.memory=
ssl.truststore.location=/application/kafka/cert/kafka.client.truststore.jks
ssl.truststore.password=KafkaOnsClient
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
ssl.endpoint.identification.algorithm=
#The empty value on the last line is needed when the Kafka client version is 2.x or higher:
#since Kafka 2.0 hostname verification is on by default, and this disables it.
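Before trusting MirrorMaker with the full stream, the SASL_SSL settings above can be smoke-tested by producing a single message with the console producer. A sketch only: the topic name and IP are placeholder assumptions, and it requires live access to the Alibaba Cloud cluster with the KAFKA_OPTS/JAAS export from /etc/profile active in the shell.

```shell
# Hypothetical smoke test: send one message to the target topic on the
# Alibaba Cloud cluster using the SASL_SSL producer config above.
# (kafka-console-producer.sh in 2.2.x takes --broker-list.)
echo "mirror-maker smoke test" | bin/kafka-console-producer.sh \
  --broker-list <alicloud-ip>:9093 \
  --producer.config config/producer.properties \
  --topic ais_aismysql
```

If this fails with an authentication or handshake error, check the JAAS file path in KAFKA_OPTS and the fixed truststore password before blaming MirrorMaker itself.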

Posted by midgar777 on Thu, 17 Oct 2019 21:15:50 -0700