I. Environment Description
1. Server information
172.21.184.43    kafka, zk
172.21.184.44    kafka, zk
172.21.184.45    kafka, zk
172.21.244.7     ansible
2. Software Version Information
System: CentOS Linux release 7.5.1804 (Core)
Kafka: kafka_2.11-2.2.0
ZooKeeper: 3.4.8
Ansible: 2.7.10
II. Configuration preparation
1. Write the playbook-related configuration files; first, use tree to look at the overall directory structure
tree .
.
├── kafka
│   ├── group_vars
│   │   └── kafka
│   ├── hosts
│   ├── kafkainstall.yml
│   └── templates
│       ├── server.properties-1.j2
│       ├── server.properties-2.j2
│       ├── server.properties-3.j2
│       └── server.properties.j2
└── zookeeper
    ├── group_vars
    │   └── zook
    ├── hosts
    ├── templates
    │   └── zoo.cfg.j2
    └── zooKeeperinstall.yml
2. Create the relevant directories
mkdir -p /chj/ansibleplaybook/kafka/group_vars
mkdir /chj/ansibleplaybook/kafka/templates
mkdir -p /chj/ansibleplaybook/zookeeper/group_vars
mkdir /chj/ansibleplaybook/zookeeper/templates
3. Write configuration files for deploying zookeeper
A. zookeeper's group_vars file
vim /chj/ansibleplaybook/zookeeper/group_vars/zook

---
zk01server: 172.21.184.43
zk02server: 172.21.184.44
zk03server: 172.21.184.45
zookeeper_group: work
zookeeper_user: work
zookeeper_dir: /chj/data/zookeeper
zookeeper_appdir: /chj/app/zookeeper
zk01myid: 43
zk02myid: 44
zk03myid: 45
B. zookeeper's templates file
vim /chj/ansibleplaybook/zookeeper/templates/zoo.cfg.j2

tickTime=2000
initLimit=500
syncLimit=20
dataDir={{ zookeeper_dir }}
dataLogDir=/chj/data/log/zookeeper/
clientPort=10311
maxClientCnxns=1000000
server.{{ zk01myid }}={{ zk01server }}:10301:10331
server.{{ zk02myid }}={{ zk02server }}:10302:10332
server.{{ zk03myid }}={{ zk03server }}:10303:10333
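For reference, with the values from group_vars/zook this template renders identically on every node, so each zoo.cfg ends up as:

tickTime=2000
initLimit=500
syncLimit=20
dataDir=/chj/data/zookeeper
dataLogDir=/chj/data/log/zookeeper/
clientPort=10311
maxClientCnxns=1000000
server.43=172.21.184.43:10301:10331
server.44=172.21.184.44:10302:10332
server.45=172.21.184.45:10303:10333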
C. zookeeper's hosts file
vim /chj/ansibleplaybook/zookeeper/hosts

[zook]
172.21.184.43
172.21.184.44
172.21.184.45
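Before running the playbook it is worth confirming that the ansible host (172.21.244.7) can actually reach every node in the [zook] group. A quick ad-hoc ping, assuming passwordless SSH and sudo are already configured, looks like this:

cd /chj/ansibleplaybook/zookeeper/
ansible -i hosts zook -m ping -b
# every host should answer with "ping": "pong"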
D. zookeeper's installation yml file
vim /chj/ansibleplaybook/zookeeper/zooKeeperinstall.yml

---
- hosts: "zook"
  gather_facts: no
  tasks:
    - name: Create zookeeper group
      group:
        name: '{{ zookeeper_group }}'
        state: present
      tags:
        - zookeeper_user
    - name: Create zookeeper user
      user:
        name: '{{ zookeeper_user }}'
        group: '{{ zookeeper_group }}'
        state: present
        createhome: no
      tags:
        - zookeeper_group
    - name: Check whether zookeeper is already installed
      stat:
        path: /chj/app/zookeeper
      register: node_files
    - debug:
        msg: "{{ node_files.stat.exists }}"
    - name: Check whether a Java environment is present
      shell: |
        if [ ! -f "/usr/local/jdk/bin/java" ]; then
          echo "Install JDK"
          curl -o /usr/local/jdk1.8.0_121.tar.gz http://download.pkg.chj.cloud/chj_jdk1.8.0_121.tar.gz
          tar xf /usr/local/jdk1.8.0_121.tar.gz -C /usr/local
          ln -s /usr/local/jdk1.8.0_121 /usr/local/jdk
          ln -s /usr/local/jdk/bin/java /usr/bin/java
        fi
    - name: Download and unpack zookeeper
      unarchive: src=http://ops.chehejia.com:9090/pkg/zookeeper.tar.gz dest=/chj/app/ copy=no
      when: node_files.stat.exists == False
      register: unarchive_msg
    - debug:
        msg: "{{ unarchive_msg }}"
    - name: Create zookeeper data directory and log directory
      shell: if [ ! -d "/chj/data/zookeeper" ] && [ ! -d "/chj/data/log/zookeeper" ]; then echo "Create directory"; mkdir -p /chj/data/{zookeeper,log/zookeeper}; else echo "directory already exists"; fi
    - name: Modify directory permissions
      shell: chown work:work -R /chj/{data,app}
      when: node_files.stat.exists == False
    - name: Configure zk myid
      shell: "hostname -i | cut -d '.' -f 4 | awk '{print $1}' > /chj/data/zookeeper/myid"
    - name: Config zookeeper service
      template:
        src: zoo.cfg.j2
        dest: /chj/app/zookeeper/conf/zoo.cfg
        mode: 0755
    - name: Reload systemd
      command: systemctl daemon-reload
    - name: Restart ZooKeeper service
      shell: sudo su - work -c "/chj/app/zookeeper/console start"
    - name: Status ZooKeeper service
      shell: "sudo su - work -c '/chj/app/zookeeper/console status'"
      register: zookeeper_status_result
      ignore_errors: True
    - debug:
        msg: "{{ zookeeper_status_result }}"
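Note how the "Configure zk myid" task ties back to group_vars/zook: each node's myid is simply the last octet of its IP (assuming hostname -i resolves to the node's 172.21.184.x address), which is why zk01myid/zk02myid/zk03myid are 43, 44 and 45. On 172.21.184.43, for example:

hostname -i | cut -d '.' -f 4 | awk '{print $1}'
# prints 43, which the task writes to /chj/data/zookeeper/myid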
4. Write configuration files for deploying kafka
A. kafka's group_vars file
vim /chj/ansibleplaybook/kafka/group_vars/kafka

---
kafka01: 172.21.184.43
kafka02: 172.21.184.44
kafka03: 172.21.184.45
kafka_group: work
kafka_user: work
log_dir: /chj/data/kafka
brokerid1: 1
brokerid2: 2
brokerid3: 3
zk_addr: 172.21.184.43:10311,172.21.184.44:10311,172.21.184.45:10311/kafka
B. kafka's templates file
vim /chj/ansibleplaybook/kafka/templates/server.properties-1.j2

broker.id={{ brokerid1 }}
# server.properties-2.j2 and server.properties-3.j2 use brokerid2 and brokerid3, respectively
auto.create.topics.enable=false
auto.leader.rebalance.enable=true
broker.rack=/default-rack
compression.type=snappy
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
controller.message.queue.size=10
controller.socket.timeout.ms=30000
default.replication.factor=1
delete.topic.enable=true
fetch.message.max.bytes=10485760
fetch.purgatory.purge.interval.requests=10000
leader.imbalance.check.interval.seconds=300
leader.imbalance.per.broker.percentage=10
host.name={{ kafka01 }}
# server.properties-2.j2 and server.properties-3.j2 use kafka02 and kafka03 here, respectively
listeners=PLAINTEXT://{{ kafka01 }}:9092
log.cleanup.interval.mins=1200
log.dirs={{ log_dir }}
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.bytes=-1
log.retention.hours=168
log.roll.hours=168
log.segment.bytes=1073741824
message.max.bytes=10000000
min.insync.replicas=1
num.io.threads=8
num.network.threads=3
num.partitions=1
num.recovery.threads.per.data.dir=1
num.replica.fetchers=1
offset.metadata.max.bytes=4096
offsets.commit.required.acks=-1
offsets.commit.timeout.ms=5000
offsets.load.buffer.size=5242880
offsets.retention.check.interval.ms=600000
offsets.retention.minutes=86400000
offsets.topic.compression.codec=0
offsets.topic.num.partitions=50
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=1
offsets.topic.segment.bytes=104857600
port=9092
producer.purgatory.purge.interval.requests=10000
queued.max.requests=500
replica.fetch.max.bytes=10485760
replica.fetch.min.bytes=1
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.lag.max.messages=4000
replica.lag.time.max.ms=10000
replica.socket.receive.buffer.bytes=65536
replica.socket.timeout.ms=30000
sasl.enabled.mechanisms=GSSAPI
sasl.mechanism.inter.broker.protocol=GSSAPI
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
zookeeper.connect={{ zk_addr }}
zookeeper.connection.timeout.ms=25000
zookeeper.session.timeout.ms=30000
zookeeper.sync.time.ms=2000
group.initial.rebalance.delay.ms=10000
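The three templates differ only in broker.id and the host/listener address. As a sketch of an alternative (not what the playbook below uses), the server.properties.j2 shown in the directory tree could serve all three brokers from a single file, assuming a hypothetical per-host broker_id variable (set in the inventory or host_vars) and gather_facts: yes:

# hypothetical single-template variant of server.properties.j2
# broker_id would have to be defined per host (inventory or host_vars)
broker.id={{ broker_id }}
host.name={{ ansible_default_ipv4.address }}
listeners=PLAINTEXT://{{ ansible_default_ipv4.address }}:9092
# ... remaining settings identical to server.properties-1.j2 ...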
C. kafka's hosts file
vim /chj/ansibleplaybook/kafka/hosts

[kafka]
172.21.184.43
172.21.184.44
172.21.184.45
D. kafka's installation yml file
vim /chj/ansibleplaybook/kafka/kafkainstall.yml

---
- hosts: "kafka"
  gather_facts: yes
  tasks:
    - name: Obtain eth0 ipv4 address
      debug:
        msg: "{{ ansible_default_ipv4.address }}"
      when: ansible_default_ipv4.alias == "eth0"
    - name: Create kafka group
      group:
        name: '{{ kafka_group }}'
        state: present
      tags:
        - kafka_user
    - name: Create kafka user
      user:
        name: '{{ kafka_user }}'
        group: '{{ kafka_group }}'
        state: present
        createhome: no
      tags:
        - kafka_group
    - name: Check whether kafka is already installed
      stat:
        path: /chj/app/kafka
      register: node_files
    - debug:
        msg: "{{ node_files.stat.exists }}"
    - name: Check whether a Java environment is present
      shell: |
        if [ ! -f "/usr/local/jdk/bin/java" ]; then
          echo "Install JDK"
          curl -o /usr/local/jdk1.8.0_121.tar.gz http://download.pkg.chj.cloud/chj_jdk1.8.0_121.tar.gz
          tar xf /usr/local/jdk1.8.0_121.tar.gz -C /usr/local
          ln -s /usr/local/jdk1.8.0_121 /usr/local/jdk
          ln -s /usr/local/jdk/bin/java /usr/bin/java
        fi
    - name: Download and unpack kafka
      unarchive: src=http://ops.chehejia.com:9090/pkg/kafka.tar.gz dest=/chj/app/ copy=no
      when: node_files.stat.exists == False
      register: unarchive_msg
    - debug:
        msg: "{{ unarchive_msg }}"
    - name: Create kafka data directory and log directory
      shell: if [ ! -d "/chj/data/kafka" ] && [ ! -d "/chj/data/log/kafka" ]; then echo "Create directory"; mkdir -p /chj/data/{kafka,log/kafka}; else echo "directory already exists"; fi
    - name: Modify directory permissions
      shell: chown work:work -R /chj/{data,app}
      when: node_files.stat.exists == False
    - name: Config kafka01 service
      template:
        src: server.properties-1.j2
        dest: /chj/app/kafka/config/server.properties
        mode: 0755
      when: ansible_default_ipv4.address == "172.21.184.43"
    - name: Config kafka02 service
      template:
        src: server.properties-2.j2
        dest: /chj/app/kafka/config/server.properties
        mode: 0755
      when: ansible_default_ipv4.address == "172.21.184.44"
    - name: Config kafka03 service
      template:
        src: server.properties-3.j2
        dest: /chj/app/kafka/config/server.properties
        mode: 0755
      when: ansible_default_ipv4.address == "172.21.184.45"
    - name: Reload systemd
      command: systemctl daemon-reload
    - name: Restart kafka service
      shell: sudo su - work -c "/chj/app/kafka/console start"
    - name: Status kafka service
      shell: "sudo su - work -c '/chj/app/kafka/console status'"
      register: kafka_status_result
      ignore_errors: True
    - debug:
        msg: "{{ kafka_status_result }}"
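Because the three template tasks key off ansible_default_ipv4.address, it can help to confirm which address each node actually reports before running the play; the setup module can dump just that fact:

ansible -i hosts kafka -m setup -a 'filter=ansible_default_ipv4' -b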
PS: The jdk, kafka, and zookeeper binary packages used during installation must be hosted somewhere reachable; replace the download URLs above with addresses accessible from your environment.
III. Deployment
1. Deploy zookeeper cluster first
cd /chj/ansibleplaybook/zookeeper/
ansible-playbook -i hosts zooKeeperinstall.yml -b
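When the play completes, each node can be queried on the clientPort (10311) with ZooKeeper's four-letter commands; assuming nc is available on the ansible host, something like this should report one leader and two followers:

for ip in 172.21.184.43 172.21.184.44 172.21.184.45; do
  echo "== $ip =="
  echo srvr | nc $ip 10311 | grep Mode
done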
2. Deploy the kafka cluster
cd /chj/ansibleplaybook/kafka/
ansible-playbook -i hosts kafkainstall.yml -b
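Once both plays have finished, a simple smoke test is to create and list a throwaway topic from any node. The topic name below is illustrative, and this assumes the standard Kafka scripts are shipped under /chj/app/kafka/bin (kafka-topics.sh in kafka 2.2 accepts --bootstrap-server):

/chj/app/kafka/bin/kafka-topics.sh --bootstrap-server 172.21.184.43:9092 \
  --create --topic ansible-smoke-test --partitions 3 --replication-factor 3
/chj/app/kafka/bin/kafka-topics.sh --bootstrap-server 172.21.184.43:9092 --list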