Main reference: https://www.cnblogs.com/00986014w/p/9561901.html. That post, however, does not use the official ZooKeeper image.
I run a three-node ZooKeeper cluster here; adjust the manifests below if you need a different size.
## Build ZooKeeper cluster
### zookeeper-svc.yaml for the cluster Services
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-cluster-service-1
  name: zookeeper-cluster1
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-cluster-service-1
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-cluster-service-2
  name: zookeeper-cluster2
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-cluster-service-2
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-cluster-service-3
  name: zookeeper-cluster3
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-cluster-service-3
```
Create the three Services with `sudo kubectl create -f zookeeper-svc.yaml`.
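If you want to confirm that the three Services exist (and note their ClusterIPs), a quick check along these lines should work; the names match the manifest above:

```bash
# List the three ZooKeeper Services and their ClusterIPs
sudo kubectl get svc zookeeper-cluster1 zookeeper-cluster2 zookeeper-cluster3
```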
### zookeeper-deployment.yaml for the cluster Deployments
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: zookeeper-cluster-service-1
  name: zookeeper-cluster-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper-cluster-service-1
      name: zookeeper-cluster-1
    spec:
      containers:
      - image: zookeeper
        imagePullPolicy: IfNotPresent
        name: zookeeper-cluster-1
        ports:
        - containerPort: 2181
        env:
        - name: ZOO_MY_ID
          value: "1"
        - name: ZOO_SERVERS
          value: "server.1=0.0.0.0:2888:3888 server.2=zookeeper-cluster2:2888:3888 server.3=zookeeper-cluster3:2888:3888"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: zookeeper-cluster-service-2
  name: zookeeper-cluster-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper-cluster-service-2
      name: zookeeper-cluster-2
    spec:
      containers:
      - image: zookeeper
        imagePullPolicy: IfNotPresent
        name: zookeeper-cluster-2
        ports:
        - containerPort: 2181
        env:
        - name: ZOO_MY_ID
          value: "2"
        - name: ZOO_SERVERS
          value: "server.1=zookeeper-cluster1:2888:3888 server.2=0.0.0.0:2888:3888 server.3=zookeeper-cluster3:2888:3888"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: zookeeper-cluster-service-3
  name: zookeeper-cluster-3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper-cluster-service-3
      name: zookeeper-cluster-3
    spec:
      containers:
      - image: zookeeper
        imagePullPolicy: IfNotPresent
        name: zookeeper-cluster-3
        ports:
        - containerPort: 2181
        env:
        - name: ZOO_MY_ID
          value: "3"
        - name: ZOO_SERVERS
          value: "server.1=zookeeper-cluster1:2888:3888 server.2=zookeeper-cluster2:2888:3888 server.3=0.0.0.0:2888:3888"
```
Create the three Deployments with `sudo kubectl create -f zookeeper-deployment.yaml`.
### Check whether the cluster started successfully
Once all three pods are in Running status, check the logs for errors with `sudo kubectl logs zookeeper-cluster-1-xxxxx`.
Then use `sudo kubectl exec -it zookeeper-cluster-1-676df4686f-c7b6d /bin/bash` (and the same for a second pod) to get a shell in two of the pods, run `zkCli.sh` in each, create a znode in one and read it from the other, and see whether that succeeds.
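As a concrete example, a minimal cross-node check could look like the sketch below; the pod names are just what `kubectl get pods` happened to print in my cluster, so substitute your own:

```bash
# Create a test znode via the first pod
sudo kubectl exec zookeeper-cluster-1-676df4686f-c7b6d -- zkCli.sh create /test "hello"

# Read it back via a different pod; seeing the value proves the ensemble replicates data
sudo kubectl exec zookeeper-cluster-2-xxxxx -- zkCli.sh get /test

# Optionally check the role of each node (you should find one leader and two followers)
sudo kubectl exec zookeeper-cluster-1-676df4686f-c7b6d -- zkServer.sh status
```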
## Build Kafka cluster
### kafka-svc.yaml for the cluster Services
```yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-cluster1
  labels:
    app: kafka-cluster-1
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-cluster-1
    targetPort: 9092
    nodePort: 30091
    protocol: TCP
  selector:
    app: kafka-cluster-1
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-cluster2
  labels:
    app: kafka-cluster-2
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-cluster-2
    targetPort: 9092
    nodePort: 30092
    protocol: TCP
  selector:
    app: kafka-cluster-2
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-cluster3
  labels:
    app: kafka-cluster-3
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-cluster-3
    targetPort: 9092
    nodePort: 30093
    protocol: TCP
  selector:
    app: kafka-cluster-3
```
Create the three Services with `sudo kubectl create -f kafka-svc.yaml`.
### kafka-deployment.yaml for the cluster Deployments
Note: in the env section, set KAFKA_ADVERTISED_HOST_NAME to the ClusterIP of the Kafka Service that fronts each Pod (kafka-cluster1/2/3 above); you can look the addresses up as shown in the snippet below.
PS: if your ZooKeeper Services are named differently from the ones defined earlier, adjust KAFKA_ZOOKEEPER_CONNECT accordingly.
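One way to look up those ClusterIPs (service names as defined in kafka-svc.yaml above) is:

```bash
# Print the ClusterIP of each Kafka Service; paste these values into the manifest below
sudo kubectl get svc kafka-cluster1 -o jsonpath='{.spec.clusterIP}'; echo
sudo kubectl get svc kafka-cluster2 -o jsonpath='{.spec.clusterIP}'; echo
sudo kubectl get svc kafka-cluster3 -o jsonpath='{.spec.clusterIP}'; echo
```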
```yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-cluster-1
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-cluster-1
  template:
    metadata:
      labels:
        name: kafka-cluster-1
        app: kafka-cluster-1
    spec:
      containers:
      - name: kafka-cluster-1
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: "[ClusterIP of kafka-cluster1]"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper-cluster1:2181,zookeeper-cluster2:2181,zookeeper-cluster3:2181
        - name: KAFKA_BROKER_ID
          value: "1"
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-cluster-2
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-cluster-2
  template:
    metadata:
      labels:
        name: kafka-cluster-2
        app: kafka-cluster-2
    spec:
      containers:
      - name: kafka-cluster-2
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: "[ClusterIP of kafka-cluster2]"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper-cluster1:2181,zookeeper-cluster2:2181,zookeeper-cluster3:2181
        - name: KAFKA_BROKER_ID
          value: "2"
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-cluster-3
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-cluster-3
  template:
    metadata:
      labels:
        name: kafka-cluster-3
        app: kafka-cluster-3
    spec:
      containers:
      - name: kafka-cluster-3
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: "[ClusterIP of kafka-cluster3]"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper-cluster1:2181,zookeeper-cluster2:2181,zookeeper-cluster3:2181
        - name: KAFKA_BROKER_ID
          value: "3"
```
Create the three Deployments with `sudo kubectl create -f kafka-deployment.yaml`.
### Check whether the cluster started successfully
Once all three pods are in Running status, check the logs for errors with `sudo kubectl logs kafka-cluster-1-xxxxx`.
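You can also verify from the ZooKeeper side that all three brokers registered; with a standard Kafka-on-ZooKeeper setup the broker IDs should be listed under /brokers/ids (pod name is illustrative):

```bash
# List the registered Kafka broker IDs from any ZooKeeper pod; expect [1, 2, 3]
sudo kubectl exec zookeeper-cluster-1-676df4686f-c7b6d -- zkCli.sh ls /brokers/ids
```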
Then enter a Kafka pod with `sudo kubectl exec -it kafka-cluster-1-558747bc7d-5n94p /bin/bash` and run `kafka-console-producer.sh --broker-list [ClusterIP of kafka-cluster1]:9092 --topic test` to create the topic test and start a producer.
Enter another pod with `sudo kubectl exec -it kafka-cluster-2-66c88f759b-8wlvp /bin/bash` and run `kafka-console-consumer.sh --bootstrap-server [ClusterIP of kafka-cluster2]:9092 --topic test --from-beginning` to consume messages from topic test.
Then send a few messages from the producer in kafka-cluster-1 and check whether the consumer in kafka-cluster-2 receives them.
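Putting the two steps together, the round trip might look like the sketch below; the pod names and the bracketed ClusterIPs are placeholders for your own values:

```bash
# Terminal 1: producer on broker 1; each line you type is sent to topic "test"
sudo kubectl exec -it kafka-cluster-1-558747bc7d-5n94p -- \
  kafka-console-producer.sh --broker-list [ClusterIP of kafka-cluster1]:9092 --topic test

# Terminal 2: consumer on broker 2; messages typed in terminal 1 should show up here
sudo kubectl exec -it kafka-cluster-2-66c88f759b-8wlvp -- \
  kafka-console-consumer.sh --bootstrap-server [ClusterIP of kafka-cluster2]:9092 \
  --topic test --from-beginning
```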