WARN [Consumer clientId=consumer-1, groupId=console-consumer-55928] 1 partitions have leader brokers without a matching listener
I. Problem description
Colleagues reported that in our three-node Kafka cluster, when one of the servers goes down, the business is affected: messages can no longer be produced or consumed. The program reports the following error:
WARN [Consumer clientId=consumer-1, groupId=console-consumer-55928] 1 partitions have leader brokers without a matching listener, including [baidd- ...
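One possible cause of this warning (the truncated log above does not confirm it) is a listener-name mismatch between brokers: after a leader moves to a surviving broker, clients look up a listener name that broker never advertised. Below is a sketch of the relevant `server.properties` entries, assuming a plain PLAINTEXT setup; the host name is a placeholder, not taken from the article:

```properties
# Every broker in the cluster should expose the same listener name;
# otherwise a failover can elect a leader whose listener clients cannot match.
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://node1:9092
inter.broker.listener.name=PLAINTEXT
```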
Posted by jrtaylor on Tue, 28 Sep 2021 13:55:51 -0700
Kafka (Go) tutorial -- source code analysis of the sarama client producer
From: https://www.lixueduan.com
Original text: https://www.lixueduan.com/post/kafka/06-sarama-producer/
This article analyzes the implementation of the producer in the Kafka Go client sarama through its source code, covering the message-dispatch flow, message packaging, and finally sending to Kaf ...
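To illustrate the message-distribution step the article analyzes, here is a minimal Python sketch of a hash partitioner in the style of sarama's default (FNV-1a over the message key, modulo the partition count). This is illustrative code under those assumptions, not sarama's source:

```python
def fnv1a_32(data: bytes) -> int:
    """32-bit FNV-1a hash, the algorithm sarama's default partitioner uses."""
    h = 0x811C9DC5  # FNV offset basis
    for b in data:
        h ^= b
        h = (h * 0x01000193) & 0xFFFFFFFF  # multiply by FNV prime, wrap to 32 bits
    return h

def choose_partition(key: bytes, num_partitions: int) -> int:
    """Map a message key to a partition index in [0, num_partitions)."""
    h = fnv1a_32(key)
    # Interpret the 32-bit hash as signed, mirroring Go's int32 cast;
    # Python's % then yields a non-negative result for a positive modulus.
    if h >= 0x80000000:
        h -= 0x100000000
    return h % num_partitions

# The same key always lands on the same partition:
p = choose_partition(b"user-123", 6)
```

Keyed messages therefore preserve per-key ordering, since all messages with one key go to one partition.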
Posted by daz1034 on Mon, 20 Sep 2021 10:19:13 -0700
Spring cloud: stream message driven
springcloud
Stream message driven
Message driven overview
What is Spring Cloud Stream: officially, Spring Cloud Stream is a framework for building message-driven microservices.
The application interacts with the binder object in Spring Cloud Stream through inputs and outputs. We configure bindings, and the binder object of Spring Clou ...
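As a concrete, hypothetical example of such a binding, an `application.yml` fragment for the Kafka binder might look like the sketch below; the channel name, topic name, and broker address are placeholders, not values from the article:

```yaml
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092   # placeholder broker address
      bindings:
        output:                     # example channel name
          destination: demo-topic   # placeholder topic
```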
Posted by grail on Fri, 10 Sep 2021 18:05:11 -0700
Flink SQL custom functions (UDF) for dimension-table conversion
preface
Relationship between Table and SQL: SQL builds on and encapsulates the Table API (in Flink's concepts this shows up as StreamTableEnvironment inheriting from TableEnvironment). Therefore, what the official documents describe for tables also carries over to SQL, such as user-defined functions, Table API & custom fun ...
Posted by scofansnags on Tue, 07 Sep 2021 19:38:53 -0700
Kafka Flink Hive integration: principle and hands-on code
Hello, I'm brother Tu.
At present, I work as a big data algorithm engineer at a large Internet company.
Today, a fan messaged the group asking whether there is any material on Flink DDL, saying that he is a new learner.
To help this fan quickly learn how to use Flink DDL, I will explain the principle and attach the actu ...
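As a taste of what such DDL looks like, here is a hypothetical sketch of declaring a Kafka-backed source table in Flink SQL; the table name, column names, topic, and broker address are placeholders, not taken from the article:

```sql
-- Declare a Kafka topic as a streaming source table (all names are placeholders).
CREATE TABLE user_events (
  user_id STRING,
  event_time TIMESTAMP(3)
) WITH (
  'connector' = 'kafka',
  'topic' = 'user_events',
  'properties.bootstrap.servers' = 'node1:9092',
  'format' = 'json'
);
```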
Posted by amazinggrace1983 on Tue, 07 Sep 2021 16:02:28 -0700
Source code analysis of Kafka producer and consumer partition strategy
When reading other blogs, I found two views in the analysis of the defects of the Kafka consumer's RoundRobin strategy. One holds that if the consumers in a group subscribe to different topics, or run different numbers of consumer threads, the number of partitions each consumer gets will be skewed; the other view is that the dis ...
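The first view can be demonstrated with a simplified Python model of a round-robin assignor (an assumption-level sketch, not the actual Kafka client code): walk all topic-partitions in sorted order and hand each one to the next consumer in the circle that subscribes to its topic.

```python
from itertools import cycle

def round_robin_assign(subscriptions, partitions_per_topic):
    """subscriptions: consumer -> set of topics; returns consumer -> [(topic, partition)]."""
    consumers = sorted(subscriptions)
    assignment = {c: [] for c in consumers}
    all_partitions = sorted(
        (topic, p)
        for topic, n in partitions_per_topic.items()
        for p in range(n)
    )
    ring = cycle(consumers)
    for topic, part in all_partitions:
        # Advance the ring until we reach a consumer subscribed to this topic.
        # (Sketch assumes every topic has at least one subscriber.)
        c = next(ring)
        while topic not in subscriptions[c]:
            c = next(ring)
        assignment[c].append((topic, part))
    return assignment

# Uneven subscriptions produce uneven partition counts:
skewed = round_robin_assign(
    {"c0": {"t0"}, "c1": {"t0", "t1"}},
    {"t0": 3, "t1": 3},
)
```

With identical subscriptions the same procedure yields a perfectly balanced assignment, which is why the skew only appears when subscriptions or thread counts differ within the group.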
Posted by Joeker on Thu, 02 Sep 2021 15:18:30 -0700
[cluster building topic] - Kafka cluster building
Kafka -- cluster building
Service discovery: Kafka brokers discover each other through Zookeeper, so Zookeeper must be set up in advance. To build a Zookeeper cluster, see the blog's article: Building a Zookeeper cluster; Service relat ...
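The per-broker settings that matter for a three-node cluster can be sketched as below; host names and the chroot path are placeholders, not values from the article. Each broker gets a unique `broker.id`, and all of them point at the same Zookeeper ensemble:

```properties
# Broker 1 of 3 (repeat with broker.id=2 and broker.id=3 on the other hosts).
broker.id=1
listeners=PLAINTEXT://node1:9092
zookeeper.connect=node1:2181,node2:2181,node3:2181/kafka
```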
Posted by Bauer418 on Mon, 29 Jun 2020 20:50:55 -0700
Big data Hadoop cluster construction
1. Environment
Server configuration:
CPU model: Intel® Xeon® CPU E5-2620 v4 @ 2.10GHz
CPU cores: 16
Memory: 64GB
Operating system: CentOS Linux release 7.5.1804 (Core)
Host list:
IP              Host name
192.168.1.101   node1
192.168.1.102   node2
1 ...
Posted by SL-Cowsrule on Sun, 21 Jun 2020 17:57:54 -0700
Flink stream processing API code in detail, including a variety of Source, Transform, and Sink cases; Flink learning
Hello, everyone, I am Later. I share the bits and pieces of my learning and work here, and I hope some of my articles have a chance to help you. All articles are first published on my official account; welcome to follow "later X big data", and thank you for your support and recognition.
It's another week without change. I went back to Y ...
Posted by GamingWarrior on Tue, 16 Jun 2020 20:35:17 -0700
Spark Structured Streaming: creating streaming DataFrames and streaming Datasets
Create Streaming DataFrame and Streaming Datasets
Streaming DataFrames are created through the DataStreamReader interface (Scala/Java/Python documentation) returned by SparkSession.readStream().
Input Sources
Common built-in Sources
File source: Reads files from a specified directory as st ...
Posted by IwnfuM on Sun, 07 Jun 2020 18:13:27 -0700