There are several ways to install Kafka:
1. Standalone mode (on either Windows or Linux);
2. Pseudo-distributed mode;
3. Distributed (cluster) mode;
For detailed setup steps for each mode, see: https://blog.csdn.net/xlgen157387/article/details/77312569?utm_source=blogxgwz0
The following walks through standalone mode on Linux.
1. Installing ZooKeeper
Download zookeeper-3.4.9.tar.gz;
Decompress it: tar -zxvf zookeeper-3.4.9.tar.gz
Enter the zookeeper-3.4.9/conf directory and create a zoo.cfg file with the following contents:
tickTime=2000
# fill in your own data and log directories below
dataDir=/usr/myenv/zookeeper-3.4.9/data
dataLogDir=/usr/myenv/zookeeper-3.4.9/logs
clientPort=2181
Start ZooKeeper:
./yourZookeeperDir/bin/zkServer.sh start
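Before moving on, it is worth confirming that ZooKeeper actually came up. A quick check might look like the following (assuming the default client port 2181 from zoo.cfg above, and that `nc` is available on the machine):

```shell
# Ask ZooKeeper for its status via the same startup script
./yourZookeeperDir/bin/zkServer.sh status

# Or send the "ruok" four-letter command; a healthy server answers "imok"
echo ruok | nc 127.0.0.1 2181
```

If `zkServer.sh status` reports "standalone", the server is running and Kafka can connect to it.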
2. Installing Kafka
Download Kafka: http://kafka.apache.org/downloads;
Decompress Kafka: tar -zxvf kafka_2.10-0.8.2.1.tgz
In the config/server.properties file, adjust the zookeeper.connect host setting if necessary. Since ZooKeeper was started locally on the default port, no change is needed here.
Start Kafka:
./yourKafkaDir/bin/kafka-server-start.sh /yourKafkaDir/config/server.properties
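With the broker running, the topic used by the Java demo below can be created and checked using the command-line tools shipped in the Kafka distribution. The commands below are a sketch for this 0.8.x release (which still addresses topics through ZooKeeper via the --zookeeper flag); the topic name matches the TEST-TOPIC constant in the demo:

```shell
# Create the topic the Java demo uses (single partition, no replication)
./yourKafkaDir/bin/kafka-topics.sh --create --zookeeper localhost:2181 \
    --replication-factor 1 --partitions 1 --topic TEST-TOPIC

# Verify the topic now exists
./yourKafkaDir/bin/kafka-topics.sh --list --zookeeper localhost:2181
```

For an end-to-end smoke test without any Java code, kafka-console-producer.sh --broker-list localhost:9092 --topic TEST-TOPIC in one terminal and kafka-console-consumer.sh --zookeeper localhost:2181 --topic TEST-TOPIC --from-beginning in another will echo typed messages across the broker.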
3. Kafka Java application demo
Kafka producer:
package kafkaTest;

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class KafkaProducer {

    private final Producer<String, String> producer;
    public final static String TOPIC = "TEST-TOPIC";

    public KafkaProducer() {
        Properties props = new Properties();
        // Broker list; for a cluster, separate the entries with commas.
        props.put("metadata.broker.list", "192.168.1.103:9092");
        // Serializer class for message values
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // Serializer class for message keys
        props.put("key.serializer.class", "kafka.serializer.StringEncoder");
        // -1: wait for acknowledgement from all in-sync replicas
        props.put("request.required.acks", "-1");
        producer = new Producer<String, String>(new ProducerConfig(props));
    }

    public void produce() {
        int messageNo = 1000;
        final int COUNT = 10000;
        while (messageNo < COUNT) {
            String key = String.valueOf(messageNo);
            String data = "@@@@@hello kafka message" + key;
            producer.send(new KeyedMessage<String, String>(TOPIC, key, data));
            System.out.println(data);
            messageNo++;
        }
    }

    public static void main(String[] args) {
        new KafkaProducer().produce();
    }
}
Kafka consumer:
package kafkaTest;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.serializer.StringDecoder;
import kafka.utils.VerifiableProperties;

public class KafkaConsumer {

    private final ConsumerConnector consumer;

    public KafkaConsumer() {
        Properties props = new Properties();
        // ZooKeeper connection string
        props.put("zookeeper.connect", "127.0.0.1:2181");
        // Consumer group this consumer belongs to
        props.put("group.id", "test-group");
        // ZooKeeper session timeout
        props.put("zookeeper.session.timeout.ms", "4000");
        props.put("zookeeper.sync.time.ms", "200");
        props.put("auto.commit.interval.ms", "1000");
        // Start from the earliest offset when no committed offset exists
        props.put("auto.offset.reset", "smallest");
        // Serializer class
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        consumer = Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
    }

    public void consume() {
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(KafkaProducer.TOPIC, new Integer(1));
        StringDecoder keyDecoder = new StringDecoder(new VerifiableProperties());
        StringDecoder valueDecoder = new StringDecoder(new VerifiableProperties());
        Map<String, List<KafkaStream<String, String>>> consumerMap =
                consumer.createMessageStreams(topicCountMap, keyDecoder, valueDecoder);
        KafkaStream<String, String> stream = consumerMap.get(KafkaProducer.TOPIC).get(0);
        ConsumerIterator<String, String> it = stream.iterator();
        while (it.hasNext()) {
            System.out.println(it.next().message());
        }
    }

    public static void main(String[] args) {
        new KafkaConsumer().consume();
    }
}
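To try the two classes outside an IDE, the jars under the Kafka distribution's libs/ directory need to be on the classpath. A rough sketch, with /yourKafkaDir standing in for your own install location and the sources laid out under a kafkaTest/ package directory:

```shell
# Compile both classes against the jars shipped with Kafka
javac -cp "/yourKafkaDir/libs/*" kafkaTest/KafkaProducer.java kafkaTest/KafkaConsumer.java

# Start the consumer first in one terminal...
java -cp "/yourKafkaDir/libs/*:." kafkaTest.KafkaConsumer

# ...then run the producer in another; the consumer prints each message it receives
java -cp "/yourKafkaDir/libs/*:." kafkaTest.KafkaProducer
```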
Reference articles:
https://www.jianshu.com/p/0e378e51b442
https://blog.csdn.net/xlgen157387/article/details/77312569?utm_source=blogxgwz0
https://blog.csdn.net/Evankaka/article/details/52494412?utm_source=blogxgwz0
https://blog.csdn.net/evankaka/article/details/52421314