This article introduces how to use Spring Boot with Kafka. Many people run into problems with this setup in practice, so the following walks through it step by step; hopefully you can follow along and get something out of it.
Kafka cluster installation, configuration and startup
Kafka depends on ZooKeeper, and a ZooKeeper ensemble needs at least 3 nodes to provide high availability for the cluster. The following creates a 3-node Kafka pseudo-cluster on a single Linux machine.
1. Download the package
Download address: http://kafka.apache.org/downloads
2. Decompress the package
tar -zxvf kafka_2.11-1.0.0.tgz
cp -r kafka_2.11-1.0.0 kafka1
cp -r kafka_2.11-1.0.0 kafka2
cp -r kafka_2.11-1.0.0 kafka3
3. Create a ZK cluster
Modify the ZK configuration file kafka1-3/config/zookeeper.properties, adjusting the corresponding parameters in each instance:
dataDir=/usr/local/kafka/zookeeper1
dataLogDir=/usr/local/kafka/zookeeper/log
clientPort=2181
maxClientCnxns=0
tickTime=2000
initLimit=100
syncLimit=5
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:4888:5888
server.3=127.0.0.1:6888:7888
For the second and third instances, use dataDir=/usr/local/kafka/zookeeper2 and zookeeper3 with clientPort=2182 and 2183 respectively.
Create a myid file under each of the /usr/local/kafka/zookeeper1-3 directories, with contents 1, 2 and 3 respectively.
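For example, the data directories and myid files could be created like this (a sketch, assuming the dataDir paths configured above):
mkdir -p /usr/local/kafka/zookeeper1 /usr/local/kafka/zookeeper2 /usr/local/kafka/zookeeper3
echo 1 > /usr/local/kafka/zookeeper1/myid
echo 2 > /usr/local/kafka/zookeeper2/myid
echo 3 > /usr/local/kafka/zookeeper3/myid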
Start ZK from each of the kafka1-3 directories:
bin/zookeeper-server-start.sh config/zookeeper.properties &
If startup fails because a data or log directory cannot be written, create the directories manually and grant the corresponding permissions.
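For example (a sketch, assuming the dataLogDir path configured above and that ZK runs as the current user):
mkdir -p /usr/local/kafka/zookeeper/log
chown -R $(whoami) /usr/local/kafka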
4. Create a Kafka cluster
Modify the configuration file kafka1-3/config/server.properties, adjusting the corresponding parameters in each instance:
broker.id=1
zookeeper.connect=localhost:2181,localhost:2182,localhost:2183
listeners=PLAINTEXT://192.168.12.11:9091
log.dirs=/tmp/kafka-logs-1
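The other two instances follow the same pattern, changing only the broker id, listener port, and log directory while keeping zookeeper.connect the same; a sketch (the ports match the 9091-9093 listeners used in the tests below):
# kafka2/config/server.properties
broker.id=2
listeners=PLAINTEXT://192.168.12.11:9092
log.dirs=/tmp/kafka-logs-2
# kafka3/config/server.properties
broker.id=3
listeners=PLAINTEXT://192.168.12.11:9093
log.dirs=/tmp/kafka-logs-3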
Start Kafka from each of the kafka1-3 directories:
bin/kafka-server-start.sh config/server.properties &
As with ZooKeeper, if startup fails because the log directory cannot be written, create it manually and grant the corresponding permissions.
5. Cluster testing
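With Kafka 1.0 the console producer can auto-create the topic, but it can also be created explicitly first; a sketch, run from any of the kafka directories (not part of the original steps):
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic test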
Send a message on kafka1:
bin/kafka-console-producer.sh --broker-list localhost:9091 --topic test
Consume messages in kafka2 and kafka3:
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic test
Spring Boot integration with Kafka in practice
1. Add spring-kafka dependencies
<properties>
    <spring-kafka.version>2.1.0.RELEASE</spring-kafka.version>
</properties>

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>${spring-kafka.version}</version>
</dependency>
2. Add automatic configuration of Spring Boot
Automatic configuration class:
org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration
Configuration properties class:
org.springframework.boot.autoconfigure.kafka.KafkaProperties
Example application.yml:
spring:
  kafka:
    bootstrap-servers:
      - 192.168.101.137:9091
      - 192.168.101.137:9092
      - 192.168.101.137:9093
    producer:
      retries: 0
      batch-size: 16384
      buffer-memory: 33554432
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer:
      group-id: foo
      auto-offset-reset: earliest
      enable-auto-commit: true
      auto-commit-interval: 100
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
3. Send messages
@Autowired
private KafkaTemplate kafkaTemplate;

@GetMapping("/send")
public Object send(String msg) {
    kafkaTemplate.send("test", "name", msg);
    return "send ok";
}
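kafkaTemplate.send returns a ListenableFuture, so the outcome of the send can also be checked asynchronously. A minimal sketch (the controller name, /sendWithCallback endpoint and log messages are illustrative, not part of the original example):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class KafkaSendController {

    private static final Logger logger = LoggerFactory.getLogger(KafkaSendController.class);

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    // Send and log the broker-assigned partition/offset, or the failure cause.
    @GetMapping("/sendWithCallback")
    public Object sendWithCallback(String msg) {
        ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send("test", "name", msg);
        future.addCallback(
                result -> logger.info("sent to partition {} offset {}",
                        result.getRecordMetadata().partition(),
                        result.getRecordMetadata().offset()),
                ex -> logger.error("send failed", ex));
        return "send ok";
    }
}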
4. Receive messages
Add @KafkaListener to a method in any Spring bean to receive messages.
@KafkaListener(topics = "test")
public void processMessage(String content) {
    logger.info("received message, topic:test, msg: {}", content);
}
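The listener method can also take the full ConsumerRecord when the key, partition, or offset is needed in addition to the value. A minimal sketch (the class name and log message are illustrative, not from the original article):
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class KafkaMessageListener {

    private static final Logger logger = LoggerFactory.getLogger(KafkaMessageListener.class);

    // Receives the whole record, so the key, partition and offset are available too.
    @KafkaListener(topics = "test")
    public void processRecord(ConsumerRecord<String, String> record) {
        logger.info("received key: {}, partition: {}, offset: {}, value: {}",
                record.key(), record.partition(), record.offset(), record.value());
    }
}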
That's all for "how to use Spring Boot Kafka". Thank you for reading, and hopefully you got something practical out of it.