How to integrate Kafka components in SpringBoot2

2025-01-19 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article introduces how to integrate the Kafka component into a SpringBoot2 project. The walkthrough is detailed and easy to follow, and the steps are simple and quick; it should serve as a useful reference after reading.

1. Set up the Kafka environment

1. Download and extract

```shell
# download
wget http://mirror.bit.edu.cn/apache/kafka/2.2.0/kafka_2.11-2.2.0.tgz
# extract
tar -zxvf kafka_2.11-2.2.0.tgz
# rename
mv kafka_2.11-2.2.0 kafka2.11
```

2. Start the Kafka service

Kafka relies on the ZooKeeper service, which must be installed and started locally.

Reference article: building the ZooKeeper 3.4 middleware on a Linux system, with a summary of common commands.

```shell
# execution location: /usr/local/mysoft/kafka2.11
bin/kafka-server-start.sh config/server.properties
```

3. View the service

```shell
ps -aux | grep kafka
```

4. Open the address and port

```shell
# base path: /usr/local/mysoft/kafka2.11/config
vim server.properties
```

Add the following configuration:

```
advertised.listeners=PLAINTEXT://192.168.72.130:9092
```

2. Kafka basic concepts

1. Basic description

Kafka is an open-source Apache project: a distributed, partitioned, multi-replica, multi-subscriber stream-processing platform that relies on ZooKeeper for coordination and is written in Scala and Java. It is typically used to collect the action log data that users generate in application services and to process that data at high speed. Log data demands high throughput, and Kafka is a practical solution both for feeding log data into offline analysis systems such as Hadoop and for workloads with real-time processing constraints. Kafka aims to unify online and offline message processing through Hadoop's parallel loading mechanism, and to provide real-time message delivery across a cluster.

2. Functional features

(1) Message persistence is provided through an on-disk data structure, so message storage can be maintained long-term.

(2) High throughput: even on very ordinary hardware, Kafka can sustain very high message rates per second.

(3) Supports partitioning messages across Kafka servers and distributing consumption over a cluster of consumer machines.

(4) Supports parallel data loading into Hadoop.

(5) The API is well packaged, easy to pick up, and quick to use.

(6) Acts as a distributed message queue. Kafka classifies stored messages by Topic; a message sender is called a Producer, and a message receiver is called a Consumer.

3. Messaging models

Point-to-point mode

The point-to-point model is usually a pull- or polling-based message delivery model: consumers actively pull data and remove each message from the queue once it is received. The queue does not push messages to clients; clients request them. Its defining characteristic is that each message sent to the queue is received and processed by one and only one consumer, even when multiple consumers are listening on the queue.
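The behavior described above can be sketched with a plain in-memory queue. This is a minimal illustration of the point-to-point model, not Kafka itself; the class and method names are invented for the example. Two consumers pull from the same queue, and because polling removes the message, each message is delivered to exactly one of them.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PointToPointDemo {

    // Drain the shared queue into two consumers, alternating turns.
    // poll() removes the message, so no message is delivered twice.
    static List<List<String>> drain(BlockingQueue<String> queue) {
        List<String> consumerA = new ArrayList<>();
        List<String> consumerB = new ArrayList<>();
        String msg;
        boolean turnA = true;
        while ((msg = queue.poll()) != null) {
            (turnA ? consumerA : consumerB).add(msg);
            turnA = !turnA;
        }
        return Arrays.asList(consumerA, consumerB);
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);
        for (int i = 0; i < 6; i++) {
            queue.put("msg-" + i);          // producer side
        }
        List<List<String>> result = drain(queue);
        // Every message landed on exactly one consumer; the queue is now empty.
        System.out.println("consumer A: " + result.get(0));
        System.out.println("consumer B: " + result.get(1));
    }
}
```

Note how the queue, not the consumer count, bounds delivery: adding a third consumer would not duplicate any message, only spread the same messages thinner.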

Publish and subscribe model

The publish-subscribe model is a push-based messaging model: once a message is produced, it is pushed to all subscribers. A topic can have many different subscribers. A temporary subscriber receives messages only while it is actively listening to the topic, whereas a durable subscriber receives every message on the topic, including messages published while it was unavailable or offline.
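The contrast with the point-to-point model can be seen in a few lines of in-memory code. This is a sketch of the publish-subscribe idea only, not Kafka's implementation; the `PubSubDemo` class is invented for the example. Every registered subscriber receives every published message.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class PubSubDemo {

    private final List<Consumer<String>> subscribers = new ArrayList<>();

    void subscribe(Consumer<String> subscriber) {
        subscribers.add(subscriber);
    }

    // Push-based delivery: each published message goes to all current subscribers.
    void publish(String msg) {
        for (Consumer<String> s : subscribers) {
            s.accept(msg);
        }
    }

    public static void main(String[] args) {
        PubSubDemo topic = new PubSubDemo();
        List<String> subA = new ArrayList<>();
        List<String> subB = new ArrayList<>();
        topic.subscribe(subA::add);
        topic.subscribe(subB::add);
        topic.publish("hello");
        topic.publish("world");
        System.out.println("subscriber A: " + subA); // both subscribers hold both messages
        System.out.println("subscriber B: " + subB);
    }
}
```

A "temporary" subscriber in this sketch is simply one that subscribes late: it misses earlier messages, which is what durable subscriptions (and Kafka's persisted log plus consumer offsets) are designed to avoid.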

4. Benefits of message queuing

Decoupling: producers and consumers are independent of each other and execute asynchronously.

Persistence: message data is persisted until it is fully consumed, avoiding the risk of data loss.

Peak shaving: the message queue absorbs bursts of traffic, protecting programs from avalanche failures as much as possible.

Fault isolation: coupling between processes is reduced, so when some components of the system crash, the whole system is not affected.

Ordering: messages can be processed in sequence, satisfying the business requirements of specific scenarios.

5. Key terminology

Broker

A Kafka server is a broker. A cluster consists of multiple brokers, and one broker can host multiple topics.

Producer

The message producer: a client that sends messages to a Kafka broker.

Consumer

The message consumer: a client that fetches messages from a Kafka broker.

Topic

Every message published to a Kafka cluster belongs to a category called a Topic, which can be understood as a queue.

Consumer Group

Each Consumer belongs to a specific Consumer Group. You can specify a group name for each Consumer; if no group name is specified, the Consumer belongs to the default group.

Partition

A large topic can be distributed across multiple brokers: a topic is divided into multiple partitions, and each partition is an ordered queue. Every message in a partition is assigned an ordered id. Kafka only guarantees that messages within a single partition are delivered to the consumer in order, not the overall order across the topic. A partition is a physical concept that makes it easy to scale within a cluster and improves concurrency.
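The per-partition ordering guarantee can be illustrated with a toy partitioner. This is an assumption-laden sketch, not Kafka's actual partitioner (which hashes the serialized key with murmur2); the `partitionFor` and `assign` names are invented for the example. The point it demonstrates is real: records with the same key always land in the same partition, so their relative order is preserved there, while no order holds across partitions.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PartitionDemo {

    // Toy stand-in for a key-based partitioner: same key -> same partition.
    static int partitionFor(String key, int numPartitions) {
        return Math.abs(key.hashCode() % numPartitions);
    }

    // Assign (key, value) records to partitions, preserving send order within each partition.
    static Map<Integer, List<String[]>> assign(String[][] records, int numPartitions) {
        Map<Integer, List<String[]>> partitions = new HashMap<>();
        for (String[] r : records) {
            int p = partitionFor(r[0], numPartitions);
            partitions.computeIfAbsent(p, k -> new ArrayList<>()).add(r);
        }
        return partitions;
    }

    public static void main(String[] args) {
        String[][] records = {
            {"user-1", "login"}, {"user-2", "view"},
            {"user-1", "click"}, {"user-1", "logout"}
        };
        Map<Integer, List<String[]>> partitions = assign(records, 3);
        // All of user-1's events sit in one partition, in the order they were sent;
        // ordering relative to user-2's events is not guaranteed.
        for (String[] r : partitions.get(partitionFor("user-1", 3))) {
            System.out.println(r[0] + " -> " + r[1]);
        }
    }
}
```

This is why choosing a meaningful message key (for example, a user id) matters in practice: it turns Kafka's per-partition guarantee into a per-key ordering guarantee.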

3. Integrate the SpringBoot2 framework

1. Case structure

Message producer: kafka-producer-server

Message consumer: kafka-consumer-server

2. Basic dependencies

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.2.4.RELEASE</version>
</dependency>
```

3. Producer configuration

```yaml
spring:
  kafka:
    bootstrap-servers: 127.0.0.1:9092
```

4. Message production

```java
@RestController
public class ProducerWeb {

    @Resource
    private KafkaTemplate<String, String> kafkaTemplate;

    @RequestMapping("/send")
    public String sendMsg() {
        MsgLog msgLog = new MsgLog(1, "message generation", 1, "message log", new Date());
        String msg = JSON.toJSONString(msgLog);
        // If the Topic does not exist, it is created automatically
        kafkaTemplate.send("cicada-topic", msg);
        return msg;
    }
}
```

5. Consumer configuration

```yaml
spring:
  kafka:
    bootstrap-servers: 127.0.0.1:9092
    consumer:
      group-id: test-consumer-group
```

6. Message consumption

```java
@Component
public class ConsumerMsg {

    private static Logger LOGGER = LoggerFactory.getLogger(ConsumerMsg.class);

    @KafkaListener(topics = "cicada-topic")
    public void listenMsg(ConsumerRecord<?, String> record) {
        // The original listing is truncated here; logging the received value is a typical body
        LOGGER.info("ConsumerMsg: {}", record.value());
    }
}
```

With both services running, requesting the producer's /send endpoint publishes a message to cicada-topic, and the consumer's listener picks it up.

© 2024 shulou.com SLNews company. All rights reserved.
