

How to build a Kafka+SpringBoot distributed messaging system from scratch


In this issue, the editor brings you a walkthrough of how to build a Kafka+SpringBoot distributed messaging system from scratch. The article is rich in content and analyzes the topic from a professional point of view; I hope you get something out of it.

Preface

Kafka depends heavily on zookeeper, so a zookeeper cluster has to be built first. And since zookeeper is written in Java and runs on the JVM, a Java environment comes before that. (ps: this tutorial assumes your centos system can already reach the Internet, so it does not cover configuring the ip.) (ps2: if wget is missing, install it with: yum install wget) (ps3: folks, stay organized. Put a bit here and a bit there and in a while you will not know where anything is. Everything in this tutorial is downloaded into /usr/local.) (ps4: kafka ships with a built-in zookeeper, so it feels like the zookeeper part could be skipped, but it is configured separately here; I have not tried the built-in one.)
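As an aside to ps4: the zookeeper bundled with kafka would be started from the kafka root directory with the stock script below (untested in this tutorial, as noted above):

bin/zookeeper-server-start.sh config/zookeeper.properties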

1. Configure jdk

Oracle does not allow the JDK to be downloaded from its official website directly via wget, so fetching the link below with wget would only give you a 5 KB web page, not the JDK package you need. Monopoly does as monopoly pleases. (Run java -version first to check whether a JDK is already installed; mine did not come with one.)

1. Download from the official website

The following is the official download address of jdk8:

https://www.oracle.com/technetwork/java/javase/downloads/java-archive-javase8u211-later-5573849.html

2. Upload and decompress

Here, the package is uploaded via Xftp to the designated location on the server: /usr/local
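The unpack and environment-variable steps in between are not spelled out; a minimal sketch, assuming the archive is jdk-8u211-linux-x64.tar.gz and unpacks to /usr/local/jdk1.8.0_211 (both names are assumptions that depend on the exact build you downloaded):

cd /usr/local
tar -zxvf jdk-8u211-linux-x64.tar.gz
vim /etc/profile    # append the two export lines below to the end
export JAVA_HOME=/usr/local/jdk1.8.0_211
export PATH=$JAVA_HOME/bin:$PATH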

Run the following command to make the environment take effect:

source /etc/profile

2. Build the zookeeper cluster

1. Download zookeeper
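The download command itself is not shown; a minimal sketch, assuming the standard Apache archive URL (the kafka step later uses the same mirror):

cd /usr/local
wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz

Wait for the download to complete and unzip it: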

tar -zxvf zookeeper-3.4.6.tar.gz


Rename it to zookeeper1 and make two more copies:

mv zookeeper-3.4.6 zookeeper1
cp -r zookeeper1 zookeeper2
cp -r zookeeper1 zookeeper3

2. Create the data and logs folders

Create both under the zookeeper1 directory.

Then create a new myid file in the data directory, with 1 as its content.
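As commands, a minimal sketch of this step, assuming the copies sit under /usr/local/zookeeper as the later paths imply:

cd /usr/local/zookeeper/zookeeper1
mkdir data logs
echo 1 > data/myid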

3. Modify the zoo.cfg file

cd /usr/local/zookeeper/zookeeper1/conf/
cp zoo_sample.cfg zoo.cfg

After the two commands above you have a zoo.cfg file; now modify it as follows:


dataDir=/usr/local/zookeeper/zookeeper1/data
dataLogDir=/usr/local/zookeeper/zookeeper1/logs
server.1=192.168.233.11:2888:3888
server.2=192.168.233.11:2889:3889
server.3=192.168.233.11:2890:3890
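One more line matters, since all three nodes run on the same host: each copy needs its own clientPort. For zookeeper1 that is the sample default (the 2182/2183 values for the other copies are inferred from the connection script later on):

clientPort=2181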

4. Build zookeeper2

First, copy and rename:

cd /usr/local/zookeeper/
cp -r zookeeper1 zookeeper2

Then modify some specific configurations:

vim zookeeper2/conf/zoo.cfg

Change the 1 to a 2 in the following three places.

vim zookeeper2/data/myid

At the same time, change the value in myid to 2.
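Concretely, zookeeper2 should end up with the following (the clientPort value is the inference noted above):

dataDir=/usr/local/zookeeper/zookeeper2/data
dataLogDir=/usr/local/zookeeper/zookeeper2/logs
clientPort=2182

plus a 2 in zookeeper2/data/myid.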

5. Build zookeeper3

Do the same for the third copy:

vim zookeeper3/conf/zoo.cfg

This time change the corresponding places to 3, including the value in zookeeper3/data/myid.

6. Test the zookeeper cluster

cd /usr/local/zookeeper/zookeeper1/bin/

Since starting all three takes a fair amount of typing, a simple startup script is written here:

vim start

The content of start is as follows:

cd /usr/local/zookeeper/zookeeper1/bin/
./zkServer.sh start ../conf/zoo.cfg
cd /usr/local/zookeeper/zookeeper2/bin/
./zkServer.sh start ../conf/zoo.cfg
cd /usr/local/zookeeper/zookeeper3/bin/
./zkServer.sh start ../conf/zoo.cfg

Here is the connection script:

vim login

The login content is as follows:

./zkCli.sh -server 192.168.233.11:2181,192.168.233.11:2182,192.168.233.11:2183

After the scripting is complete, start:

sh start
sh login

The cluster has started successfully.
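To double-check, zkServer.sh can report each node's role; one node should answer leader and the other two follower. A quick sketch, passing the config the same way the start script does:

cd /usr/local/zookeeper/zookeeper1/bin/
./zkServer.sh status ../conf/zoo.cfg

And inside the zkCli session opened by the login script, ls / should list the root znodes.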

3. Set up the kafka cluster

1. Download kafka

First create the kafka directory:

mkdir /usr/local/kafka

Then download from inside this directory:

cd /usr/local/kafka/
wget https://archive.apache.org/dist/kafka/1.1.0/kafka_2.11-1.1.0.tgz

After the download is successful, decompress:

tar -zxvf kafka_2.11-1.1.0.tgz

2. Modify the cluster configuration

First, go to the config directory:

cd /usr/local/kafka/kafka_2.11-1.1.0/config

Modify server.properties; the changed contents are:

broker.id=0
log.dirs=/tmp/kafka-logs
listeners=PLAINTEXT://192.168.233.11:9092
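server.properties also points kafka at zookeeper via the zookeeper.connect property. The sample file defaults to localhost:2181, which matches the --zookeeper localhost:2181 used when creating the topic below; if your ensemble listens elsewhere, it would look something like this (ports assumed from the zookeeper section above):

zookeeper.connect=192.168.233.11:2181,192.168.233.11:2182,192.168.233.11:2183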

Make two copies of server.properties

cp server.properties server2.properties
cp server.properties server3.properties

Modify server2.properties

vim server2.properties

The main contents of the modification are as follows:

broker.id=1
log.dirs=/tmp/kafka-logs1
listeners=PLAINTEXT://192.168.233.11:9093

Likewise, modify server3.properties; the changed contents are:

broker.id=2
log.dirs=/tmp/kafka-logs2
listeners=PLAINTEXT://192.168.233.11:9094

3. Start kafka

Here again, write a script in the bin directory:

cd ../bin/
vim start

The content of the script is:

./kafka-server-start.sh ../config/server.properties &
./kafka-server-start.sh ../config/server2.properties &
./kafka-server-start.sh ../config/server3.properties &

Run the script; as the jps command shows, a total of three Kafka instances have been started.
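Roughly what jps should print (the pids are illustrative only; the three QuorumPeerMain entries are the zookeeper nodes):

jps
21243 Kafka
21522 Kafka
21803 Kafka
20001 QuorumPeerMain
20002 QuorumPeerMain
20003 QuorumPeerMain
21901 Jps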


4. Create a topic

cd /usr/local/kafka/kafka_2.11-1.1.0
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic

Kafka prints several lines of log output.

5. View the kafka status

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic

6. Start a consumer

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
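To have something to consume, feed the topic from another terminal; a minimal sketch using the console producer that ships with this kafka version (the producer step is not shown in the original):

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic

Type a few lines there and they appear in the consumer window.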

As you can see, the consumer begins consuming automatically once it starts up.

The consumer picks the messages up successfully!

4. Integrate SpringBoot

If the versions do not match, an exception is thrown the moment springboot starts! Ps: every pit that could be stepped in, I have stepped in (╥﹏╥)o (my kafka-clients is 1.1.0 and spring-kafka is 2.2.2; the pits in between I will not go into for now)


After another two hours of getting back to business, I finally got it done and could have cried. The basic problem was mismatched jar versions. I have adjusted the steps above accordingly, so that everyone can make it through by following this tutorial!

1. Pom file

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.1.RELEASE</version>
    </parent>
    <groupId>com.gzky</groupId>
    <artifactId>study</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>study</name>
    <description>Demo project for Spring Boot</description>
    <properties>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
            <exclusions>
                <exclusion>
                    <groupId>org.junit.vintage</groupId>
                    <artifactId>junit-vintage-engine</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-redis</artifactId>
            <version>1.3.8.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
            <version>2.2.0.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>

In the pom file, the focus is on the following two versions.

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.1.1.RELEASE</version>
</parent>

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.2.0.RELEASE</version>
</dependency>

2. application.yml

spring:
  redis:
    cluster:
      # lifetime of a key; once it expires the key is deleted automatically
      expire-seconds: 120
      # command execution timeout; exceeding it raises an error
      command-timeout: 5000
      # redis cluster node list (namenode entries are resolved via DNS to the corresponding addresses;
      # the third node's port did not survive the formatting and is inferred)
      nodes: 192.168.233.11:9001,192.168.233.11:9002,192.168.233.11:9003
  kafka:
    # kafka broker addresses; several can be specified
    bootstrap-servers: 192.168.233.11:9092,192.168.233.11:9093,192.168.233.11:9094
    producer:
      retries: 0
      # number of messages sent per batch
      batch-size: 16384
      buffer-memory: 33554432
      # serializers for the message key and body
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer:
      # default consumer group id
      group-id: test-group
      auto-offset-reset: earliest
      enable-auto-commit: true
      auto-commit-interval: 100
      # deserializers for the message key and body
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer

server:
  port: 8085
  servlet:
    # context-path: /redis
    context-path: /kafka

If you are not using Redis, you can simply delete the redis section of the configuration above. For those who want to learn how to configure a Redis cluster, please refer to "Building Redis Cluster redis-cluster and integrating springboot".

3. Producer

package com.gzky.study.utils;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

/**
 * kafka producer utility class
 *
 * @author biws
 * @date 2019-12-17
 **/
@Component
public class KfkaProducer {

    private static Logger logger = LoggerFactory.getLogger(KfkaProducer.class);

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    /**
     * Produce data
     * @param str the concrete payload
     */
    public void send(String str) {
        logger.info("production data: " + str);
        kafkaTemplate.send("testTopic", str); // topic assumed from the listener below; the tail of this method was cut off in the source
    }
}

4. Consumer

package com.gzky.study.utils;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

/**
 * kafka consumer listening for messages
 *
 * @author biws
 * @date 2019-12-17
 **/
@Component
public class KafkaConsumerListener {

    private static Logger logger = LoggerFactory.getLogger(KafkaConsumerListener.class);

    @KafkaListener(topics = "testTopic")
    public void onMessage(String str) {
        // insert(str); // the database-insert code would go here
        logger.info("listening: " + str);
        System.out.println("listening: " + str);
    }
}

5. External interface

package com.gzky.study.controller;

import com.gzky.study.utils.KfkaProducer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

/**
 * kafka external interface
 *
 * @author biws
 * @date 2019-12-17
 **/
@RestController
public class KafkaController {

    @Autowired
    KfkaProducer kfkaProducer;

    /**
     * Produce a message
     * @param str message body
     * @return true once dispatched
     */
    @RequestMapping(value = "/sendKafkaWithTestTopic", method = RequestMethod.GET)
    @ResponseBody
    public boolean sendTopic(@RequestParam String str) {
        kfkaProducer.send(str);
        return true;
    }
}

6. Postman test

First of all, the listener should be started on the server (from the kafka root directory). The following commands must use the server's concrete ip, not localhost; that is a pit I stepped in. Restarting the cluster first is recommended.

Shut down kafka:

cd /usr/local/kafka/kafka_2.11-1.1.0/bin
./kafka-server-stop.sh ../config/server.properties &
./kafka-server-stop.sh ../config/server2.properties &
./kafka-server-stop.sh ../config/server3.properties &

Now run jps and wait until every kafka process is gone (kill any that will not stop), then restart kafka:

./kafka-server-start.sh ../config/server.properties &
./kafka-server-start.sh ../config/server2.properties &
./kafka-server-start.sh ../config/server3.properties &

Once kafka is up, start the consumer listening on the topic:

cd /usr/local/kafka/kafka_2.11-1.1.0
bin/kafka-console-consumer.sh --bootstrap-server 192.168.233.11:9092 --from-beginning --topic testTopic

Every test message I had typed at random earlier gets picked up! Start the springboot service, then produce messages with postman and enjoy the results: the listener on the server succeeds, and so does the listener in the project!

This is how to build a Kafka+SpringBoot distributed messaging system from scratch. If you happen to have similar doubts, the analysis above should help you sort them out. If you want to learn more, you are welcome to follow the industry information channel.
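Ps: the postman step can also be reproduced from a terminal. With the yml above (port 8085, context path /kafka) and the server ip used throughout (both assumptions to adjust to your setup), the call would be:

curl "http://192.168.233.11:8085/kafka/sendKafkaWithTestTopic?str=hello"

Both the console consumer on testTopic and the listener in the project should then print hello.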
