How to deploy stand-alone kafka with docker and docker-compose

2025-04-06 Update From: SLTechnology News&Howtos (shulou)


Shulou (Shulou.com) 06/01 report

This article introduces how to deploy stand-alone kafka with docker and docker-compose. Many people run into trouble with this in practice, so the editor will walk you through how to handle these situations. I hope you read it carefully and get something out of it!

Prerequisites

Docker

Docker-compose

Of these, docker-compose is not strictly necessary; docker alone is enough. Here we cover both docker and docker-compose.

Docker deployment

Deploying kafka with docker is very simple; it takes only two commands to bring up the kafka server.

docker run -d --name zookeeper -p 2181:2181 wurstmeister/zookeeper

docker run -d --name kafka -p 9092:9092 --link zookeeper -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.1.60:9092 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 -t wurstmeister/kafka

(Replace 192.168.1.60 with your machine's IP.)

Because kafka needs zookeeper to work, you also have to deploy a zookeeper, but with docker that is very easy.

You can check the status of the two containers with docker ps; the output is not shown here.

Next, you can try out a producer and a consumer.

Producing and consuming test messages with kafka's built-in tools

First, enter the kafka docker container

docker exec -it kafka sh

Run a consumer to listen for messages

kafka-console-consumer.sh --bootstrap-server 192.168.1.60:9092 --topic kafeidou --from-beginning

Open a new ssh window, enter the kafka container in the same way, and run the following command to produce a message

kafka-console-producer.sh --broker-list 192.168.1.60:9092 --topic kafeidou

(Replace 192.168.1.60 with your machine's IP.)

After running this command you are in a console where you can type any message you want to send. Send a hello here.

> hello

As you can see, once the message is entered in the producer's console, the consumer's console sees it immediately.

At this point a complete kafka hello world is done: deployment of kafka plus a producer-consumer test.

Testing through java code

Create a new maven project and add the following dependencies

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.1.1</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.11.0.2</version>
</dependency>

Producer code

Producer.java

import org.apache.kafka.clients.producer.*;

import java.util.Date;
import java.util.Properties;
import java.util.Random;

public class HelloWorldProducer {
    public static void main(String[] args) {
        long events = 30;
        Random rnd = new Random();

        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.1.60:9092");
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("message.timeout.ms", "3000");

        Producer<String, String> producer = new KafkaProducer<>(props);
        String topic = "kafeidou";

        for (long nEvents = 0; nEvents < events; nEvents++) {
            long runtime = new Date().getTime();
            String ip = "192.168.2." + rnd.nextInt(3000);
            String msg = runtime + ",www.example.com," + ip;
            System.out.println(msg);

            ProducerRecord<String, String> data = new ProducerRecord<>(topic, ip, msg);
            producer.send(data, new Callback() {
                public void onCompletion(RecordMetadata metadata, Exception e) {
                    if (e != null) {
                        e.printStackTrace();
                    } else {
                        System.out.println("The offset of the record we just sent is: " + metadata.offset());
                    }
                }
            });
        }
        System.out.println("send message done");
        producer.close();
        System.exit(-1);
    }
}
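The producer above uses the random ip string as the record key, and records with the same key always land in the same partition, which preserves per-key ordering. Kafka's default partitioner hashes the serialized key bytes with murmur2; the sketch below illustrates the idea with a simpler hash and is not Kafka's actual algorithm (the class and method names are mine).

```java
public class KeyPartitionSketch {
    // Illustrative only: Kafka's DefaultPartitioner uses murmur2 over the
    // serialized key bytes, not String.hashCode().
    static int partitionFor(String key, int numPartitions) {
        // Mask the sign bit so the result is always a valid partition index.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int partitions = 2; // the kafeidou topic in this article is created with 2 partitions
        int p1 = partitionFor("192.168.2.219", partitions);
        int p2 = partitionFor("192.168.2.219", partitions);
        // Same key -> same partition.
        System.out.println(p1 == p2);
    }
}
```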

Consumer code

Consumer.java

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class HelloWorldConsumer2 {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.60:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "kafeidou_group");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put("auto.offset.reset", "earliest");

        Consumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("kafeidou"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    }
}
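Each record value in this example is a comma-separated string of timestamp, hostname, and ip. A consumer that needs the individual fields can split the value back apart; a minimal sketch (the helper name is mine, not part of the Kafka API):

```java
public class RecordValueParser {
    // Splits a value like "1581651496176,www.example.com,192.168.2.219"
    // into its three fields: timestamp, host, ip.
    static String[] parseValue(String value) {
        String[] fields = value.split(",", 3);
        if (fields.length != 3) {
            throw new IllegalArgumentException("expected 3 comma-separated fields: " + value);
        }
        return fields;
    }

    public static void main(String[] args) {
        String[] f = parseValue("1581651496176,www.example.com,192.168.2.219");
        // Prints the three fields separated by slashes.
        System.out.println(f[0] + " / " + f[1] + " / " + f[2]);
    }
}
```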

You can run producers and consumers separately.

Producer print message

1581651496176,www.example.com,192.168.2.219
1581651497299,www.example.com,192.168.2.112
1581651497299,www.example.com,192.168.2.20

Consumers print messages

offset = 0, key = 192.168.2.202, value = 1581645295298,www.example.com,192.168.2.202
offset = 1, key = 192.168.2.102, value = 1581645295848,www.example.com,192.168.2.102
offset = 2, key = 192.168.2.63, value = 1581645295848,www.example.com,192.168.2.63

Source code address: FISHStack/kafka-demo

Deploy kafka through docker-compose

First create a docker-compose.yml file

version: '3.7'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    volumes:
      - ./data:/data
    ports:
      - 2181:2181
  kafka:
    image: wurstmeister/kafka
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 0
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.60:9092
      KAFKA_CREATE_TOPICS: "kafeidou:2:0"   # after kafka starts, initialize a topic named kafeidou with 2 partitions and 0 replicas
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
    volumes:
      - ./kafka-logs:/kafka
    depends_on:
      - zookeeper

Deployment is easy: just execute docker-compose up -d in the directory containing the docker-compose.yml file, then test in the same way as above.

This docker-compose setup does a bit more than the plain docker deployment above.

Data persistence: the data of zookeeper and kafka are stored in two directories under the current directory. Of course, adding -v options to the docker run commands achieves the same effect.

Kafka initializes a topic with partitions after startup. Likewise, this can be done with plain docker by adding -e KAFKA_CREATE_TOPICS=kafeidou:2:0 to docker run.
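The KAFKA_CREATE_TOPICS value packs topic name, partition count, and replica count into one name:partitions:replicas string, a convention of the wurstmeister/kafka image. A small sketch of how such a spec decomposes (the class and parser are illustrative, not the image's actual code):

```java
public class TopicSpec {
    final String name;
    final int partitions;
    final int replicas;

    TopicSpec(String name, int partitions, int replicas) {
        this.name = name;
        this.partitions = partitions;
        this.replicas = replicas;
    }

    // Parses "kafeidou:2:0" -> name=kafeidou, partitions=2, replicas=0.
    static TopicSpec parse(String spec) {
        String[] parts = spec.split(":");
        if (parts.length != 3) {
            throw new IllegalArgumentException("expected name:partitions:replicas, got " + spec);
        }
        return new TopicSpec(parts[0], Integer.parseInt(parts[1]), Integer.parseInt(parts[2]));
    }

    public static void main(String[] args) {
        TopicSpec s = parse("kafeidou:2:0");
        System.out.println(s.name + " " + s.partitions + " " + s.replicas);
    }
}
```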

Summary: docker-compose deployment is preferred

Why?

Because with a plain docker deployment, if anything changes (for example, the port exposed to the outside), you need to stop the container (docker stop <container ID/name>), then delete it (docker rm <container ID/name>), and finally run docker run again to start a container with the new settings.

With a docker-compose deployment, if something changes, you only need to modify the corresponding place in the docker-compose.yml file (for example, change 2181 to 2182) and execute docker-compose up -d again in the directory containing docker-compose.yml to apply the update.

This is the end of the content of "how to deploy stand-alone kafka with docker and docker-compose". Thank you for reading. If you want to know more about the industry, you can follow the website, the editor will output more high-quality practical articles for you!
