2025-01-18 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 05/31 Report
Today I will talk to you about how to build a local kafka environment under Docker. Many people may not be familiar with this, so I have summarized the steps below; I hope you get something out of this article.
We will learn how to write the required scripts and build a local kafka environment hands-on.
This practice creates a Docker image, and the environment involves several containers. Let's list them all first, then sort out the relationships between them:
The kafka server provides the message service; the message producer's role is to publish messages to topics; the message consumer's role is to subscribe to messages on specified topics and consume them.
# zookeeper
The zookeeper here is a standalone instance, so there is nothing to customize; we can use the official image daocloud.io/library/zookeeper:3.3.6 directly.
# kafka server
Searching for kafka on hub.docker.com turns up no officially flagged image, so we build one ourselves. Before writing the Dockerfile, prepare two materials: the kafka installation package and the shell script that starts kafka.
The kafka installation package is version 2.9.2-0.8.1; clone it from git@github.com:zq2599/docker_kafka.git.
The shell script that starts the kafka server is shown below; it simply runs the startup script in kafka's bin directory:

#!/bin/bash
$WORK_PATH/$KAFKA_PACKAGE_NAME/bin/kafka-server-start.sh $WORK_PATH/$KAFKA_PACKAGE_NAME/config/server.properties
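As a quick sanity check, here is a minimal sketch of how the two environment variables (which the Dockerfile will set) expand the script's command into a concrete path:

```shell
# The same variable values the Dockerfile defines
WORK_PATH=/usr/local/work
KAFKA_PACKAGE_NAME=kafka_2.9.2-0.8.1

# The start script's command therefore expands to this path
echo "$WORK_PATH/$KAFKA_PACKAGE_NAME/bin/kafka-server-start.sh"
```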
Next, write the Dockerfile, as follows:
# Docker image of kafka
# VERSION 0.0.1
# Author: bolingcavalry

# the base image uses tomcat, which avoids setting up the java environment
FROM daocloud.io/library/tomcat:7.0.77-jre8

# author
MAINTAINER BolingCavalry

# define the working directory
ENV WORK_PATH /usr/local/work

# define the kafka folder name
ENV KAFKA_PACKAGE_NAME kafka_2.9.2-0.8.1

# create the working directory
RUN mkdir -p $WORK_PATH

# copy the shell script that starts the server to the working directory
COPY ./start_server.sh $WORK_PATH/

# copy the kafka archive to the working directory
COPY ./$KAFKA_PACKAGE_NAME.tgz $WORK_PATH/

# extract the archive
RUN tar -xvf $WORK_PATH/$KAFKA_PACKAGE_NAME.tgz -C $WORK_PATH/

# delete the archive
RUN rm $WORK_PATH/$KAFKA_PACKAGE_NAME.tgz

# change the zookeeper address from localhost to the link alias of the zookeeper container
RUN sed -i 's/zookeeper.connect=localhost:2181/zookeeper.connect=zkhost:2181/g' $WORK_PATH/$KAFKA_PACKAGE_NAME/config/server.properties

# make the shell script executable
RUN chmod a+x $WORK_PATH/start_server.sh
As the script shows, the operations are not complicated: copy and extract the kafka installation package, copy the startup shell script, and change zookeeper's address in the configuration file to the alias given to the zookeeper container at link time.
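To see exactly what the sed command in the Dockerfile does, here is a minimal standalone sketch of the same substitution applied to a sample config line (the file name and content here are illustrative):

```shell
# Create a sample server.properties with kafka's default zookeeper address
printf 'zookeeper.connect=localhost:2181\n' > server.properties

# The same substitution the Dockerfile performs: point kafka at the link alias zkhost
sed -i 's/zookeeper.connect=localhost:2181/zookeeper.connect=zkhost:2181/g' server.properties

cat server.properties
# -> zookeeper.connect=zkhost:2181
```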
After the Dockerfile is written, put it in the same directory as kafka_2.9.2-0.8.1.tgz and start_server.sh, then run the following in that directory:

docker build -t bolingcavalry/kafka:0.0.1 .
After the image is successfully built, create a new directory and write a docker-compose.yml script, as follows:
version: '2'
services:
  zk_server:
    image: daocloud.io/library/zookeeper:3.3.6
    restart: always
  kafka_server:
    image: bolingcavalry/kafka:0.0.1
    links:
      - zk_server:zkhost
    command: /bin/sh -c '/usr/local/work/start_server.sh'
    restart: always
  message_producer:
    image: bolingcavalry/kafka:0.0.1
    links:
      - zk_server:zkhost
      - kafka_server:kafkahost
    restart: always
  message_consumer:
    image: bolingcavalry/kafka:0.0.1
    links:
      - zk_server:zkhost
    restart: always
Four containers are configured in docker-compose.yml:
zk_server uses the official zookeeper image.
The other three all use the bolingcavalry/kafka image we just built.
kafka_server runs the start_server.sh script at startup to start the kafka service.
message_producer and message_consumer simply have the kafka environment installed so that messages can be sent or subscribed to from the command line; the containers themselves do not start a server.
kafka_server, message_producer, and message_consumer are all connected to the zookeeper container via the links parameter; message_producer is additionally linked to kafka_server, because the kafka server's address is needed when sending messages.
Now open a terminal, and in the directory containing docker-compose.yml run docker-compose up -d to start all the containers.
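With everything up, you can try sending and receiving a message. The sketch below assumes compose gave the containers default names of the form &lt;project&gt;_&lt;service&gt;_1 (here with a hypothetical project name "test"; check yours with docker ps) and uses a hypothetical topic also named "test". The console scripts shown ship with kafka 0.8.1; note the consumer reads via zookeeper, which is why message_consumer only needs the zk_server link:

```shell
# In the producer container: publish to topic "test" via the linked broker alias
docker exec -it test_message_producer_1 \
  /usr/local/work/kafka_2.9.2-0.8.1/bin/kafka-console-producer.sh \
  --broker-list kafkahost:9092 --topic test

# In the consumer container: subscribe to the same topic through zookeeper
docker exec -it test_message_consumer_1 \
  /usr/local/work/kafka_2.9.2-0.8.1/bin/kafka-console-consumer.sh \
  --zookeeper zkhost:2181 --topic test --from-beginning
```

Anything typed into the producer prompt should appear in the consumer's output.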
After reading the above, do you have a better understanding of how to build a local kafka environment under Docker? I hope this article has been helpful, and thank you for your support.