Apache-kafka cluster deployment

2025-04-08 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

Prerequisite: a running ZooKeeper cluster.

What is kafka?

Baidu Encyclopedia

Kafka is a distributed, partitioned, multi-replica commit log service. It provides the functionality of a messaging system through a unique design.

The goal is to provide a unified, high-throughput, low-latency platform for processing real-time data.

Kafka is a distributed streaming platform.

Installation and configuration

host list

hostname    ip
master      192.168.3.58
slave1      192.168.3.54
slave2      192.168.3.31
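The hostnames in the table must resolve on every node. If DNS does not provide them, entries like the following (matching the table; an assumption, since the original does not show this step) can be added to /etc/hosts on each machine:

```
192.168.3.58 master
192.168.3.54 slave1
192.168.3.31 slave2
```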

Download kafka

cd /data

wget http://mirrors.hust.edu.cn/apache/kafka/1.0.0/kafka_2.11-1.0.0.tgz

Extract the archive

tar axf kafka_2.11-1.0.0.tgz

Create Log Storage Directory

mkdir -p /data/kafka_2.11-1.0.0/logs

Set environment variables (on every node)

vim /etc/profile

#kafka
export KAFKA_HOME=/data/kafka_2.11-1.0.0
export PATH=$PATH:${KAFKA_HOME}/bin

source /etc/profile

modify the configuration file

cd kafka_2.11-1.0.0/config/

Modify zookeeper.properties (its contents take the place of ZooKeeper's own zoo.cfg)

grep -v "^#" zookeeper.properties

dataDir=/data/zookeeper/data
dataLogDir=/data/zookeeper/logs
clientPort=2181
tickTime=2000
initLimit=10
syncLimit=5
server.1=master:2887:3887
server.2=slave1:2887:3887
server.3=slave2:2887:3887
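zookeeper.properties only names the servers; each ZooKeeper node also needs a myid file in its dataDir whose number matches that node's server.N line. This step is missing from the original, so the sketch below is an assumption based on the dataDir configured above:

```shell
# Create the ZooKeeper data/log directories and write this node's ID.
# Run the echo line that matches the current host.
mkdir -p /data/zookeeper/data /data/zookeeper/logs
echo 1 > /data/zookeeper/data/myid    # on master (server.1)
# echo 2 > /data/zookeeper/data/myid  # on slave1 (server.2)
# echo 3 > /data/zookeeper/data/myid  # on slave2 (server.3)
```

Without a correct myid, the node cannot join the quorum at startup.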

vim server.properties

# Broker ID, unique within the cluster; by convention, the last octet of the host IP
broker.id=58
# Allow topic deletion (default is false)
delete.topic.enable=true
# Listener address; if unset, it is derived from java.net.InetAddress.getCanonicalHostName(); default port is 9092
listeners=PLAINTEXT://master:9092
# Number of network threads; increase as appropriate
num.network.threads=3
# Number of I/O threads; increase as appropriate
num.io.threads=8
# Socket send buffer size
socket.send.buffer.bytes=102400
# Socket receive buffer size
socket.receive.buffer.bytes=102400
# Maximum size of a request the socket server will accept
socket.request.max.bytes=104857600
# Log file storage path
log.dirs=/data/kafka_2.11-1.0.0/logs
# Default number of partitions per topic; more partitions allow more parallel consumers,
# but should not exceed the number of nodes
num.partitions=2
# Recovery threads per data directory
num.recovery.threads.per.data.dir=1
# Replication factor for internal topics; recommended > 1
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=3
# Log cleanup policy
log.cleanup.policy=delete
# Log retention time (hours)
log.retention.hours=30
# Maximum log segment size
log.segment.bytes=1073741824
# Retention check interval (ms)
log.retention.check.interval.ms=30000
# ZooKeeper connection string
zookeeper.connect=master:2181,slave1:2181,slave2:2181
# ZooKeeper connection timeout (ms)
zookeeper.connection.timeout.ms=6000
# Initial consumer group rebalance delay (ms)
group.initial.rebalance.delay.ms=0

Copy the installation directory to the other two machines with scp

scp -r /data/kafka_2.11-1.0.0 slave1:/data

scp -r /data/kafka_2.11-1.0.0 slave2:/data

On slave1 and slave2, edit server.properties and change broker.id and listeners.

slave1:

broker.id=54
listeners=PLAINTEXT://slave1:9092

slave2 (same convention, using the last octet of its IP):

broker.id=31
listeners=PLAINTEXT://slave2:9092
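The per-node edits can be scripted with sed. The sketch below runs against a scratch copy so it can be tried anywhere; on a real slave you would point conf at /data/kafka_2.11-1.0.0/config/server.properties and set host/id from the host table (the variable names here are illustrative):

```shell
# Rewrite broker.id and listeners for a given node (here: slave1).
conf=$(mktemp)                             # scratch copy for demonstration
printf 'broker.id=58\nlisteners=PLAINTEXT://master:9092\n' > "$conf"

host=slave1
id=54
sed -i \
  -e "s/^broker\.id=.*/broker.id=${id}/" \
  -e "s#^listeners=.*#listeners=PLAINTEXT://${host}:9092#" \
  "$conf"

cat "$conf"
# -> broker.id=54
# -> listeners=PLAINTEXT://slave1:9092
```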

Start kafka (on every node; runs in the foreground and occupies the terminal)

kafka-server-start.sh /data/kafka_2.11-1.0.0/config/server.properties

Start kafka in the background (for production use)

nohup kafka-server-start.sh /data/kafka_2.11-1.0.0/config/server.properties > /data/kafka_2.11-1.0.0/logs/kafka.log 2>&1 &
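Alternatively, kafka-server-start.sh accepts a -daemon flag that backgrounds the broker itself, with logs going to Kafka's standard log directory (shown for reference; it requires the deployed broker to run):

```shell
kafka-server-start.sh -daemon /data/kafka_2.11-1.0.0/config/server.properties
```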

Close kafka

kafka-server-stop.sh

create a topic

kafka-topics.sh --create --zookeeper master:2181,slave1:2181,slave2:2181 --partitions 3 --replication-factor 3 --topic test

View topic

kafka-topics.sh --list --zookeeper master:2181,slave1:2181,slave2:2181

test

Creating a producer

kafka-console-producer.sh --broker-list master:9092,slave1:9092,slave2:9092 --topic producerest

Create consumer on another machine

kafka-console-consumer.sh --zookeeper master:2181,slave1:2181,slave2:2181 --topic producerest --from-beginning
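The --zookeeper flag selects the old consumer, which is deprecated as of Kafka 1.0; the new consumer connects to the brokers directly. An equivalent invocation, assuming the cluster above:

```shell
kafka-console-consumer.sh --bootstrap-server master:9092,slave1:9092,slave2:9092 --topic producerest --from-beginning
```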

Messages typed into the producer should appear in the consumer.

Use Ctrl+C to exit

reference

Detailed configuration (official document): kafka.apache.org/documentation/#configuration




© 2024 shulou.com SLNews company. All rights reserved.
