2025-01-18 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
Many newcomers are unclear about how to install Kafka on an Ubuntu 16.04 system. To help solve this problem, this article explains the process in detail, step by step: downloading, installing, configuring, starting, and using a three-node Kafka cluster. Readers who need it can follow along; hopefully you will gain something.
download
wget http://mirror-hk.koddos.net/apache/kafka/2.3.0/kafka_2.12-2.3.0.tgz
installation
tar zxvf kafka_2.12-2.3.0.tgz
cd kafka_2.12-2.3.0/
vim config/server.properties
configuration
# Common configuration (all nodes)
# kafka data directory
log.dirs=/data/kafka
# zookeeper
zookeeper.connect=kafka-node1:2181,kafka-node2:2181,kafka-node3:2181

# Node configuration
# Node 1
broker.id=0
#listeners=PLAINTEXT://:9092
listeners=PLAINTEXT://kafka-node1:9092
# Node 2
broker.id=1
#listeners=PLAINTEXT://:9092
listeners=PLAINTEXT://kafka-node2:9092
# Node 3
broker.id=2
#listeners=PLAINTEXT://:9092
listeners=PLAINTEXT://kafka-node3:9092
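Since the three node files differ only in broker.id and the listener host, the per-node edit can be scripted. A minimal sketch, assuming NODE_ID and NODE_HOST are set per machine before running it (the variable names and the output filename server.properties.generated are illustrative, not from the original article):

```shell
# Generate a minimal per-node server.properties.
# NODE_ID and NODE_HOST are assumed to be exported per machine (defaults shown).
NODE_ID=${NODE_ID:-0}
NODE_HOST=${NODE_HOST:-kafka-node1}
cat > server.properties.generated <<EOF
# Common configuration
log.dirs=/data/kafka
zookeeper.connect=kafka-node1:2181,kafka-node2:2181,kafka-node3:2181
# Node configuration
broker.id=${NODE_ID}
listeners=PLAINTEXT://${NODE_HOST}:9092
EOF
echo "wrote server.properties.generated for broker ${NODE_ID}"
```

On node 2, for example, it would be run with NODE_ID=1 and NODE_HOST=kafka-node2 set in the environment.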
start
# Enter the kafka root directory
cd /app/kafka_2.12-2.3.0/
# Start the broker
bin/kafka-server-start.sh -daemon config/server.properties

Example output on a successful start (last lines of the server log):

[2019-09-11 11:14:13,403] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0, blockStartProducerId:0, blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
[2019-09-11 11:14:13,423] INFO [TransactionCoordinator id=0] Startup. (kafka.coordinator.transaction.TransactionCoordinator)
[2019-09-11 11:14:13,424] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2019-09-11 11:14:13,424] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2019-09-11 11:14:13,459] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2019-09-11 11:14:13,479] INFO [SocketServer brokerId=0] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
[2019-09-11 11:14:13,485] INFO Kafka version: 2.3.0 (org.apache.kafka.common.utils.AppInfoParser)
[2019-09-11 11:14:13,485] INFO Kafka commitId: fc1aaa116b661c8a (org.apache.kafka.common.utils.AppInfoParser)
[2019-09-11 11:14:13,485] INFO Kafka startTimeMs: 1568171653480 (org.apache.kafka.common.utils.AppInfoParser)
[2019-09-11 11:14:13,487] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
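One way to confirm the broker came up without scrolling the whole log is to grep for the final startup line. A minimal sketch, assuming the broker was started with -daemon from the kafka root so its output lands in logs/server.log (the default log location in daemon mode):

```shell
# Check whether the broker finished starting.
# logs/server.log is the default log path when started with -daemon from the kafka root.
if grep -q 'started (kafka.server.KafkaServer)' logs/server.log 2>/dev/null; then
  echo "broker is up"
else
  echo "broker not started yet (or log not at logs/server.log)"
fi
```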
use
1. Create a topic. On kafka-node1 (broker 0), create a test topic test-ken-io with 3 replicas and 1 partition:

bin/kafka-topics.sh --create --bootstrap-server kafka-node1:9092 --replication-factor 3 --partitions 1 --topic test-ken-io

The topic created on kafka-node1 is also synchronized to the other two brokers in the cluster, kafka-node2 and kafka-node3.

2. List topics:

bin/kafka-topics.sh --list --bootstrap-server kafka-node1:9092

3. Send a message to the topic test-ken-io on broker 0:

bin/kafka-console-producer.sh --broker-list kafka-node1:9092 --topic test-ken-io
# Type the message content at the prompt
> test by ken.io

4. Consume messages. The message can be consumed from any broker in the cluster, reading the topic from the beginning:

bin/kafka-console-consumer.sh --bootstrap-server kafka-node3:9092 --topic test-ken-io --from-beginning
bin/kafka-console-consumer.sh --bootstrap-server kafka-node2:9092 --topic test-ken-io --from-beginning

When consumers share a consumer group, each message is consumed by only one consumer in the group at a time:

bin/kafka-console-consumer.sh --bootstrap-server kafka-node3:9092 --topic test-ken-io --from-beginning --group testgroup_ken
bin/kafka-console-consumer.sh --bootstrap-server kafka-node2:9092 --topic test-ken-io --from-beginning --group testgroup_ken
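To verify that the 3 replicas actually landed on all three brokers, the topic can be inspected with the --describe action of the same kafka-topics.sh script. A sketch, assuming it is run from the kafka root against the live cluster (the guard and its hint message are illustrative additions):

```shell
# Show partition leadership, replicas, and in-sync replicas (ISR) for the test topic.
# Prints a hint instead of failing when not run from the kafka root.
if [ -x bin/kafka-topics.sh ]; then
  bin/kafka-topics.sh --describe --bootstrap-server kafka-node1:9092 --topic test-ken-io
else
  echo "bin/kafka-topics.sh not found: run this from the kafka root"
fi
```

With replication-factor 3, the Replicas and Isr columns of the output should list all three broker ids (0, 1, 2) when the cluster is healthy.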
parameters
Kafka Common Broker Configuration Description:
Configuration Item | Default/Sample Value | Description
broker.id | 0 | Unique ID of the broker
listeners | PLAINTEXT://192.168.88.53:9092 | Listening address; PLAINTEXT indicates plaintext transport
log.dirs | kafka/logs | Kafka data storage directory; multiple values separated by ","
message.max.bytes | - | Single message length limit, in bytes
num.partitions | 1 | Default number of partitions
log.flush.interval.messages | Long.MaxValue | Maximum number of messages accumulated before data is flushed to disk and made available to consumers
log.flush.interval.ms | Long.MaxValue | Maximum time before data is flushed to disk
log.flush.scheduler.interval.ms | Long.MaxValue | Interval at which to check whether data should be flushed to disk
log.retention.hours | 24 | Log retention time, in hours
zookeeper.connect | 192.168.88.21:2181 | ZooKeeper server address; multiple addresses separated by ","
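As an illustration of how the settings in the table are applied, here is a server.properties fragment overriding several of them. The values shown are examples for illustration only, not recommendations from the original article:

```properties
# Allow messages up to 10 MB (value in bytes)
message.max.bytes=10485760
# Create new topics with 3 partitions by default
num.partitions=3
# Keep log data for 72 hours before deletion
log.retention.hours=72
# Flush to disk after at most 10000 messages or 1000 ms, whichever comes first
log.flush.interval.messages=10000
log.flush.interval.ms=1000
```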
© 2024 shulou.com SLNews company. All rights reserved.