2025-02-24 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
Kafka-manager deployment
I. Concepts
These concepts can be studied in more depth in the usual reference materials; a brief summary follows.
1.1 Kafka is a high-throughput distributed publish-subscribe messaging system that can handle all the action-stream data of a consumer-scale website.
Broker
A Kafka cluster consists of one or more servers, which are called brokers.
Topic
Every message published to the Kafka cluster has a category, called its Topic. (Physically, messages with different Topics are stored separately; logically, a Topic's messages may be stored on one or more brokers, but users only need to specify the Topic to produce or consume data, regardless of where it is stored.)
Partition
Partition is a physical concept; each Topic contains one or more Partitions.
Producer
The client that publishes messages to Kafka brokers.
Consumer
The message consumer; the client that reads messages from Kafka brokers.
Consumer Group
Each Consumer belongs to a specific Consumer Group (you can specify a group name for each Consumer; if you do not, it joins the default group).
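To make the partition/consumer-group relationship concrete, here is a small illustrative sketch in plain Python (not Kafka's actual protocol; the function and names are hypothetical) of how the members of one group might split a topic's partitions, with each partition owned by exactly one member:

```python
def assign_partitions(partitions, consumers):
    """Round-robin a topic's partitions across the consumers in one group.

    Illustrative only: real Kafka uses pluggable assignors
    (range / round-robin), but the invariant is the same -- within a
    consumer group, each partition is owned by exactly one consumer.
    """
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        # Partition i goes to consumer i mod (group size)
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

# A 6-partition topic split across a 3-member group:
print(assign_partitions(list(range(6)), ["c1", "c2", "c3"]))
# -> {'c1': [0, 3], 'c2': [1, 4], 'c3': [2, 5]}
```

Note that if the group has more members than the topic has partitions, some consumers receive nothing, which is why the partition count caps a group's useful parallelism.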
1.2 ZooKeeper is a distributed, open-source coordination service for distributed applications, an open-source implementation of Google's Chubby, and an important component of Hadoop and HBase. It provides consistency services for distributed applications, including configuration maintenance, naming, distributed synchronization, and group services.
The basic operating flow of ZooKeeper:
1. Elect a Leader.
2. Synchronize data.
3. Many algorithms exist for electing the Leader, but the election criterion is the same.
4. The Leader must hold the highest transaction ID (zxid), which is similar to having root authority.
5. Once a majority of the machines in the cluster respond and accept it, the followers select that node as Leader.
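As a rough illustration of steps 3-5 (a simplified model, not ZooKeeper's actual ZAB protocol; all names here are hypothetical), an election in which the highest-ID candidate wins once a majority of the ensemble votes for it can be sketched as:

```python
def elect_leader(server_ids, quorum_votes):
    """Pick the highest server ID that a majority of the ensemble voted for.

    Simplified sketch of the election criterion described above; the
    real ZAB protocol also compares transaction IDs (zxid) and epochs.
    """
    majority = len(server_ids) // 2 + 1
    # Tally the votes per candidate
    tally = {}
    for vote in quorum_votes:
        tally[vote] = tally.get(vote, 0) + 1
    # Candidates that reached a majority; highest ID wins among them
    winners = [sid for sid, n in tally.items() if n >= majority]
    return max(winners) if winners else None

# 3-server ensemble: servers vote [3, 3, 1]; 3 has a majority and leads
print(elect_leader([1, 2, 3], [3, 3, 1]))  # -> 3
```

The majority requirement is also why production ZooKeeper ensembles use an odd number of nodes, such as the three-node cluster deployed below.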
1.3 kafka-manager: to simplify the work of developers and service engineers maintaining Kafka clusters, Yahoo built a web-based tool called Kafka Manager. It makes it easy to spot topics that are unevenly distributed across the cluster, or partitions that are unevenly spread across brokers. It supports managing multiple clusters, electing preferred replicas, reassigning replicas, and creating topics, and it is also a very good tool for quickly browsing a cluster, with the following features:
1. Manage multiple kafka clusters
2. Conveniently inspect cluster state (topics, brokers, replica distribution, partition distribution)
3. Run preferred replica election
4. Generate partition assignments based on the current state of the cluster
5. Create a topic with optional topic configs (0.8.1.1 and 0.8.2+ have different configs)
6. Delete a topic (only supported on 0.8.2+, and only when delete.topic.enable=true is set in the broker configuration)
7. The topic list indicates which topics are marked for deletion (0.8.2+ only)
8. Add partitions to an existing topic
9. Update the configuration of an existing topic
10. Batch generate partition assignments for multiple topics, with the option to choose which brokers to use
11. Batch run partition reassignment across multiple topics
Kafka-manager project address: https://github.com/yahoo/kafka-manager
II. Deployment
2.1 Initialize the environment
Initialize the system, turn off the firewall, and set each host's hostname and IP:
No.  HOSTNAME  IP
1    kafka-1   172.17.10.207
2    kafka-2   172.17.10.208
3    kafka-3   172.17.10.209
2.2 Install Java
yum install -y java-1.8.0-openjdk
2.3 Install zookeeper (on all three nodes)
cd /usr/local
wget http://apache.fayea.com/zookeeper/zookeeper-3.4.9/zookeeper-3.4.9.tar.gz
tar zxf zookeeper-3.4.9.tar.gz
mv zookeeper-3.4.9 zookeeper
cd zookeeper/conf
cp zoo_sample.cfg zoo.cfg
Edit zoo.cfg:
tickTime=2000    # heartbeat interval between servers, and between clients and servers
initLimit=10     # time allowed for a Follower, at startup, to sync all the latest data from the Leader
syncLimit=5      # heartbeat interval between the Leader and the rest of the cluster
dataDir=/usr/local/zookeeper/data     # where zookeeper stores its data
dataLogDir=/usr/local/zookeeper/logs  # where zookeeper stores its transaction logs
clientPort=2181  # zookeeper's default client port
# cluster configuration
server.1=172.17.10.207:2888:3888
server.2=172.17.10.208:2888:3888
server.3=172.17.10.209:2888:3888
The 1 in server.1 is the server's identifier (it can be any number) and marks which server this is; the same ID must be written to the myid file under the data directory.
# 172.17.10.207 is the server's IP address in the cluster. The first port (default 2888) is used for communication between the leader and followers; the second port (default 3888) is used for leader election when the cluster starts up or after the leader goes down.
Complete the configuration:
cd /usr/local/zookeeper
mkdir data logs    # create the data and log directories
cd data
echo "1" > myid    # on the second zookeeper server, echo 2 instead, and so on
/usr/local/zookeeper/bin/zkServer.sh start    # start zookeeper
/usr/local/zookeeper/bin/zkServer.sh status    # check status
2.4 Install kafka (on all three nodes)
cd /usr/local
wget http://apache.fayea.com/kafka/0.10.0.0/kafka_2.11-0.10.0.0.tgz
tar zxf kafka_2.11-0.10.0.0.tgz
mv kafka_2.11-0.10.0.0 kafka
cd kafka/config
Edit server.properties:
broker.id=1    # the broker's ID in the kafka cluster; it must be unique (the first node is 1, the second 2, and so on); everything else is identical on every node
log.dirs=/usr/local/kafka/kafka-logs    # message log directory
host.name=172.17.10.207    # this host's IP
zookeeper.connect=172.17.10.207:2181,172.17.10.208:2181,172.17.10.209:2181
Then create the log directory and start kafka:
mkdir /usr/local/kafka/kafka-logs
/usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties &    # start kafka
Check whether the startup succeeded:
netstat -ntpl | grep 9092
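As a further check (a sketch assuming the install paths and IPs used above), each broker registers an ephemeral node in ZooKeeper under /brokers/ids, which you can list with the zookeeper-shell tool that ships with Kafka:

```shell
# List the broker IDs registered in ZooKeeper; once all three brokers
# are up, this should show [1, 2, 3]. Paths and IPs assume this article's layout.
/usr/local/kafka/bin/zookeeper-shell.sh 172.17.10.207:2181 ls /brokers/ids
```

If a broker's ID is missing here, check that broker's server.properties (broker.id, zookeeper.connect) and its logs before proceeding.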
2.5 Install kafka-manager
git clone https://github.com/yahoo/kafka-manager
cd kafka-manager
sbt clean dist    # this takes a long time
This produces the file kafka-manager-1.3.0.8.zip.
unzip kafka-manager-1.3.0.8.zip -d /usr/local
cd /usr/local/kafka-manager-1.3.0.8
Modify the configuration in conf/application.conf; if zk is a cluster, list all the zk addresses:
kafka-manager.zkhosts="172.17.10.207:2181,172.17.10.208:2181,172.17.10.209:2181"
Start it. kafka-manager's default port is 9000; it can be changed with -Dhttp.port, and -Dconfig.file specifies the configuration file:
nohup bin/kafka-manager -Dconfig.file=conf/application.conf -Dhttp.port=8080 &
Then open the host's IP on port 8080 in a browser.
III. Testing
Test Kafka by creating a topic, a producer, and a consumer, preferably on different nodes. Type messages at the producer console and check whether the consumer console receives them.
3.1Create topic
. / kafka-topics.sh-- create-- zookeeper 172.17.10.207 zookeeper 2181172.17.208 zookeeper 2181172.17.10.209 replication-factor 3-- partitions 3-- topic xuel
--replication-factor specifies the number of replicas per partition; setting it to 2 is recommended.
--partitions specifies the number of partitions. This should be chosen based on the number of brokers and the data volume; normally about two partitions per broker works best.
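The partition count matters because the producer maps each keyed message to exactly one partition, which bounds both parallelism and per-key ordering. A simplified, hypothetical stand-in for Kafka's default partitioner (the real one hashes the key bytes with murmur2) looks like this:

```python
def pick_partition(key, num_partitions):
    """Map a message key to a partition index.

    Simplified stand-in for Kafka's default partitioner: real Kafka
    hashes the key bytes with murmur2; here we use a stable arithmetic
    hash so the example is deterministic across runs.
    """
    h = 0
    for b in key.encode("utf-8"):
        h = (h * 31 + b) & 0x7FFFFFFF  # keep the hash non-negative
    return h % num_partitions

# All messages with the same key land in the same partition,
# which is what gives Kafka per-key ordering.
p1 = pick_partition("user-42", 3)
p2 = pick_partition("user-42", 3)
print(p1 == p2)  # -> True
```

This is also why increasing the partition count later changes where keys land: the modulus changes, so existing per-key ordering guarantees only hold within the old data.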
--topic xuel sets the topic name to xuel.
3.2 View the topic
./kafka-topics.sh --describe --zookeeper 172.17.10.207:2181,172.17.10.208:2181,172.17.10.209:2181 --topic xuel
Create topic-1 through topic-4 via the web interface.
3.3 Delete a topic
./kafka-topics.sh --delete --zookeeper 172.17.10.207:2181,172.17.10.208:2181,172.17.10.209:2181 --topic xuel
3.4 Create a producer
Create a producer on one of the servers (the producer sends messages):
./kafka-console-producer.sh --broker-list 172.17.10.207:9092,172.17.10.208:9092,172.17.10.209:9092 --topic xuel
3.5 Create a consumer
Create a consumer on one of the servers (the consumer receives messages):
./kafka-console-consumer.sh --zookeeper 172.17.10.207:2181,172.17.10.208:2181,172.17.10.209:2181 --from-beginning --topic xuel
3.6 View through the web interface