2025-02-24 Update — SLTechnology News & Howtos > Servers
Prerequisite: JDK 6 or later must be installed.
java -version
java version "1.7.0_79"
Test environment cluster layout (the three hosts must be able to resolve one another's hostnames, or the cluster will misbehave):
Server1: 192.168.100.10
Server2: 192.168.100.13
Server3: 192.168.100.20
Installation packages:
zookeeper-3.4.9.tar.gz
kafka_2.11-0.10.1.0.tgz
Background:
To obtain a reliable ZooKeeper service, deploy ZooKeeper as a cluster (an ensemble). The service stays available as long as a majority of the ensemble's servers are running. It is best to use an odd number of machines: a five-node ensemble, for example, can tolerate two machine failures.
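The majority rule above can be sketched with plain shell arithmetic (no ZooKeeper needed): an ensemble of n servers needs a quorum of floor(n/2)+1, so it tolerates n minus that quorum in failures.

```shell
# Quorum arithmetic for a ZooKeeper ensemble of n servers:
# quorum = n/2 + 1 (integer division), tolerated failures = n - quorum.
for n in 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  echo "$n servers: quorum=$quorum, tolerates $(( n - quorum )) failures"
done
```

Note that 4 servers tolerate no more failures than 3, which is why odd ensemble sizes are recommended.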
# Cluster installation of zookeeper #
1. Download the installation package to /usr/local/src
2. Extract to the installation path /usr/local
cd /usr/local/src/
tar -xvf zookeeper-3.4.9.tar.gz -C /usr/local/
cd /usr/local
ln -s zookeeper-3.4.9/ zookeeper
3. Modify the configuration file
cd /usr/local/zookeeper/conf/
cp zoo_sample.cfg zoo.cfg
The configuration file (identical on every ZooKeeper node):
[root@master conf]# cat zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use / tmp for storage, / tmp here is just
# example sakes.
dataDir=/usr/local/zookeeper/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
# maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
# autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
# autopurge.purgeInterval=1
# the first port (2888) is used by follower machines to connect to the leader
# the second port (3888) is used for leader election
server.1=192.168.100.10:2888:3888
server.2=192.168.100.13:2888:3888
server.3=192.168.100.20:2888:3888
# Notes:
In this file we must set dataDir to a directory that is empty to begin with. The parameters mean the following:
tickTime: the basic time unit, in milliseconds. It is used for heartbeats, and the minimum session timeout is twice the tickTime.
dataDir: the location where the in-memory database snapshots are stored. Unless configured otherwise, the transaction log is also written here.
clientPort: the port on which to listen for client connections.
# server.A=B:C:D, where A is a number identifying the server; B is the server's IP address; C is the port the server uses to exchange information with the leader; and D is the port the servers use to communicate with each other during leader election after the leader fails.
4. Create a data directory and a myid file
mkdir /usr/local/zookeeper/data
echo "1" > /usr/local/zookeeper/data/myid
# configure the other nodes the same way
Only the myid value differs on each node (2 and 3, matching the server.N entries in zoo.cfg).
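A sketch of assigning the three myid values, simulated locally under a temp directory; on the real cluster you would run the mkdir/echo pair on each host against /usr/local/zookeeper/data/myid:

```shell
# Sketch: each node gets a distinct myid matching its server.N entry in zoo.cfg.
# Simulated under a temp dir; on real nodes the path is /usr/local/zookeeper/data/myid.
base=$(mktemp -d)
i=1
for ip in 192.168.100.10 192.168.100.13 192.168.100.20; do
  mkdir -p "$base/$ip/data"
  echo "$i" > "$base/$ip/data/myid"   # node i writes its own id
  i=$(( i + 1 ))
done
cat "$base/192.168.100.13/data/myid"   # prints 2
```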
5. Start the cluster
On each node of the ZooKeeper cluster, run the startup script:
cd /usr/local/zookeeper/
bin/zkServer.sh start   # run this on each of the three nodes
View the log (zookeeper.out is created in the directory from which the server was started):
tail -f zookeeper.out
Check the listening ports (only the leader listens on port 2888; followers do not, they listen only on port 3888):
[root@agent zookeeper]# netstat -tulnp | grep 88
tcp 0 0 ::ffff:192.168.100.13:3888 :::* LISTEN 18526/java
tcp 0 0 ::ffff:192.168.100.13:2888 :::* LISTEN 18526/java
[root@agent zookeeper]# netstat -tulnp | grep 2181
tcp 0 0 :::2181 :::* LISTEN 18526/java
[root@agent zookeeper]#
6. Verification
./bin/zkServer.sh status
Note: because the nodes are started in order, the first node will log election errors until the second and third nodes come up; these can safely be ignored once the whole cluster is running.
The status query on each node shows its role (a successfully elected leader indicates the cluster is healthy): here the second node is the Leader and the other two are Followers.
[root@agent zookeeper]# ./bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: / usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: leader
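A healthy ensemble reports exactly one leader. A sketch of checking this, fed canned zkServer.sh status output here; on a real cluster you would collect the Mode line from each node (for example over ssh):

```shell
# Sketch: count leader/follower roles from zkServer.sh status output.
# Canned output stands in for the three nodes' real status lines.
modes='Mode: follower
Mode: leader
Mode: follower'
leaders=$(printf '%s\n' "$modes" | grep -c '^Mode: leader')
followers=$(printf '%s\n' "$modes" | grep -c '^Mode: follower')
echo "leaders=$leaders followers=$followers"   # expect exactly one leader
```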
7. Test a client command connection
./bin/zkCli.sh -server 192.168.100.10:2181
After connecting, execute a command:
[zk: 192.168.100.10:2181(CONNECTED) 4] ls /
[zookeeper]
Note: the root path / currently contains only the /zookeeper node.
# install kafka
Download the package
[root@master src]# ll kafka_2.11-0.10.1.0.tgz
-rw-r--r-- 1 root root 34373824 Oct 20 2016 kafka_2.11-0.10.1.0.tgz
1. Extract to the installation directory
tar -xvf kafka_2.11-0.10.1.0.tgz -C /usr/local/
cd /usr/local/
ln -s kafka_2.11-0.10.1.0/ kafka
2. Modify the configuration file
cd /usr/local/kafka/config
vim server.properties
Items that need to be modified:
# The id of the broker. This must be set to a unique integer for each broker
broker.id=1
# A comma seperated list of directories under which to store log files
log.dirs=/usr/local/kafka/logs
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
zookeeper.connect=192.168.100.10:2181,192.168.100.13:2181,192.168.100.20:2181
Note: each Kafka node's broker.id must be different.
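A sketch of setting a node's broker.id with sed, run here against a throwaway copy; on each real node you would edit /usr/local/kafka/config/server.properties with that node's own id:

```shell
# Sketch: rewrite broker.id in a copy of server.properties.
# The id (2 here) would be this node's position in the cluster.
props=$(mktemp)
printf 'broker.id=1\nlog.dirs=/usr/local/kafka/logs\n' > "$props"
sed -i 's/^broker\.id=.*/broker.id=2/' "$props"
grep '^broker.id' "$props"   # prints broker.id=2
```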
3. Create a log directory
mkdir /usr/local/kafka/logs
4. Configure the other nodes the same way (each with its own broker.id).
5. Start
Start the Kafka brokers in the background (all 3 nodes need to be started):
cd /usr/local/kafka/bin
./kafka-server-start.sh -daemon ../config/server.properties
[root@master bin]# jps
7449 Jps
7427 Kafka # kafka process
31341 QuorumPeerMain # zk process
Check that all 3 Kafka processes are running; if the service fails to start, a configuration file error is the usual cause.
View the startup log:
tail -f /usr/local/kafka/logs/server.log
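A sketch of checking the startup log for a successful start, simulated with a canned line; the real file is /usr/local/kafka/logs/server.log, and the exact message text may vary by Kafka version:

```shell
# Sketch: grep server.log for the broker's "started" message.
# A canned log line stands in for the real file.
log=$(mktemp)
echo '[Kafka Server 1], started (kafka.server.KafkaServer)' > "$log"
if grep -q 'started (kafka.server.KafkaServer)' "$log"; then
  echo "broker started"
else
  echo "broker not up yet"
fi
```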
6. Test the kafka cluster
1- Enter the Kafka root directory and create a topic named test:
./bin/kafka-topics.sh --create --zookeeper 192.168.100.10:2181,192.168.100.13:2181,192.168.100.20:2181 --replication-factor 1 --partitions 1 --topic test
Results:
Created topic "test".
2-list the created topic
./bin/kafka-topics.sh --list --zookeeper 192.168.100.10:2181,192.168.100.13:2181,192.168.100.20:2181
Results:
test
Or use the zookeeper client to check:
./bin/zkCli.sh -server 192.168.100.10:2181
[zk: 192.168.100.10:2181(CONNECTED) 3] ls /brokers/topics
[test]
3- Simulate a client sending messages:
./bin/kafka-console-producer.sh --broker-list 192.168.100.10:9092,192.168.100.13:9092 --topic test
4- Simulate a client receiving messages (consuming does not delete messages, so each run from the beginning also shows messages produced earlier):
./bin/kafka-console-consumer.sh --zookeeper 192.168.100.10:2181,192.168.100.13:2181,192.168.100.20:2181 --topic test --from-beginning
5. Stop and restart
The Kafka brokers in the cluster need to be stopped one by one:
./bin/kafka-server-stop.sh
And started one by one:
cd /usr/local/kafka/bin
./kafka-server-start.sh -daemon ../config/server.properties
6. Delete the test topic
./bin/kafka-topics.sh --delete --zookeeper 192.168.100.10:2181,192.168.100.13:2181,192.168.100.20:2181 --topic test
Result
Topic test is marked for deletion.
Note: this has no real effect unless delete.topic.enable is set to true.
Because:
If delete.topic.enable=true is not set in the server.properties loaded when Kafka starts (the default is false), the topic is not actually deleted but only marked as "marked for deletion".
To delete it completely, connect to ZooKeeper and remove the node:
./bin/zkCli.sh -server 192.168.100.10:2181
rmr /brokers/topics/test
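Before relying on --delete, it is worth checking the flag first. A sketch against a throwaway properties file; the real one is /usr/local/kafka/config/server.properties:

```shell
# Sketch: warn when delete.topic.enable is absent or false,
# in which case --delete only marks the topic for deletion.
props=$(mktemp)
printf 'broker.id=1\ndelete.topic.enable=true\n' > "$props"
if grep -q '^delete.topic.enable=true' "$props"; then
  echo "real deletion enabled"
else
  echo "delete will only mark the topic"
fi
```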