2025-01-19 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/03 Report --
ZooKeeper + Kafka + Storm Cluster Deployment
I. Environment preparation before installation
Prepare three machines.
Operating system: CentOS 6.8
jdk:jdk-8u111-linux-x64.gz
zookeeper:zookeeper-3.4.11.tar.gz
kafka: kafka_2.11-1.0.1.tgz
storm:apache-storm-1.2.2.tar.gz
Configure /etc/hosts:
vi /etc/hosts
192.168.1.211 canal01
192.168.1.212 canal02
192.168.1.213 canal03
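The hosts edit above has to be made identically on all three machines. A minimal sketch of doing it idempotently (entries are appended only if absent) is shown below; it runs against a scratch copy here, and `TARGET` would be `/etc/hosts` (edited as root) on a real node:

```shell
# Append each cluster entry only if its hostname is not already present.
# TARGET is a scratch file for this demo; use /etc/hosts on a real node.
TARGET=$(mktemp)
printf '127.0.0.1 localhost\n' > "$TARGET"
while read -r ip name; do
    grep -qw "$name" "$TARGET" || echo "$ip $name" >> "$TARGET"
done <<'EOF'
192.168.1.211 canal01
192.168.1.212 canal02
192.168.1.213 canal03
EOF
cat "$TARGET"
```

Because the append is guarded by the `grep`, the snippet can be re-run safely without duplicating entries.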
II. JDK installation (on all three machines)
2.1 Extract the software
tar zxvf jdk-8u111-linux-x64.gz
mv jdk-8u111-linux-x64 /usr/local/jdk
2.2 Configure environment variables
vi /etc/profile
#java
JAVA_HOME=/usr/local/jdk
PATH=$JAVA_HOME/bin:$PATH
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME
export PATH
export CLASSPATH
Reload the profile so the environment variables take effect:
source /etc/profile
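Before touching the real /etc/profile, the fragment above can be dry-run from a scratch file to confirm the variables expand as intended (note there must be no space after `$` in `$JAVA_HOME`):

```shell
# Write the profile fragment to a scratch file and source it, then inspect
# the resulting variables. Paths match the tutorial's /usr/local/jdk layout.
PROFILE=$(mktemp)
cat > "$PROFILE" <<'EOF'
JAVA_HOME=/usr/local/jdk
PATH=$JAVA_HOME/bin:$PATH
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME PATH CLASSPATH
EOF
. "$PROFILE"
echo "$JAVA_HOME"
echo "$CLASSPATH"
```

On a real node, `java -version` after `source /etc/profile` should report the installed 1.8.0_111 build.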
III. ZooKeeper cluster installation (on all three machines)
3.1 Extract the software
tar zxvf zookeeper-3.4.11.tar.gz
mv zookeeper-3.4.11 /usr/local/zookeeper
cd /usr/local/zookeeper/conf
mv zoo_sample.cfg zoo.cfg
3.2 Configure zoo.cfg
vi zoo.cfg
#modify
dataDir=/usr/local/zookeeper/data
#Add
dataLogDir=/usr/local/zookeeper/logs
server.1=192.168.1.211:2888:3888
server.2=192.168.1.212:2888:3888
server.3=192.168.1.213:2888:3888
3.3 Create the data and log directories
mkdir /usr/local/zookeeper/data
mkdir /usr/local/zookeeper/logs
On 192.168.1.211:
echo "1" > /usr/local/zookeeper/data/myid
On 192.168.1.212:
echo "2" > /usr/local/zookeeper/data/myid
On 192.168.1.213:
echo "3" > /usr/local/zookeeper/data/myid
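Each node's myid must match the N of its own `server.N` line in zoo.cfg. Rather than typing it by hand on each machine, it can be derived from the config; the sketch below demonstrates this in a scratch directory for canal02 (192.168.1.212). On a real node the paths would point at /usr/local/zookeeper and `MY_IP` at the node's own address:

```shell
# Scratch-directory demo: extract this node's id from the server.N lines.
ZK_HOME=$(mktemp -d)
mkdir -p "$ZK_HOME/conf" "$ZK_HOME/data"
cat > "$ZK_HOME/conf/zoo.cfg" <<'EOF'
server.1=192.168.1.211:2888:3888
server.2=192.168.1.212:2888:3888
server.3=192.168.1.213:2888:3888
EOF
MY_IP=192.168.1.212
# Pull the N out of "server.N=<MY_IP>:2888:3888" and write it to myid.
id=$(sed -n "s/^server\.\([0-9]*\)=$MY_IP:.*/\1/p" "$ZK_HOME/conf/zoo.cfg")
echo "$id" > "$ZK_HOME/data/myid"
cat "$ZK_HOME/data/myid"
```

A mismatched myid is one of the most common reasons a ZooKeeper node fails to join the quorum, so deriving it from zoo.cfg removes a manual step that is easy to get wrong.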
3.4 Start ZooKeeper
cd /usr/local/zookeeper/bin/
./zkServer.sh start
3.5 Check the status
cd /usr/local/zookeeper/bin/
./zkServer.sh status
Note: when checking the cluster status, Mode: follower or Mode: leader in the output means the node has joined the quorum successfully.
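The Mode check can be scripted. Below is a hedged helper that classifies `zkServer.sh status` output, demonstrated on mocked output here (it assumes the 3.4.x-style `Mode: ...` line); on a real node you would pass it `"$(/usr/local/zookeeper/bin/zkServer.sh status 2>&1)"`:

```shell
# Return 0 (and print the mode) if the status output shows leader/follower,
# i.e. the node is part of the quorum; return 1 otherwise.
check_mode() {
    mode=$(printf '%s\n' "$1" | sed -n 's/^Mode: //p')
    case "$mode" in
        leader|follower) echo "in quorum ($mode)" ;;
        *) echo "not in quorum"; return 1 ;;
    esac
}
check_mode 'ZooKeeper JMX enabled by default
Mode: follower'
```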
IV. Kafka cluster installation (on all three machines)
4.1 Extract the software
tar zxvf kafka_2.11-1.0.1.tgz
mv kafka_2.11-1.0.1 /usr/local/kafka
4.2 Configure Kafka (run separately on each machine)
On 192.168.1.211:
cd /usr/local/kafka/config/
cp server.properties server0.properties
vi server0.properties
#modify
broker.id=0
zookeeper.connect=192.168.1.211:2181,192.168.1.212:2181,192.168.1.213:2181
On 192.168.1.212:
cd /usr/local/kafka/config/
cp server.properties server0.properties
vi server0.properties
#modify
broker.id=1
zookeeper.connect=192.168.1.211:2181,192.168.1.212:2181,192.168.1.213:2181
On 192.168.1.213:
cd /usr/local/kafka/config/
cp server.properties server0.properties
vi server0.properties
#modify
broker.id=2
zookeeper.connect=192.168.1.211:2181,192.168.1.212:2181,192.168.1.213:2181
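The three per-node edits differ only in `broker.id`, so they can be expressed as one sed script instead of an interactive vi session. The sketch below runs against a scratch copy of server.properties (only a few representative keys); on a real node the file is /usr/local/kafka/config/server0.properties and `BROKER_ID` is 0 on canal01, 1 on canal02, 2 on canal03:

```shell
# Scratch copy with the two keys the tutorial edits plus one untouched key.
WORK=$(mktemp -d)
cat > "$WORK/server0.properties" <<'EOF'
broker.id=0
num.network.threads=3
zookeeper.connect=localhost:2181
EOF
BROKER_ID=1
ZK_CONNECT="192.168.1.211:2181,192.168.1.212:2181,192.168.1.213:2181"
# Rewrite broker.id and zookeeper.connect in place; everything else is kept.
sed -i "s/^broker\.id=.*/broker.id=$BROKER_ID/" "$WORK/server0.properties"
sed -i "s|^zookeeper\.connect=.*|zookeeper.connect=$ZK_CONNECT|" "$WORK/server0.properties"
grep -E '^(broker\.id|zookeeper\.connect)=' "$WORK/server0.properties"
```

Note that `broker.id` must be unique across the cluster; two brokers sharing an id will conflict when registering in ZooKeeper.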
4.3 Start Kafka (in the background)
/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server0.properties
Check the logs to confirm startup:
cd /usr/local/kafka/logs
tail -200 kafkaServer.out
Startup succeeded if the tail of the log shows no exceptions or errors.
V. Storm cluster installation (on all three machines)
5.1 Extract the software
tar zxvf apache-storm-1.2.2.tar.gz
mv apache-storm-1.2.2 /usr/local/storm
5.2 Configure Storm
cd /usr/local/storm/conf/
vi storm.yaml
## Add the following configuration:
storm.zookeeper.servers:
  - "192.168.1.211"
  - "192.168.1.212"
  - "192.168.1.213"
storm.zookeeper.port: 2181
nimbus.seeds: ["canal01"]
storm.local.dir: "/usr/local/storm/storm-local"
supervisor.slots.ports:
  - 6700
  - 6701
  - 6702
  - 6703
storm.health.check.dir: "healthchecks"
storm.health.check.timeout.ms: 5000
5.3 Configure environment variables
vi /etc/profile
##storm
export STORM_HOME=/usr/local/storm
export PATH=${STORM_HOME}/bin:$PATH
Reload the profile so the environment variables take effect:
source /etc/profile
5.4 Start Storm
On 192.168.1.211:
storm nimbus >/dev/null 2>&1 &
storm ui &
On 192.168.1.212:
storm supervisor >/dev/null 2>&1 &
On 192.168.1.213:
storm supervisor >/dev/null 2>&1 &
Visit the Storm UI at http://192.168.1.211:8080