I. Requirements
Two environments are deployed locally: one development environment and one test environment. This article summarizes the test environment; the detailed installation steps are not covered here (a separate installation document can be provided for reference if needed). Each environment uses eleven machines. Note that all components are the CDH 5.5.0 releases, downloaded from:
http://archive.cloudera.com/cdh5/cdh/5/
II. Cluster planning
1. Test cluster
2. Real-time environment
III. Dependency description
The backend system includes the following components: zookeeper, hdfs, yarn, jobhistoryserver, hive metastore, hiveserver2, hbase, kafka, and jstorm. These components depend on one another in sequence, as shown in the following figure.
In the figure, upper-layer components depend on lower-layer components, so start the components from the bottom up and stop them from the top down.
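As an illustration of this ordering only, here is a minimal sketch of a start-all wrapper. The wrapper itself is hypothetical (it is not part of the deployment) and assumes passwordless ssh from an admin machine; the individual commands and hosts are the ones documented in the sections below.

#!/bin/bash
# Hypothetical wrapper: illustrates the bottom-up start order only.
for h in testhadoop240 testhadoop241 testhadoop242; do
  ssh "$h" /usr/local/zookeeper/bin/zkServer.sh start          # 1. zookeeper
done
ssh testhadoop231 /usr/local/hadoop/sbin/start-dfs.sh          # 2. hdfs
ssh testhadoop231 /usr/local/hadoop/sbin/start-yarn.sh         # 3. yarn
ssh testhadoop231 /usr/local/hadoop/sbin/mr-jobhistory-daemon.sh start historyserver
ssh testhadoop233 /usr/local/hive/bin/start-metastore.sh       # 4. hive metastore
ssh testhadoop233 /usr/local/hive/bin/start-hiveserver2.sh     # 5. hiveserver2
ssh testhadoop231 /usr/local/hbase/bin/start-hbase.sh          # 6. hbase
for h in testhadoop240 testhadoop241 testhadoop242; do
  ssh "$h" /usr/local/kafka_2.9.2-0.8.2.2/startKafkaServer.sh  # 7. kafka
done
# 8. jstorm: nimbus on testhadoop240, supervisors on 240~242 (see 2.9).
# Stopping is the same list in reverse, using the documented stop commands.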
IV. Backend operation and maintenance
1. Test environment deployment
Zookeeper: testhadoop240, testhadoop241, testhadoop242
Hadoop: testhadoop231, testhadoop232, testhadoop233, testhadoop234, testhadoop235, testhadoop236
Hive: testhadoop233
Hbase: testhadoop231, testhadoop232, testhadoop233
Kafka: testhadoop240, testhadoop241, testhadoop242
Jstorm: testhadoop240, testhadoop241, testhadoop242
MySQL, Nginx: testhadoop245
2. Start and stop command description
2.1 zookeeper
Deployment machines: testhadoop240, testhadoop241, testhadoop242
Deployment location: /usr/local/zookeeper
Start command: /usr/local/zookeeper/bin/zkServer.sh start (executed separately on each machine)
Stop command: /usr/local/zookeeper/bin/zkServer.sh stop (executed separately on each machine)
Start verification: /usr/local/zookeeper/bin/zkCli.sh -server testhadoop240:2181
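In addition to connecting with zkCli.sh, each node can be asked for its role directly; a sketch, assuming passwordless ssh from an admin machine:

for h in testhadoop240 testhadoop241 testhadoop242; do
  ssh "$h" /usr/local/zookeeper/bin/zkServer.sh status   # prints Mode: leader or follower
done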
2.2 hdfs
Deployment machines: testhadoop231, testhadoop232, testhadoop233, testhadoop234, testhadoop235, testhadoop236
Deployment location: /usr/local/hadoop
Start command: /usr/local/hadoop/sbin/start-dfs.sh (execute on testhadoop231 or testhadoop232)
Stop command: /usr/local/hadoop/sbin/stop-dfs.sh (execute on testhadoop231 or testhadoop232)
Start verification: access http://test.hdfs1.xxx.com or http://test.hdfs2.xxx.com
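Besides the web UIs, HDFS health can be checked from the command line. A sketch; the haadmin call assumes NameNode HA with NameNode IDs nn1/nn2, which are hypothetical names here (check dfs.ha.namenodes.* in hdfs-site.xml):

/usr/local/hadoop/bin/hdfs dfsadmin -report                # live datanodes and capacity
/usr/local/hadoop/bin/hdfs haadmin -getServiceState nn1    # expect "active" or "standby"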
2.3 yarn
Deployment machine: same as hdfs
Deployment location: same as hdfs
Start command: /usr/local/hadoop/sbin/start-yarn.sh (execute on testhadoop231 or testhadoop232)
Stop command: /usr/local/hadoop/sbin/stop-yarn.sh (execute on testhadoop231 or testhadoop232)
Start verification: access http://test.rm1.xxx.com or http://test.rm2.xxx.com
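Similarly, a sketch of a command-line check; the rmadmin call assumes ResourceManager HA with RM IDs rm1/rm2, which are hypothetical names here (check yarn.resourcemanager.ha.rm-ids in yarn-site.xml):

/usr/local/hadoop/bin/yarn node -list                      # registered NodeManagers
/usr/local/hadoop/bin/yarn rmadmin -getServiceState rm1    # expect "active" or "standby"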
2.4 jobhistoryserver
Deployment machine: testhadoop231
Deployment location: same as hdfs
Start command: /usr/local/hadoop/sbin/mr-jobhistory-daemon.sh start historyserver
Stop command: /usr/local/hadoop/sbin/mr-jobhistory-daemon.sh stop historyserver
Start verification: access http://test.mapreduce.xxx.com
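A possible process-level check on testhadoop231 (a sketch; 10020 and 19888 are the Hadoop defaults for the history server's IPC and web ports and may differ in mapred-site.xml):

jps | grep JobHistoryServer
netstat -tlnp | grep -E ':10020|:19888'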
2.5 hive metastore
Deployment machine: testhadoop233
Deployment location: /usr/local/hive
Start command: /usr/local/hive/bin/start-metastore.sh
Stop command: ps -ef | grep MetaStore to find the process ID, then kill it
Start verification: to be added
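Until the documented verification is added, one possible check (a sketch, not the documented procedure) is to confirm the metastore's thrift port, 9083 by default (see hive.metastore.uris in hive-site.xml), or to run a query through it with the hive CLI on testhadoop233:

netstat -tlnp | grep :9083
/usr/local/hive/bin/hive -e 'show databases;'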
2.6 hiveserver2
Deployment machine: testhadoop233
Deployment location: /usr/local/hive
Start command: /usr/local/hive/bin/start-hiveserver2.sh
Stop command: ps -ef | grep HiveServer2 to find the process ID, then kill it
Start verification: to be added
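Likewise, one possible check (a sketch, not the documented procedure): connect with beeline, assuming the default hiveserver2 port 10000 and that anonymous connections are allowed in the test environment:

/usr/local/hive/bin/beeline -u jdbc:hive2://testhadoop233:10000 -e 'show databases;'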
2.7 hbase
Deployment machines: testhadoop231, testhadoop232, testhadoop233
Deployment location: /usr/local/hbase
Start command: /usr/local/hbase/bin/start-hbase.sh (execute on testhadoop231)
Stop command: /usr/local/hbase/bin/stop-hbase.sh (execute on testhadoop231)
Start verification: access http://test.hbase.xxx.com
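In addition to the web UI, a sketch of a shell-level check on testhadoop231:

echo "status" | /usr/local/hbase/bin/hbase shell    # region servers, regions, average load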
2.8 kafka
Deployment machines: testhadoop240, testhadoop241, testhadoop242
Deployment location: /usr/local/kafka_2.9.2-0.8.2.2
Start command: /usr/local/kafka_2.9.2-0.8.2.2/startKafkaServer.sh
Stop command: ps -ef | grep kafka to find the process ID, then kill it
Start verification:
Log in to any one of dchadoop213, dchadoop214, or dchadoop215 as the hadoop2 user and execute the following commands:
a. Create topic
/usr/local/kafka_2.9.2-0.8.2.2/bin/kafka-topics.sh --zookeeper devhadoop237:2181,devhadoop238:2181,devhadoop239:2181/kafka --create --topic mytest --replication-factor 1 --partitions 3
b. View topic list
/usr/local/kafka_2.9.2-0.8.2.2/bin/kafka-topics.sh --zookeeper devhadoop237:2181,devhadoop238:2181,devhadoop239:2181/kafka --list
c. Create a producer
/usr/local/kafka_2.9.2-0.8.2.2/bin/kafka-console-producer.sh --broker-list devhadoop237:9092,devhadoop238:9092,devhadoop239:9092 --topic mytest
d. Create a consumer
/usr/local/kafka_2.9.2-0.8.2.2/bin/kafka-console-consumer.sh --zookeeper devhadoop237:2181,devhadoop238:2181,devhadoop239:2181/kafka --topic mytest --from-beginning
Here, mytest is the user-specified topic; in actual development, topics should be defined according to business needs.
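As an optional follow-up to the checks above (a sketch), the test topic can be inspected and then removed; note that --delete only takes effect if delete.topic.enable=true on the brokers, which is an assumption here:

/usr/local/kafka_2.9.2-0.8.2.2/bin/kafka-topics.sh --zookeeper devhadoop237:2181,devhadoop238:2181,devhadoop239:2181/kafka --describe --topic mytest
/usr/local/kafka_2.9.2-0.8.2.2/bin/kafka-topics.sh --zookeeper devhadoop237:2181,devhadoop238:2181,devhadoop239:2181/kafka --delete --topic mytest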
2.9 jstorm
Deployment machines: testhadoop240, testhadoop241, testhadoop242
Deployment location: /usr/local/jstorm-2.1.0/
Start commands:
Start nimbus: nohup jstorm nimbus & (execute on testhadoop240)
Start supervisor: nohup jstorm supervisor & (execute on testhadoop240~242)
Start web-ui: /usr/local/tomcat/bin/startup.sh
Stop commands:
Stop nimbus: ps -ef | grep nimbus to find the process ID, then kill it (execute on testhadoop240)
Stop supervisor: ps -ef | grep supervisor to find the process ID, then kill it (execute on testhadoop240~242)
Stop web-ui: /usr/local/tomcat/bin/shutdown.sh
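Start verification (a sketch): the jstorm client can list running topologies from any machine where jstorm is on the PATH, as the start commands above assume; the web UI served by tomcat offers the same view:

jstorm list    # prints each topology's name, status, uptime, and worker count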