This article mainly shows you how to build a Hadoop cluster. The content is easy to understand and clearly organized; I hope it helps resolve your doubts as we study "how to build a Hadoop cluster" together.
How to build a Hadoop cluster:
1.1 Plan a deployment of three nodes, namely hadoop0, hadoop1, and hadoop2.
hadoop0 is the master node (NameNode, JobTracker, SecondaryNameNode); hadoop1 and hadoop2 are slave nodes (DataNode, TaskTracker).
1.2 Decompress the Linux image file and configure the initial Linux environment; see "Hadoop pseudo-distributed Environment Construction (26)".
1.3 Delete the logs and tmp folders in the /usr/local/hadoop directory on hadoop0.
Stop the pseudo-distributed Hadoop on hadoop0 before deleting them, using the command stop-all.sh (see the sketch below).
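A minimal sketch of this step, run on hadoop0 (assuming the /usr/local/hadoop layout described above):
stop-all.sh
rm -rf /usr/local/hadoop/logs /usr/local/hadoop/tmp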
1.4 Set up passwordless SSH login between the nodes.
(1) On hadoop1, copy hadoop1's public key to hadoop0 with the command ssh-copy-id -i hadoop0
(2) On hadoop2, copy hadoop2's public key to hadoop0 with the command ssh-copy-id -i hadoop0
(3) Copy the authorized_keys file on hadoop0 to hadoop1 and hadoop2 with the commands
scp /root/.ssh/authorized_keys hadoop1:/root/.ssh
scp /root/.ssh/authorized_keys hadoop2:/root/.ssh
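To verify passwordless login afterwards, a quick sketch (this assumes each node already has a key pair from the earlier pseudo-distributed setup; otherwise generate one first with ssh-keygen):
ssh hadoop1 date
ssh hadoop2 date
Run these from hadoop0; each should print the remote date without prompting for a password.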
1.5 Modify the file /etc/hosts on hadoop0 so that it contains the following:
192.168.80.100 hadoop0
192.168.80.101 hadoop1
192.168.80.102 hadoop2
1.6 Copy the /etc/hosts configuration from hadoop0 to the hadoop1 and hadoop2 nodes, as shown below.
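A minimal sketch of this copy, run on hadoop0:
scp /etc/hosts hadoop1:/etc/hosts
scp /etc/hosts hadoop2:/etc/hosts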
1.7 Copy /usr/local/jdk and /usr/local/hadoop on hadoop0 to the same directories on hadoop1 and hadoop2:
scp -r /usr/local/jdk hadoop1:/usr/local
scp -r /usr/local/jdk hadoop2:/usr/local
scp -r /usr/local/hadoop hadoop1:/usr/local
scp -r /usr/local/hadoop hadoop2:/usr/local
1.8 Copy /etc/profile on hadoop0 to hadoop1 and hadoop2. Run on hadoop0:
scp /etc/profile hadoop1:/etc/profile
scp /etc/profile hadoop2:/etc/profile
Then execute source /etc/profile on hadoop1 and hadoop2 respectively.
1.9 Modify the cluster configuration. Only the configuration on the hadoop0 node needs to be modified; the other nodes do not.
(1) The location of the NameNode is defined by fs.default.name in the configuration file $HADOOP_HOME/conf/core-site.xml.
(2) The location of the JobTracker is defined by mapred.job.tracker in the configuration file $HADOOP_HOME/conf/mapred-site.xml.
(3) The location of the SecondaryNameNode is defined in the configuration file $HADOOP_HOME/conf/masters; set its content to hadoop0.
(4) The locations of the DataNode and TaskTracker nodes are defined in the configuration file $HADOOP_HOME/conf/slaves; set its content to hadoop1 and hadoop2. Example excerpts of these four files follow below.
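For reference, minimal excerpts of these files might look as follows. This is a hedged sketch for Hadoop 1.x; the port numbers 9000 and 9001 are common choices carried over from pseudo-distributed setups and are assumptions here, not values given above.
$HADOOP_HOME/conf/core-site.xml (inside the <configuration> element):
<property>
<name>fs.default.name</name>
<value>hdfs://hadoop0:9000</value>
</property>
$HADOOP_HOME/conf/mapred-site.xml (inside the <configuration> element):
<property>
<name>mapred.job.tracker</name>
<value>hadoop0:9001</value>
</property>
$HADOOP_HOME/conf/masters:
hadoop0
$HADOOP_HOME/conf/slaves:
hadoop1
hadoop2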
1.10 Execute the format command on hadoop0: hadoop namenode -format
1.11 Start the cluster from hadoop0 with the command start-all.sh
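To check that the right daemons came up, jps can be run on each node. Under the role layout assumed in step 1.1, the expected output would be roughly:
jps on hadoop0: NameNode, JobTracker, SecondaryNameNode (plus Jps itself)
jps on hadoop1 and hadoop2: DataNode, TaskTracker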
2.0 A method for dynamically adding a new slave node:
2.1 Use hadoop0 as the new slave node.
2.2 Modify the slaves file on hadoop0 and add hadoop0 to it.
2.3 Start the DataNode and TaskTracker processes on hadoop0 with the commands:
hadoop-daemon.sh start datanode
hadoop-daemon.sh start tasktracker
2.4 Refresh the cluster's node list on hadoop0 with the command
hadoop dfsadmin -refreshNodes
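To confirm that the new node registered, the standard report command can be used (Hadoop 1.x syntax; node names follow the example above):
hadoop dfsadmin -report
The output lists each live DataNode, so hadoop0 should now appear alongside hadoop1 and hadoop2.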
3.0 Modify the replication factor
hadoop fs -setrep 2 /hello
This sets the replication factor of the file /hello to 2.
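To verify the new replication factor, a quick sketch (/hello is the example path from above):
hadoop fsck /hello -files -blocks
The fsck report shows the replication count of each block.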
4.0 Safe mode
When the cluster is first started, it enters safe mode, which lasts 30 seconds by default.
In safe mode, the system checks the integrity of the data blocks.
While safe mode is active, client create and delete operations are prohibited.
hadoop dfsadmin -safemode enter | leave | get
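For example, to check the current state and then exit safe mode manually (Hadoop 1.x syntax):
hadoop dfsadmin -safemode get
hadoop dfsadmin -safemode leave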
These are all the contents of the article "How to build a Hadoop cluster". Thank you for reading! I hope the content shared here has helped you.