I. Cluster planning
A 3-node Hadoop cluster is built here: all three hosts deploy the DataNode and NodeManager services, while the NameNode and ResourceManager services are deployed only on hadoop001.
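For reference, the plan is summarized below:
Host        HDFS services         YARN services
hadoop001   NameNode, DataNode    ResourceManager, NodeManager
hadoop002   DataNode              NodeManager
hadoop003   DataNode              NodeManager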
II. Prerequisites
Hadoop depends on JDK, which must be installed in advance. The installation steps are documented separately in:
Installation of JDK under Linux
III. Configure passwordless SSH login
3.1 Generate a key pair
Use the ssh-keygen command to generate a public and private key pair on each host:
ssh-keygen
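Pressing Enter through the prompts accepts the defaults. Equivalently, a non-interactive sketch (assuming the default key path and an empty passphrase):
# generate an RSA key pair without any prompts
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa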
3.2 Passwordless login
Write the public key of hadoop001 into the ~/.ssh/authorized_keys file on the local machine and on the remote machines:
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop001
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop002
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop003
3.3 Verify passwordless login
ssh hadoop002
ssh hadoop003
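If ssh still prompts for a password, overly loose file permissions are a common culprit, since OpenSSH ignores an authorized_keys file that is group- or world-writable. A quick fix, run on the target host:
# tighten the permissions OpenSSH requires
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys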
IV. Cluster setup
4.1 Download and unpack
Download Hadoop. The CDH version of Hadoop is used here; the download address is http://archive.cloudera.com/cdh5/cdh/5/.
# tar -zvxf hadoop-2.6.0-cdh5.15.2.tar.gz
4.2 Configure environment variables
Edit the profile file:
# vim /etc/profile
Add the following configuration:
export HADOOP_HOME=/usr/app/hadoop-2.6.0-cdh5.15.2
export PATH=${HADOOP_HOME}/bin:$PATH
Execute the source command to make the configuration take effect immediately:
# source /etc/profile
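To confirm that the variables took effect, a quick check (hadoop version prints the build information of the release on the PATH):
# should print the Hadoop version and build details
hadoop version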
4.3 Modify the configuration
Go to the ${HADOOP_HOME}/etc/hadoop directory and modify the configuration files. The contents of each file are as follows:
1. hadoop-env.sh
# specify the installation location of JDK
export JAVA_HOME=/usr/java/jdk1.8.0_201/
2. core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop001:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp</value>
    </property>
</configuration>
3. hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/hadoop/namenode/data</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hadoop/datanode/data</value>
    </property>
</configuration>
4. yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop001</value>
    </property>
</configuration>
5. mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
6. slaves
Configure the hostname or IP address of all slave nodes, one per line. The DataNode and NodeManager services will be started on all of the slave nodes.
hadoop001
hadoop002
hadoop003
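With all of the files above in place, a property can be spot-checked with hdfs getconf, which prints the effective value of a configuration key:
# should print hdfs://hadoop001:8020 if core-site.xml was read correctly
hdfs getconf -confKey fs.defaultFS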
4.4 Distribution
Distribute the Hadoop installation package to the other two servers. It is recommended to configure Hadoop's environment variables on those two servers as well.
# distribute the installation package to hadoop002
scp -r /usr/app/hadoop-2.6.0-cdh5.15.2/ hadoop002:/usr/app/
# distribute the installation package to hadoop003
scp -r /usr/app/hadoop-2.6.0-cdh5.15.2/ hadoop003:/usr/app/
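To follow the recommendation above, a minimal sketch for the other two nodes (assuming the same /usr/app installation path):
# run on hadoop002 and hadoop003: append the same exports used in 4.2
echo 'export HADOOP_HOME=/usr/app/hadoop-2.6.0-cdh5.15.2' >> /etc/profile
echo 'export PATH=${HADOOP_HOME}/bin:$PATH' >> /etc/profile
source /etc/profile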
4.5 Initialization
Execute the NameNode initialization command on hadoop001:
hdfs namenode -format
4.6 Start the cluster
Go to the ${HADOOP_HOME}/sbin directory on hadoop001 and start Hadoop. The related services on hadoop002 and hadoop003 will be started as well:
# start the dfs service
start-dfs.sh
# start the yarn service
start-yarn.sh
4.7 View the cluster
Use the jps command on each server to view the service processes, or open the web UI directly on port 50070. You should see that three DataNodes are available at this point.
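For reference, a sketch of what jps might show on hadoop001 (the PIDs are placeholders; SecondaryNameNode is also started by default by start-dfs.sh):
$ jps
XXXX NameNode
XXXX SecondaryNameNode
XXXX DataNode
XXXX ResourceManager
XXXX NodeManager
XXXX Jps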
Click Live Nodes to see the details of each DataNode.
You can then check the status of YARN; its web UI uses port 8088.
V. Submit a job to the cluster
Jobs are submitted to the cluster in exactly the same way as in a stand-alone environment. As an example, submit the sample program bundled with Hadoop that estimates Pi; the command can be executed on any node:
hadoop jar /usr/app/hadoop-2.6.0-cdh5.15.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.15.2.jar pi 3 3
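The two trailing arguments are the number of map tasks and the number of samples per map; larger values yield a better estimate of Pi. The same jar ships other examples as well; a sketch running wordcount (the local file and HDFS paths below are illustrative):
# put a local file into HDFS and run the bundled wordcount example
hadoop fs -mkdir -p /wordcount/input
hadoop fs -put ./words.txt /wordcount/input
hadoop jar /usr/app/hadoop-2.6.0-cdh5.15.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.15.2.jar wordcount /wordcount/input /wordcount/output
# with the default single reducer, the result lands in part-r-00000
hadoop fs -cat /wordcount/output/part-r-00000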
For more articles in the Big Data series, see the GitHub open-source project: Big Data Getting Started Guide.