1. Preparing the cluster environment
1. Modify the hostname of every host
[root@hadoop1 ~]# hostnamectl set-hostname hadoop1    on CentOS 7; run on each machine with its own name (hadoop2 and hadoop3 likewise)
2. Map hostnames to IP addresses
[root@hadoop1 ~]# vi /etc/hosts
192.168.5.136 hadoop1
192.168.5.137 hadoop3
192.168.5.138 hadoop2
After modifying the file on one machine, copy it to the others:
[root@hadoop1 ~]# scp /etc/hosts root@hadoop2:/etc/
[root@hadoop1 ~]# scp /etc/hosts root@hadoop3:/etc/
3. Passwordless SSH login
[root@hadoop1 ~]# ssh-keygen -t rsa -P ''    execute once on each machine
Delete any existing files under /root/.ssh/ on the other machines first.
[root@hadoop1 tmp]# scp /root/.ssh/id_rsa.pub root@hadoop2:/root/.ssh/authorized_keys    copy the hadoop1 public key to every server
[root@hadoop1 tmp]# scp /root/.ssh/id_rsa.pub root@hadoop3:/root/.ssh/authorized_keys
[root@hadoop1 ~]# mv /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys    finally install the public key locally as well
Finally, verify:
[root@hadoop1 ~]# ssh hadoop2
[root@hadoop1 ~]# ssh hadoop3
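If ssh-copy-id is available, it achieves the same result and appends to authorized_keys instead of overwriting it; a minimal sketch, assuming the key pair generated above:
[root@hadoop1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@hadoop2
[root@hadoop1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@hadoop3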
4. Turn off the firewall (execute on all servers)
[root@hadoop1 ~]# systemctl stop firewalld.service
[root@hadoop1 ~]# systemctl disable firewalld.service
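To confirm firewalld is really off (assuming firewalld is the only firewall service in use):
[root@hadoop1 ~]# systemctl is-active firewalld.service    should print inactive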
5. Time synchronization
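All three nodes should agree on the clock; a simple one-shot sync with ntpdate, assuming the nodes can reach the public NTP pool (chrony or ntpd is the persistent alternative):
[root@hadoop1 ~]# yum install -y ntpdate
[root@hadoop1 ~]# ntpdate pool.ntp.org    run on every node, or schedule it via cron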
2. Setting up Hadoop environment
1. JAVA configuration
[root@hadoop1 software]# tar -zxvf jdk-8u171-linux-x64.tar.gz    decompress
[root@hadoop1 software]# mv jdk1.8.0_171/ /usr/java    move it to the target directory
Configure the environment variables:
[root@hadoop1 software]# vi /etc/profile
export JAVA_HOME=/usr/java
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
Finally, copy the JDK to the other machines and modify their environment variables:
[root@hadoop1 ~]# scp -r /usr/java/ root@hadoop2:/usr/
[root@hadoop1 ~]# scp -r /usr/java/ root@hadoop3:/usr/
Modify the environment variables as above, then reload:
[root@hadoop3 usr]# vi /etc/profile
[root@hadoop3 usr]# source /etc/profile    make the changes take effect
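A quick sanity check that the JDK resolves on every node (the version string is what the 8u171 tarball above implies):
[root@hadoop3 usr]# java -version    should report java version "1.8.0_171"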
2. Hadoop installation and configuration
Configure on one of the servers, then synchronize to the other machines.
[root@hadoop1 software]# tar -zxvf hadoop-2.7.7.tar.gz    decompress Hadoop
[root@hadoop1 software]# cd hadoop-2.7.7    enter the hadoop directory
Configure the environment variables:
[root@hadoop1 hadoop-2.7.7]# vi /etc/profile
export JAVA_HOME=/usr/java
export HADOOP_HOME=/opt/software/hadoop-2.7.7
export PATH=$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH
[root@hadoop1 hadoop-2.7.7]# source /etc/profile    make the changes take effect
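To verify that HADOOP_HOME and PATH resolve correctly (version output is what the 2.7.7 tarball implies):
[root@hadoop1 hadoop-2.7.7]# hadoop version    should report Hadoop 2.7.7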
3. Modify the configuration files
[root@hadoop1 hadoop-2.7.7]# cd /opt/software/hadoop-2.7.7/etc/hadoop/    switch to the directory that holds the configuration files
The main files to modify are core-site.xml, hdfs-site.xml, mapred-site.xml and yarn-site.xml.
Modify the core-site.xml configuration file:
[root@hadoop1 hadoop]# vi core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop1:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/software/hadoop-2.7.7/data/tmp</value>
    </property>
</configuration>
Modify the hdfs-site.xml configuration file:
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop3:50090</value>
    </property>
</configuration>
Modify the mapred-site.xml configuration file:
[root@hadoop1 hadoop]# cp mapred-site.xml.template mapred-site.xml
[root@hadoop1 hadoop]# vi mapred-site.xml    edit the configuration file
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop1:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop1:19888</value>
    </property>
</configuration>
Modify the yarn-site.xml configuration file:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop2</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>106800</value>
    </property>
</configuration>
Modify the slaves file:
[root@hadoop1 hadoop]# vi slaves
hadoop1
hadoop2
hadoop3
After completing these configurations on one machine (preferably hadoop1), use the scp command to push them to the other machines.
Transfer the Hadoop environment:
[root@hadoop1 hadoop]# scp -r /opt/software/hadoop-2.7.7/ root@hadoop2:/opt/software/
[root@hadoop1 hadoop]# scp -r /opt/software/hadoop-2.7.7/ root@hadoop3:/opt/software/
Configure the environment variables on the other nodes:
[root@hadoop2 software]# vi /etc/profile
export JAVA_HOME=/usr/java
export HADOOP_HOME=/opt/software/hadoop-2.7.7
export PATH=$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH
[root@hadoop2 software]# source /etc/profile    refresh so the changes take effect
After the transfer, start the cluster from the primary node. Before starting Hadoop, HDFS must be formatted, and only on hadoop1.
4. Start Hadoop
Perform the formatting on the NameNode machine:
[root@hadoop1 hadoop]# /opt/software/hadoop-2.7.7/bin/hdfs namenode -format    format command
Note:
If you need to reformat, first clean out the NameNode directory /opt/software/hadoop-2.7.7/data/tmp/dfs/name/current/ on hadoop1 and the corresponding DataNode data directories on every node; otherwise formatting generates a new clusterID that no longer matches the clusterID the DataNodes have recorded, and they will fail to join.
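A minimal sketch of that cleanup, assuming the data directory configured in core-site.xml above (run it on every node; it erases all HDFS data):
[root@hadoop1 hadoop]# rm -rf /opt/software/hadoop-2.7.7/data/tmp/*    then format again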
5. Start HDFS
[root@hadoop1 hadoop]# /opt/software/hadoop-2.7.7/sbin/start-dfs.sh
6. Start YARN
[root@hadoop1 hadoop]# /opt/software/hadoop-2.7.7/sbin/start-yarn.sh
7. Start ResourceManager on hadoop2
[root@hadoop2 software]# /opt/software/hadoop-2.7.7/sbin/yarn-daemon.sh start resourcemanager
8. Start the job history (log) server on hadoop1, the host configured in mapreduce.jobhistory.address
[root@hadoop1 ~]# /opt/software/hadoop-2.7.7/sbin/mr-jobhistory-daemon.sh start historyserver
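With everything started, jps on each node should show the daemons that this role layout implies (the process lists below follow from the configuration above, not from the original article):
[root@hadoop1 ~]# jps    expect NameNode, DataNode, NodeManager, JobHistoryServer
[root@hadoop2 ~]# jps    expect ResourceManager, DataNode, NodeManager
[root@hadoop3 ~]# jps    expect SecondaryNameNode, DataNode, NodeManager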
9. View the YARN web page
http://192.168.5.138:8088/cluster
10. Open the HDFS web page
http://192.168.5.136:50070/
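As a final smoke test, you can put a file into HDFS and run the bundled wordcount example (the /input and /output paths here are illustrative):
[root@hadoop1 ~]# hdfs dfs -mkdir -p /input
[root@hadoop1 ~]# hdfs dfs -put /etc/hosts /input/
[root@hadoop1 ~]# yarn jar /opt/software/hadoop-2.7.7/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.7.jar wordcount /input /output
[root@hadoop1 ~]# hdfs dfs -cat /output/part-r-00000    prints the word counts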
With deployment complete, you can start learning Hadoop.