2025-04-05 Update From: SLTechnology News&Howtos
Preface:
This guide assumes three virtual machines that can already ping one another, with firewalls disabled, hosts files updated, passwordless SSH configured, and hostnames set.
I. Transfer the installation files
1. Create an installation directory
mkdir /usr/local/soft
2. Open Xftp, navigate to that directory, and upload the required installation packages.
Confirm the packages are there: cd /usr/local/soft
II. Install Java
1. Check whether a JDK is already installed: java -version
2. If not, extract the Java package: tar -zxvf jdk-8u181-linux-x64.tar.gz
(Your package version may differ; adjust the filename accordingly.)
3. Rename the extracted JDK directory: mv jdk1.8.0_181 java
4. Configure the JDK environment: vim /etc/profile.d/jdk.sh
export JAVA_HOME=/usr/local/soft/java
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/rt.jar
5. Reload the environment variables and verify: source /etc/profile && java -version
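The effect of the jdk.sh script can be sanity-checked in isolation. The sketch below writes the same three export lines to a local scratch file, sources it, and echoes JAVA_HOME; the paths follow this guide, and no JDK needs to be installed for the check itself.

```shell
# Write the same exports as /etc/profile.d/jdk.sh to a local scratch file.
cat > jdk.sh <<'EOF'
export JAVA_HOME=/usr/local/soft/java
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/rt.jar
EOF

# Source it the way a login shell sources /etc/profile.d/*.sh.
. ./jdk.sh
echo "$JAVA_HOME"   # /usr/local/soft/java
```

On the real machines, `source /etc/profile` followed by `java -version` confirms the JDK itself is picked up.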
III. Install Hadoop
1. Extract the Hadoop package: tar -zxvf hadoop-3.1.1.tar.gz
2. Rename the extracted directory: mv hadoop-3.1.1 hadoop
3. Configure the Hadoop configuration files
3.1 Edit core-site.xml: vim hadoop/etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/soft/hadoop/tmp</value>
    <description>Abase for other temporary directories.</description>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <value>1440</value>
  </property>
</configuration>
3.2 Edit hdfs-site.xml: vim hadoop/etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node1:50090</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/soft/hadoop/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/soft/hadoop/tmp/dfs/data</value>
  </property>
</configuration>
3.3 Edit the workers file (list the worker hostnames, one per line): vim hadoop/etc/hadoop/workers
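For reference, a sketch of the workers file for this setup, assuming node1 and node2 are the DataNode hosts (adjust to your own hostnames). The mkdir is only there so the sketch runs standalone; on the cluster the directory already exists.

```shell
# Create the workers file listing one worker hostname per line.
# (mkdir -p only so this sketch is self-contained.)
mkdir -p hadoop/etc/hadoop
printf 'node1\nnode2\n' > hadoop/etc/hadoop/workers
cat hadoop/etc/hadoop/workers
```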
3.4 Edit hadoop-env.sh: vim hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/local/soft/java
3.5 modify the yarn-site.xml file: vim hadoop/etc/hadoop/yarn-site.xml
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
3.6 Apply the updated file: source hadoop/etc/hadoop/hadoop-env.sh
3.7 Edit the start-dfs.sh script: vim hadoop/sbin/start-dfs.sh
export HDFS_NAMENODE_SECURE_USER=root
export HDFS_DATANODE_SECURE_USER=root
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
3.8 Edit the stop-dfs.sh script: vim hadoop/sbin/stop-dfs.sh
export HDFS_NAMENODE_SECURE_USER=root
export HDFS_DATANODE_SECURE_USER=root
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
3.9 modify the start-yarn.sh configuration file: vim hadoop/sbin/start-yarn.sh
export YARN_RESOURCEMANAGER_USER=root
export HADOOP_SECURE_DN_USER=root
export YARN_NODEMANAGER_USER=root
3.10 Edit the stop-yarn.sh script: vim hadoop/sbin/stop-yarn.sh
export YARN_RESOURCEMANAGER_USER=root
export HADOOP_SECURE_DN_USER=root
export YARN_NODEMANAGER_USER=root
3.11 Suppress the native-library warning: vim hadoop/etc/hadoop/log4j.properties
log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
IV. Synchronize the configuration to the other nodes
1. Sync to node1: scp -r /usr/local/soft root@node1:/usr/local/
Sync to node2: scp -r /usr/local/soft root@node2:/usr/local/
2. After all transfers finish, configure the Hadoop environment variables: vim /etc/profile.d/hadoop.sh
# SET HADOOP
export HADOOP_HOME=/usr/local/soft/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
3. Copy the profile scripts to the other nodes as well:
To node1: scp /etc/profile.d/jdk.sh root@node1:/etc/profile.d/
scp /etc/profile.d/hadoop.sh root@node1:/etc/profile.d/
To node2: scp /etc/profile.d/jdk.sh root@node2:/etc/profile.d/
scp /etc/profile.d/hadoop.sh root@node2:/etc/profile.d/
4. Run on all three virtual machines:
source /etc/profile
source /usr/local/soft/hadoop/etc/hadoop/hadoop-env.sh
5. Format the HDFS file system (on master only): hdfs namenode -format
V. Start the cluster
cd /usr/local/soft/hadoop/sbin/
./start-all.sh
After startup, enter jps on each of the three virtual machines. With this configuration, master should show NameNode and ResourceManager, node1 should show DataNode, NodeManager, and SecondaryNameNode, and node2 should show DataNode and NodeManager (each list also includes the Jps process itself).
Test from a browser on Windows (substitute the IP address of your own master):
http://192.168.204.120:8088/cluster (YARN ResourceManager web UI)
http://192.168.204.120:9870 (HDFS NameNode web UI)
Hadoop test (run a MapReduce computation):
hadoop jar /usr/local/soft/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar wordcount /input /output
View the running results:
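For reference (these commands assume the cluster is up, use this guide's paths, and run on master), the wordcount job needs an existing /input directory in HDFS, and /output must not exist before the run. A minimal end-to-end run might look like:

```
hdfs dfs -mkdir -p /input
echo "hello hadoop hello world" > words.txt
hdfs dfs -put words.txt /input/
hadoop jar /usr/local/soft/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar wordcount /input /output
hdfs dfs -cat /output/part-r-00000
```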
The Hadoop configuration is now complete.
Summary
The above is the complete distributed installation guide for Hadoop 3.1.1 on CentOS 6.8. I hope it is helpful to you. If you have any questions, leave a message and I will reply promptly. Thank you for your support!