I. Configure the environment
1. Set hostname and corresponding address mapping
[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.230.130 master
192.168.230.131 slave1
192.168.230.100 slave2
# configure the hostname and hosts file on all three nodes
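The exact commands used on each node are not shown in the article; a minimal sketch, assuming a systemd-based system (so hostnamectl is available) and root ssh access to the slaves, might be:

[root@master ~]# hostnamectl set-hostname master          # run the matching command on slave1 and slave2
[root@master ~]# scp /etc/hosts root@slave1:/etc/hosts    # distribute the same mapping to the other nodes
[root@master ~]# scp /etc/hosts root@slave2:/etc/hosts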
2. Create a hadoop user on each of the three nodes
[root@master ~]# tail -1 /etc/passwd
hadoop:x:1001:1001::/home/hadoop:/bin/bash
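The commands that create the user are not shown; a minimal sketch, assuming a plain useradd/passwd workflow run as root on every node, could be:

[root@master ~]# useradd hadoop    # repeat on slave1 and slave2
[root@master ~]# passwd hadoop     # set the password that ssh-copy-id will ask for later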
II. Configure passwordless ssh login for the hadoop user between all nodes
1. Generate key
[hadoop@master ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
/home/hadoop/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
1c:16:61:04:4f:76:93:cd:da:9a:08:04:15:58:7d:96 hadoop@master
The key's randomart image is:
+--[ RSA 2048]----+
(randomart omitted)
+-----------------+
[hadoop@master ~]$
2. Send the public key
[hadoop@master ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@slave1
The authenticity of host 'slave1 (192.168.230.131)' can't be established.
ECDSA key fingerprint is 32:1a:8a:37:f8:11:bc:cc:ec:35:e6:37:c2:b8:e1:45.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
hadoop@slave1's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'hadoop@slave1'"
and check to make sure that only the key(s) you wanted were added.
[hadoop@master ~]$
[hadoop@master ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@slave2
[hadoop@master ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@master
# do the same on slave1 and slave2 so every node can reach the others
3. Verify the login
[hadoop@master ~]$ ssh hadoop@slave1
Last login: Wed Jul 26 01:11:22 2017 from master
[hadoop@slave1 ~]$ exit
logout
Connection to slave1 closed.
[hadoop@master ~]$ ssh hadoop@slave2
Last login: Wed Jul 26 13:12:00 2017 from master
[hadoop@slave2 ~]$ exit
logout
Connection to slave2 closed.
[hadoop@master ~]$
III. Configure Java
1. Use xftp to upload hadoop-2.7.3.tar.gz and jdk-8u131-linux-x64.tar.gz to master
[hadoop@master ~]$ ls
hadoop-2.7.3.tar.gz  jdk-8u131-linux-x64.tar.gz
2. As the root user, extract the archive and move it to /usr/local
[hadoop@master ~]$ exit
exit
[root@master ~]# cd /home/hadoop/
[root@master hadoop]# ls
hadoop-2.7.3.tar.gz  jdk-8u131-linux-x64.tar.gz
[root@master hadoop]# tar -zxf jdk-8u131-linux-x64.tar.gz
[root@master hadoop]# ls
hadoop-2.7.3.tar.gz  jdk1.8.0_131  jdk-8u131-linux-x64.tar.gz
[root@master hadoop]# mv jdk1.8.0_131 /usr/local/
[root@master hadoop]# cd /usr/local/
[root@master local]# ls
bin  etc  games  include  jdk1.8.0_131  lib  lib64  libexec  sbin  share  src
[root@master local]#
3. Configure the Java environment variables (set system-wide in /etc/profile here)
[root@master ~]# vim /etc/profile    # add the following Java environment variables
[root@master ~]# tail -5 /etc/profile
export JAVA_HOME=/usr/local/jdk1.8.0_131    # note the jdk version
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$JAVA_HOME/bin:$PATH
[root@master ~]#
[root@master ~]# source /etc/profile    # make the configuration take effect
4. Test whether Java is configured correctly on master
[root@master ~]# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
[root@master ~]#
5. Copy jdk to slave1 and slave2 using scp
[root@master ~]# scp -r /usr/local/jdk1.8.0_131/ root@slave1:/usr/local/
[root@master ~]# scp -r /usr/local/jdk1.8.0_131/ root@slave2:/usr/local/
6. Configure the environment variables on slave1 and slave2 (same as step 3), then verify with java -version.
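How the variables reach the slaves is not shown in the article; a minimal sketch, assuming the same export lines are simply appended to /etc/profile on each slave over ssh (hypothetical commands, adjust as needed), might be:

[root@master ~]# ssh root@slave1 'cat >> /etc/profile' <<'EOF'
export JAVA_HOME=/usr/local/jdk1.8.0_131
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$JAVA_HOME/bin:$PATH
EOF
[root@master ~]# ssh root@slave1 'source /etc/profile && java -version'    # verify on slave1
# repeat both steps for slave2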
IV. Configure the Hadoop environment
1. Extract Hadoop and move it to /usr/local
[root@master ~]# cd /home/hadoop/
[root@master hadoop]# ls
hadoop-2.7.3.tar.gz  jdk-8u131-linux-x64.tar.gz
[root@master hadoop]# tar -zxf hadoop-2.7.3.tar.gz
[root@master hadoop]# mv hadoop-2.7.3 /usr/local/hadoop
[root@master hadoop]# ls /usr/local/
bin  etc  games  hadoop  include  jdk1.8.0_131  lib  lib64  libexec  sbin  share  src
2. Change the ownership of the hadoop directory
[root@master ~]# cd /usr/local
[root@master local]# chown -R hadoop:hadoop /usr/local/hadoop
[root@master local]# ll
drwxr-xr-x  9 hadoop hadoop  149 Aug 17  2016 hadoop
[root@master local]#
3. Configure hadoop environment variables
[root@master local]# vim /etc/profile
[root@master local]# tail -4 /etc/profile
#hadoop
export HADOOP_HOME=/usr/local/hadoop    # note the path
export PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"
[root@master local]#
[root@master local]# source /etc/profile    # make the configuration take effect
4. Test
[root@master local]# hadoop version
Hadoop 2.7.3
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff
Compiled by root on 2016-08-18T01:41Z
Compiled with protoc 2.5.0
From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-2.7.3.jar
[root@master local]#
5. Configure hadoop-env.sh
[root@master local]# cd $HADOOP_HOME/etc/hadoop
[root@master hadoop]# pwd
/usr/local/hadoop/etc/hadoop
[root@master hadoop]#
[root@master hadoop]# vim hadoop-env.sh
[root@master hadoop]# tail -1 hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_131    # added at the end
[root@master hadoop]#
6. Configure core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
</configuration>
7. Configure hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <!-- the directory does not exist; create it manually and change its owner to hadoop -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop/dfs/name</value>
    </property>
    <!-- the directory does not exist; create it manually and change its owner to hadoop -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop/dfs/data</value>
    </property>
</configuration>
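The commands that create these directories are not shown in the article; a minimal sketch, assuming they are created as root on master and handed to the hadoop user (hypothetical commands matching the paths above), could be:

[root@master hadoop]# mkdir -p /usr/local/hadoop/dfs/name /usr/local/hadoop/dfs/data
[root@master hadoop]# chown -R hadoop:hadoop /usr/local/hadoop/dfs    # the directories must be writable by the hadoop user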
8. Configure yarn-site.xml
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
9. Configure mapred-site.xml
[root@master hadoop]# cp mapred-site.xml.template mapred-site.xml
[root@master hadoop]# vim mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
10. Configure slaves
[root@master hadoop]# vim slaves
[root@master hadoop]# cat slaves
slave1
slave2
[root@master hadoop]#
11. Use scp to copy the configured hadoop directory to the slave1 and slave2 nodes
[root@master ~]# scp -r /usr/local/hadoop root@slave1:/usr/local/
[root@master ~]# scp -r /usr/local/hadoop root@slave2:/usr/local/
12. Configure the environment variables on slave1 and slave2 (same as step 3), then verify with hadoop version.
13. Format HDFS with hdfs namenode -format
[root@master hadoop]# su hadoop
[hadoop@master hadoop]$ cd /usr/local/hadoop/
[hadoop@master hadoop]$ hdfs namenode -format    # must be run as the hadoop user
17/07/26 20:26:12 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/192.168.230.130
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.3
...
17/07/26 20:26:15 INFO util.ExitUtil: Exiting with status 0    # status 0 means success
17/07/26 20:26:15 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.230.130
************************************************************/
[hadoop@master hadoop]$
V. Start the Hadoop services
1. Start all services
[hadoop@master dfs]$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
hadoop@master's password:     # enter the hadoop user's password on master
master: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-master.out
slave1: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-slave1.out
slave2: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-slave2.out
Starting secondary namenodes [0.0.0.0]
hadoop@0.0.0.0's password:    # enter the hadoop user's password on master
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-master.out
slave1: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-slave1.out
slave2: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-slave2.out
[hadoop@master dfs]$
2. Verification
[hadoop@master dfs]$ jps    # processes on master
7491 Jps
6820 NameNode
7014 SecondaryNameNode
7164 ResourceManager
[hadoop@master dfs]$
[root@slave1 name]# jps     # processes on slave1
3160 NodeManager
3050 DataNode
3307 Jps
[root@slave1 name]#
[root@slave2 name]# jps     # processes on slave2
3233 DataNode
3469 Jps
3343 NodeManager
[root@slave2 name]#
3. Use a browser to manage the cluster. With Hadoop 2.7's default ports, the NameNode web UI should be reachable at http://master:50070 and the YARN ResourceManager UI at http://master:8088.
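If no graphical browser is available on the test machine, a quick check from the shell (assuming curl is installed; the ports are the 2.7 defaults mentioned above) could be:

[hadoop@master ~]$ curl -sI http://master:50070 | head -1    # NameNode web UI
[hadoop@master ~]$ curl -sI http://master:8088 | head -1     # ResourceManager web UI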
VI. Summary
1. hdfs namenode -format was first run as the root user, which left the /usr/local/hadoop/dfs/data directory owned by root; when the services were later started as the hadoop user, the NameNode could not start (a hedged fix is sketched after this list).
2. Analyzing the log files to find the root cause of a problem makes it possible to solve it in a targeted way.
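The fix for the ownership problem in point 1 is not shown in the article; a minimal sketch, assuming the dfs directories simply need to be handed back to the hadoop user before restarting (hypothetical commands), would be:

[root@master ~]# chown -R hadoop:hadoop /usr/local/hadoop/dfs    # return the dfs directories to the hadoop user
[root@master ~]# su hadoop
[hadoop@master ~]$ start-all.sh    # restart the services as the hadoop user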