The installation process of hadoop 2.5.2
This article introduces the installation process of hadoop 2.5.2. In practice, many people run into trouble at one step or another, so this walkthrough goes through the procedure and shows how to deal with those situations. I hope you read it carefully and get something out of it!
Before installing, think through your own installation order. Mine was:
1 Decide how many machines are in the cluster and how the namenode and datanodes are allocated, then update the corresponding hosts configuration file
2 Enable passwordless SSH login between the cluster machines (in a hadoop cluster, the namenode node must be able to SSH to the datanode nodes without a password)
3 Configure the environment in detail, mainly the jdk package and the hadoop package
This article mainly covers the installation process. For the detailed configuration files, see http://my.oschina.net/u/259463/blog/514008.
1 Modify the /etc/hosts file
192.168.1.100 nameNode
192.168.1.101 dataNode1
192.168.1.102 dataNode2
Add every machine in the cluster to the hosts file.
2 Set up SSH passwordless authentication
Note: 1 In a hadoop cluster, the nameNode node must be able to SSH to the dataNode nodes without a password.
2 Steps 2), 3), and 4) below must be repeated for each dataNode machine (see the sketch after step 4) for a one-pass alternative).
1) First, generate the key pair on the nameNode node
Enter: ssh-keygen -t rsa
You are prompted three times during generation (basically you just press Enter each time)
1 accept the default file in which to save the key
2 enter a passphrase (leave it empty; if you do set one, it must be at least 5 characters)
3 repeat the passphrase
2) Copy the generated public key to the dataNode node (for a non-root user, use that user's home directory under /home)
Enter: cd /root/.ssh
scp ./id_rsa.pub root@192.168.1.101:/root/.ssh/authorized_keys
Along the way, you are asked for the password of the target machine.
3) Check the permissions of authorized_keys on the datanode machine and make sure they are 644 (-rw-r--r--). If not, fix them with: chmod 644 authorized_keys
4) Test: ssh 192.168.1.101
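If you have more than one dataNode, repeating step 2) by hand gets tedious. Here is a minimal sketch that distributes the key to both dataNodes from the hosts file above in one pass; it appends with cat instead of overwriting, so any keys already in authorized_keys survive (run on the nameNode as root, after ssh-keygen; you are asked for each machine's password once):

for host in 192.168.1.101 192.168.1.102; do
    # append the public key and fix permissions on each dataNode
    cat /root/.ssh/id_rsa.pub | ssh root@$host \
        "mkdir -p /root/.ssh && cat >> /root/.ssh/authorized_keys && chmod 644 /root/.ssh/authorized_keys"
done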
3 Install the JDK
1) For a JDK in .bin format:
Copy jdk-6u45-linux-i586.bin to the /usr/java folder and execute ./jdk-6u45-linux-i586.bin inside /usr/java
2) The more common package format is jdk-7-linux-i586.tar.gz:
tar -zxvf jdk-7-linux-i586.tar.gz -C /usr/java
Open /etc/profile and add at the end of the file:
export JAVA_HOME=/usr/java/jdk1.7.0_23
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
You can run source /etc/profile to make the changes take effect immediately.
Enter java -version to test
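Since the JDK must be present on every node, a quick cluster-wide check is handy. A minimal sketch, assuming the passwordless SSH from step 2 works and the same /etc/profile lines were added on each machine:

for host in nameNode dataNode1 dataNode2; do
    echo "== $host =="
    # source /etc/profile so the new JAVA_HOME is picked up in the non-login shell
    ssh root@$host "source /etc/profile && java -version"
done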
4 Install hadoop
hadoop-2.5.2
Unpack the downloaded hadoop package the same way as the JDK.
Example: tar -zxvf hadoop-2.5.2.tar.gz -C /home/hadoop
Also add the corresponding environment variables to the environment variables file.
Open /etc/profile and add at the end of the file:
# set hadoop_env
export HADOOP_HOME=/home/hadoop/hadoop-2.5.2
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
Run source /etc/profile so the changes take effect immediately
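To confirm that the variables are being picked up, you can run hadoop version from any directory; its first line should report the release (a quick sanity check, assuming the paths above):

hadoop version
# expected first line: Hadoop 2.5.2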
5 Configure the configuration files hadoop requires
For details, please see http://my.oschina.net/u/259463/blog/514008
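The linked post has the complete set of files. As a point of reference, a minimal core-site.xml sketch for this cluster could look like the following; fs.defaultFS uses the nameNode host name from the hosts file above, while the 9000 port and the hadoop.tmp.dir path are assumptions (the same tmp path is referenced again when re-formatting in step 8):

<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://nameNode:9000</value>
  </property>
  <property>
    <!-- assumed location; match this to whatever tmp dir you actually use -->
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>
  </property>
</configuration>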
6 Use the scp command to copy the hadoop folder to the same path on every node machine.
Example: scp -r hadoop-2.5.2/ root@dataNode1:/home/hadoop
-r copies the folder recursively; root is the user name on the target machine; after the @ comes the host name or IP (see the hosts configuration file above); after the colon comes the destination path.
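To push to both dataNodes in one go, a small sketch reusing the host names from /etc/hosts:

for host in dataNode1 dataNode2; do
    # copy the unpacked hadoop tree to the same path on each node
    scp -r /home/hadoop/hadoop-2.5.2 root@$host:/home/hadoop/
done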
7 Check the firewall configuration, or simply turn the firewall off
Check the status with /etc/init.d/iptables status and stop it with /etc/init.d/iptables stop; for other distributions, please search Baidu.
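On a RHEL/CentOS 6-style system, the usual sequence looks like this (a sketch; note that stop only disables the firewall until the next reboot, while chkconfig keeps it off permanently):

/etc/init.d/iptables status   # show the current state
/etc/init.d/iptables stop     # stop it for this session
chkconfig iptables off        # keep it off across reboots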
8 Start and test
First format the namenode: ./bin/hadoop namenode -format
If formatting fails for some unrelated reason, or if this is not the first time you format, delete the hadoop.tmp.dir tmp folder configured in core-site.xml and format again.
If the output (within roughly the last few lines) contains /hadoop-2.5.2/hdfs/name has been successfully formatted, the format succeeded; if it reports an error, search Baidu or Google for the message.
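A minimal re-format sketch (the tmp path is an assumption carried over from the core-site.xml sketch in step 5; adjust it to your actual hadoop.tmp.dir, and clear it on every node):

sbin/stop-dfs.sh                  # stop HDFS if it is still running
rm -rf /home/hadoop/tmp           # clear the old hadoop.tmp.dir
./bin/hadoop namenode -format     # format again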
9 Then start hdfs and yarn
sbin/start-dfs.sh
sbin/start-yarn.sh
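You can verify the daemons with the JDK's jps tool on each machine. With the layout in this article, the nameNode typically shows NameNode, SecondaryNameNode and ResourceManager, and each dataNode shows DataNode and NodeManager (a rough expectation, not verbatim output):

jps    # run on every node and compare against the list above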
10 After a successful startup, you can access the web UIs in a browser:
http://192.168.1.100:50070/
http://192.168.1.100:8088/
If both pages load without problems, hadoop and the JDK were installed successfully. Now let's get to work.
This concludes "the installation process of hadoop 2.5.2". Thank you for reading!