Hadoop 2.4.1 pseudo-distributed mode deployment - wrencai
Time: 2014-08-08 14:54:33, cnblogs
Original: http://www.cnblogs.com/wrencai/p/3899375.html
(This continues from the compilation and installation of hadoop-2.4.1-src in the previous post: http://www.cnblogs.com/wrencai/p/3897438.html)
Thanks to: http://blog.sina.com.cn/s/blog_5252f6ca0101kb3s.html
Thanks to: http://blog.csdn.net/coolwzjcool/article/details/32072157
1. Configure the Hadoop environment variables
Append the PATH entries for the Hadoop installation directory at the end of the /etc/profile file:
export HADOOP_PREFIX=/opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1
export PATH=$PATH:$HADOOP_PREFIX/bin
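To apply the new variables in the current shell and verify them, one option (assuming the source build completed and bin/hadoop exists under that directory) is:
source /etc/profile
hadoop version   # should report Hadoop 2.4.1 and the build information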
2. Configure the Hadoop configuration files
Go to the Hadoop installation directory, here /opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1.
Configure the files under etc/hadoop (the relevant files are hadoop-env.sh, core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml).
a. Prepare core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop/hadoop-2.4.1/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop/hadoop-2.4.1/dfs/data</value>
  </property>
</configuration>
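(In Hadoop 2.x the fs.default.name key is deprecated in favor of fs.defaultFS; the old key still works here but logs a deprecation warning.)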
Note: the hadoop user in the paths above (shown in red in the original post) is the account name I created to configure hadoop-2.4.1. Its directory under /home is created automatically by the system, and the paths can be changed as needed.
b. Prepare hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/hadoop-2.4.0/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/hadoop-2.4.0/dfs/data</value>
  </property>
</configuration>
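dfs.replication is set to 1 because a pseudo-distributed cluster runs only a single DataNode, so HDFS cannot place additional replicas anyway.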
c. Prepare mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.jobtracker.address</name>
    <value>localhost:9001</value>
  </property>
</configuration>
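One caveat: in the Hadoop 2.4.x distribution, mapred-site.xml typically ships only as a template. If the file does not exist yet, create it from the template before editing:
cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml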
d. Configure yarn-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
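The mapreduce_shuffle auxiliary service is what lets each NodeManager serve map outputs to reducers; without it, MapReduce jobs stall in the shuffle phase. (mapreduce.framework.name=yarn is also commonly placed in mapred-site.xml, where the MapReduce client reads it.)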
3. Passwordless ssh login setup (reference: http://lhflinux.blog.51cto.com/1961662/526122)
ssh connections require password authentication by default. This can be changed to system key authentication (public key / private key); after the change, logging in between systems no longer prompts for a password during ssh authentication.
a. Modify the file: vi /etc/ssh/sshd_config
RSAAuthentication yes                     # enable RSA key authentication
PubkeyAuthentication yes                  # enable public-key authentication
AuthorizedKeysFile .ssh/authorized_keys   # location of the public key file
PasswordAuthentication no                 # refuse password logins
GSSAPIAuthentication no                   # avoids slow-login and error-reporting problems
ClientAliveInterval 300                   # seconds of idle time before a keepalive check
ClientAliveCountMax 10                    # unanswered keepalives allowed before the session is dropped
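After saving the file, sshd must be restarted for the settings to take effect; on a CentOS/RHEL 6-era system, for example:
service sshd restart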
b. As root, in root's home directory, execute:
ssh-keygen -t rsa -P ''
Press Enter to accept the defaults, then execute the next command. (This machine also acts as a node of the pseudo-cluster, so its own key must be appended to authorized_keys; if the next command is not executed, an "Agent admitted failure to sign using the key" error may appear. Refer to http://blog.chinaunix.net/uid-28228356-id-3510267.html)
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
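If ssh localhost still prompts for a password after this, overly permissive file modes are a common cause, since sshd ignores keys it considers unsafe; the usual fix is:
chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys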
c. Execute the following command; if it logs you in directly with no password prompt, the setup succeeded.
[root@localhost ~]# ssh localhost
Last login: Fri Aug  8 13:44:42 2014 from localhost
4. Run and test Hadoop
a. Go to the hadoop-2.4.1 directory and execute the following command to format the NameNode. The output should end with a "shutting down ..." SHUTDOWN_MSG, with no WARN or FATAL errors in between. A "STARTUP_MSG: host = java.net.UnknownHostException: localhost.localdomain: localhost.localdomain" message may appear here; refer to http://lxy2330.iteye.com/blog/1112806 for a permanent fix, or temporarily change the host name with the hostname localhost command.
./bin/hadoop namenode -format
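On success the output also includes a line of the form "INFO common.Storage: Storage directory ... has been successfully formatted." (In 2.x, bin/hdfs namenode -format is the preferred equivalent; hadoop namenode is deprecated but still works.)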
b. Execute sbin/start-all.sh to start Hadoop. It may not succeed on the first try; in that case, run sbin/stop-all.sh first and then sbin/start-all.sh again. Finally, use the jps command to check the processes:
[root@localhost hadoop-2.4.1]# ./sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1/logs/hadoop-root-namenode-localhost.out
localhost: starting datanode, logging to /opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1/logs/hadoop-root-datanode-localhost.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1/logs/hadoop-root-secondarynamenode-localhost.out
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1/logs/yarn-root-resourcemanager-localhost.out
localhost: starting nodemanager, logging to /opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1/logs/yarn-root-nodemanager-localhost.out
[root@localhost hadoop-2.4.1]# ssh localhost
Last login: Fri Aug  8 13:44:41 2014 from localhost
[root@localhost ~]# jps
28186 ResourceManager
28025 SecondaryNameNode
27743 NameNode
28281 NodeManager
29223 Jps
[root@localhost ~]#
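In a healthy pseudo-distributed deployment, jps should also show a DataNode process. If it is missing, check logs/hadoop-root-datanode-localhost.log; a common cause after reformatting the NameNode is a namespaceID/clusterID mismatch, fixed by clearing the configured dfs/data directory and restarting. The web UIs are another quick check: in Hadoop 2.4 the NameNode listens on http://localhost:50070 and the ResourceManager on http://localhost:8088 by default.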