2025-01-18 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
This article describes how to configure Hadoop. The editor finds it very practical, so it is shared here for your reference; follow along below.
1. Set up passwordless SSH login
Pay attention to two points:
You must also set up passwordless ssh login from the machine to itself.
localhost must be mapped to the machine's own IP in /etc/hosts.
If you are still prompted for a password after the setup, it may be a permission problem with .ssh; try the following commands:
chown root /root/.ssh
chown root /root/.ssh/*
chmod 700 /root/.ssh
chmod 600 /root/.ssh/*
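The key setup itself can be sketched as follows; the rsa key type and default paths are typical choices, not taken from the original:

```shell
# Create the .ssh directory and an rsa key pair with an empty passphrase
# (skipped if a key already exists), then authorize the key for login
# back into this same account.
SSH_DIR="${SSH_DIR:-$HOME/.ssh}"
mkdir -p "$SSH_DIR"
[ -f "$SSH_DIR/id_rsa" ] || ssh-keygen -t rsa -N "" -f "$SSH_DIR/id_rsa" -q
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
chmod 700 "$SSH_DIR"
chmod 600 "$SSH_DIR/authorized_keys"
```

Afterwards, ssh to localhost (or to the machine's own hostname) should log in without a password prompt.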
2. Modify the configuration files under the etc/hadoop directory and the sbin directory
core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://backup01:8020</value>
    <description>For namenode listening</description>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>4096</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/hadoop/tmp</value>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>backup01:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>backup01:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>backup01:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>backup01:8033</value>
  </property>
</configuration>
mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>backup01:9001</value>
  </property>
</configuration>
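Note that mapred.job.tracker is the classic MRv1 (JobTracker) setting. On a YARN-based 2.x/3.x cluster like the one configured here, mapred-site.xml more commonly just points MapReduce at YARN; a minimal alternative using the standard property name (this substitution is a suggestion, not part of the original setup):

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```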
hadoop-env.sh
Add the Java path at the beginning of the file:
export JAVA_HOME=/usr/local/jdk
export HADOOP_PID_DIR=/usr/local/hadoop/tmp
yarn-env.sh
Add the Java path at the beginning of the file:
export JAVA_HOME=/usr/local/jdk
masters (note: 3.x.x does not use the masters file)
Use backup01 as the secondary namenode:
backup01
slaves (note: in 3.x.x this corresponds to the workers file)
backup02
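The hostnames backup01 and backup02 used throughout these files must resolve on every node (see the /etc/hosts note in step 1). A minimal /etc/hosts sketch, where the IP addresses are placeholders for your actual cluster addresses:

```
192.168.1.101   backup01
192.168.1.102   backup02
```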
sbin/yarn-daemon.sh
Add the following line near the top:
export YARN_PID_DIR=/usr/local/hadoop/tmp
Additional operations required for the 3.x.x version of Hadoop
The four files start-dfs.sh, stop-dfs.sh, start-yarn.sh, and stop-yarn.sh under the sbin path need to be modified, otherwise the following error is thrown when running Hadoop:
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Add the following parameters near the top of start-dfs.sh and stop-dfs.sh, just below the shebang line:
#!/usr/bin/env bash
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=root
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
The following parameters need to be added at the top of start-yarn.sh and stop-yarn.sh:
#!/usr/bin/env bash
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=root
YARN_NODEMANAGER_USER=root
3. Format HDFS with the following command
hdfs namenode -format
4. Start Hadoop
$ ./sbin/start-dfs.sh
$ ./sbin/start-yarn.sh
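To check which daemons actually came up, jps lists the running Java processes. The small helper below (a sketch, not from the original article) reads jps output and reports any expected daemons that are missing; which daemons to expect depends on the node's role (on this layout, backup01 runs the master daemons and backup02 runs the DataNode and NodeManager):

```shell
# check_daemons: read `jps` output on stdin and report which of the
# daemon names given as arguments are not running.
check_daemons() {
  input=$(cat)
  missing=""
  for d in "$@"; do
    echo "$input" | grep -qw "$d" || missing="$missing $d"
  done
  if [ -z "$missing" ]; then
    echo "all expected daemons running"
  else
    echo "missing:$missing"
  fi
}

# Example (on the master node):
#   jps | check_daemons NameNode SecondaryNameNode ResourceManager
```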
5. Enter the following commands to verify that Hadoop started successfully
hadoop fs -mkdir /in
hadoop fs -ls /
Thank you for reading! This concludes the article on how to configure Hadoop. I hope the content above has been helpful; if you found the article useful, feel free to share it so more people can see it.