How to install and configure hadoop
This article explains how to install and configure Hadoop. The content is simple and clear and easy to learn; follow the steps below in order.
I. Configure JAVA_HOME
Java is already installed on my system and version 1.8 meets Hadoop's requirements, so I only need to point JAVA_HOME at the directory where it is installed.
The first step is to find Java's installation directory.
Get the path of the java command first; the head of that path is the Java installation directory.
Running ls -l (ll) twice follows the chain of soft links, until you finally land in /usr/lib... at the real Java directory. Copy the path only up to and including jre; taking more or less of it will cause errors later.
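A sketch of that lookup (the exact paths below are from this machine and will differ on yours):
which java # e.g. /usr/bin/java
ls -l /usr/bin/java # -> /etc/alternatives/java, the first soft link
ls -l /etc/alternatives/java # -> /usr/lib/jvm/java-1.8.0-openjdk-.../jre/bin/java
readlink -f $(which java) # or resolve the whole chain in one step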
vim /etc/profile # configure JAVA_HOME
#---- properties jdk ----
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-7.b13.el7.x86_64/jre
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar:${JRE_HOME}/lib
export PATH=${PATH}:${JAVA_HOME}/bin:${JRE_HOME}/bin
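As a quick sanity check, you can reload the profile and confirm the variables took effect:
source /etc/profile
echo $JAVA_HOME # should print the jre path configured above
java -version # should report openjdk version 1.8.0_181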
II. Download hadoop and configure HADOOP_HOME

http://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-3.1.2/hadoop-3.1.2.tar.gz
# download address
mkdir /hadoop/
# create a hadoop folder, download the above file into this folder, and extract it
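For example (assuming wget is available; any download tool works):
cd /hadoop
wget http://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-3.1.2/hadoop-3.1.2.tar.gz
tar -zxvf hadoop-3.1.2.tar.gz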
ln -s hadoop-3.1.2 hadoop.soft
# create a soft link to the extracted directory
vim /etc/profile # configure HADOOP_HOME
#---- properties hadoop ----
export HADOOP_HOME=/hadoop/hadoop.soft
export PATH=${PATH}:${HADOOP_HOME}/sbin:${HADOOP_HOME}/bin
source /etc/profile
# reload the configuration file
hadoop version
# run the hadoop verification command; if it prints version information (the first line should read Hadoop 3.1.2), the installation succeeded
III. Configuration
cd /hadoop/hadoop.soft/etc/hadoop # enter the configuration folder
The following files need to be modified:
slaves
hadoop-env.sh
yarn-env.sh
core-site.xml
hdfs-site.xml
mapred-site.xml
yarn-site.xml
(Note: starting with Hadoop 3.x the slaves file was renamed to workers; if your distribution has no slaves file, edit workers instead.)
1. Modify slaves
vim slaves
Slave1
Slave2
# the two child nodes
2. Modify hadoop-env.sh: comment out the JAVA_HOME=$JAVA_HOME line and add a new line below it with the following content.
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-7.b13.el7.x86_64/jre
# the JAVA_HOME path from earlier
3. Modify yarn-env.sh the same way: comment out the JAVA_HOME=$JAVA_HOME line and add a new line with the following content.
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-7.b13.el7.x86_64/jre
4. Modify core-site.xml, adding the following between the <configuration> tags.
<property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop/tmp</value>
    <description>Abase for other temporary directories.</description>
</property>
<property>
    <name>fs.default.name</name>
    <value>hdfs://Master:9000</value>
    <!-- enter the hostname of the master host here -->
</property>
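(Note: fs.default.name is the older, deprecated spelling of fs.defaultFS; Hadoop still accepts both.)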
5. Modify hdfs-site.xml, again adding the following between the <configuration> tags.
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>Master:9001</value>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/hadoop/dfs/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/hadoop/dfs/date</value>
</property>
<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>
6. Modify mapred-site.xml. This file does not exist in the configuration folder at first; rename (or copy) the bundled mapred-site.xml.template file to mapred-site.xml, then add the following between the <configuration> tags.
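A likely form of that copy step (assuming the stock template file name):
cp mapred-site.xml.template mapred-site.xml # create the file from the bundled template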
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>Master:10020</value>
</property>
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>Master:19888</value>
</property>
7. Modify yarn-site.xml, adding the following between the <configuration> tags.
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>Master</value>
</property>
Then create the required folders:
cd /hadoop
mkdir -pv tmp
mkdir -pv dfs/{name,date}
Then copy the entire folder to the two child nodes, so that the file paths on every node are identical and JAVA_HOME and HADOOP_HOME are configured the same everywhere; otherwise the master node cannot work with the child nodes.
scp -r /hadoop Slave1:/ # copy the entire hadoop folder to child node 1
scp -r /hadoop Slave2:/ # and to child node 2
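If scp (or start-all.sh later) keeps asking for passwords, passwordless SSH from the master to the child nodes is the usual prerequisite; a sketch, assuming a hadoop user exists on every node:
ssh-keygen -t rsa # press Enter through the prompts
ssh-copy-id hadoop@Slave1
ssh-copy-id hadoop@Slave2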
IV. Format the NameNode and start hadoop
hdfs namenode -format # format the NameNode
Run this step only once, after all the files have been configured and before starting HDFS with start-dfs.sh. You do not need to format again each time you start Hadoop. However, if you start Hadoop (start-dfs.sh) without ever having formatted the NameNode, problems will appear later.
Solution: shut down HDFS, delete the tmp and logs directories, and reformat the NameNode.
stop-all.sh # shut down all nodes
rm -rf tmp
rm -rf logs
hdfs namenode -format
Start the nodes:
su hadoop # switch to the hadoop account
start-all.sh # start all nodes
netstat -nltp # view the listening ports
Then visit port 9870 (the NameNode web UI) in a browser.
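Another quick check is jps, which ships with the JDK; the daemons listed depend on the node's role:
jps # on the master, expect roughly NameNode, SecondaryNameNode, and ResourceManager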
I have two data nodes online; the other one I had shut down as a test. If only the master node shows up in the node list, you can log in to the child node and run the following commands to start it there.
su hadoop
hadoop-daemon.sh start datanode # start the DataNode on this machine
V. A few simple commands
hadoop fs -mkdir /666 # create a new folder
hdfs dfs -put test.txt /666 # upload a file
hadoop fs -ls / # list the root directory
hadoop fs -rmr /666 # delete the directory
As a test, upload a picture, view it in the browser, and then download it back to check.
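A minimal command-line version of that round trip (a sketch; /pics and photo.jpg are placeholder names):
hadoop fs -mkdir /pics # a hypothetical directory for the test
hdfs dfs -put photo.jpg /pics # upload the picture
hadoop fs -ls /pics # confirm it arrived
hdfs dfs -get /pics/photo.jpg ./photo-copy.jpg # download it back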
Thank you for reading. That is the whole of "how to install and configure hadoop"; after studying this article you should have a deeper understanding of the process, and actual use still needs to be verified in practice.