6. Hadoop
This topic describes the installation, configuration, and basic use of Hadoop.
Hadoop Basic Information

Official website: http://hadoop.apache.org/
Official tutorial: http://hadoop.apache.org/docs/current/

6.1. Environment preparation

# Switch to the workspace
cd /opt/workspaces
mkdir data/hadoop
# Create the Hadoop NameNode directory
mkdir -p data/hadoop/hdfs/nn
# Create the Hadoop DataNode directory
mkdir -p data/hadoop/hdfs/dn
# Create the Hadoop temporary directory
mkdir data/hadoop/tmp
# Create the Hadoop log directory
mkdir logs/hadoop
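A quick optional check, run from /opt/workspaces, that the directory layout above was created (if the logs directory did not already exist, mkdir -p logs/hadoop creates it):

# Confirm the data and log directories exist
ls -ld data/hadoop/hdfs/nn data/hadoop/hdfs/dn data/hadoop/tmp logs/hadoop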
Official tutorial:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html#Pseudo-Distributed_Operation

6.2. Install

wget http://mirrors.hust.edu.cn/apache/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz
tar -zxf hadoop-2.7.2.tar.gz
rm -rf hadoop-2.7.2.tar.gz
mv hadoop-2.7.2 ./frameworks/hadoop
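An optional sanity check that the unpacked distribution runs (it needs a JAVA_HOME; if it complains, finish the hadoop-env.sh step in section 6.3 first and re-run):

# Print the version of the freshly installed Hadoop
./frameworks/hadoop/bin/hadoop version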
6.3. Configuration (pseudo-distributed)

vi ./frameworks/hadoop/etc/hadoop/hadoop-env.sh
# Add the JDK directory
export JAVA_HOME=/opt/env/java
# Write Hadoop logs to the log directory created earlier
export HADOOP_LOG_DIR=/opt/workspaces/logs/hadoop
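A small sanity check, assuming the JDK really is installed under /opt/env/java as configured above:

# Confirm the configured JAVA_HOME points at a working JDK
/opt/env/java/bin/java -version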
vi ./frameworks/hadoop/etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://bd:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/opt/workspaces/data/hadoop/tmp</value>
    </property>
</configuration>
hadoop.tmp.dir is the base directory that many other Hadoop settings depend on. For example, if the NameNode and DataNode locations are not specified in hdfs-site.xml, they are placed under this path by default. hadoop.tmp.dir itself defaults to a directory under /tmp, which may be emptied when the machine restarts.
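To see this in practice, after formatting the NameNode (section 6.4) you can verify that metadata is written to the explicitly configured directories rather than to a path under /tmp; a sketch using the paths from this tutorial:

# NameNode metadata should appear here after "hdfs namenode -format"
ls /opt/workspaces/data/hadoop/hdfs/nn/current
# The temporary directory configured above stays outside /tmp
ls /opt/workspaces/data/hadoop/tmp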
vi ./frameworks/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/opt/workspaces/data/hadoop/hdfs/nn</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/opt/workspaces/data/hadoop/hdfs/dn</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
</configuration>
Do not set dfs.permissions.enabled=false in a production environment; it allows unauthorized modification of HDFS data!

6.4. Initialization
Format NameNode
./frameworks/hadoop/bin/hdfs namenode -format

6.5. Start and stop

# Start NameNode
./frameworks/hadoop/sbin/hadoop-daemon.sh start namenode
# Start DataNode
./frameworks/hadoop/sbin/hadoop-daemon.sh start datanode
# Stop NameNode
./frameworks/hadoop/sbin/hadoop-daemon.sh stop namenode
# Stop DataNode
./frameworks/hadoop/sbin/hadoop-daemon.sh stop datanode
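To confirm the daemons actually came up, a quick check (jps ships with the JDK; in Hadoop 2.x the NameNode web UI listens on port 50070 by default):

# NameNode and DataNode should appear in the process list
jps
# If curl is available, the NameNode web UI should respond as well
curl -s http://localhost:50070/ > /dev/null && echo "NameNode UI is up"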
Other start/stop scripts

start-all.sh starts all services and is not recommended. Use start-dfs.sh to start only HDFS and start-mapred.sh to start only MapReduce.
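With this tutorial's layout the cluster-level scripts sit under sbin/; a minimal sketch (start-dfs.sh brings up NameNode, DataNode and SecondaryNameNode together and typically expects passwordless SSH to localhost):

# Start all HDFS daemons at once
./frameworks/hadoop/sbin/start-dfs.sh
# Stop them again
./frameworks/hadoop/sbin/stop-dfs.sh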
6.6. Test

# View HDFS files
./frameworks/hadoop/bin/hadoop fs -ls /

6.7. Common HDFS operations

# Upload files to HDFS
hadoop fs -put localfile /user/hadoop/hadoopfile
hadoop fs -put localfile1 localfile2 /user/hadoop/hadoopdir
hadoop fs -put localfile hdfs://nn.example.com/hadoop/hadoopfile
# Create HDFS directories
hadoop fs -mkdir /user/hadoop/dir1 /user/hadoop/dir2
hadoop fs -mkdir hdfs://nn1.example.com/user/hadoop/dir hdfs://nn2.example.com/user/hadoop/dir
# List an HDFS directory
hadoop fs -ls /user/hadoop/file1
# View HDFS file contents
hadoop fs -cat hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2
hadoop fs -cat file:///file3 /user/hadoop/file4
# Change HDFS file owner
hadoop fs -chown [-R] [OWNER][:[GROUP]] URI [URI ...]
# Change HDFS file permissions
hadoop fs -chmod [-R] <MODE> URI [URI ...]
# Copy an HDFS file to the local filesystem
hadoop fs -copyToLocal [-ignorecrc] [-crc] URI <localdst>
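Putting these together, a minimal end-to-end example against the local pseudo-distributed HDFS (the file and directory names are just illustrations):

# Create a test file locally
echo "hello hadoop" > /tmp/hello.txt
# Create a directory in HDFS and upload the file
./frameworks/hadoop/bin/hadoop fs -mkdir -p /user/test
./frameworks/hadoop/bin/hadoop fs -put /tmp/hello.txt /user/test/
# List and read it back
./frameworks/hadoop/bin/hadoop fs -ls /user/test
./frameworks/hadoop/bin/hadoop fs -cat /user/test/hello.txt
# Copy it back to the local filesystem
./frameworks/hadoop/bin/hadoop fs -copyToLocal /user/test/hello.txt /tmp/hello_copy.txt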
6.8. Common problems

Name node is in safe mode
When Hadoop starts, it first enters safe mode. Safe mode mainly checks the validity of the data blocks on each DataNode at startup and copies or deletes blocks according to policy. If the proportion of blocks missing from the DataNodes exceeds a threshold, HDFS stays in safe mode, i.e. read-only. You can force it to leave safe mode with the command hadoop dfsadmin -safemode leave.
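A sketch of checking and leaving safe mode with this tutorial's install path (in Hadoop 2.x, hdfs dfsadmin is the preferred form; hadoop dfsadmin still works but is marked deprecated):

# Show whether the NameNode is currently in safe mode
./frameworks/hadoop/bin/hdfs dfsadmin -safemode get
# Force it to leave safe mode (use with care: blocks that are genuinely missing stay missing)
./frameworks/hadoop/bin/hdfs dfsadmin -safemode leave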