2025-01-16 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/03 Report --
Practice environment:
Operating system: Ubuntu 16.04 LTS
Hadoop version: Hadoop 2.7.1
1. Configure core-site.xml

hadoop@dblab:/usr/local/hadoop/etc/hadoop$ vim core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
2. Configure hdfs-site.xml

hadoop@dblab:/usr/local/hadoop/etc/hadoop$ vim hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop/tmp/dfs/data</value>
  </property>
</configuration>
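The two files above can be written and sanity-checked before any daemon is started. A minimal sketch, in which a temp directory stands in for /usr/local/hadoop/etc/hadoop:

```shell
# Sketch: write the minimal pseudo-distributed configs into a scratch
# directory and check the key values (NameNode address, replication factor).
conf_dir=$(mktemp -d)

cat > "$conf_dir/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF

cat > "$conf_dir/hdfs-site.xml" <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop/tmp/dfs/data</value>
  </property>
</configuration>
EOF

# Quick sanity checks before starting HDFS.
grep -q 'hdfs://localhost:9000' "$conf_dir/core-site.xml" && echo "core-site OK"
grep -q '<value>1</value>' "$conf_dir/hdfs-site.xml" && echo "hdfs-site OK"
```

With dfs.replication set to 1, HDFS keeps a single copy of each block, which is appropriate for a single-node setup.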
3. Format the NameNode

hadoop@dblab:/usr/local/hadoop$ ./bin/hdfs namenode -format
Re-format filesystem in Storage Directory /usr/local/hadoop/tmp/dfs/name ? (Y or N)
19-05-16 14:23:44 INFO namenode.FSImage: Allocated new BlockPoolId: BP-748770776-127.0.0.1-1557987824492
19-05-16 14:23:44 INFO common.Storage: Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted.
19-05-16 14:23:45 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
19-05-16 14:23:45 INFO util.ExitUtil: Exiting with status 0
19-05-16 14:23:45 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException: dblab: dblab: unknown name or service
************************************************************/
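Note that running the format command on an already-formatted NameNode triggers the "Re-format filesystem" prompt and, if confirmed, wipes the HDFS metadata. A minimal sketch of a guard, where a temp directory stands in for the real name directory /usr/local/hadoop/tmp/dfs/name:

```shell
# Sketch: only format the NameNode when the name directory has not been
# formatted yet (a formatted directory contains current/VERSION).
name_dir=$(mktemp -d)   # stands in for /usr/local/hadoop/tmp/dfs/name

if [ -f "$name_dir/current/VERSION" ]; then
  decision="skip"     # already formatted; re-formatting would destroy HDFS metadata
else
  decision="format"   # first run: safe to format
  # ./bin/hdfs namenode -format   # the real command, commented out in this sketch
fi
echo "$decision"
```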
Explanation of the problem above:

hadoop@dblab:/usr/local/hadoop$ hostname
dblab
hadoop@dblab:/usr/local/hadoop$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 dblab-VirtualBox
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

The hostname is dblab, but /etc/hosts maps 127.0.1.1 to dblab-VirtualBox, so dblab cannot be resolved and the NameNode shuts down with UnknownHostException. Change the entry to match the hostname:

hadoop@dblab:/usr/local/hadoop$ sudo vim /etc/hosts
127.0.1.1 dblab
hadoop@dblab:/usr/local/hadoop$ sudo /etc/init.d/networking restart
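The failure and the fix can be reproduced against a hosts-format file without touching the live /etc/hosts. A small sketch (the lookup helper is illustrative, not a standard tool):

```shell
# Sketch: reproduce the hostname lookup failure against a sample hosts file.
hosts=$(mktemp)
cat > "$hosts" <<'EOF'
127.0.0.1   localhost
127.0.1.1   dblab-VirtualBox
EOF

# Print the address mapped to a hostname in a hosts-format file, if any.
lookup() { awk -v h="$2" '{ for (i = 2; i <= NF; i++) if ($i == h) { print $1; exit } }' "$1"; }

before=$(lookup "$hosts" dblab)               # empty: "dblab" is not mapped
sed -i 's/dblab-VirtualBox/dblab/' "$hosts"   # the same fix applied to /etc/hosts above
after=$(lookup "$hosts" dblab)                # now resolves to 127.0.1.1
echo "before='$before' after='$after'"
```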
4. Start Hadoop

hadoop@dblab:/usr/local/hadoop$ ./sbin/start-dfs.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-dblab.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-dblab.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-dblab.out
# The startup above failed; run the script again:
hadoop@dblab:/usr/local/hadoop$ ./sbin/start-dfs.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-dblab.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-dblab.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-dblab.out

The HDFS web management interface is now available at http://localhost:50070.
5. Run a Hadoop pseudo-distributed instance

hadoop@dblab:/usr/local/hadoop$ cd /usr/local/hadoop
hadoop@dblab:/usr/local/hadoop$ ./bin/hdfs dfs -mkdir -p /user/hadoop
hadoop@dblab:/usr/local/hadoop$ ./bin/hdfs dfs -mkdir input
hadoop@dblab:/usr/local/hadoop$ ./bin/hdfs dfs -put ./etc/hadoop/*.xml input
hadoop@dblab:/usr/local/hadoop$ ./bin/hdfs dfs -ls input
Found 8 items
-rw-r--r-- 1 hadoop supergroup 4436 2019-05-16 14:52 input/capacity-scheduler.xml
-rw-r--r-- 1 hadoop supergroup 965 2019-05-16 14:52 input/core-site.xml
-rw-r--r-- 1 hadoop supergroup 9683 2019-05-16 14:52 input/hadoop-policy.xml
-rw-r--r-- 1 hadoop supergroup 1080 2019-05-16 14:52 input/hdfs-site.xml
-rw-r--r-- 1 hadoop supergroup 620 2019-05-16 14:52 input/httpfs-site.xml
-rw-r--r-- 1 hadoop supergroup 3518 2019-05-16 14:52 input/kms-acls.xml
-rw-r--r-- 1 hadoop supergroup 5511 2019-05-16 14:52 input/kms-site.xml
-rw-r--r-- 1 hadoop supergroup 690 2019-05-16 14:52 input/yarn-site.xml
hadoop@dblab:/usr/local/hadoop$ ./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar grep input output 'dfs[a-z.]+'
hadoop@dblab:/usr/local/hadoop$ ./bin/hdfs dfs -cat output/*
1 dfsadmin
1 dfs.replication
1 dfs.namenode.name.dir
1 dfs.datanode.data.dir
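The example jar scans the uploaded config files for the pattern dfs[a-z.]+. What that regex matches can be tried locally with plain GNU grep on a few representative lines:

```shell
# Sketch: what the MapReduce grep example matches, reproduced locally.
# 'dfs[a-z.]+' picks up property names such as dfs.replication as well as
# words like dfsadmin; -o prints each match on its own line.
matches=$(printf '%s\n' \
  '<name>dfs.replication</name>' \
  '<name>dfs.namenode.name.dir</name>' \
  'use dfsadmin to manage the cluster' \
  | grep -oE 'dfs[a-z.]+')
echo "$matches"
```

The MapReduce job then counts each distinct match, which is why the output above pairs each name with a count of 1.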
6. Stop Hadoop

hadoop@dblab:/usr/local/hadoop$ ./sbin/stop-dfs.sh    # stop Hadoop
7. Start YARN

First edit yarn-site.xml:

hadoop@dblab:/usr/local/hadoop$ vim /usr/local/hadoop/etc/hadoop/yarn-site.xml

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

Then start HDFS:

hadoop@dblab:/usr/local/hadoop$ ./sbin/start-dfs.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-dblab.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-dblab.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-dblab.out

If the daemons are already running, the script reports it and makes no changes:

hadoop@dblab:/usr/local/hadoop$ ./sbin/start-dfs.sh
Starting namenodes on [localhost]
localhost: namenode running as process 28914. Stop it first.
localhost: datanode running as process 29067. Stop it first.
Starting secondary namenodes [0.0.0.0]
0.0.0.0: secondarynamenode running as process 29261. Stop it first.
Configure MapReduce to run on YARN:

hadoop@dblab:/usr/local/hadoop$ vim ./etc/hadoop/mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
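Hadoop 2.7 ships mapred-site.xml.template but not mapred-site.xml itself, so the file is usually created from the template first. A sketch of that step, with a temp directory standing in for etc/hadoop and GNU sed inserting the property:

```shell
# Sketch: create mapred-site.xml from the shipped template and point
# MapReduce at YARN. A temp dir stands in for /usr/local/hadoop/etc/hadoop.
conf_dir=$(mktemp -d)
cat > "$conf_dir/mapred-site.xml.template" <<'EOF'
<configuration>
</configuration>
EOF

cp "$conf_dir/mapred-site.xml.template" "$conf_dir/mapred-site.xml"
# Insert the mapreduce.framework.name property inside <configuration>.
sed -i 's|<configuration>|<configuration>\n  <property>\n    <name>mapreduce.framework.name</name>\n    <value>yarn</value>\n  </property>|' "$conf_dir/mapred-site.xml"

grep -q '<value>yarn</value>' "$conf_dir/mapred-site.xml" && echo "mapred-site OK"
```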
hadoop@dblab:/usr/local/hadoop$ ./sbin/start-yarn.sh    # start YARN
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-dblab.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-dblab.out
hadoop@dblab:/usr/local/hadoop$ ./sbin/mr-jobhistory-daemon.sh start historyserver    # start the history server
starting historyserver, logging to /usr/local/hadoop/logs/mapred-hadoop-historyserver-dblab.out
hadoop@dblab:/usr/local/hadoop$ jps    # view the running processes
29809 ResourceManager
28034 RunJar
28914 NameNode
30424 Jps
30248 JobHistoryServer
29067 DataNode
29933 NodeManager
29261 SecondaryNameNode
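With HDFS, YARN, and the history server all up, jps should list NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager, and JobHistoryServer. A sketch of checking for all six against a jps-style listing (a captured sample stands in for the live `jps` output here):

```shell
# Sketch: verify that every expected daemon appears in jps-style output.
jps_out='29809 ResourceManager
28914 NameNode
30248 JobHistoryServer
29067 DataNode
29933 NodeManager
29261 SecondaryNameNode'

missing=""
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager JobHistoryServer; do
  echo "$jps_out" | grep -qw "$d" || missing="$missing $d"
done
[ -z "$missing" ] && echo "all daemons running" || echo "missing:$missing"
```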
Check the running status of jobs through the ResourceManager web interface at http://localhost:8088.
To shut down YARN and Hadoop:
hadoop@dblab:/usr/local/hadoop$ ./sbin/stop-yarn.sh
stopping yarn daemons
stopping resourcemanager
localhost: stopping nodemanager
no proxyserver to stop
hadoop@dblab:/usr/local/hadoop$ ./sbin/mr-jobhistory-daemon.sh stop historyserver
stopping historyserver
hadoop@dblab:/usr/local/hadoop$ ./sbin/stop-dfs.sh
Stopping namenodes on [localhost]
localhost: stopping namenode
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode