This article explains in detail how to deploy HDFS, the Hadoop distributed file system, in pseudo-distributed mode. I think it is very practical, so I am sharing it as a reference; I hope you get something out of it after reading.
HDFS compared with traditional file systems:
1. Very large files are supported.
2. An HDFS data block is an abstraction independent of the blocks of any specific disk, and blocks are replicated across nodes for fault tolerance (the fsck sketch below shows this layout for a single file).
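To make the block abstraction concrete, HDFS can report where each block of a file is stored. A minimal check (the path /user/t/test.txt is an illustrative name, and this assumes the cluster set up later in this article is running):
$ hdfs fsck /user/t/test.txt -files -blocks -locations
Each block of the file is listed together with the worker nodes holding a replica of it.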
Hadoop nodes fall into two classes, management nodes and worker nodes:
Management node (the NameNode): manages the filesystem tree and the metadata of all files and directories in that tree. If the management node dies, the whole system is down.
Worker node (a DataNode): stores the actual data blocks and periodically sends its block list to the management node. The report command sketched below shows both roles on a live deployment.
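To observe this division of labor, you can ask the management node for its view of the workers (a sketch; run it after the daemons are started later in this article):
$ hdfs dfsadmin -report
The report prints the overall filesystem capacity followed by one entry per worker node, including its capacity and last contact time.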
Mechanisms to keep the HDFS management node from being a single point of failure: hot standby on a second machine, and scheduled backups of the namespace metadata.
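One concrete form of metadata backup (a sketch, not from the original article: dfs.namenode.name.dir is a standard Hadoop 2.x property, and the second path is an assumed NFS mount) is to have the management node write its namespace image to more than one directory in hdfs-site.xml:
<property>
  <name>dfs.namenode.name.dir</name>
  <!-- comma-separated list; the NFS path below is an example -->
  <value>/home/t/hdfs/name,/mnt/nfs/hdfs/name</value>
</property>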
Pseudo-distributed mode deployment:
1. Hadoop communicates with each node over SSH, so you need to configure SSH with an empty passphrase.
In fact this is only a communication channel: you can use SSH, switch to another mechanism that matches your security requirements, or even reimplement the channel with Java sockets.
Configure SSH:
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /home/t/.ssh/id_rsa.
Your public key has been saved in /home/t/.ssh/id_rsa.pub.
The key fingerprint is:
5c:f9:27:86:a5:88:97:1b:07:fe:3c:95:90:a8:e8:8f t@ubuntu
(randomart image omitted)
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
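To verify the key took effect, SSH into the local machine; it should log you in without asking for a password:
$ ssh localhost
$ exit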
Configuration files:
core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost</value>
  </property>
</configuration>
hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
  </property>
</configuration>
Note: the conf folder no longer exists in recent versions of Hadoop; the configuration files are written directly under
$HADOOP_INSTALL/hadoop-2.6.2/etc/hadoop/
Format the HDFS filesystem:
$ hadoop namenode -format
If you follow the steps in the "definitive guide" book literally, this command reports an error: JAVA_HOME must first be configured in hadoop-env.sh.
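For example, set it near the top of $HADOOP_INSTALL/hadoop-2.6.2/etc/hadoop/hadoop-env.sh (the JDK path below is an assumption; substitute the path of your own installation):
# hadoop-env.sh: tell Hadoop which JDK to use
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64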
Start the HDFS daemons:
$ start-dfs.sh
View the namenode web UI: http://ip:50070/
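You can also confirm from the shell that all daemons came up; jps ships with the JDK and lists running JVM processes (the PIDs below are illustrative):
$ jps
21345 NameNode
21467 DataNode
21662 SecondaryNameNode
21892 Jps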
Stop the HDFS daemons:
$ stop-dfs.sh
Output a file's contents through Hadoop (the SLF4J lines below are a harmless multiple-binding warning; the last line is the contents of the test file):
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/t/hadoop/hadoop-2.6.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:...examples.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
haddop test file
Basic commands for Hadoop file operations:
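Before the full usage listing, here are a few everyday operations as a quick sketch (the directory /user/t and the file test.txt are illustrative names, not from the original setup):
$ hadoop fs -mkdir -p /user/t
$ hadoop fs -put test.txt /user/t/
$ hadoop fs -ls /user/t
$ hadoop fs -cat /user/t/test.txt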
$ hadoop fs
Usage: hadoop fs [generic options]
  [-appendToFile ...] [-cat [-ignoreCrc] ...] [-checksum ...]
  [-chgrp [-R] GROUP PATH...] [-chmod [-R] PATH...] [-chown [-R] [OWNER][:[GROUP]] PATH...]
  [-copyFromLocal [-f] [-p] [-l] ...] [-copyToLocal [-p] [-ignoreCrc] [-crc] ...]
  [-count [-q] [-h] ...] [-cp [-f] [-p | -p[topax]] ...]
  [-createSnapshot ...] [-deleteSnapshot ...] [-renameSnapshot ...]
  [-df [-h] ...] [-du [-s] [-h] ...] [-expunge]
  [-get [-p] [-ignoreCrc] [-crc] ...] [-getfacl [-R] ...] [-getfattr [-R] {-n name | -d} [-e en] ...]
  [-getmerge [-nl] ...] [-help [cmd ...]]
  [-ls [-d] [-h] [-R] ...] [-mkdir [-p] ...]
  [-moveFromLocal ...] [-moveToLocal ...] [-mv ...] [-put [-f] [-p] [-l] ...]
  [-rm [-f] [-r|-R] [-skipTrash] ...] [-rmdir [--ignore-fail-on-non-empty] ...]
  [-setfacl [-R] [{-b|-k} {-m|-x} ...] | [--set ...]] [-setfattr {-n name [-v value] | -x name} ...]
  [-setrep [-R] [-w] ...] [-stat [format] ...] [-tail [-f] ...] [-test -[defsz] ...]
  [-text [-ignoreCrc] ...] [-touchz ...] [-usage [cmd ...]]
Generic options supported are:
  -conf      specify an application configuration file
  -D         use value for given property
  -fs        specify a namenode
  -jt        specify a ResourceManager
  -files     specify comma separated files to be copied to the map reduce cluster
  -libjars   specify comma separated jar files to include in the classpath
  -archives  specify comma separated archives to be unarchived on the compute machines
The general command line syntax is:
  bin/hadoop command [genericOptions] [commandOptions]
$ hadoop fs -ls
This is the end of the article on how to deploy the HDFS Hadoop distributed file system. I hope the content above helps you learn something new; if you think the article is good, please share it so more people can see it.