2025-01-17 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/03 Report
core-site.xml holds the cluster-wide (global) configuration, while hdfs-site.xml and mapred-site.xml hold the HDFS-specific and MapReduce-specific configuration, respectively.
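All three files share the same XML layout: a configuration root element containing property entries, each with a name and a value. As a minimal illustration (the path shown is just a placeholder value):

```xml
<?xml version="1.0"?>
<!-- core-site.xml: cluster-wide settings -->
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <!-- placeholder path; the parent of Hadoop's other temporary directories -->
    <value>/hadoop/tmp/hadoop-${user.name}</value>
  </property>
</configuration>
```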
HDFS ports
- fs.default.name: NameNode RPC port. Default 8020; set in core-site.xml; example value hdfs://master:8020/
- dfs.http.address: NameNode web management port. Default 50070; set in hdfs-site.xml; example value 0.0.0.0:50070
- dfs.datanode.address: DataNode data transfer port. Default 50010; set in hdfs-site.xml; example value 0.0.0.0:50010
- dfs.datanode.ipc.address: DataNode RPC server address and port. Default 50020; set in hdfs-site.xml; example value 0.0.0.0:50020
- dfs.datanode.http.address: DataNode HTTP server address and port. Default 50075; set in hdfs-site.xml; example value 0.0.0.0:50075
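For instance, the NameNode-side ports above could be configured as follows (the hostname master and the addresses are just the example values from the table, not required settings):

```xml
<!-- core-site.xml -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- NameNode RPC endpoint; "master" is a placeholder hostname -->
    <value>hdfs://master:8020/</value>
  </property>
</configuration>

<!-- hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.http.address</name>
    <!-- NameNode web UI -->
    <value>0.0.0.0:50070</value>
  </property>
  <property>
    <name>dfs.datanode.address</name>
    <!-- DataNode data transfer port -->
    <value>0.0.0.0:50010</value>
  </property>
</configuration>
```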
MapReduce ports
- mapred.job.tracker: JobTracker RPC port. Default 8021; set in mapred-site.xml; example value master:8021
- mapred.job.tracker.http.address: JobTracker web management port. Default 50030; set in mapred-site.xml; example value 0.0.0.0:50030
- mapred.task.tracker.http.address: TaskTracker HTTP port. Default 50060; set in mapred-site.xml; example value 0.0.0.0:50060
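A sketch of the corresponding mapred-site.xml, again using the table's example values (hostname master is a placeholder):

```xml
<!-- mapred-site.xml -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <!-- JobTracker RPC endpoint, host:port form -->
    <value>master:8021</value>
  </property>
  <property>
    <name>mapred.job.tracker.http.address</name>
    <!-- JobTracker web UI -->
    <value>0.0.0.0:50030</value>
  </property>
</configuration>
```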
Other ports
- dfs.secondary.http.address: Secondary NameNode web management port. Default 50090; set in hdfs-site.xml; example value 0.0.0.0:28680
Cluster directory configuration
- dfs.name.dir: Directories for NameNode metadata, comma-separated; HDFS replicates the metadata redundantly into each of them. These are usually on different block devices; nonexistent directories are ignored. Default ${hadoop.tmp.dir}/dfs/name; set in hdfs-site.xml; example value /hadoop/hdfs/name
- dfs.name.edits.dir: Directories for NameNode transaction (edits) files, comma-separated; HDFS copies the transaction files redundantly into each of them. Usually different block devices; nonexistent directories are ignored. Default ${dfs.name.dir}; set in hdfs-site.xml; example value ${dfs.name.dir}
- fs.checkpoint.dir: Directories for Secondary NameNode metadata, comma-separated; redundant copies as above. Usually different block devices; nonexistent directories are ignored. Default ${hadoop.tmp.dir}/dfs/namesecondary; set in core-site.xml; example value /hadoop/hdfs/namesecondary
- fs.checkpoint.edits.dir: Directories for Secondary NameNode transaction files, comma-separated; redundant copies as above. Default ${fs.checkpoint.dir}; set in core-site.xml; example value ${fs.checkpoint.dir}
- hadoop.tmp.dir: Temporary directory, and the parent of the other temporary directories. Default /tmp/hadoop-${user.name}; set in core-site.xml; example value /hadoop/tmp/hadoop-${user.name}
- dfs.data.dir: Directories for DataNode data, comma-separated; HDFS spreads the data across them. Usually different block devices; nonexistent directories are ignored. Default ${hadoop.tmp.dir}/dfs/data; set in hdfs-site.xml; example value /hadoop/hdfs/data1/data,/hadoop/hdfs/data2/data
- mapred.local.dir: Directories for intermediate MapReduce data, comma-separated; the data is spread across them. Usually different block devices; nonexistent directories are ignored. Default ${hadoop.tmp.dir}/mapred/local; set in mapred-site.xml; example value /hadoop/hdfs/data1/mapred/local,/hadoop/hdfs/data2/mapred/local
- mapred.system.dir: Directory for MapReduce control files. Default ${hadoop.tmp.dir}/mapred/system; set in mapred-site.xml; example value /hadoop/hdfs/data1/system

Other configuration
- dfs.support.append: Enables file append, mainly to support HBase. Default false; set in hdfs-site.xml; example value true
- dfs.replication: Number of replicas for a file when no replication factor is specified at creation time. Default 3; set in hdfs-site.xml; example value 2
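Putting a few of the directory and replication settings together, a sketch of hdfs-site.xml using the example values above (the paths are illustrative, not required locations):

```xml
<!-- hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <!-- NameNode metadata; may be a comma-separated list for redundancy -->
    <value>/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <!-- comma-separated list, ideally on different block devices -->
    <value>/hadoop/hdfs/data1/data,/hadoop/hdfs/data2/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <!-- replicas used when none is specified at file creation -->
    <value>2</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <!-- file append support, mainly for HBase -->
    <value>true</value>
  </property>
</configuration>
```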