
Example Analysis of hdfs-site.xml

2025-04-04 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article walks through the main configuration properties of hdfs-site.xml. The explanations aim to be clear and easy to follow; I hope they resolve any doubts you have about these settings.
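For orientation before the parameter list: hdfs-site.xml follows Hadoop's standard configuration format, a `<configuration>` root element holding `<property>` entries, each with a `<name>` and a `<value>` (and an optional `<description>`). A minimal sketch, with an illustrative value:

```xml
<?xml version="1.0"?>
<!-- hdfs-site.xml: each setting is a <property> with a <name> and a <value>. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
    <!-- Optional human-readable note; ignored by Hadoop itself. -->
    <description>Number of replicas kept for each HDFS block.</description>
  </property>
</configuration>
```

Every property described below slots into this file in exactly this shape.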

dfs.default.chunk.view.size = 32768
    Number of bytes of a file's content shown per file on the NameNode's HTTP browse page. Usually does not need to be set.

dfs.datanode.du.reserved = 1073741824
    Space (in bytes) reserved on each disk, mainly for non-HDFS files. The default is 0, i.e. nothing reserved.

dfs.name.dir = /opt/data1/hdfs/name, /opt/data2/hdfs/name, /nfs/data/hdfs/name
    Where the NameNode stores its metadata; it can be written to multiple disks on one server. It is generally recommended to keep one copy on NFS as a simple HA measure under Hadoop 1.0.

dfs.web.ugi = nobody,nobody
    The user and group used by the web tracker page servers of the NameNode and JobTracker.

dfs.permissions = true | false
    Whether HDFS permission checking is enabled. I generally set it to false when letting others operate through development tools, to avoid mishaps; with it set to true you sometimes cannot access data because of permission errors.

dfs.permissions.supergroup = supergroup
    The HDFS superuser group; defaults to supergroup. The user that starts Hadoop is usually the superuser.

dfs.data.dir = /opt/data1/hdfs/data, /opt/data2/hdfs/data, /opt/data3/hdfs/data, ...
    The actual DataNode block storage paths. Multiple disks can be listed, separated by commas.

dfs.datanode.data.dir.perm = 755
    Permissions on the local directories the DataNode uses; the default is 755.

dfs.replication = 3
    Number of replicas of each HDFS block; the default is 3. In theory, more replicas mean faster reads, but they require more storage. If you can afford the disks, set 5 or 6.

dfs.replication.max = 512
    Maximum replica count. Recovery from a temporary DataNode failure can push data above the default replication factor. Usually not useful and does not have to be written in the configuration file.

dfs.replication.min = 1
    Minimum replica count; same idea as above.

dfs.block.size = 134217728
    Size of each file block. We use 128 MB; the default is 64 MB. The value is 128 * 1024^2. I have seen someone write 128000000 directly, which is rather cavalier.

dfs.df.interval = 60000
    Refresh interval for disk-usage statistics, in milliseconds.

dfs.client.block.write.retries = 3
    Maximum number of retries when writing a block, before which failures are not reported.

dfs.heartbeat.interval = 3
    DataNode heartbeat interval, in seconds.

dfs.namenode.handler.count = 10
    Number of handler threads the NameNode spawns at startup.

dfs.balance.bandwidthPerSec = 1048576
    Maximum bandwidth per second used by the balancer, in bytes, not bits.

dfs.hosts = /opt/hadoop/conf/hosts.allow
    A hostname list file; only hosts in it are allowed to connect to the NameNode. Must be an absolute path. If the file is empty, all hosts are allowed.

dfs.hosts.exclude = /opt/hadoop/conf/hosts.deny
    Same principle as above, except this is the list of hostnames forbidden to access the NameNode. Useful for decommissioning DataNodes from the cluster.

dfs.max.objects = 0
    Maximum number of objects in HDFS; files, directories, and blocks each count as one object. 0 means no limit.

dfs.replication.interval = 3
    Interval, in seconds, at which the NameNode computes replication work for blocks. Usually does not need to be written to the configuration file.

dfs.support.append = true | false
    Whether file APPEND operations are allowed. Newer Hadoop supports append, but the default is false because append has had bugs.

dfs.datanode.failed.volumes.tolerated = 0
    Number of failed disks a DataNode tolerates before shutting down. The default of 0 means that as soon as one disk breaks, the DataNode shuts down.

dfs.secondary.http.address = 0.0.0.0:50090
    Listening address and port of the Secondary NameNode's tracker page.

dfs.datanode.address = 0.0.0.0:50010
    DataNode service listening port. If the port is 0, it listens on a random port and reports it to the NameNode via heartbeat.

dfs.datanode.http.address = 0.0.0.0:50075
    Listening address and port of the DataNode's tracker page.

dfs.datanode.ipc.address = 0.0.0.0:50020
    DataNode IPC listening port. If you write 0, it listens on a random port and reports it via heartbeat.

dfs.datanode.handler.count = 3
    Number of service threads the DataNode starts.

dfs.https.enable = true | false
    Whether the NameNode's tracker listens over HTTPS. The default is false.

dfs.datanode.https.address = 0.0.0.0:50475
    Listening address and port of the DataNode's HTTPS tracker page.

dfs.https.address = 0.0.0.0:50470
    Listening address and port of the NameNode's HTTPS tracker page.

dfs.datanode.max.xcievers = 2048
    Roughly equivalent to the maximum number of open files under Linux. This parameter is not found in the documentation; it needs to be turned up when DataXceiver errors occur. The default is 256.

That is the whole of this sample analysis of hdfs-site.xml. Thank you for reading! I hope it has been helpful; if you want to learn more, welcome to follow the industry information channel!
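Pulling a few of the values discussed above into one place, a hdfs-site.xml fragment for this kind of setup might look like the following. The paths and numbers come from the parameter list; treat it as a sketch to adapt, not a drop-in file:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- NameNode metadata on two local disks plus one NFS copy for safety -->
  <property>
    <name>dfs.name.dir</name>
    <value>/opt/data1/hdfs/name,/opt/data2/hdfs/name,/nfs/data/hdfs/name</value>
  </property>
  <!-- DataNode block storage spread across local disks -->
  <property>
    <name>dfs.data.dir</name>
    <value>/opt/data1/hdfs/data,/opt/data2/hdfs/data,/opt/data3/hdfs/data</value>
  </property>
  <!-- 128 MB blocks: 128 * 1024 * 1024 = 134217728, not 128000000 -->
  <property>
    <name>dfs.block.size</name>
    <value>134217728</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <!-- Host allow/deny lists for NameNode connections (absolute paths) -->
  <property>
    <name>dfs.hosts</name>
    <value>/opt/hadoop/conf/hosts.allow</value>
  </property>
  <property>
    <name>dfs.hosts.exclude</name>
    <value>/opt/hadoop/conf/hosts.deny</value>
  </property>
</configuration>
```

After editing the allow/deny files, the NameNode can be told to re-read them with `hadoop dfsadmin -refreshNodes`, which is how the dfs.hosts.exclude list is typically used to decommission DataNodes.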



© 2024 shulou.com SLNews company. All rights reserved.
