
How to deal with "exceeds the limit of concurrent xcievers" errors in Hadoop


This article explains how to deal with "exceeds the limit of concurrent xcievers" errors in Hadoop. Many people run into this problem in practice, so let's walk through how to handle it. I hope you read it carefully and get something out of it!

dfs.datanode.max.transfer.threads: default 4096

(Before Hadoop 2.0, this parameter was called dfs.datanode.max.xcievers.)
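To see what value is currently configured, the hdfs getconf tool can be used (note that it reads the local client configuration, which may differ from the value a running DataNode was actually started with):

hdfs getconf -confKey dfs.datanode.max.transfer.threads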

Explanation: this specifies the maximum number of threads used for transferring data in and out of the DataNode. It represents the number of threads available for file operations on the DataNode; if there are many files to process and this parameter is set too low, some requests cannot be served and an exception is reported.

On Linux, each of these file operations is bound to a socket, and each socket can in turn be thought of as being served by a thread; this parameter specifies the number of such threads.

Inside the DataNode, a dedicated thread group maintains these threads, and a daemon thread watches over the group, checking whether the number of threads has gone over the limit.

If the limit is exceeded, an exception is thrown, and you need to increase dfs.datanode.max.transfer.threads in the hdfs-site.xml file, as shown below.
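A minimal hdfs-site.xml entry might look like the following; the value 8192 here is only an illustrative choice, not a recommendation, so size it to your workload:

<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>8192</value>
  <description>Maximum number of threads for transferring data in and out of the DataNode.</description>
</property>

The DataNode typically needs to be restarted for the new value to take effect.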

When dfs.datanode.max.transfer.threads is set too small, the DataNode throws an exception like this:

ERROR org.apache.hadoop.dfs.DataNode: DatanodeRegistration(10.10.10.53:50010, StorageID=DS-1570581820-10.10.10.53-50010-1224117842339, ipcPort=50020):DataXceiver: java.io.IOException: xceiverCount 258 exceeds the limit of concurrent xcievers 256

Note: dfs.datanode.max.transfer.threads must not be set higher than the system's open-file limit, i.e. the nofile value in /etc/security/limits.conf; you can check and raise that limit as shown below.
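To check the open-file limit for the account running the DataNode and raise it if necessary, something along these lines works (the user name hdfs is an assumption; substitute the account your DataNode actually runs as):

# Check the current open-file limit for the active user
ulimit -n

# Example entries in /etc/security/limits.conf ("hdfs" is an assumed user name)
hdfs  soft  nofile  65536
hdfs  hard  nofile  65536

Changes in limits.conf only apply to new login sessions, so the DataNode must be restarted under a fresh session before the new limit takes effect for it.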

That's all for "How to deal with 'exceeds the limit of concurrent xcievers' errors in Hadoop". Thank you for reading! If you want to learn more, you can follow this site, where the editor will keep publishing practical articles.
