2025-02-24 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 05/31 Report
In this article, the editor shares an example analysis of HDFS file system security. I hope you gain something from reading it; let's discuss it together!
I. Overview
In practical business applications, the early security mechanisms of the open-source Hadoop framework were widely criticized. For HDFS specifically, the main problems include the following:
1. User-to-server authentication
(1) No user authentication on the NameNode: any user who knows the address and port of the NameNode service can access HDFS and obtain file namespace information.
(2) No authentication mechanism on the DataNode: the DataNode does not authenticate reads or writes, so a client that knows a block ID can access that block's data on the DataNode at will.
2. Server-to-server authentication
(1) The NameNode has no authentication mechanism for DataNodes: an illegitimate user can impersonate a DataNode and receive file storage tasks from the NameNode.
II. Security of Hadoop
To address user-to-server authentication, Hadoop added a security mechanism after version 1.0.0. This mechanism adopts a Unix-like permission model: a file's creator and the superuser have full permissions on the file, including read and write, while other users have read access but no write permission. Specifically, the user/group identity used when connecting to a Hadoop cluster depends on the client environment, i.e. the user name and group list obtained by commands such as `whoami` and `groups` on the client host; no uid or gid is involved. If any group in the user's group list matches a group configured in the cluster, the user receives that group's permissions.
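The Unix-like check described above can be modeled in a few lines. This is a minimal, illustrative sketch, not Hadoop's actual implementation; the names `FileStatus`, `is_permitted`, and the default superuser name `hdfs` are assumptions made for the example.

```python
# Minimal sketch of a Unix-style permission check similar in spirit to the
# HDFS model described above. All names here are illustrative, not Hadoop API.
from dataclasses import dataclass

READ, WRITE = 4, 2  # permission bits, as in Unix octal notation

@dataclass
class FileStatus:
    owner: str
    group: str
    mode: int  # e.g. 0o644 -> owner rw-, group r--, other r--

def is_permitted(user: str, groups: set, status: FileStatus,
                 wanted: int, superuser: str = "hdfs") -> bool:
    """Return True if `user` may perform an action requiring `wanted` bits."""
    if user == superuser:          # the superuser bypasses all checks
        return True
    if user == status.owner:       # owner bits
        bits = (status.mode >> 6) & 7
    elif status.group in groups:   # any matching group grants the group bits
        bits = (status.mode >> 3) & 7
    else:                          # "other" bits
        bits = status.mode & 7
    return (bits & wanted) == wanted
```

Note how group membership is purely name-based, mirroring the client-supplied user/group information discussed above: whichever group list the client reports is trusted as-is, which is exactly why this model alone does not provide strong authentication.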
It is worth noting that the three current major version branches of Hadoop do not all support this mechanism, so an implementation needs to account for differences between versions.
III. Kerberos in Hadoop
Hadoop's Kerberos authentication mechanism, which addresses server-to-server authentication, mainly concerns the security of the distributed cluster behind the cloud-disk back-end service. It will be discussed separately and is not covered here.
IV. Security of client files in cloud disk system
Client-side security of HDFS files in a cloud-disk system mainly concerns users' secure access to the HDFS file service cluster. Two requirements are involved: (1) a registered user may access only the space that belongs to that user, and (2) a given user may use only a specified amount of space in HDFS. These are user-management and space-management problems, which are not described in detail here. For problem (1), we can modify the existing HadoopThriftServer or add a new service mechanism: when the client logs in, the server returns the paths on HDFS that the user is allowed to access, and each subsequent operation is checked against those paths; unauthorized paths are filtered out automatically and access is denied. For problem (2), a user registration mechanism is provided: based on the group the user registered under, the server calls FSadmin to set limits on the user's folder.
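The path check for problem (1) can be sketched as a simple prefix test with normalization, so that relative-path tricks cannot escape the allowed directory. This is an illustrative example, not part of Hadoop or HadoopThriftServer; the function name and structure are assumptions.

```python
# Illustrative sketch of the login-time path filtering described above:
# the server hands the client a list of allowed HDFS path prefixes, and
# every request is validated against that list before being executed.
import posixpath

def is_path_allowed(requested: str, allowed_prefixes: list) -> bool:
    """Return True if `requested` lies under one of the allowed prefixes.

    Paths are normalized first so tricks like '/user/alice/../bob'
    cannot escape the user's own directory.
    """
    norm = posixpath.normpath(requested)
    for prefix in allowed_prefixes:
        p = posixpath.normpath(prefix).rstrip("/")
        if norm == p or norm.startswith(p + "/"):
            return True
    return False
```

The normalization step is the important design choice: comparing raw strings would let a client smuggle `..` components past a naive prefix check.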
After reading this article, I believe you have a better understanding of this example analysis of HDFS file system security. If you want to learn more, you are welcome to follow the industry information channel. Thank you for reading!