Many readers who are new to distributed storage are unsure what MooseFS is and how to deploy it. This article walks through its architecture and how it reads and writes data, as a foundation for deployment; I hope it helps you get started.
Analysis of the working principle of MooseFS
MooseFS (hereinafter referred to as MFS) is an open-source distributed file system for Linux, released by the Polish company Gemius SA on May 30, 2008. It distributes files across different physical machines while presenting them through a single, transparent interface as one pool of storage resources. It offers strong scalability and reliability, and supports online capacity expansion, chunk-based file storage, data replication, and good read/write efficiency.
The MFS distributed file system consists of a metadata server (Master Server), a metadata log server (Metalogger Server), data storage servers (Chunk Servers), and clients (Client).
MFS file system structure diagram
(1) Metadata server: the core component of the MFS system. It stores the metadata of every file and is responsible for scheduling file reads and writes, reclaiming space, and replicating data between chunk servers. MFS currently supports only one metadata server, so it is a potential single point of failure. To reduce that risk, the metadata server should run on a stable, reliable machine.
(2) Metadata log server: the backup node of the metadata server. At a configured interval it downloads the metadata files, change logs, and session information from the metadata server to a local directory. If the metadata server fails, the information needed to restore the whole system can be recovered from these files.
Backing up metadata this way is a conventional log-backup approach: in some failure scenarios it cannot take over the service seamlessly and may still lose data. In the deployment described here, the metadata node is therefore set up as a dual hot-backup (active/standby) pair that shares a disk over iSCSI.
(3) Data storage server (chunk server): connects to the metadata server, follows its scheduling, provides storage space, and transfers data to and from clients. MooseFS lets you set the number of copies to keep for each directory. If that number is n, then whenever a file is written, the system stores n copies of each of the file's chunks on different chunk servers. Increasing the number of copies does not hurt write performance, but it improves read performance and availability; in effect it trades storage capacity for read performance and availability.
(4) Client: uses mfsmount, through the FUSE kernel interface, to mount the storage pool managed by the remote metadata server onto a local directory; after that, the MFS file system can be used just like a local file system (a short example follows).
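Because the mount point behaves like any local directory, ordinary file operations work on it unchanged. The Python sketch below assumes, purely for illustration, that mfsmount has already mounted the file system at /mnt/mfs on this client; the mount point and file names are hypothetical.

```python
import os

# Hypothetical mount point; in practice this is wherever mfsmount was pointed,
# e.g. by running "mfsmount /mnt/mfs -H <master-host>" on this client beforehand.
MFS_MOUNT = "/mnt/mfs"

def write_and_read_back(relative_path: str, payload: bytes) -> bytes:
    """Write a file through the MFS mount and read it back with plain POSIX I/O."""
    full_path = os.path.join(MFS_MOUNT, relative_path)
    os.makedirs(os.path.dirname(full_path), exist_ok=True)
    with open(full_path, "wb") as fh:
        fh.write(payload)
    with open(full_path, "rb") as fh:
        return fh.read()

if __name__ == "__main__":
    data = write_and_read_back("demo/hello.txt", b"hello from an MFS client\n")
    print(data.decode())
```

The point is simply that, once mounted over FUSE, MFS needs no special client API: any program that can read and write local files can use it.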
MFS read and write principles
1. MFS read data process
MFS reading process
Steps for MFS to read files:
① The MFS client submits a request to read a file to the system's metadata server.
② The metadata server looks up its metadata and returns to the client the locations of the chunk servers that hold the data.
③ After receiving the information returned by the metadata server, the client sends data requests directly to those chunk servers and reads the file (see the sketch below).
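To make the three steps concrete, here is a small, self-contained Python simulation of the message flow. It is a toy model, not the real MFS wire protocol: the in-memory Master and ChunkServer classes, chunk ids, and server names are all invented for illustration.

```python
class ChunkServer:
    """Toy chunk server: holds chunk data in memory, keyed by chunk id."""
    def __init__(self, name):
        self.name = name
        self.chunks = {}          # chunk_id -> bytes

    def read_chunk(self, chunk_id):
        return self.chunks[chunk_id]


class Master:
    """Toy metadata server: knows which chunks make up a file and where they live."""
    def __init__(self):
        self.files = {}           # path -> list of (chunk_id, ChunkServer)

    def locate(self, path):
        # Step ②: look up the metadata and return chunk locations to the client.
        return self.files[path]


def client_read(master, path):
    # Step ①: the client asks the metadata server where the file's chunks are.
    locations = master.locate(path)
    # Step ③: the client fetches each chunk directly from the chunk server holding it.
    return b"".join(server.read_chunk(chunk_id) for chunk_id, server in locations)


# Wire up a tiny cluster and "store" one file split across two chunk servers.
cs1, cs2 = ChunkServer("chunkserver-1"), ChunkServer("chunkserver-2")
cs1.chunks["c1"] = b"hello "
cs2.chunks["c2"] = b"moosefs"
master = Master()
master.files["/demo.txt"] = [("c1", cs1), ("c2", cs2)]

print(client_read(master, "/demo.txt").decode())   # -> "hello moosefs"
```

Note that in step ③ the file data travels directly between the client and the chunk servers, so the metadata server stays out of the data path.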
2. MFS data writing process