Deploy a metadata server
Each CephFS file system requires at least one MDS. Cluster maintainers usually start the required MDS servers through their automated deployment tooling; Rook or ansible (via the ceph-ansible playbooks) are recommended for this. For clarity, this article uses the systemd commands directly.
Hardware configuration of MDS
The current MDS is single-threaded and CPU-bound for most of its activity, including responding to client requests. Under the most aggressive client loads an MDS uses roughly two to three CPU cores, because several miscellaneous maintenance threads work alongside the main thread.
Even so, it is recommended to provision the MDS server with an advanced CPU with sufficient cores. Development is ongoing to make better use of the CPU cores available to the MDS; in future Ceph releases the MDS server is expected to improve performance by taking advantage of more cores.
The other dimension of MDS performance is the RAM available for caching. The MDS must manage a distributed, cooperative metadata cache among all clients and any other active MDS daemons, so it needs enough RAM to enable fast metadata access and mutation. The default MDS cache size is 4GB; it is recommended to provision at least 8GB of RAM for the MDS to support this cache.
Typically, an MDS serving a large cluster of clients (1000 or more) will use at least 64GB of cache. Larger caches are not well explored, even in the largest known community clusters; managing such a large cache can hurt performance in surprising ways, so it is best to analyze the expected workload to determine how much RAM is appropriate.
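The cache size discussed above is controlled by the mds_cache_memory_limit option. As a minimal sketch, assuming your cluster uses the centralized configuration database and that 8GB suits the workload, it could be raised like this:
ceph config set mds mds_cache_memory_limit 8589934592 # 8 GiB in bytes, up from the 4 GiB default
Note that this is a target rather than a hard bound; the MDS may transiently exceed it.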
In bare-metal clusters, the best practice is to over-provision hardware for the MDS server. Even if a single MDS daemon cannot fully utilize the hardware, it may be desirable later to start more active MDS daemons on the same node to make full use of the available cores and memory. In addition, it may become clear from the workloads on the cluster that performance improves with multiple active MDS daemons on the same node, rather than with over-provisioning a single MDS.
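Running more than one active MDS requires raising the file system's max_mds setting. The sketch below assumes a file system named cephfs (substitute your own name) and that standby daemons are available to fill the extra rank:
ceph fs set cephfs max_mds 2 # allow two active MDS daemons for this file system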
Finally, note that CephFS is a highly available file system: it supports standby MDS daemons for fast failover. To get a real benefit from deploying standbys, it is usually necessary to distribute MDS daemons across at least two nodes in the cluster. Otherwise, a hardware failure on a single node may make the file system unavailable
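You can also tell the monitors how many standbys you expect, so that a missing standby raises a health warning. A sketch assuming the same hypothetical file system name:
ceph fs set cephfs standby_count_wanted 1 # warn when fewer than one standby is available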
Add MDS
1: create the MDS data directory /var/lib/ceph/mds/ceph-${id}; the daemon uses this directory only to store its keyring
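A minimal way to create this directory, assuming the daemon will run as the ceph user as it does with the standard packages:
mkdir -p /var/lib/ceph/mds/ceph-${id}
chown ceph:ceph /var/lib/ceph/mds/ceph-${id} # the packaged ceph-mds unit runs as ceph:ceph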
2: if you use CephX, create an authentication key
ceph auth get-or-create mds.${id} mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *' > /var/lib/ceph/mds/ceph-${id}/keyring
3: start the service
systemctl start ceph-mds@${id}
4: check the service status. Normally, it should be as follows
mds: ${id}:1 {0=${id}=up:active} 2 up:standby
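The line above is the mds entry of the cluster status report; assuming the standard CLI tooling, it can be checked with either of the following:
ceph -s # the services section includes the mds line shown above
ceph fs status # per-rank detail, including standby daemons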
Remove MDS
1: (optional) create a replacement MDS for the one you are about to remove. If no replacement MDS is available to take over once this MDS is removed, the file system will become unavailable to clients. If that is not desirable, consider adding a metadata server before taking the old one offline
2: stop the MDS service
systemctl stop ceph-mds@${id}
The MDS automatically notifies the Ceph monitors that it is going down, which allows the monitors to fail over instantly to an available standby, if one exists. No administrative command, such as ceph mds fail mds.${id}, is needed to trigger this failover
3: delete the /var/lib/ceph/mds/ceph-${id} directory
rm -rf /var/lib/ceph/mds/ceph-${id}
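If a cephx key was created for this daemon when it was added (step 2 of Add MDS), it can be removed as well; a sketch assuming the default entity name:
ceph auth rm mds.${id} # delete the daemon's cephx credentials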