This article explains what MDS and cephx are used for in Ceph. It is fairly detailed and should be a useful reference for anyone interested in the topic.
In Ceph, block and object access do not use MDS, but for the file system the metadata server is essential. Ceph MDS backs the basic POSIX file system commands users run, such as ls and find. MDS (Metadata Server) was introduced for CephFS (the Ceph File System); it serves metadata for a POSIX-compatible file system, which is generally mounted like an ordinary file system.
For a fully distributed system, data migration and expansion are pain points worth paying attention to, so the Metadata Server needs to avoid single points of failure and data bottlenecks.
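As a concrete illustration of "mounted as a file system", CephFS is commonly mounted with the kernel client. A minimal sketch, assuming a monitor reachable at 192.168.0.1:6789 and an admin secret file at /etc/ceph/admin.secret (both placeholders, not taken from this article):
mkdir -p /mnt/cephfs
mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
After mounting, POSIX commands such as ls and find on /mnt/cephfs get their metadata from the MDS.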
Ceph cephx authentication configuration
Authentication
Note: when upgrading, it is recommended to comment out the authentication settings first, perform the upgrade, and re-enable authentication once the upgrade is complete.
By default, cephx is on. The authentication service consumes some computing resources, so if your network environment is secure and you do not want to use authentication, you can turn it off, although this is not recommended. With authentication disabled, client/server messages can be tampered with by a man in the middle, which can lead to serious security problems.
Deployment scenarios:
There are two main deployment methods. One is the ceph-deploy tool, the most common and simplest way to deploy for the first time. The other is a third-party deployment tool (Chef, Juju, Puppet, etc.), in which case you need to execute the steps manually or configure your deployment tool to bootstrap the monitors.
Ceph-deploy
To deploy a cluster with ceph-deploy, you do not need to bootstrap the monitors or create the client.admin user and keyrings manually. All of these are created for you when you run ceph-deploy.
When you execute ceph-deploy new {initial-monitors}, ceph-deploy creates a monitor keyring for you and generates a Ceph configuration file with the following authentication settings, which are enabled by default:
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
When you execute ceph-deploy mon create-initial, Ceph bootstraps the initial monitors and retrieves a ceph.client.admin.keyring file containing the key for the client.admin user. It also retrieves the keyrings that ceph-deploy and ceph-disk use to prepare and activate OSDs and metadata servers.
When you execute ceph-deploy admin {node-name} (Ceph must already be installed on that node), it pushes the configuration file and the ceph.client.admin.keyring file to the /etc/ceph directory of each node. This allows you to perform Ceph administrator functions as the root user on that node's command line.
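Putting these steps together, a minimal ceph-deploy bootstrap might look like the sketch below, assuming three monitor hosts named node1, node2 and node3 (the hostnames are placeholders):
ceph-deploy new node1 node2 node3      # writes ceph.conf and the monitor keyring
ceph-deploy mon create-initial         # bootstraps the monitors and gathers keyrings
ceph-deploy admin node1 node2 node3    # pushes ceph.conf and the client.admin keyring to /etc/ceph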
Manual deploy
When you deploy the cluster manually, you need to bootstrap the monitors and create the users and keyrings yourself. The details are not covered here.
Enable / disable cephx
Using cephx authentication requires you to provide a key for the monitors, OSDs, and metadata servers. If you are simply enabling or disabling cephx, there is no need to repeat the bootstrap process.
When cephx is enabled, Ceph looks for the keyring in the default path, /etc/ceph/$cluster.$name.keyring. You can change this location with the keyring option in the [global] section of the configuration file ceph.conf, but doing so is not recommended.
Execute the following procedure to change cephx from disabled to enabled. If you already have keys, you can skip the key-creation steps.
1. Create a key for the client.admin user and save a copy of the key for your client:
ceph auth get-or-create client.admin mon 'allow *' mds 'allow *' osd 'allow *' -o /etc/ceph/ceph.client.admin.keyring
Note: if /etc/ceph/ceph.client.admin.keyring already exists, do not perform this step again.
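If you are unsure whether the client.admin key already exists in the cluster, you can query it before deciding (an added check, not part of the original procedure):
ceph auth get client.admin
If this prints a keyring entry, the key exists and step 1 can be skipped.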
2. Generate a keyring for the monitor cluster and generate a monitor secret key:
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
3. Copy the keyring to each monitor's data path. For example, to copy it to the path for mon.a:
cp /tmp/ceph.mon.keyring /var/lib/ceph/mon/ceph-a/keyring
4. Generate a key for each OSD, where {id} is the number of the OSD:
ceph auth get-or-create osd.{id} mon 'allow rwx' osd 'allow *' -o /var/lib/ceph/osd/ceph-{id}/keyring
5. Generate a key for each MDS, where {id} is the number of the MDS:
ceph auth get-or-create mds.{id} mon 'allow rwx' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mds/ceph-{id}/keyring
6. Enable cephx by setting these options in the [global] section of your configuration file:
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
7. Start or restart the ceph cluster.
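With the sysvinit script used elsewhere in this article, a cluster-wide restart might look like the following (the -a flag, which targets all hosts listed in ceph.conf, is an assumption based on the classic init script and does not apply to systemd-based installs):
/etc/init.d/ceph -a restart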
Disable cephx
It is very simple to disable cephx authentication.
1. Disable cephx by setting the following parameters in the [global] section of the configuration file ceph.conf:
auth cluster required = none
auth service required = none
auth client required = none
2. Start or restart the ceph cluster.
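As a quick sanity check after the restart (an added step, not from the original text), the cluster status should now be reachable without presenting a keyring:
ceph -s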
Keys
Keys are used for user authentication within the cluster. If the cephx authentication system is enabled, a key is required; otherwise it is not. The default save path for a key is the /etc/ceph directory. A common way to provide a key to an administrator or client is to place a keyring file in the /etc/ceph/ directory (this is what the ceph-deploy deployment tool does); the file is usually named $cluster.client.admin.keyring. If the file is in the /etc/ceph directory, you no longer need to specify the keyring parameter in the Ceph configuration file; otherwise, you do need to specify it.
Note: make sure that the permissions of the keyring file are appropriate.
You can specify the key itself in the Ceph configuration file, or use keyring to specify the path to the key file.
Parameters:
keyring: the path to the keyring file
key: the value of the key itself
keyfile: the path to a file containing the key
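For illustration, these parameters live in ceph.conf; a sketch assuming a section for client.admin (paths and values are placeholders):
[client.admin]
    keyring = /etc/ceph/ceph.client.admin.keyring
Alternatively, key = <base64 key> or keyfile = /path/to/secret can be used in place of keyring.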
Daemon keyrings
Administrator users or deployment tools (such as ceph-deploy) generate daemon keyrings in the same way they generate ordinary user keyrings. By default, a daemon's keyring is stored in its data directory. The default keyring locations and the capabilities required for each daemon to function are listed below.
ceph-mon
Location: $mon_data/keyring
Capabilities: mon 'allow *'
ceph-osd
Location: $osd_data/keyring
Capabilities: mon 'allow profile osd' osd 'allow *'
ceph-mds
Location: $mds_data/keyring
Capabilities: mds 'allow' mon 'allow profile mds' osd 'allow rwx'
radosgw
Location: $rgw_data/keyring
Capabilities: mon 'allow rwx' osd 'allow rwx'
The default data path format for daemons is:
/var/lib/ceph/$type/$cluster-$id
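For example, for osd.12 in a cluster with the default name ceph, the path expands to /var/lib/ceph/osd/ceph-12, and its keyring sits at /var/lib/ceph/osd/ceph-12/keyring.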
Signatures
In Ceph Bobtail and later versions, messages are signed using the session key established during initial authentication. Ceph Argonaut and earlier versions, however, do not sign messages, so message signing is turned off by default for backward compatibility. If you are running Bobtail or a newer version, you can turn it on.
Ceph provides fine-grained control over authentication, so you can enable or disable signing on messages between clients and the Ceph storage cluster, and on messages between Ceph daemons. These are configured with the following parameters:
cephx require signatures: determines whether to digitally sign messages between the client and the Ceph storage cluster, and between Ceph daemons. The default is false.
cephx cluster require signatures: determines whether to sign messages between Ceph daemons. The default is false.
cephx service require signatures: determines whether to sign messages between clients and the Ceph storage cluster. The default is false.
cephx sign messages: if the Ceph version supports message signing, Ceph signs all messages. The default is true.
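For example, to require signatures on all traffic in a Bobtail-or-newer cluster, the [global] section could contain (a sketch added here, not from the original article):
[global]
    cephx require signatures = true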
Time to live
auth service ticket ttl: when the Ceph storage cluster sends a client a ticket, this option defines how long the ticket remains valid. The default is 60*60 seconds (one hour).
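To shorten the ticket lifetime to 30 minutes, for instance (an illustrative value, not a recommendation from the original):
[global]
    auth service ticket ttl = 1800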
How to remove a node containing a mon, OSDs, and an mds from a Ceph cluster
The steps are as follows:
1. Remove mon
[root@bgw-os-node153 ~]# ceph mon remove bgw-os-node153
Removed mon.bgw-os-node153 at 10.240.216.153:6789/0, there are now 2 monitors
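To confirm that the monitor is gone, the monitor map can be checked (an added verification, not part of the original log):
[root@bgw-os-node153 ~]# ceph mon stat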
2. Remove all osd on this node
1) view the osds on this node
[root@bgw-os-node153 ~]# ceph osd tree
-4 1.08 host bgw-os-node153
8 0.27 osd.8 up 1
9 0.27 osd.9 up 1
10 0.27 osd.10 up 1
11 0.27 osd.11 up 1
2) stop the osd processes on the above node
[root@bgw-os-node153 ~]# /etc/init.d/ceph stop osd
=== osd.10 ===
Stopping Ceph osd.10 on bgw-os-node153...kill 2251...done
=== osd.9 ===
Stopping Ceph osd.9 on bgw-os-node153...kill 2023...kill 2023...done
=== osd.8 ===
Stopping Ceph osd.8 on bgw-os-node153...kill 1724...kill 1724...done
=== osd.11 ===
Stopping Ceph osd.11 on bgw-os-node153...kill 1501...done
3) check the ceph osd status again
[root@bgw-os-node153 ~]# ceph osd tree
-4 1.08 host bgw-os-node153
8 0.27 osd.8 down 1
9 0.27 osd.9 down 1
10 0.27 osd.10 down 1
11 0.27 osd.11 down 1
4) delete all the osds
[root@bgw-os-node153 ~]# ceph osd rm 8
removed osd.8
[root@bgw-os-node153 ~]# ceph osd rm 9
removed osd.9
[root@bgw-os-node153 ~]# ceph osd rm 10
removed osd.10
[root@bgw-os-node153 ~]# ceph osd rm 11
removed osd.11
5) delete all the osds' crush map entries
[root@bgw-os-node153 ~]# ceph osd crush rm osd.8
removed item id 8 name 'osd.8' from crush map
[root@bgw-os-node153 ~]# ceph osd crush rm osd.9
removed item id 9 name 'osd.9' from crush map
[root@bgw-os-node153 ~]# ceph osd crush rm osd.10
removed item id 10 name 'osd.10' from crush map
[root@bgw-os-node153 ~]# ceph osd crush rm osd.11
removed item id 11 name 'osd.11' from crush map
6) delete the authentication keys of all the osds
[root@bgw-os-node153 ~]# ceph auth del osd.8
updated
[root@bgw-os-node153 ~]# ceph auth del osd.9
updated
[root@bgw-os-node153 ~]# ceph auth del osd.10
updated
[root@bgw-os-node153 ~]# ceph auth del osd.11
updated
7) delete this host's entry from the crush map in the ceph osd tree
[root@bgw-os-node153 ~]# ceph osd crush rm bgw-os-node153
removed item id -4 name 'bgw-os-node153' from crush map
8) unmount all the disks mounted for the osds
[root@bgw-os-node153 ~]# umount /var/lib/ceph/osd/ceph-8
[root@bgw-os-node153 ~]# umount /var/lib/ceph/osd/ceph-9
[root@bgw-os-node153 ~]# umount /var/lib/ceph/osd/ceph-10
[root@bgw-os-node153 ~]# umount /var/lib/ceph/osd/ceph-11
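Steps 4) through 8) repeat the same commands for each OSD id, so they can be consolidated into a small loop; a sketch assuming the same ids 8-11 as above:
for id in 8 9 10 11; do
    ceph osd rm $id
    ceph osd crush rm osd.$id
    ceph auth del osd.$id
    umount /var/lib/ceph/osd/ceph-$id
done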
3. Remove the mds
1. Shut down the mds process on this node directly
[root@bgw-os-node153 ~]# /etc/init.d/ceph stop mds
=== mds.bgw-os-node153 ===
Stopping Ceph mds.bgw-os-node153 on bgw-os-node153...kill 4981...done
[root@bgw-os-node153 ~]#
2. Delete this mds's authentication key
[root@bgw-os-node153 ~]# ceph auth del mds.bgw-os-node153
updated
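To confirm the mds is no longer in the cluster map (an added check, not part of the original log):
[root@bgw-os-node153 ~]# ceph mds stat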
That concludes this article on what MDS and cephx are used for in Ceph. Thank you for reading, and I hope the content has been helpful.