What is the use of the osd module in ceph-deploy

This article explains what the osd module in ceph-deploy is used for. The content is straightforward, and I hope it helps resolve your doubts. Let's walk through the osd module of ceph-deploy together.

ceph-deploy Source Code Analysis: the osd Module

ceph-deploy's osd.py module manages OSD daemons, mainly by creating and activating OSDs.

The osd subcommand has the following format (example invocations appear after the subcommand list below):

ceph-deploy osd [-h] {list,create,prepare,activate} ...

list: display OSD information from the remote host(s)

create: create an OSD (runs prepare and then activate)

prepare: prepare an OSD by formatting/partitioning the disk

activate: activate a prepared OSD
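For example, on an admin node managing a hypothetical host named node1 with a spare disk /dev/sdb (both names are placeholders, not taken from this article), the subcommands would be invoked like this; activate takes the data partition that prepare created, as in the manual walkthrough at the end of this article:

ceph-deploy osd create node1:/dev/sdb
ceph-deploy osd prepare node1:/dev/sdb
ceph-deploy osd activate node1:/dev/sdb1
ceph-deploy osd list node1:/dev/sdb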

OSD management

make function

The priority is 50.

The handler registered via parser.set_defaults for the osd subcommand is the osd function.

@priority(50)
def make(parser):
    """
    Prepare a data disk on remote host.
    """
    sub_command_help = dedent("""
    Manage OSDs by preparing a data disk on remote host.

    For paths, first prepare and then activate:

        ceph-deploy osd prepare {osd-node-name}:/path/to/osd
        ceph-deploy osd activate {osd-node-name}:/path/to/osd

    For disks or journals the `create` command will do prepare and activate
    for you.
    """)
    parser.formatter_class = argparse.RawDescriptionHelpFormatter
    parser.description = sub_command_help

    osd_parser = parser.add_subparsers(dest='subcommand')
    osd_parser.required = True

    osd_list = osd_parser.add_parser(
        'list',
        help='List OSD info from remote host(s)'
        )
    osd_list.add_argument(
        'disk',
        nargs='+',
        metavar='HOST:DISK[:JOURNAL]',
        type=colon_separated,
        help='remote host to list OSDs from'
        )

    osd_create = osd_parser.add_parser(
        'create',
        help='Create new Ceph OSD daemon by preparing and activating disk'
        )
    osd_create.add_argument(
        '--zap-disk',
        action='store_true',
        help='destroy existing partition table and content for DISK',
        )
    osd_create.add_argument(
        '--fs-type',
        metavar='FS_TYPE',
        choices=['xfs', 'btrfs'],
        default='xfs',
        help='filesystem to use to format DISK (xfs, btrfs)',
        )
    osd_create.add_argument(
        '--dmcrypt',
        action='store_true',
        help='use dm-crypt on DISK',
        )
    osd_create.add_argument(
        '--dmcrypt-key-dir',
        metavar='KEYDIR',
        default='/etc/ceph/dmcrypt-keys',
        help='directory where dm-crypt keys are stored',
        )
    osd_create.add_argument(
        '--bluestore',
        action='store_true', default=None,
        help='bluestore objectstore',
        )
    osd_create.add_argument(
        'disk',
        nargs='+',
        metavar='HOST:DISK[:JOURNAL]',
        type=colon_separated,
        help='host and disk to prepare',
        )

    osd_prepare = osd_parser.add_parser(
        'prepare',
        help='Prepare a disk for use as Ceph OSD by formatting/partitioning disk'
        )
    osd_prepare.add_argument(
        '--zap-disk',
        action='store_true',
        help='destroy existing partition table and content for DISK',
        )
    osd_prepare.add_argument(
        '--fs-type',
        metavar='FS_TYPE',
        choices=['xfs', 'btrfs'],
        default='xfs',
        help='filesystem to use to format DISK (xfs, btrfs)',
        )
    osd_prepare.add_argument(
        '--dmcrypt',
        action='store_true',
        help='use dm-crypt on DISK',
        )
    osd_prepare.add_argument(
        '--dmcrypt-key-dir',
        metavar='KEYDIR',
        default='/etc/ceph/dmcrypt-keys',
        help='directory where dm-crypt keys are stored',
        )
    osd_prepare.add_argument(
        '--bluestore',
        action='store_true', default=None,
        help='bluestore objectstore',
        )
    osd_prepare.add_argument(
        'disk',
        nargs='+',
        metavar='HOST:DISK[:JOURNAL]',
        type=colon_separated,
        help='host and disk to prepare',
        )

    osd_activate = osd_parser.add_parser(
        'activate',
        help='Start (activate) Ceph OSD from disk that was previously prepared'
        )
    osd_activate.add_argument(
        'disk',
        nargs='+',
        metavar='HOST:DISK[:JOURNAL]',
        type=colon_separated,
        help='host and disk to activate',
        )

    parser.set_defaults(
        func=osd,
        )

The osd function dispatches the subcommands list, create, prepare, and activate to osd_list, prepare (with activate_prepared_disk=True), prepare (with activate_prepared_disk=False), and activate, respectively.

def osd(args):
    cfg = conf.ceph.load(args)

    if args.subcommand == 'list':
        osd_list(args, cfg)
    elif args.subcommand == 'prepare':
        prepare(args, cfg, activate_prepared_disk=False)
    elif args.subcommand == 'create':
        prepare(args, cfg, activate_prepared_disk=True)
    elif args.subcommand == 'activate':
        activate(args, cfg)
    else:
        LOG.error('subcommand %s not implemented', args.subcommand)
        sys.exit(1)

OSD list

The command line format is: ceph-deploy osd list [-h] HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL] ...]

osd_list function

Execute the command ceph --cluster=ceph osd tree --format=json on a monitor host to get OSD information (a sketch of the osd_tree helper that wraps this command follows this list)

Execute the ceph-disk list command to get disk and partition information

Combine the results of the two commands with the file information in the OSD directory to assemble and print the OSD list
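The osd_tree helper used in the first step is not listed in this article. The following is only a minimal sketch of the idea, assuming the same remoto.process.check and system.executable_path calls that osd_list uses below (the import path and error handling are assumptions, not the exact ceph-deploy code):

import json

import remoto.process

from ceph_deploy.util import system  # assumed import path, as in osd.py


def osd_tree(conn, cluster):
    """
    Run `ceph --cluster=<cluster> osd tree --format=json` over the remote
    connection and return the parsed JSON tree (a dict with a 'nodes' list).
    """
    ceph_executable = system.executable_path(conn, 'ceph')
    out, err, code = remoto.process.check(
        conn,
        [
            ceph_executable,
            '--cluster={cluster}'.format(cluster=cluster),
            'osd',
            'tree',
            '--format=json',
        ],
    )
    try:
        # remoto returns stdout as a list of byte-string lines
        return json.loads(b''.join(out).decode('utf-8'))
    except ValueError:
        return {}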

def osd_list(args, cfg):
    monitors = mon.get_mon_initial_members(args, error_on_empty=True, _cfg=cfg)

    # get the osd tree from a monitor host
    mon_host = monitors[0]
    distro = hosts.get(
        mon_host,
        username=args.username,
        callbacks=[packages.ceph_is_installed]
    )
    # execute the ceph --cluster=ceph osd tree --format=json command to get osd information
    tree = osd_tree(distro.conn, args.cluster)
    distro.conn.exit()

    interesting_files = ['active', 'magic', 'whoami', 'journal_uuid']

    for hostname, disk, journal in args.disk:
        distro = hosts.get(hostname, username=args.username)
        remote_module = distro.conn.remote_module
        # get the osd names under the OSD directory /var/lib/ceph/osd
        osds = distro.conn.remote_module.listdir(constants.osd_path)

        # execute the ceph-disk list command to get disk and partition information
        ceph_disk_executable = system.executable_path(distro.conn, 'ceph-disk')
        output, err, exit_code = remoto.process.check(
            distro.conn,
            [
                ceph_disk_executable,
                'list',
            ]
        )

        # loop over the OSDs
        for _osd in osds:
            # osd path, for example /var/lib/ceph/osd/ceph-0
            osd_path = os.path.join(constants.osd_path, _osd)
            # journal path
            journal_path = os.path.join(osd_path, 'journal')
            # OSD id: split on the dash to get the id
            _id = int(_osd.split('-')[-1])
            osd_name = 'osd.%s' % _id
            metadata = {}
            json_blob = {}

            # piggy back from ceph-disk and get the mount point:
            # match the osd name in the ceph-disk list output to get the disk device
            device = get_osd_mount_point(output, osd_name)
            if device:
                metadata['device'] = device

            # read interesting metadata from files:
            # the active, magic, whoami, journal_uuid files under the OSD directory
            for f in interesting_files:
                osd_f_path = os.path.join(osd_path, f)
                if remote_module.path_exists(osd_f_path):
                    metadata[f] = remote_module.readline(osd_f_path)

            # do we have a journal path?
            if remote_module.path_exists(journal_path):
                metadata['journal path'] = remote_module.get_realpath(journal_path)

            # is this OSD in the osd tree?
            for blob in tree['nodes']:
                if blob.get('id') == _id:  # matches our OSD
                    json_blob = blob

            # output the OSD information
            print_osd(
                distro.conn.logger,
                hostname,
                osd_path,
                json_blob,
                metadata,
            )

        distro.conn.exit()
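osd_list also relies on a get_osd_mount_point helper to map an OSD name to a device in the ceph-disk list output. The helper is not listed in this article; the following is only a minimal sketch of the idea, assuming output is a list of text lines (the real helper may match fields more strictly):

def get_osd_mount_point(output, osd_name):
    """
    Scan `ceph-disk list` output for the partition that mentions the given
    OSD name, e.g. a line like

        /dev/sdb1 ceph data, active, cluster ceph, osd.0, journal /dev/sdb2

    and return the device ('/dev/sdb1' here), or None if nothing matches.
    """
    for line in output:
        if osd_name in line:
            # the first field on the matching line is the device/partition
            return line.split()[0]
    return None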

Create OSD & Prepare OSD

The command line format for creating an OSD is: ceph-deploy osd create [-h] [--zap-disk] [--fs-type FS_TYPE] [--dmcrypt] [--dmcrypt-key-dir KEYDIR] [--bluestore] HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL] ...]

The command line format for preparing an OSD is: ceph-deploy osd prepare [-h] [--zap-disk] [--fs-type FS_TYPE] [--dmcrypt] [--dmcrypt-key-dir KEYDIR] [--bluestore] HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL] ...]

prepare function. If the parameter activate_prepared_disk is True it creates the OSD; if False it only prepares the OSD.

Call the exceeds_max_osds function; if a single host has more than 20 OSDs, a warning is logged

Call the get_bootstrap_osd_key function to read ceph.bootstrap-osd.keyring from the current directory

Loop over the disks:

Write the configuration to /etc/ceph/ceph.conf

Create and write /var/lib/ceph/bootstrap-osd/ceph.keyring

Call the prepare_disk function to prepare the OSD

Check the OSD status and log any abnormal state as a warning

def prepare(args, cfg, activate_prepared_disk):
    LOG.debug(
        'Preparing cluster %s disks %s',
        args.cluster,
        ' '.join(':'.join(x or '' for x in t) for t in args.disk),
    )

    # more than 20 OSDs on a single host triggers a warning
    hosts_in_danger = exceeds_max_osds(args)
    if hosts_in_danger:
        LOG.warning('if ``kernel.pid_max`` is not increased to a high enough value')
        LOG.warning('the following hosts will encounter issues:')
        for host, count in hosts_in_danger.items():
            LOG.warning('Host: %8s, OSDs: %s' % (host, count))

    # get ceph.bootstrap-osd.keyring from the current directory
    key = get_bootstrap_osd_key(cluster=args.cluster)

    bootstrapped = set()
    errors = 0
    for hostname, disk, journal in args.disk:
        try:
            if disk is None:
                raise exc.NeedDiskError(hostname)

            distro = hosts.get(
                hostname,
                username=args.username,
                callbacks=[packages.ceph_is_installed]
            )
            LOG.info(
                'Distro info: %s %s %s',
                distro.name,
                distro.release,
                distro.codename
            )

            if hostname not in bootstrapped:
                bootstrapped.add(hostname)
                LOG.debug('Deploying osd to %s', hostname)

                conf_data = conf.ceph.load_raw(args)
                # write the configuration to /etc/ceph/ceph.conf
                distro.conn.remote_module.write_conf(
                    args.cluster,
                    conf_data,
                    args.overwrite_conf
                )

                # create and write /var/lib/ceph/bootstrap-osd/ceph.keyring
                create_osd_keyring(distro.conn, args.cluster, key)

            LOG.debug(
                'Preparing host %s disk %s journal %s activate %s',
                hostname,
                disk,
                journal,
                activate_prepared_disk
            )

            storetype = None
            if args.bluestore:
                storetype = 'bluestore'

            # prepare the OSD
            prepare_disk(
                distro.conn,
                cluster=args.cluster,
                disk=disk,
                journal=journal,
                activate_prepared_disk=activate_prepared_disk,
                init=distro.init,
                zap=args.zap_disk,
                fs_type=args.fs_type,
                dmcrypt=args.dmcrypt,
                dmcrypt_dir=args.dmcrypt_key_dir,
                storetype=storetype,
            )

            # give the OSD a few seconds to start
            time.sleep(5)
            # check the OSD status and log any abnormal state as a warning
            catch_osd_errors(distro.conn, distro.conn.logger, args)
            LOG.debug('Host %s is now ready for osd use.', hostname)
            distro.conn.exit()

        except RuntimeError as e:
            LOG.error(e)
            errors += 1

    if errors:
        raise exc.GenericError('Failed to create %d OSDs' % errors)
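The exceeds_max_osds helper called above is not listed in this article. Conceptually it only has to count the requested OSD disks per host and return the hosts over the threshold of 20 mentioned earlier; the following is a minimal sketch under that assumption, not the exact ceph-deploy code:

def exceeds_max_osds(args, threshold=20):
    """
    Count how many OSD disks are requested per host and return a dict of
    {hostname: count} for the hosts that exceed the threshold.
    """
    per_host = {}
    for hostname, disk, journal in args.disk:
        per_host[hostname] = per_host.get(hostname, 0) + 1
    # keep only the hosts that are over the threshold
    return dict(
        (hostname, count)
        for hostname, count in per_host.items()
        if count > threshold
    )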

prepare_disk function

Execute the ceph-disk -v prepare command to prepare the OSD

If activate_prepared_disk is True, enable the ceph service to start at boot

def prepare_disk(
        conn,
        cluster,
        disk,
        journal,
        activate_prepared_disk,
        init,
        zap,
        fs_type,
        dmcrypt,
        dmcrypt_dir,
        storetype):
    """
    Run on osd node, prepares a data disk for use.
    """
    ceph_disk_executable = system.executable_path(conn, 'ceph-disk')
    args = [
        ceph_disk_executable,
        '-v',
        'prepare',
        ]
    if zap:
        args.append('--zap-disk')
    if dmcrypt:
        args.append('--dmcrypt')
        if dmcrypt_dir is not None:
            args.append('--dmcrypt-key-dir')
            args.append(dmcrypt_dir)
    if storetype:
        args.append('--' + storetype)
    args.extend([
        '--cluster',
        cluster,
        '--fs-type',
        fs_type,
        '--',
        disk,
    ])
    if journal is not None:
        args.append(journal)
    # execute the ceph-disk -v prepare command
    remoto.process.run(
        conn,
        args
    )

    # when activating, enable the ceph service to start at boot
    if activate_prepared_disk:
        # we don't simply run activate here because we don't know
        # which partition ceph-disk prepare created as the data
        # volume. Instead, we rely on udev to do the activation and
        # just give it a kick to ensure it wakes up. We also enable
        # ceph.target, the other key piece of activate.
        if init == 'systemd':
            system.enable_service(conn, "ceph.target")
        elif init == 'sysvinit':
            system.enable_service(conn, "ceph")
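To make the argument assembly concrete: with the values used in the manual walkthrough at the end of this article (cluster ceph, disk /dev/sdb, --zap-disk, the default xfs filesystem, no journal, no dm-crypt, no bluestore), the argument list built above resolves to the following command on the remote node:

ceph-disk -v prepare --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb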

Activate OSD

The command line format is: ceph-deploy osd activate [-h] HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL] ...]

activate function

Execute the ceph-disk -v activate command to activate the OSD

Check the OSD status and log any abnormal state as a warning

Enable the ceph service to start at boot

def activate(args, cfg):
    LOG.debug(
        'Activating cluster %s disks %s',
        args.cluster,
        # join elements of t with ':', t's with ' '
        # allow None in elements of t; print as empty
        ' '.join(':'.join((s or '') for s in t) for t in args.disk),
    )

    for hostname, disk, journal in args.disk:
        distro = hosts.get(
            hostname,
            username=args.username,
            callbacks=[packages.ceph_is_installed]
        )
        LOG.info(
            'Distro info: %s %s %s',
            distro.name,
            distro.release,
            distro.codename
        )

        LOG.debug('activating host %s disk %s', hostname, disk)
        LOG.debug('will use init type: %s', distro.init)

        ceph_disk_executable = system.executable_path(distro.conn, 'ceph-disk')
        # execute the ceph-disk -v activate command to activate the OSD
        remoto.process.run(
            distro.conn,
            [
                ceph_disk_executable,
                '-v',
                'activate',
                '--mark-init',
                distro.init,
                '--mount',
                disk,
            ],
        )
        # give the OSD a few seconds to start
        time.sleep(5)

        # check the OSD status and log any abnormal state as a warning
        catch_osd_errors(distro.conn, distro.conn.logger, args)

        # enable the ceph service to start at boot
        if distro.init == 'systemd':
            system.enable_service(distro.conn, "ceph.target")
        elif distro.init == 'sysvinit':
            system.enable_service(distro.conn, "ceph")

        distro.conn.exit()

Manage OSD manually

Take the disk sdb on host ceph-231 as an example and create an OSD manually.

Create OSD & Prepare OSD

Prepare OSD

[root@ceph-231 ~]# ceph-disk -v prepare --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb

Compared with prepare, create performs one extra step: it enables the ceph service to start at boot.

[root@ceph-231 ~]# systemctl enable ceph.target

Activate OSD

Check the init type in use

[root@ceph-231 ~]# cat /proc/1/comm
systemd

Activate OSD

[root@ceph-231 ~]# ceph-disk -v activate --mark-init systemd --mount /dev/sdb1

Enable the ceph service to start at boot

[root@ceph-231 ~]# systemctl enable ceph.target
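Optionally, you can confirm that the new OSD has joined the cluster and that its daemon is running. Replace 0 with the id assigned to the new OSD; the unit name assumes a systemd-based Ceph installation:

[root@ceph-231 ~]# ceph osd tree
[root@ceph-231 ~]# systemctl status ceph-osd@0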

That is all of the content of "What is the use of the osd module in ceph-deploy". Thank you for reading! I hope it has helped clear up your doubts.
