2025-02-24 Update From: SLTechnology News&Howtos > Servers
This article explains what the new module in ceph-deploy does: it walks through new.py, the ceph.conf and ceph.mon.keyring files it generates, and the equivalent manual steps.
ceph-deploy Source Code Analysis: the new Module
The new.py module of ceph-deploy starts the deployment of a new cluster and creates the ceph.conf and ceph.mon.keyring files.
The format of the new subcommand is as follows:

```
ceph-deploy new [-h] [--no-ssh-copykey] [--fsid FSID]
                [--cluster-network CLUSTER_NETWORK]
                [--public-network PUBLIC_NETWORK]
                MON [MON ...]
```

Deploy the cluster
The make function is registered with priority 10, and it sets the new function as the subcommand's default handler.
```python
@priority(10)
def make(parser):
    """
    Start deploying a new cluster, and write a CLUSTER.conf and keyring for it.
    """
    parser.add_argument(
        'mon',
        metavar='MON',
        nargs='+',
        help='initial monitor hostname, fqdn, or hostname:fqdn pair',
        type=arg_validators.Hostname(),
    )
    parser.add_argument(
        '--no-ssh-copykey',
        dest='ssh_copykey',
        action='store_false',
        default=True,
        help='do not attempt to copy SSH keys',
    )
    parser.add_argument(
        '--fsid',
        dest='fsid',
        help='provide an alternate FSID for ceph.conf generation',
    )
    parser.add_argument(
        '--cluster-network',
        help='specify the (internal) cluster network',
        type=arg_validators.Subnet(),
    )
    parser.add_argument(
        '--public-network',
        help='specify the public network for a cluster',
        type=arg_validators.Subnet(),
    )
    parser.set_defaults(
        func=new,
    )
```

Deploy a new cluster
The new function deploys the new cluster:
1. Create a ceph.conf file and write fsid, mon_initial_members, mon_host, auth_cluster_required, auth_service_required, and auth_client_required into the [global] section. If public_network or cluster_network was passed on the command line, write it to the configuration file as well.
2. Call the new_mon_keyring function to create a ceph.mon.keyring file.
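As a runnable sketch of step 1, the [global] assembly can be reproduced with the stdlib configparser in place of ceph-deploy's conf.ceph.CephConf; the helper name build_global_section and the option spellings here are illustrative, not ceph-deploy's own:

```python
import configparser
import io
import uuid


def build_global_section(mon_names, mon_ips, fsid=None,
                         public_network=None, cluster_network=None):
    """Assemble a minimal [global] section the way new() does."""
    cfg = configparser.ConfigParser()
    cfg.add_section('global')
    # Use the fsid passed in, or generate one automatically
    cfg.set('global', 'fsid', str(fsid or uuid.uuid4()))
    cfg.set('global', 'mon_initial_members', ', '.join(mon_names))
    cfg.set('global', 'mon_host', ','.join(mon_ips))
    cfg.set('global', 'auth_cluster_required', 'cephx')
    cfg.set('global', 'auth_service_required', 'cephx')
    cfg.set('global', 'auth_client_required', 'cephx')
    # Networks are only written when they were passed on the command line
    if public_network:
        cfg.set('global', 'public_network', str(public_network))
    if cluster_network:
        cfg.set('global', 'cluster_network', str(cluster_network))
    buf = io.StringIO()
    cfg.write(buf)
    return buf.getvalue()


print(build_global_section(['ceph-231'], ['192.168.217.231'],
                           fsid='a3b9b0aa-01ab-4e1b-bba3-6f5317b0795b'))
```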
```python
def new(args):
    if args.ceph_conf:
        raise RuntimeError(
            'will not create a Ceph conf file if attempting to re-use with `--ceph-conf` flag'
        )
    LOG.debug('Creating new cluster named %s', args.cluster)
    # Generate the configuration
    cfg = conf.ceph.CephConf()
    cfg.add_section('global')

    # Use the fsid passed as an argument, or generate one automatically
    fsid = args.fsid or uuid.uuid4()
    cfg.set('global', 'fsid', str(fsid))

    # if networks were passed in, lets set them in the
    # global section
    if args.public_network:
        cfg.set('global', 'public network', str(args.public_network))
    if args.cluster_network:
        cfg.set('global', 'cluster network', str(args.cluster_network))

    # mon node names
    mon_initial_members = []
    # mon host addresses
    mon_host = []

    # Loop over the monitor hosts
    for (name, host) in mon_hosts(args.mon):
        # Try to ensure we can ssh in properly before anything else
        if args.ssh_copykey:
            ssh_copy_keys(host, args.username)

        # Now get the non-local IPs from the remote node
        distro = hosts.get(host, username=args.username)
        remote_ips = net.ip_addresses(distro.conn)

        # custom cluster names on sysvinit hosts won't work
        if distro.init == 'sysvinit' and args.cluster != 'ceph':
            LOG.error('custom cluster names are not supported on sysvinit hosts')
            raise exc.ClusterNameError(
                'host %s does not support custom cluster names' % host
            )
        distro.conn.exit()

        # Validate subnets if we received any
        if args.public_network or args.cluster_network:
            validate_host_ip(remote_ips, [args.public_network, args.cluster_network])

        # Pick the IP that matches the public cluster (if we were told to do
        # so) otherwise pick the first, non-local IP
        LOG.debug('Resolving host %s', host)
        if args.public_network:
            ip = get_public_network_ip(remote_ips, args.public_network)
        else:
            ip = net.get_nonlocal_ip(host)
        LOG.debug('Monitor %s at %s', name, ip)
        mon_initial_members.append(name)
        try:
            socket.inet_pton(socket.AF_INET6, ip)
            mon_host.append("[" + ip + "]")
            LOG.info('Monitors are IPv6, binding Messenger traffic on IPv6')
            cfg.set('global', 'ms bind ipv6', 'true')
        except socket.error:
            mon_host.append(ip)

    LOG.debug('Monitor initial members are %s', mon_initial_members)
    LOG.debug('Monitor addrs are %s', mon_host)

    # mon_initial_members entries are joined with a comma and a space
    cfg.set('global', 'mon initial members', ', '.join(mon_initial_members))
    # no spaces here, see http://tracker.newdream.net/issues/3145
    cfg.set('global', 'mon host', ','.join(mon_host))

    # override undesirable defaults, needed until bobtail
    # http://tracker.ceph.com/issues/6788
    cfg.set('global', 'auth cluster required', 'cephx')
    cfg.set('global', 'auth service required', 'cephx')
    cfg.set('global', 'auth client required', 'cephx')

    path = '{name}.conf'.format(
        name=args.cluster,
    )

    # Create the mon keyring
    new_mon_keyring(args)

    LOG.debug('Writing initial config to %s...', path)
    tmp = '%s.tmp' % path
    with open(tmp, 'w') as f:
        # Save the ceph configuration file
        cfg.write(f)
    try:
        os.rename(tmp, path)
    except OSError as e:
        if e.errno == errno.EEXIST:
            raise exc.ClusterExistsError(path)
        else:
            raise
```
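The IPv4/IPv6 branch at the end of the loop can be isolated into a small runnable sketch; the helper name format_mon_addr is mine, not ceph-deploy's:

```python
import socket


def format_mon_addr(ip):
    """Wrap IPv6 monitor addresses in brackets for mon_host; IPv4 stays as-is."""
    try:
        socket.inet_pton(socket.AF_INET6, ip)
        return '[' + ip + ']'   # IPv6: bracketed, as new() appends it
    except socket.error:
        return ip               # not IPv6: append the address unchanged


print(format_mon_addr('192.168.217.231'))  # 192.168.217.231
print(format_mon_addr('fe80::1'))          # [fe80::1]
```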
Note:
If there are multiple mon_initial_members, they are separated by a comma and a space.
If there are multiple mon_host entries, they are separated by a comma with no space in between (see http://tracker.newdream.net/issues/3145).
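The two separators can be checked directly (the host names and addresses below are made up for illustration):

```python
members = ['ceph-231', 'ceph-232']
addrs = ['192.168.217.231', '192.168.217.232']

mon_initial_members = ', '.join(members)  # comma + space
mon_host = ','.join(addrs)                # comma only, no space

print(mon_initial_members)  # ceph-231, ceph-232
print(mon_host)             # 192.168.217.231,192.168.217.232
```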
Create a ceph.mon.keyring file
The new_mon_keyring function creates the ceph.mon.keyring file:
```python
def new_mon_keyring(args):
    LOG.debug('Creating a random mon key...')
    mon_keyring = '[mon.]\nkey = %s\ncaps mon = allow *\n' % generate_auth_key()
    keypath = '{name}.mon.keyring'.format(
        name=args.cluster,
    )
    oldmask = os.umask(0o77)
    LOG.debug('Writing monitor keyring to %s...', keypath)
    try:
        tmp = '%s.tmp' % keypath
        with open(tmp, 'w', 0o600) as f:
            f.write(mon_keyring)
        try:
            os.rename(tmp, keypath)
        except OSError as e:
            if e.errno == errno.EEXIST:
                raise exc.ClusterExistsError(keypath)
            else:
                raise
    finally:
        os.umask(oldmask)
```

Manually deploy the cluster
Taking the deployment command ceph-deploy new ceph-231 as an example, the corresponding manual operations are as follows.
Get the IP address
Run the following commands; the IP address 192.168.217.231 is extracted from the output with a regular expression:
```
[root@ceph-231 ceph-cluster]# /usr/sbin/ip link show
1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: mtu 1500 qdisc pfifo_fast master ovs-system state UP mode DEFAULT qlen 1000
    link/ether 02:03:e7:fc:dc:36 brd ff:ff:ff:ff:ff:ff
3: ovs-system: mtu 1500 qdisc noop state DOWN mode DEFAULT
    link/ether 86:f4:14:e3:1b:b2 brd ff:ff:ff:ff:ff:ff
4: xenbr0: mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT
    link/ether 02:03:e7:fc:dc:36 brd ff:ff:ff:ff:ff:ff

[root@ceph-231 ceph-cluster]# /usr/sbin/ip addr show
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
    link/ether 02:03:e7:fc:dc:36 brd ff:ff:ff:ff:ff:ff
3: ovs-system: mtu 1500 qdisc noop state DOWN
    link/ether 86:f4:14:e3:1b:b2 brd ff:ff:ff:ff:ff:ff
4: xenbr0: mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 02:03:e7:fc:dc:36 brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.231/24 brd 192.168.217.255 scope global xenbr0
       valid_lft forever preferred_lft forever
```

Create ceph.conf

```
[root@ceph-231 ceph-cluster]# vi ceph.conf
[global]
fsid = a3b9b0aa-01ab-4e1b-bba3-6f5317b0795b
mon_initial_members = ceph-231
mon_host = 192.168.217.231
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 192.168.217.231
```
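Pulling the inet address out of the `ip addr show` output above can be sketched with a short regex. This is illustrative only: ceph-deploy's net.ip_addresses does its own parsing, and the regex, helper name, and trimmed sample below are mine:

```python
import re

# Trimmed sample of `ip addr show` output for one interface
SAMPLE = """\
4: xenbr0: mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 02:03:e7:fc:dc:36 brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.231/24 brd 192.168.217.255 scope global xenbr0
"""


def inet_addresses(ip_addr_output):
    """Return the IPv4 addresses listed on `inet` lines."""
    return re.findall(r'inet (\d+\.\d+\.\d+\.\d+)/\d+', ip_addr_output)


print(inet_addresses(SAMPLE))  # ['192.168.217.231']
```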
Create ceph.mon.keyring
The keyring can be generated with the ceph-authtool command:
```
[root@ceph-231 ceph-cluster]# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating /tmp/ceph.mon.keyring
[root@ceph-231 ~]# cat /tmp/ceph.mon.keyring
[mon.]
	key = AQCzxEhZC7tICxAAuHK5GipD96enMuhv82CCLg==
	caps mon = "allow *"
```
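The generate_auth_key call in new_mon_keyring produces a key in the same base64 format shown above. A self-contained sketch follows; treat the field layout (a little-endian header of key type, creation time, and payload length, followed by 16 random bytes) as my reading of the format, not Ceph's authoritative definition:

```python
import base64
import os
import struct
import time


def generate_auth_key():
    """Generate a cephx-style secret: packed header + 16 random key bytes."""
    key = os.urandom(16)
    header = struct.pack(
        '<hiih',
        1,                  # le16 key type (assumed: AES)
        int(time.time()),   # le32 creation time, seconds
        0,                  # le32 creation time, nanoseconds
        len(key),           # le16 payload length
    )
    return base64.b64encode(header + key).decode('ascii')


print(generate_auth_key())
```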
Copy the content of /tmp/ceph.mon.keyring into ceph.mon.keyring:
```
[root@ceph-231 ceph-cluster]# vi ceph.mon.keyring
[mon.]
key = AQCzxEhZC7tICxAAuHK5GipD96enMuhv82CCLg==
caps mon = allow *
```

That covers the new module in ceph-deploy: it generates the initial ceph.conf and ceph.mon.keyring, both of which can also be produced manually as shown above.