2025-04-06 Update From: SLTechnology News&Howtos shulou
Shulou(Shulou.com)06/03 Report--
1. Introduction to Ceph cluster deployment
1.1 Installation environment
First, let's introduce the installation and deployment methods provided by the Ceph community:

ceph-deploy: a cluster automation deployment tool. It has been in use for a long time, is mature and stable, is integrated by many automation tools, and can be used for production deployments.
cephadm: a newer cluster automation deployment tool that supports adding nodes through a graphical or command-line interface. It is currently not recommended for production environments.
manual: manual, step-by-step deployment of a Ceph cluster. It supports more customization and helps in understanding deployment details, but installation is difficult.
We use the mature and simple ceph-deploy to deploy the Ceph cluster. First, take a look at the architecture used by ceph-deploy:

admin-node: the installation management node, which centrally controls the installation of the ceph cluster.
mon: the monitor node, i.e. the monitoring and management node of Ceph. It undertakes the important management tasks of the Ceph cluster; generally 3 or 5 monitor nodes are required.
osd: the OSD (Object Storage Daemon) node, which is actually responsible for storing data.
The installation environment completes the deployment of the Ceph cluster with three nodes. The installation and deployment details of the cluster are as follows:

Hardware environment: Tencent Cloud CVM, 1 core + 2G RAM + 50G system disk + 50G data disk
Operating system: CentOS Linux release 7.6.1810 (Core)
Software version: Mimic 13.2.8
Deployment version: ceph-deploy 2.0.1

Node name | Role | Description | IP address
node-1 | admin-node, monitor, OSD | 1. Assumes the ceph-deploy admin-node installation and deployment role; 2. Acts as a Ceph monitor node; 3. Acts as a Ceph OSD node, containing a 50G disk | 10.254.100.101
node-2 | OSD | Acts as a Ceph OSD data storage node, containing a 50G disk | 10.254.100.102
node-3 | OSD | Acts as a Ceph OSD data storage node, containing a 50G disk | 10.254.100.103

1.2 Prerequisite environment preparation
Before installing Ceph you need to prepare the environment in advance; refer to the figure above for the deployment layout. The official installation guide recommends creating a new user for installation and deployment; this document performs the cluster installation directly as root. Note: except for the ssh password-free login step, the following operations need to be performed on all nodes.
1. Set the hostname, taking node-1 as an example
[root@node-1 ~]# hostnamectl set-hostname node-1
[root@node-1 ~]# hostnamectl status
   Static hostname: node-1
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 0ea734564f9a4e2881b866b82d679dfc
           Boot ID: b0bc8b8c9cb541d2a582cdb9e9cf22aa
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-957.27.2.el7.x86_64
      Architecture: x86-64
2. Set the /etc/hosts file: write the node-1 to node-3 information into /etc/hosts
[root@node-1 ~]# cat /etc/hosts
10.254.100.101 node-1
10.254.100.102 node-2
10.254.100.103 node-3
3. Set up ssh password-free login: generate a key pair on node-1 with ssh-keygen, then copy the public key to the other nodes (including node-1 itself) with ssh-copy-id.
4. Turn off SELinux (here temporarily set to permissive mode):

[root@node-1 ~]# setenforce 0
[root@node-1 ~]# getenforce
5. Turn off the iptables/firewalld firewall, or open the corresponding ports: Ceph monitor 6789/tcp, Ceph OSD 6800-7300/tcp.
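If you keep the firewall enabled, it helps to verify reachability of those ports from the other nodes. The helper below is a sketch written for this article (not a Ceph tool), using only the Python standard library; the port numbers are the ones listed above.

```python
import socket

# Ports the firewall must allow (from the text): Ceph monitors listen on
# 6789/tcp, OSDs use the 6800-7300/tcp range.
MON_PORT = 6789
OSD_PORTS = range(6800, 7301)

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Once the monitor is running, `port_open("node-1", MON_PORT)` should return True from any node on the cluster network.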
[root@node-1 ~]# systemctl stop iptables
[root@node-1 ~]# systemctl stop firewalld
[root@node-1 ~]# systemctl disable iptables
[root@node-1 ~]# systemctl disable firewalld
6. Configure NTP time synchronization. Ceph is a distributed cluster and is very sensitive to time; if clocks drift, the cluster may become unhealthy, so setting up NTP synchronization is critical for a Ceph cluster. It is recommended to use an NTP server on the private network. Tencent Cloud CVMs synchronize with a private-network NTP server by default; readers can adjust this as needed.
[root@node-1 ~]# grep ^server /etc/ntp.conf
server ntpupdate.tencentyun.com iburst
[root@node-1 ~]# ntpq -pn
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*169.254.0.2     183.239.152      4 u  238 1024  377    5.093    4.443   5.145
7. Set up the Ceph yum installation source, selecting mimic as the installation version
[root@node-1 ~]# cat > /etc/yum.repos.d/ceph.repo <<EOM
> [ceph-noarch]
> name=Ceph noarch packages
> baseurl=https://download.ceph.com/rpm-mimic/el7/noarch
> enabled=1
> gpgcheck=1
> type=rpm-md
> gpgkey=https://download.ceph.com/keys/release.asc
> EOM

Install the EPEL repository
[root@node-1 ~]# sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
8. Install ceph-deploy; the required version is 2.0.1. Important: the ceph-deploy version in the default epel source is 1.5, which is older and will cause many rpm dependency and installation problems. Check the version before installing to make sure it is correct.
[root@node-1 ~]# yum install ceph-deploy -y
[root@node-1 ~]# ceph-deploy --version
2.0.1

1.3 Deploy the Ceph cluster
Ceph-deploy generates some cluster initialization configuration files and keys during deployment, which are also needed for later cluster expansion. It is therefore recommended to create a separate directory on the admin-node and perform subsequent operations in it. Here we take the created ceph-admin-node directory as an example.
1. Create a Ceph cluster. You can specify the cluster-network (for internal communication within the cluster) and the public-network (for external access to the Ceph cluster).
[root@node-1 ceph-admin]# ceph-deploy new --cluster-network 10.254.100.0/24 --public-network 10.254.100.0/24 node-1    # create a cluster
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy new --cluster-network 10.254.100.0/24 --public-network 10.254.100.0/24 node-1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x...>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x...>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['node-1']
[ceph_deploy.cli][INFO  ]  public_network                : 10.254.100.0/24
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : 10.254.100.0/24
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[node-1][DEBUG ] connected to host: node-1
[node-1][DEBUG ] detect platform information from remote host
[node-1][DEBUG ] detect machine type
[node-1][DEBUG ] find the location of an executable
[node-1][INFO  ] Running command: /usr/sbin/ip link show
[node-1][INFO  ] Running command: /usr/sbin/ip addr show
[node-1][DEBUG ] IP addresses found: ['172.17.0.1', '10.244.0.1', '10.244.0.0', '10.254.100.101']
[ceph_deploy.new][DEBUG ] Resolving host node-1
[ceph_deploy.new][DEBUG ] Monitor node-1 at 10.254.100.101
[ceph_deploy.new][DEBUG ] Monitor initial members are ['node-1']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['10.254.100.101']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
As can be seen from the above output, initializing the cluster with new generates the ceph.conf configuration file and the ceph.mon.keyring monitor authentication key, and configures the cluster network and public network. At this point, viewing the files in the directory shows the following:
[root@node-1 ceph-admin]# ls -l
total 12
-rw-r--r-- 1 root root  265 Mar  1 13:04 ceph.conf             # configuration file
-rw-r--r-- 1 root root 3068 Mar  1 13:04 ceph-deploy-ceph.log  # deployment log file
-rw------- 1 root root   73 Mar  1 13:04 ceph.mon.keyring      # monitor authentication key
[root@node-1 ceph-admin]# cat ceph.conf
[global]
fsid = cfc3203b-6abb-4957-af1b-e9a2abdfe725
public_network = 10.254.100.0/24    # public network and cluster network
cluster_network = 10.254.100.0/24
mon_initial_members = node-1        # monitor hostname and address
mon_host = 10.254.100.101
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
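Every monitor and OSD address must fall inside the public_network CIDR written to ceph.conf, otherwise the daemons will not bind to the expected interface. A quick sanity check with Python's standard ipaddress module, using the network and node addresses from this deployment:

```python
import ipaddress

# Network from ceph.conf; node IPs from the installation table in this article.
public_net = ipaddress.ip_network("10.254.100.0/24")
nodes = {
    "node-1": "10.254.100.101",
    "node-2": "10.254.100.102",
    "node-3": "10.254.100.103",
}

for name, ip in nodes.items():
    # Every node must sit on the public network declared in ceph.conf
    assert ipaddress.ip_address(ip) in public_net, f"{name} is outside {public_net}"
```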
2. Install the Ceph-related software, which is usually installed via yum. Because the packages may install incorrectly, ceph-deploy provides an install tool to assist with package installation: ceph-deploy install node-1 node-2 node-3
[root@node-1 ~]# ceph-deploy install node-1 node-2 node-3
3. Initialize the monitor node by executing ceph-deploy mon create-initial
After initialization, a corresponding keyring file is generated for ceph authentication:
ceph.client.admin.keyring
ceph.bootstrap-mgr.keyring
ceph.bootstrap-osd.keyring
ceph.bootstrap-mds.keyring
ceph.bootstrap-rgw.keyring
ceph.bootstrap-rbd.keyring
ceph.bootstrap-rbd-mirror.keyring
4. Copy the authentication key to the other nodes, so that the ceph command line can interact with the ceph cluster through the keyring: ceph-deploy admin node-1 node-2 node-3
At this point the Ceph cluster has been set up and contains one monitor node. You can view the status of the current cluster with ceph -s. Since there are no OSD nodes yet, data cannot be written to the cluster. Here is the output of ceph -s:
[root@node-1 ceph-admin]# ceph -s
  cluster:
    id:     760da58c-0041-4525-a8ac-1118106312de
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum node-1
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
5. There are no OSD nodes in the cluster yet, so data cannot be stored. Next, add OSD nodes to the cluster. Each node has a 50G vdb disk; add it to the cluster as an OSD, e.g. ceph-deploy osd create node-1 --data /dev/vdb.
As above, the vdb of node-1 has been added to the ceph cluster; ceph -s shows that one osd has been added. Repeat the same step to add the disks on node-2 and node-3 to the cluster.
ceph-deploy osd create node-2 --data /dev/vdb
ceph-deploy osd create node-3 --data /dev/vdb
After execution, all three OSDs have been added to the ceph cluster, and the corresponding three OSD nodes can be seen through ceph -s.
[root@node-1 ceph-admin]# ceph -s
  cluster:
    id:     760da58c-0041-4525-a8ac-1118106312de
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 1 daemons, quorum node-1
    mgr: no daemons active
    osd: 3 osds: 3 up, 3 in    # three OSDs, currently in the up and in state

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
You can also check the OSDs and the CRUSH tree with ceph osd tree.
[root@node-1 ~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF
-1       0.14369 root default
-3       0.04790     host node-1
 0   hdd 0.04790         osd.0       up  1.00000 1.00000
-5       0.04790     host node-2
 1   hdd 0.04790         osd.1       up  1.00000 1.00000
-7       0.04790     host node-3
 2   hdd 0.04790         osd.2       up  1.00000 1.00000
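The WEIGHT column defaults to the device's capacity expressed in TiB, so the 0.04790 shown above corresponds to roughly 49 GiB, i.e. the usable size of each 50G data disk. A quick check of that arithmetic (a sketch; the exact usable size depends on LVM/BlueStore overhead):

```python
# CRUSH weight is capacity in TiB by default; convert back to GiB.
weight_tib = 0.04790
capacity_gib = weight_tib * 1024  # 1 TiB = 1024 GiB

print(round(capacity_gib, 1))  # ≈ 49.0 GiB usable out of each 50G disk
```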
6. At this point the health status of Ceph is HEALTH_WARN, with the message "no active mgr", so a mgr node needs to be deployed. The manager daemon is only required from the luminous release onward (this environment runs Mimic, so it is supported). Deploy mgr on the node-1 node by executing ceph-deploy mgr create node-1.
At this point the Ceph cluster deployment is complete. The ceph-deploy tool automates the cluster deployment, and it will also make it convenient to add monitor, osd, and mgr nodes later.
2.4 Ceph block storage usage
Practical objectives: create a resource pool in the Ceph cluster, create an RBD block, and use the RBD block
After deploying the Ceph cluster, how do you store files in the Ceph cluster? Ceph provides three interfaces for users to use, which are:
rbd: block storage, used as block devices; usually suited to combination with virtualization such as KVM, providing block storage devices for virtual machines.
radosgw: object storage, providing an object storage API through which users upload (put) and download (get) object files.
cephfs: file storage, mounting ceph as a cephfs file system.
Let's first introduce using the Ceph cluster via Ceph RBD. To use Ceph by creating an RBD block file in the cluster, we first need a resource pool (pool). A pool is Ceph's abstraction for data storage, composed of multiple PGs (Placement Groups) and PGPs. The number of PGs can be specified at creation time and is generally a power of two (2^n). So first create a pool.
1. Create a pool named happylau, containing 128 PGs/PGPs
[root@node-1 ~]# ceph osd pool create happylau 128 128
pool 'happylau' created
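The 128 chosen here matches the common rule of thumb of roughly 100 PGs per OSD, divided by the replica count and rounded up to a power of two. The sketch below is a simplification of that heuristic written for this article (not an official Ceph API):

```python
def suggest_pg_num(num_osds: int, replicas: int, pgs_per_osd: int = 100) -> int:
    """Round (num_osds * pgs_per_osd / replicas) up to the next power of two."""
    target = num_osds * pgs_per_osd / replicas
    power = 1
    while power < target:
        power *= 2
    return power

# 3 OSDs with the default 3 replicas -> 100 -> next power of two is 128
print(suggest_pg_num(3, 3))  # 128
```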
You can view the pool's information, such as the current cluster's pool list (lspools), pg_num and pgp_num, and the number of replicas (size).
View the pool list
[root@node-1 ~]# ceph osd lspools
1 happylau

View the number of pgs and pgps
[root@node-1 ~]# ceph osd pool get happylau pg_num
pg_num: 128
[root@node-1 ~]# ceph osd pool get happylau pgp_num
pgp_num: 128

View the replica count (size); the default is three replicas
[root@node-1 ~]# ceph osd pool get happylau size
size: 3
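With size=3 every object is written three times, so the cluster's usable capacity is the raw capacity divided by the replica count. Rough figures for this cluster's three 50G OSDs (ignoring BlueStore overhead and the free-space headroom Ceph reserves):

```python
osds = 3
disk_g = 50            # each OSD backs one 50G data disk
replicas = 3           # pool size reported above

raw_g = osds * disk_g          # raw capacity across the cluster
usable_g = raw_g / replicas    # logical capacity with 3x replication

print(raw_g, usable_g)  # 150 50.0
```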
2. Now that the pool has been created, you can create RBD blocks in it using the rbd command, e.g. create a 10G block device.
[root@node-1 ~]# rbd create -p happylau --image ceph-rbd-demo.img --size 10G
As above, an RBD block file named ceph-rbd-demo.img with a size of 10G has been created. You can view the list and details of RBD images through ls and info.
Check the list of RBD images
[root@node-1 ~]# rbd -p happylau ls
ceph-rbd-demo.img

View the details of the RBD image; the image contains 2560 objects, each with an object size of 4M, and object names begin with rbd_data.10b96b8b4567
[root@node-1 ~]# rbd -p happylau info ceph-rbd-demo.img
rbd image 'ceph-rbd-demo.img':
        size 10 GiB in 2560 objects
        order 22 (4 MiB objects)
        id: 10b96b8b4567
        block_name_prefix: rbd_data.10b96b8b4567
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Mon Mar  2 15:32:39 2020
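The info output can be cross-checked with a little arithmetic: order 22 means each RADOS object is 2^22 bytes (4 MiB), so a 10 GiB image splits into 2560 objects:

```python
order = 22
object_size = 2 ** order        # 4 MiB per RADOS object
image_size = 10 * 2 ** 30       # the 10 GiB image created above

print(object_size // 2 ** 20)      # object size in MiB
print(image_size // object_size)   # object count, matching rbd info
```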
3. The RBD storage block has been created; how is it used? If combined with a virtualization environment, a virtual machine could be created and data written to its disk, but we are not integrating with virtualization here (that is also more involved; we will discuss it later). rbd provides a map tool that maps an RBD block to a local block device, which greatly simplifies usage. During rbd map, the features exclusive-lock, object-map, fast-diff and deep-flatten are not supported by the kernel client, so they need to be disabled first; otherwise an "RBD image feature set mismatch" error will be reported.
Disable the default features
[root@node-1 ~]# rbd -p happylau --image ceph-rbd-demo.img feature disable deep-flatten
[root@node-1 ~]# rbd -p happylau --image ceph-rbd-demo.img feature disable fast-diff
[root@node-1 ~]# rbd -p happylau --image ceph-rbd-demo.img feature disable object-map
[root@node-1 ~]# rbd -p happylau --image ceph-rbd-demo.img feature disable exclusive-lock

Verify the feature information
[root@node-1 ~]# rbd -p happylau info ceph-rbd-demo.img
rbd image 'ceph-rbd-demo.img':
        size 10 GiB in 2560 objects
        order 22 (4 MiB objects)
        id: 10b96b8b4567
        block_name_prefix: rbd_data.10b96b8b4567
        format: 2
        features: layering
        op_features:
        flags:
        create_timestamp: Mon Mar  2 15:32:39 2020

Map the RBD block locally; after the map, the RBD block device is mapped to the local /dev/rbd0 device
[root@node-1 ~]# rbd map -p happylau --image ceph-rbd-demo.img
/dev/rbd0
[root@node-1 ~]# ls -l /dev/rbd0
brw-rw---- 1 root disk 251, 0 Mar  2 15:58 /dev/rbd0
4. The RBD block device has been mapped to the local /dev/rbd0 device, so the device can now be formatted and used
You can view the current machine's RBD block device mappings
[root@node-1 ~]# ls -l /dev/rbd0
brw-rw---- 1 root disk 251, 0 Mar  2 15:58 /dev/rbd0

This device can be used like a local disk, so it can be formatted
[root@node-1 ~]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=16, agsize=163840 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@node-1 ~]# blkid /dev/rbd0
/dev/rbd0: UUID="35f63145-0b62-416d-81f2-730c067652a8" TYPE="xfs"

Mount the disk to the system
[root@node-1 ~]# mkdir /mnt/ceph-rbd
[root@node-1 ~]# mount /dev/rbd0 /mnt/ceph-rbd/
[root@node-1 ~]# df -h /mnt/ceph-rbd/
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0        10G   33M   10G   1% /mnt/ceph-rbd
[root@node-1 ~]# cd /mnt/ceph-rbd/
[root@node-1 ceph-rbd]# echo "testfile for ceph rbd" > rbd.log

2.5 Summary of Ceph installation and use
This article completed a cluster of 1 mon node + 1 mgr node + 3 osd nodes through ceph-deploy, which greatly simplifies cluster deployment. I encountered many errors during installation, mainly rpm version problems, especially with the ceph-deploy package: EPEL defaults to version 1.5, while version 2.0.1 from the Ceph official site is required, otherwise all kinds of problems arise. After not installing Ceph for a year, a great deal has changed; one has to marvel at the speed of the community's development.
In addition, this article introduced the use of RBD in Ceph: creating a resource pool, creating rbd images, and mapping and using rbd storage. These steps demonstrate how RBD block storage is used in Ceph; when integrating with virtualization, virtualization products call the corresponding APIs to achieve the same functionality. The use of object storage and cephfs file storage is not covered in this section, since the related components are not installed; they will be described in the following chapters.
Finally, note that the current cluster has only one monitor node, which is a single point of failure: when node-1 fails, the whole cluster becomes unavailable. It is therefore necessary to deploy a highly available cluster to avoid a single point of failure and ensure business availability. The next chapter describes the expansion of monitor nodes.