Reference document: https://wiki.deimos.fr/Ceph_:_performance,_reliability_and_scalability_storage_solution
Ceph version selection

1. Release cycle of Ceph versions
The latest LTS versions of Ceph are Mimic (13.2.x) and Luminous (12.2.x). Release details can be found in the release notes, which indicate whether a given version is an LTS release.
About three or four stable releases are published each year, each with a name (such as 'Firefly'), and bug fixes are provided at least until the next stable release.
Some stable releases are designated LTS (Long Term Stable) and continue to receive updates until the next two LTS releases are out. For example, Dumpling was retired after Hammer was released, and Firefly was retired after Jewel was released. The basic principle is that, to fix defects and backport some important features, maintenance of an LTS release (such as Dumpling) continues until the next LTS is released (Firefly is the LTS after Dumpling). After that next LTS is out, bug fixes may still be backported, depending on whether the issues would prevent an upgrade to the next LTS (for example, Dumpling continued to receive fixes after Firefly was released, until Hammer was released, mainly to ensure that Dumpling could be migrated smoothly to Firefly).
In summary:

LTS (Long Term Stable): maintained until the next two LTS releases are out
Stable release: maintained until the next stable release
Development or beta release: fixes are not backported

2. Version numbering convention
The first Ceph version was 0.1, released in January 2008. The version numbering scheme remained unchanged for years until April 2015, when 0.94.1 (the first point release of Hammer) came out; to avoid ending up at 0.99 (and then 0.100 or 1.00?), a new scheme was adopted:
x.0.z - development releases (for early testers and the brave)
x.1.z - release candidates (for test clusters and experienced users)
x.2.z - stable, point releases (for users)
x starts at 9, standing for Infernalis (I is the ninth letter of the alphabet), so the first development release of the ninth release cycle is 9.0.0; subsequent development releases are 9.0.1, 9.0.2, and so on.
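As a worked example, the version deployed later in this article, 13.2.4, reads as: the 13th release cycle, Mimic (M is the 13th letter), the ".2" marking a stable series, and the ".4" marking the fourth round of bug-fix updates in that series.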
3. Hardware recommendations and system requirements
For more information, please refer to the official documentation:
Hardware requirements description: http://docs.ceph.com/docs/master/start/hardware-recommendations/
System requirements description: http://docs.ceph.com/docs/master/start/os-recommendations/
4. Deployment description
There are two ways to install Ceph: manual deployment and deployment using ceph-deploy tools.
Manual deployment is cumbersome, but it makes it easier for beginners to understand how the pieces fit together; the ceph-deploy tool is better suited to deploying larger clusters.
The two deployment methods are demonstrated here.
Ceph 12 (Luminous) rpm packages: https://download.ceph.com/rpm-luminous
Ceph 13 (Mimic) rpm packages: https://download.ceph.com/rpm-mimic
Deploy a cluster using ceph-deploy
Official document
0. Preparation before deployment
Three CentOS hosts are needed before deployment. I use CentOS 7.5 here, with the kernel upgraded to a 4.x long-term support version. The relevant configuration is as follows:
Node hostname and IP information:

local-node-1: 10.0.0.1
local-node-2: 10.0.0.2
local-node-3: 10.0.0.3
Configure /etc/hosts so the three hostnames resolve on every node, and set up passwordless SSH authentication between the three nodes (a sketch of these steps follows below).
Turn off the firewall and SELinux.
Add at least one extra disk to each node for Ceph storage. In production, multiple disks can be combined with RAID; ceph formats disks automatically when they are added, so there is no need to format them here.
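A minimal sketch of these preparation steps, run on each node (the hostnames and IPs are the ones listed above; the exact firewall and SELinux commands assume a stock CentOS 7 install):

cat >> /etc/hosts << EOF
10.0.0.1 local-node-1
10.0.0.2 local-node-2
10.0.0.3 local-node-3
EOF
ssh-keygen -t rsa                       # accept the defaults
ssh-copy-id local-node-1 ; ssh-copy-id local-node-2 ; ssh-copy-id local-node-3
systemctl stop firewalld && systemctl disable firewalld
setenforce 0                            # takes effect immediately
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # persists across reboots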
Ceph is sensitive to clock skew between nodes, so install ntp, synchronize the time, and configure the EPEL repository.

1. Install the dependency packages on all nodes:

yum install snappy leveldb gdisk python-argparse gperftools-libs -y
Add the yum repository and import the release key. Here I use the latest Mimic version:
rpm --import 'https://download.ceph.com/keys/release.asc'
su -c 'rpm -Uvh https://download.ceph.com/rpm-mimic/el7/noarch/ceph-release-1-0.el7.noarch.rpm'
Because of network issues in mainland China, you can use Aliyun's mirror instead. Modify the repo file as follows:
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

2. Install the ceph-deploy tool (only on the deployment node):

yum install ceph-deploy -y

3. Deploy node-1 using ceph-deploy

1). Create a directory to hold the ceph configuration files:

mkdir /opt/ceph-cluster

2). Enter the configuration directory and initialize the node-1 node:

[root@local-node-1 ~]# cd /opt/ceph-cluster/
[root@local-node-1 ceph-cluster]# ceph-deploy new local-node-1
= Note =: if the following error occurs while executing the ceph-deploy command, you need to install python2-pip:
# ceph-deploy new local-node-1
Traceback (most recent call last):
  File "/usr/bin/ceph-deploy", line 18, in <module>
    from ceph_deploy.cli import main
  File "/usr/lib/python2.7/site-packages/ceph_deploy/cli.py", line 1, in <module>
    import pkg_resources
ImportError: No module named pkg_resources

# Solution: install python2-pip
yum install python2-pip* -y
Multiple configuration files are generated in the current directory after successful execution of the command.
[root@local-node-1 ceph-cluster]# ll
total 12
-rw-r--r-- 1 root root  198 Feb 15 15:37 ceph.conf
-rw-r--r-- 1 root root 2993 Feb 15 15:37 ceph-deploy-ceph.log
-rw------- 1 root root   73 Feb 15 15:37 ceph.mon.keyring
If the hosts have multiple network interfaces (Ceph clusters are usually split into a public network and a cluster network), the two networks are placed on separate interfaces in production. You can add the following parameters to the [global] section of the ceph.conf configuration file:
public network = {ip-address}/{netmask}          # public network: client access to Ceph storage data and Ceph's own monitoring traffic
cluster network = {cluster-network}/{netmask}    # cluster network: replication and synchronization of data between Ceph OSDs
If these are not configured, only the public network is used by default, which is strongly discouraged in a production environment.
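A hedged example for the lab topology above, assuming each node had a second interface on a hypothetical 10.0.1.0/24 subnet (only public_network matches the addresses actually used in this article):

[global]
...
public network = 10.0.0.0/24
cluster network = 10.0.1.0/24    # hypothetical second subnet; adjust to your environment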
3). Install ceph on every node; each node executes:

yum install ceph ceph-radosgw -y
When the network is good, you can also use ceph-deploy to install on all nodes in one batch; the effect is the same as the yum installation above:
ceph-deploy install --release mimic local-node-1 local-node-2 local-node-3
This is not recommended when the connection to the official repositories is poor. From the console output of this command you can see that it does the following:
Removes any existing ceph yum repository on the target node and replaces it with the official upstream repository (because of network issues, the installation may then fail)
Runs yum -y install ceph ceph-radosgw to install ceph

4). Check whether the installation succeeded on each node:

[root@local-node-1 ~]# ceph -v
ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e) mimic (stable)
Check the status of the node; it is currently unhealthy:
[root@local-node-1 ~]# ceph status
2019-02-15 16:59:28.897 7f8a67b8c700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2019-02-15 16:59:28.897 7f8a67b8c700 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication
[errno 2] error connecting to the cluster

5). Create the monitor on the node-1 node; this generates several keys in the current directory:

[root@local-node-1 ceph-cluster]# ceph-deploy mon create-initial

6). Distribute the keys:

[root@local-node-1 ceph-cluster]# ceph-deploy admin local-node-1 local-node-2 local-node-3

7). Create a manager daemon (only available on luminous and newer, i.e. >= 12.x):

[root@local-node-1 ceph-cluster]# ceph-deploy mgr create local-node-1

8). Check the available disks on the node and identify the system disk so it can be excluded:

# ceph-deploy disk list local-node-1
...
[local-node-1][INFO  ] Disk /dev/sda: 10.7 GB, 10737418240 bytes, 20971520 sectors
[local-node-1][INFO  ] Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors

9). Format the disk used by the node-1 node for ceph storage.
Using the disk zap command will erase partitions and data on the disk
[root@local-node-1 ceph-cluster]# ceph-deploy disk zap local-node-1 /dev/sdb
Create the OSDs, adding one disk on each of the three nodes:
[root@local-node-1 ceph-cluster]# ceph-deploy osd create --data /dev/sdb local-node-1
[root@local-node-1 ceph-cluster]# ceph-deploy osd create --data /dev/sdb local-node-2
[root@local-node-1 ceph-cluster]# ceph-deploy osd create --data /dev/sdb local-node-3
= Note =: if you use an LVM logical volume for the OSD, pass the parameter --data volume_group/lv_name rather than the path to the logical volume device.
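A hedged sketch of the LVM variant, assuming an extra disk /dev/sdc and illustrative names vg_ceph / lv_osd (none of these exist in the lab above):

pvcreate /dev/sdc                                           # prepare the disk as an LVM physical volume
vgcreate vg_ceph /dev/sdc                                   # create a volume group on it
lvcreate -l 100%FREE -n lv_osd vg_ceph                      # one logical volume using all the space
ceph-deploy osd create --data vg_ceph/lv_osd local-node-1   # note: vg/lv name, not /dev/vg_ceph/lv_osd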
10). Check whether the cluster status is normal:

[root@local-node-1 ceph-cluster]# ceph health
HEALTH_OK
[root@local-node-1 ceph-cluster]# ceph -s
  cluster:
    id:     6a4812f7-83cb-43e5-abac-f2b8e37db127
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum local-node-1
    mgr: local-node-1(active)
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs:
Check the node port information:
[root@local-node-1 ceph-cluster]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN   859/sshd
tcp        0      0 10.0.0.1:6789      0.0.0.0:*          LISTEN   2893/ceph-mon
tcp        0      0 0.0.0.0:6800       0.0.0.0:*          LISTEN   3815/ceph-osd
tcp        0      0 0.0.0.0:6801       0.0.0.0:*          LISTEN   3815/ceph-osd
tcp        0      0 0.0.0.0:6802       0.0.0.0:*          LISTEN   3815/ceph-osd
tcp        0      0 0.0.0.0:6803       0.0.0.0:*          LISTEN   3815/ceph-osd
tcp        0      0 0.0.0.0:6804       0.0.0.0:*          LISTEN   4049/ceph-mgr
tcp6       0      0 :::22              :::*               LISTEN   859/sshd

[root@local-node-2 /]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN   860/sshd
tcp        0      0 0.0.0.0:6800       0.0.0.0:*          LISTEN   2439/ceph-osd
tcp        0      0 0.0.0.0:6801       0.0.0.0:*          LISTEN   2439/ceph-osd
tcp        0      0 0.0.0.0:6802       0.0.0.0:*          LISTEN   2439/ceph-osd
tcp        0      0 0.0.0.0:6803       0.0.0.0:*          LISTEN   2439/ceph-osd
tcp6       0      0 :::22              :::*               LISTEN   860/sshd

[root@local-node-3 ~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN   861/sshd
tcp        0      0 0.0.0.0:6800       0.0.0.0:*          LISTEN   2352/ceph-osd
tcp        0      0 0.0.0.0:6801       0.0.0.0:*          LISTEN   2352/ceph-osd
tcp        0      0 0.0.0.0:6802       0.0.0.0:*          LISTEN   2352/ceph-osd
tcp        0      0 0.0.0.0:6803       0.0.0.0:*          LISTEN   2352/ceph-osd
tcp6       0      0 :::22              :::*               LISTEN   861/sshd
At this point, the deployment of the ceph basic storage cluster is complete.
http://docs.ceph.com/docs/master/start/quick-ceph-deploy/
Expand the cluster
When we have the above basic ceph cluster, we can expand the cluster through ceph-deploy.
To make the mon component highly available, we can also deploy mon on node-2 and node-3 (the number of monitors must be odd, 1, 3, 5, and so on, so that an election can still be held when a node fails). We also add a Metadata Server (mds) component to node-1.
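For example, with 3 monitors the cluster can still form a quorum after losing 1 of them; with 5 monitors it tolerates the loss of 2, since a majority of monitors must remain alive for an election to succeed.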
1. Add Metadata Server
To use CephFS, at least one metadata server must be installed; install it by executing the following command:
[root@local-node-1 ceph-cluster]# ceph-deploy mds create local-node-1
According to the output, the ceph-mds@local-node-1 service has been started.
# netstat -lntp | grep mds
tcp    0    0 0.0.0.0:6805    0.0.0.0:*    LISTEN    4549/ceph-mds

2. Add Monitors
Before adding a second (or further) mon, you must first update ceph.conf on the admin node and on the nodes where the new monitors will run, adjusting mon_initial_members, mon_host, and public_network:
[root@local-node-1 ceph-cluster]# cat ceph.conf
[global]
fsid = 6a4812f7-83cb-43e5-abac-f2b8e37db127
mon_initial_members = local-node-1,local-node-2,local-node-3
mon_host = 10.0.0.1,10.0.0.2,10.0.0.3
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 10.0.0.0/24
Distribute the configuration to all nodes in the cluster:
[root@local-node-1 ceph-cluster]# ceph-deploy --overwrite-conf config push local-node-2 local-node-3
Update the new configuration to all monitor nodes:
ceph-deploy --overwrite-conf config push local-node-1 local-node-2 local-node-3
Add mon to the local-node-2 and local-node-3 nodes:
ceph-deploy mon add local-node-2
ceph-deploy mon add local-node-3
After adding Monitor, Ceph will automatically start synchronization and form a quorum. You can check the quorum status with the following command:
# ceph quorum_status --format json-pretty
{
    "election_epoch": 14,
    "quorum": [0, 1, 2],
    "quorum_names": ["local-node-1", "local-node-2", "local-node-3"],
    "quorum_leader_name": "local-node-1",
    "monmap": {
        "epoch": 3,
        "fsid": "6a4812f7-83cb-43e5-abac-f2b8e37db127",
        "modified": "2019-02-18 13:39:00.705952",
        "created": "2019-02-15 17:38:09.329589",
        "features": {
            "persistent": ["kraken", "luminous", "mimic", "osdmap-prune"],
            "optional": []
        },
        "mons": [
            { "rank": 0, "name": "local-node-1", "addr": "10.0.0.1:6789/0", "public_addr": "10.0.0.1:6789/0" },
            { "rank": 1, "name": "local-node-2", "addr": "10.0.0.2:6789/0", "public_addr": "10.0.0.2:6789/0" },
            { "rank": 2, "name": "local-node-3", "addr": "10.0.0.3:6789/0", "public_addr": "10.0.0.3:6789/0" }
        ]
    }
}
Note: when your Ceph cluster runs multiple monitors, NTP should be configured on every monitor host, and the monitors should be NTP peers.
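A quick hedged check of time synchronization on each monitor host (assuming ntpd as installed earlier; with chrony the equivalent would be chronyc sources):

ntpq -p        # lists the configured time sources and their offsets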
3. Add Managers
The Ceph Manager daemons run in active/standby mode. Deploying additional Manager daemons ensures that if one daemon or host fails, another one can take over without interrupting service.
Add Manager for the other two nodes:
[root@local-node-1 ceph-cluster]# ceph-deploy mgr create local-node-2 local-node-3
Check the status; the cluster now has three mgr daemons:
[root@local-node-1 ceph-cluster]# ceph -s
  cluster:
    id:     6a4812f7-83cb-43e5-abac-f2b8e37db127
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum local-node-1,local-node-2,local-node-3
    mgr: local-node-1(active), standbys: local-node-2, local-node-3
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs:

4. Add RGW instance
To use the Ceph Object Gateway, you also need to deploy an RGW instance. Create a new RGW instance with the following command:
[root@local-node-1 ceph-cluster]# ceph-deploy rgw create local-node-1
By default, RGW listens on port 7480. If you want to change this default port, edit ceph.conf on the RGW node:
[client]
rgw frontends = civetweb port=80
When using IPv6:
[client]
rgw frontends = civetweb port=[::]:80
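If you change the port, the RGW service on that node also has to be restarted for the new frontends setting to take effect; on a systemd-based install this is typically something like the following (the target name assumes a standard package install):

systemctl restart ceph-radosgw.target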
Use a browser or curl to access this port; it returns the following information:
# curl 10.0.0.1:7480
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult ...><Owner><ID>anonymous</ID>...

Store / retrieve object data
To store object data in a Ceph cluster, a Ceph client must:
Set an object name
Specify a storage pool
The Ceph client retrieves the latest cluster map, and the CRUSH algorithm calculates how to map the object to a placement group and then how to dynamically assign that placement group to a Ceph OSD daemon. To find the object's location, you only need the object name and the pool name. For example:
ceph osd map {poolname} {object-name}

Exercise: locate an object
As an exercise, let's create an object: specify an object name, a path to a file containing the object data, and a pool name with the rados put command:
echo {Test-data} > testfile.txt
ceph osd pool create mytest 8
rados put {object-name} {file-path} --pool=mytest

# Example:
# echo testdata > testfile.txt
# ceph osd pool create mytest 8
pool 'mytest' created
# rados put test-object-1 testfile.txt --pool=mytest
Verify that ceph stores this object:
# rados -p mytest ls
test-object-1
Locate the object:
ceph osd map {pool-name} {object-name}

# ceph osd map mytest test-object-1
osdmap e34 pool 'mytest' (5) object 'test-object-1' -> pg 5.74dc35e2 (5.2) -> up ([1,0,2], p1) acting ([1,0,2], p1)

Delete objects and storage pools
Delete objects:
rados rm test-object-1 --pool=mytest
Delete a storage pool:
ceph osd pool rm mytest
The delete command above only prompts for confirmation; to actually delete the pool, use the following command:
# ceph osd pool rm mytest mytest --yes-i-really-really-mean-it
Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool
If the above error is reported, you need to modify the ceph.conf configuration file and restart the mon service:
[mon]
mon allow pool delete = true
Restart ceph-mon:
systemctl restart ceph-mon.target
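As a hedged alternative that avoids restarting the monitors, the same option can usually be injected at runtime via the generic ceph tell injectargs mechanism (an assumption; verify on your version):

ceph tell mon.\* injectargs '--mon_allow_pool_delete=true'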
Then the deletion succeeds:
[root@local-node-1 ceph-cluster]# ceph osd pool rm mytest mytest --yes-i-really-really-mean-it
pool 'mytest' removed
[root@local-node-1 ceph-cluster]# rados df
POOL_NAME           USED    OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD      WR_OPS WR
.rgw.root           1.1 KiB 4       0      12     0                  0       0        12     8 KiB   4      4 KiB
default.rgw.control 0 B     8       0      24     0                  0       0        0      0 B     0      0 B
default.rgw.log     0 B     175     0      525    0                  0       0        10251  9.8 MiB 6834   0 B
default.rgw.meta    0 B     0       0      0      0                  0       0        0      0 B     0      0 B

total_objects    187
total_used       3.0 GiB
total_avail      27 GiB
total_space      30 GiB

Clear the configuration
If you have problems during configuration and want to reconfigure, you can execute the following command to clear the configuration and ceph software:
// Remove the ceph packages and data from the nodes; after running the purge command you must reinstall ceph
ceph-deploy purge {ceph-node} [{ceph-node}]
ceph-deploy purgedata {ceph-node} [{ceph-node}]
// Clear the keys:
ceph-deploy forgetkeys
// Remove the configuration files in the current directory:
rm ceph.*
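For example, to wipe the three lab nodes used in this article (a hedged usage example; this destroys all Ceph data on them):

ceph-deploy purge local-node-1 local-node-2 local-node-3
ceph-deploy purgedata local-node-1 local-node-2 local-node-3
ceph-deploy forgetkeys
rm ceph.*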
Confirm the cluster status

If the cluster is in the active+clean state, the cluster is healthy:
[root@local-node-1 ceph-cluster]# ceph -s
  cluster:
    id:     6a4812f7-83cb-43e5-abac-f2b8e37db127
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum local-node-1,local-node-2,local-node-3
    mgr: local-node-1(active), standbys: local-node-2, local-node-3
    osd: 3 osds: 3 up, 3 in
    rgw: 1 daemon active

  data:
    pools:   4 pools, 32 pgs
    objects: 187 objects, 1.1 KiB
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs:     32 active+clean