1. Ceph cluster environment
Three virtual machines are used. One of them also serves as the admin (deployment) node, and all three act simultaneously as the 3 monitor nodes and the 3 OSD nodes.
The operating system is CentOS 7 Minimal. Download address: http://124.205.69.134/files/4128000005F9FCB3/mirrors.zju.edu.cn/centos/7.4.1708/isos/x86_64/CentOS-7-x86_64-Minimal-1708.iso
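For reference, based on the hostnames and IP addresses that appear later in this article, the node layout is assumed to be:
ceph2  192.168.59.131  monitor + osd
ceph3  192.168.59.132  monitor + osd
ceph4  192.168.59.133  monitor + osd
One of these hosts (the per-host examples below use ceph2) additionally acts as the ceph-deploy admin node.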
2. Preparation (perform the following on all hosts)
# hostnamectl set-hostname ceph2 \\ set the hostname (use the corresponding name on each host)
# vi /etc/sysconfig/network-scripts/ifcfg-ens32 (or use nmtui) \\ configure the IP address
# systemctl restart network \\ restart the network service
\\ because this is a CentOS Minimal install, the Tab key cannot complete command parameters; installing bash-completion is recommended (experienced users can skip this)
# yum -y install bash-completion.noarch
# date \\ check the system time and make sure it is the same on every host
# echo '192.168.59.131 ceph2' >> /etc/hosts \\ add mappings for all servers to the hosts file
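On every machine the hosts file should end up containing an entry for each node; for this environment (IP addresses taken from the monitor map shown in step 7) that would be:
192.168.59.131 ceph2
192.168.59.132 ceph3
192.168.59.133 ceph4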
# setenforce 0 \\ disable selinux immediately
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config \\ modify the configuration file so that selinux stays disabled after a reboot
# firewall-cmd --zone=public --add-port=6789/tcp --permanent
# firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent \\ add firewall rules for the ceph ports
# firewall-cmd --reload \\ make the firewall rules take effect
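To confirm that the rules were applied after the reload, the open ports can be listed (an optional check, not part of the original steps):
# firewall-cmd --zone=public --list-ports
6789/tcp 6800-7100/tcp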
# ssh-keygen \\ generate an SSH key
# ssh-copy-id root@ceph2 \\ the key needs to be copied to every server
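Assuming the key is distributed from the deploy node to all three hosts (including itself), the full sequence would look roughly like this:
# ssh-copy-id root@ceph2
# ssh-copy-id root@ceph3
# ssh-copy-id root@ceph4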
3. Install ceph-deploy (on one of the machines, the deploy node)
# vi /etc/yum.repos.d/ceph.repo \\ add the ceph yum repository; enter the following
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
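A quick way to confirm that the new repository is picked up (an optional check, not in the original steps) is:
# yum repolist enabled | grep -i ceph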
# yum update && reboot \\ update and restart the system
# yum install ceph-deploy -y \\ install ceph-deploy
a. An error occurred here
Downloading packages:
(1/4): python-backports-ssl_match_hostname-3.4.0.2-4.el7.noarch.rpm |  12 kB  00:00:00
(2/4): python-backports-1.0-8.el7.x86_64.rpm                        | 5.8 kB  00:00:02
ceph-deploy-1.5.38-0.noarch.rpm FAILED                     90 kB/s | 298 kB  00:00:04 ETA
http://download.ceph.com/rpm-luminous/el7/noarch/ceph-deploy-1.5.38-0.noarch.rpm: [Errno -1] Package does not match intended download. Suggestion: run yum --enablerepo=ceph-noarch clean metadata
Trying other mirror.
(3/4): python-setuptools-0.9.8-7.el7.noarch.rpm                     | 397 kB  00:00:05
Error downloading packages:
  ceph-deploy-1.5.38-0.noarch: [Errno 256] No more mirrors to try.
The workaround is as follows:
# rpm -ivh http://download.ceph.com/rpm-luminous/el7/noarch/ceph-deploy-1.5.38-0.noarch.rpm
b. An error occurred here:
Retrieving http://download.ceph.com/rpm-luminous/el7/noarch/ceph-deploy-1.5.38-0.noarch.rpm
warning: /var/tmp/rpm-tmp.gyId2U: Header V4 RSA/SHA256 Signature, key ID 460f3994: NOKEY
error: Failed dependencies:
        python-distribute is needed by ceph-deploy-1.5.38-0.noarch
Workaround:
# yum install python-distribute -y
Then run the rpm command again:
# rpm -ivh http://download.ceph.com/rpm-luminous/el7/noarch/ceph-deploy-1.5.38-0.noarch.rpm
4. Deploy monitor services
# mkdir ~/ceph-cluster && cd ~/ceph-cluster \\ create a new cluster configuration directory
# ceph-deploy new ceph2 ceph3 ceph4 \\ produces three files: a Ceph configuration file, a monitor keyring and a log file
# ls -l
-rw-r--r-- 1 root root    266 Sep 19 16:41 ceph.conf
-rw-r--r-- 1 root root 172037 Sep 19 16:32 ceph-deploy-ceph.log
-rw------- 1 root root     73 Sep 19 11:03 ceph.mon.keyring
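For reference, the ceph.conf generated by ceph-deploy new for this cluster would look roughly like the following (the fsid and monitor addresses are taken from the ceph -s output in step 7; the auth lines are the usual defaults and are an assumption here):
[global]
fsid = e508bdeb-b986-4ee8-82c6-c25397a5f1eb
mon_initial_members = ceph2, ceph3, ceph4
mon_host = 192.168.59.131,192.168.59.132,192.168.59.133
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx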
# ceph-deploy mon create-initial \\ initialize the cluster (deploy the initial monitors and gather their keys)
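Once the ceph packages have been installed on the nodes (step 5 below), monitor quorum can be checked with a command such as (an optional check, not part of the original procedure):
# ceph mon stat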
5. Install ceph
# ceph-deploy install ceph2 ceph3 ceph4 \\ install ceph on ceph2, ceph3 and ceph4
a. An error occurred here
[ceph2] [DEBUG] Retrieving https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[ceph2] [WARNIN] warning: /etc/yum.repos.d/ceph.repo created as /etc/yum.repos.d/ceph.repo.rpmnew
[ceph2] [DEBUG] Preparing...                          ##################################
[ceph2] [DEBUG] Updating / installing...
[ceph2] [DEBUG] ceph-release-1-1.el7                  ##################################
[ceph2] [WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[ceph_deploy] [ERROR] RuntimeError: NoSectionError: No section: 'ceph'
Workaround:
# yum remove ceph-release -y
Then run # ceph-deploy install ceph2 ceph3 ceph4 again.
6. Create OSD
# ceph-deploy disk list ceph2 ceph3 ceph4 \\ list the disks on each server
# ceph-deploy --overwrite-conf osd prepare ceph2:sdc:/dev/sdb ceph3:sdc:/dev/sdb ceph4:sdc:/dev/sdb \\ prepare sdc as the data disk and sdb as the journal disk
# ceph-deploy osd activate ceph2:sdc:/dev/sdb ceph3:sdc:/dev/sdb ceph4:sdc:/dev/sdb \\ activate the osds
An error occurred here, and at the time I could not find a solution online (suggestions are welcome). The error did not affect the ceph deployment; the command output showed that the disk had been mounted successfully.
[ceph2] [WARNIN] ceph_disk.main.FilesystemTypeError: Cannot discover filesystem type: device /dev/sdc: Line is truncated:
[ceph2] [ERROR] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy] [ERROR] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdc
# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part
  ├─cl-root     253:0    0   18G  0 lvm  /
  └─cl-swap     253:1    0    1G  0 lvm  [SWAP]
sdb               8:16   0   30G  0 disk
└─sdb1            8:17   0    5G  0 part
sdc               8:32   0   40G  0 disk
└─sdc1            8:33   0   40G  0 part /var/lib/ceph/osd/ceph-0
sr0              11:0    1  680M  0 rom
rbd0            252:0    0    1G  0 disk /root/rbddir
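The rbd0 device in this listing is an RBD image that was created and mapped on the host; the article does not show those steps, but a minimal sequence would look roughly like the following (the pool name, image name and size are assumptions):
# ceph osd pool create rbd 128 \\ create a pool for RBD images (if one does not already exist)
# rbd create mydisk --size 1024 --image-feature layering \\ create a 1 GB image
# rbd map mydisk \\ map the image; it appears as /dev/rbd0
# mkfs.xfs /dev/rbd0 && mkdir -p /root/rbddir && mount /dev/rbd0 /root/rbddir \\ format and mount it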
7. Deployment successful
# ceph -s
    cluster e508bdeb-b986-4ee8-82c6-c25397a5f1eb
     health HEALTH_OK
     monmap e2: 3 mons at {ceph2=192.168.59.131:6789/0,ceph3=192.168.59.132:6789/0,ceph4=192.168.59.133:6789/0}
            election epoch 10, quorum 0,1,2 ceph2,ceph3,ceph4
     osdmap e55: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v13638: 384 pgs, 5 pools, 386 MB data, 125 objects
            1250 MB used, 118 GB / 119 GB avail
                 384 active+clean
Problem solving (the activate error from step 6):
Cause: ceph-disk partitioned the disks during the prepare step, so the /dev/sdb journal device is now the partition /dev/sdb1 (and the sdc data disk is sdc1); the activate command must reference the partitions, not the whole disks.
The correct command is:
# ceph-deploy osd activate ceph2:sdc1:/dev/sdb1 ceph3:sdc1:/dev/sdb1 ceph4:sdc1:/dev/sdb1
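After activation succeeds, the OSD state can be confirmed with commands such as the following (optional checks, not part of the original article):
# ceph osd tree \\ show the CRUSH tree and whether each osd is up
# ceph osd stat \\ one-line summary, e.g. "3 osds: 3 up, 3 in" as in the ceph -s output above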
Original author: Sanshi.
The copyright belongs to the author. For commercial reprints please contact the author for authorization; for non-commercial reprints please credit the source.