Basic environment: CentOS 7.2
192.168.200.126 ceph2
192.168.200.127 ceph3
192.168.200.129 ceph4
Turn off the firewall and SELinux
# setenforce 0
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# systemctl stop firewalld
# systemctl disable firewalld
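If the firewall needs to stay enabled instead, the Ceph ports can be opened explicitly. A minimal sketch using firewalld (not part of the original steps; the port ranges follow the usual Ceph defaults):
# firewall-cmd --zone=public --add-port=6789/tcp --permanent
# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
# firewall-cmd --reload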
Ceph yum repository:
[root@ceph2 ~] # cat /etc/yum.repos.d/ceph.repo
[Ceph-mimic]
name=Ceph x86_64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/x86_64/
enabled=1
gpgcheck=0
ceph-deploy yum repository:
[root@ceph2 ~] # cat /etc/yum.repos.d/ceph-deploy.repo
[ceph-deploy]
name=ceph-deploy
baseurl=https://download.ceph.com/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=0
Install ntp on all nodes, then synchronize the time:
# yum install -y ntp
# ntpdate pool.ntp.org
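ntpdate only corrects the clock once; enabling the ntpd service keeps the nodes in sync afterwards and helps avoid the clock-skew warning shown further below. An optional addition, not in the original steps:
# systemctl enable ntpd
# systemctl start ntpd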
Configure passwordless SSH between the cluster nodes
[root@ceph2 ~] # ssh-keygen
[root@ceph2 ~] # ssh-copy-id ceph2
[root@ceph2 ~] # ssh-copy-id ceph3
[root@ceph2 ~] # ssh-copy-id ceph4
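Optionally, a quick loop confirms passwordless login works to every node (not part of the original article):
# for node in ceph2 ceph3 ceph4; do ssh $node hostname; done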
Synchronize the configuration files to the other nodes
[root@ceph2 ~] # scp /etc/hosts ceph3:/etc/hosts
[root@ceph2 ~] # scp /etc/hosts ceph4:/etc/hosts
[root@ceph2 ~] # scp /etc/yum.repos.d/ceph-deploy.repo ceph3:/etc/yum.repos.d/
[root@ceph2 ~] # scp /etc/yum.repos.d/ceph-deploy.repo ceph4:/etc/yum.repos.d/
Deploy ceph
[root@ceph2 ~] # mkdir /etc/ceph
[root@ceph2 ~] # yum install -y ceph-deploy python-pip
[root@ceph2 ~] # cd /etc/ceph
[root@ceph2 ceph] # ceph-deploy new ceph2 ceph3 ceph4
[root@ceph2 ceph] # ls
ceph.conf ceph-deploy-ceph.log ceph.mon.keyring
[root@ceph2 ceph] # vi ceph.conf
[global]
fsid = d5dec480-a9df-4833-b740-de3a0ae4c755
mon_initial_members = ceph2, ceph3, ceph4
mon_host = 192.168.200.126,192.168.200.127,192.168.200.129
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.168.200.0/24
cluster network = 192.168.200.0/24
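Here both networks use the same subnet. If a dedicated replication network were available, the two options could point at different subnets; a hypothetical example (the 10.0.0.0/24 subnet is an assumption, not part of this setup):
public network = 192.168.200.0/24
cluster network = 10.0.0.0/24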
Install ceph components on all nodes:
# yum install -y ceph
On ceph2, initialize the monitors and gather the keys
[root@ceph2 ceph] # ceph-deploy mon create-initial
Distribute the admin keyring and config to the other nodes
[root@ceph2 ceph] # ceph-deploy admin ceph2 ceph3 ceph4
Configure OSD
[root@ceph2 ceph] # ceph-deploy osd create --data /dev/sdb ceph2
[root@ceph2 ceph] # ceph-deploy osd create --data /dev/sdb ceph3
[root@ceph2 ceph] # ceph-deploy osd create --data /dev/sdb ceph4
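If /dev/sdb had been used before, the create step above may refuse to run until the disk is wiped; an optional step, assuming the same ceph-deploy 2.x syntax used elsewhere in this article:
[root@ceph2 ceph] # ceph-deploy disk zap ceph2 /dev/sdb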
[root@ceph2 ceph] # ceph -s
  cluster:
    id:     d5dec480-a9df-4833-b740-de3a0ae4c755
    health: HEALTH_WARN
            no active mgr
  services:
    mon: 3 daemons, quorum ceph2,ceph3,ceph4
    mgr: no daemons active
    osd: 3 osds: 3 up, 3 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
If the following warning appears, the clocks on the cluster hosts are out of sync; re-synchronizing the time clears it:
    health: HEALTH_WARN
            clock skew detected on mon.ceph3
[root@ceph3 ~] # ntpdate pool.ntp.org
[root@ceph2 ceph] # systemctl restart ceph-mon.target
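The monitors' own view of the skew can also be queried directly; an optional check, not part of the original steps:
[root@ceph2 ceph] # ceph time-sync-status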
[root@ceph2 ceph] # ceph -s
  cluster:
    id:     d5dec480-a9df-4833-b740-de3a0ae4c755
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum ceph2,ceph3,ceph4
    mgr: ceph2(active), standbys: ceph4, ceph3
    osd: 3 osds: 3 up, 3 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:
Enable the dashboard web management interface
[root@ceph2 ceph] # vi /etc/ceph/ceph.conf
# add the following:
[mgr]
mgr_modules = dashboard
[root@ceph2 ceph] # ceph mgr module enable dashboard
[root@ceph2 ceph] # ceph-deploy mgr create ceph2
Generate and install a self-signed certificate
[root@ceph2 ceph] # ceph dashboard create-self-signed-cert
Generate the key and certificate, producing two files, dashboard.crt and dashboard.key:
[root@ceph2 ceph] # openssl req -new -nodes -x509 -subj "/O=IT/CN=ceph-mgr-dashboard" -days 3650 -keyout dashboard.key -out dashboard.crt -extensions v3_ca
Configure the service address and port. The default port is 8443, which is changed to 7000.
[root@ceph2 ceph] # ceph config set mgr mgr/dashboard/server_addr 192.168.200.126
[root@ceph2 ceph] # ceph config set mgr mgr/dashboard/server_port 7000
[root@ceph2 ceph] # ceph dashboard set-login-credentials admin admin
[root@ceph2 ceph] # systemctl restart ceph-mgr@ceph2.service
[root@ceph2 ceph] # ceph mgr services
{
"dashboard": "https://192.168.200.126:7000/""
}
Synchronize cluster ceph configuration files
[root@ceph2 ceph] # ceph-deploy --overwrite-conf config push ceph3
[root@ceph2 ceph] # ceph-deploy --overwrite-conf config push ceph4
The dashboard is now reachable at https://192.168.200.126:7000/#/login
Using block storage (RBD)
[root@ceph4 ceph] # ceph osd pool create rbd 128
[root@ceph4 ceph] # ceph osd pool get rbd pg_num
pg_num: 128
[root@ceph4 ceph] # ceph auth add client.rbd mon 'allow r' osd 'allow rwx pool=rbd'
[root@ceph4 ceph] # ceph auth export client.rbd -o ceph.client.rbd.keyring
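The rbd commands below run as --name client.rbd, which expects the keyring to be readable from a default location; copying it into /etc/ceph/ is one way to arrange that (an assumption about this setup, not an explicit step in the original):
[root@ceph4 ceph] # cp ceph.client.rbd.keyring /etc/ceph/ceph.client.rbd.keyring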
[root@ceph4 ceph] # rbd create rbd1 --size 1024 --name client.rbd
[root@ceph4 ceph] # rbd ls -p rbd --name client.rbd
rbd1
[root@ceph4 ceph] # rbd --image rbd1 info --name client.rbd
rbd image 'rbd1':
        size 1 GiB in 256 objects
        order 22 (4 MiB objects)
        id: 85d36b8b4567
        block_name_prefix: rbd_data.85d36b8b4567
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Sun Nov 17 04:33:17 2019
The placement group (PG) count controls how objects are distributed across OSDs. One disk is one OSD, and with three sdb disks there are three OSDs here; the common guideline for fewer than 5 OSDs is pg_num = 128, which is why 128 was used above (a rough calculation follows).
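A quick check of that choice with the common (OSDs x 100) / replicas rule of thumb, rounded up to a power of two (a sketch, not a command from the original article):
[root@ceph4 ceph] # echo $(( 3 * 100 / 3 ))   # 100; round up to the next power of two: 128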
Disable the image features that the CentOS 7 kernel RBD client does not support, then map the image:
[root@ceph4 ceph] # rbd feature disable rbd1 exclusive-lock object-map deep-flatten fast-diff --name client.rbd
[root@ceph4 ceph] # rbd map --image rbd1 --name client.rbd
/dev/rbd0
[root@ceph4 ceph] # rbd showmapped --name client.rbd
id pool image snap device
0  rbd  rbd1  -    /dev/rbd0
[root@ceph4 ceph] # mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=256    agcount=8, agsize=32752 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=262016, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=768, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@ceph4 ceph] # mount /dev/rbd0 /mnt/
[root@ceph4 ceph] # df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 467M     0  467M   0% /dev
tmpfs                    479M     0  479M   0% /dev/shm
tmpfs                    479M   13M  466M   3% /run
tmpfs                    479M     0  479M   0% /sys/fs/cgroup
/dev/mapper/centos-root   50G  1.9G   49G   4% /
/dev/mapper/centos-home   28G   33M   28G   1% /home
/dev/sda1                497M  139M  359M  28% /boot
tmpfs                    479M   24K  479M   1% /var/lib/ceph/osd/ceph-2
tmpfs                     96M     0   96M   0% /run/user/0
/dev/rbd0               1021M   33M  989M   4% /mnt
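The mapping and mount do not survive a reboot. One way to make them persistent, assuming the rbdmap service shipped with the Ceph packages and the keyring path mentioned earlier (not covered in the original article):
# echo "rbd/rbd1 id=rbd,keyring=/etc/ceph/ceph.client.rbd.keyring" >> /etc/ceph/rbdmap
# echo "/dev/rbd/rbd/rbd1 /mnt xfs defaults,noatime,_netdev 0 0" >> /etc/fstab
# systemctl enable rbdmap.service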
Delete Storage Pool
[root@ceph4 ceph] # umount /dev/rbd0
[root@ceph4 ceph] # rbd unmap /dev/rbd/rbd/rbd1
[root@ceph4 ceph] # ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
Pool 'rbd' removed
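If the delete is refused with an error saying pool deletion is disabled (the default on Mimic unless the monitors allow it), the flag can be injected first; an optional step not shown in the original article:
[root@ceph4 ceph] # ceph tell mon.* injectargs --mon-allow-pool-delete=true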