Distributed storage ceph


Preparation for distributed storage with Ceph. The virtual machines client50, node51, node52 and node53 are used, with client50 acting as the client:

client50: 192.168.4.50, also made the NTP server; the other hosts synchronize time from it
    echo "allow 192.168.4.0/24" >> /etc/chrony.conf
node51: 192.168.4.51, plus three 10G hard disks
node52: 192.168.4.52, plus three 10G hard disks
node53: 192.168.4.53, plus three 10G hard disks
node54: 192.168.4.54, used to build the yum source (the physical machine shares the ISO):
    mount /iso/rhcs2.0-rhosp9-20161113-x86_64.iso /var/ftp/ceph/
    /var/ftp/ceph/rhceph-2.0-rhel-7-x86_64/MON/
    /var/ftp/ceph/rhceph-2.0-rhel-7-x86_64/OSD/
    /var/ftp/ceph/rhceph-2.0-rhel-7-x86_64/Tools/

cat /etc/hosts    # write the hosts file; every host must have this configuration
    192.168.4.50 client50
    192.168.4.51 node51
    192.168.4.52 node52
    192.168.4.53 node53

node51 (the pscp/pssh tools can also be used here): passwordless connections to client50, node51, node52, node53. Generate a key pair non-interactively, then send the public key to the other hosts and to node51 itself so that ssh logs in without a password:
    ssh-keygen -f /root/.ssh/id_rsa -N ''
    for i in 51 52 53; do ssh-copy-id 192.168.4.$i; done

Distributed file system: a distributed file system (Distributed File System) is one in which the physical storage resources managed by the file system are not necessarily attached directly to the local node, but are reached over a computer network. Distributed file systems are designed around the client/server model. Commonly used distributed file systems: Lustre, Hadoop, FastDFS, Ceph, GlusterFS.

Ceph has both an official (paid) edition and an open-source edition. Ceph is a distributed file system with high scalability, high availability and high performance; it can provide object storage, block storage and file system storage, and can offer PB-level storage space (PB > TB > GB). Software Defined Storage is a major development trend in the storage industry. See http://docs.ceph.org/start/intro

Ceph components:
OSDs: the storage devices
Monitors: the cluster monitoring components
MDSs: store the file system metadata (object storage and block storage do not need this component)
    metadata: file information such as owner, size and permissions, i.e. the kind of information shown by
    drwxr-xr-x 2 root root 6 Oct 11 10:37 /root/a.sh
Client: the ceph client

Experiment: use node51 as the deployment host.

1. Install the deployment software:
    yum -y install ceph-deploy    # after installing, use ceph-deploy --help for help
Create a working directory for the deployment tool to hold its keys and configuration files:
    mkdir /root/ceph-cluster
    cd /root/ceph-cluster

2. Create the Ceph cluster.
Create the Ceph cluster configuration (all three nodes are mon):
    ceph-deploy new node51 node52 node53
Install the Ceph packages on all nodes:
    ceph-deploy install node51 node52 node53
Initialize the mon (monitor) service on all nodes (/etc/hosts name resolution must be configured on every host):
    ceph-deploy mon create-initial
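The article never shows the repo file that lets ceph-deploy install pull packages from the FTP share described above. A minimal sketch of what each node's /etc/yum.repos.d/ceph.repo might contain, assuming the share is served from node54 at ftp://192.168.4.54/ceph (substitute whatever address actually exports /var/ftp):

    cat > /etc/yum.repos.d/ceph.repo <<'EOF'
    [mon]
    name=ceph mon
    baseurl=ftp://192.168.4.54/ceph/rhceph-2.0-rhel-7-x86_64/MON
    gpgcheck=0
    [osd]
    name=ceph osd
    baseurl=ftp://192.168.4.54/ceph/rhceph-2.0-rhel-7-x86_64/OSD
    gpgcheck=0
    [tools]
    name=ceph tools
    baseurl=ftp://192.168.4.54/ceph/rhceph-2.0-rhel-7-x86_64/Tools
    gpgcheck=0
    EOF
    yum repolist    # the mon, osd and tools repos should now be listed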
3. Create OSDs. All nodes prepare their disk partitions (node51 is shown as the example; node52 and node53 are partitioned the same way):
1) Choose the partition table type:
    parted /dev/vdb mklabel gpt
2) Create a partition from the first 50% of the disk:
    parted /dev/vdb mkpart primary 1M 50%
3) Create a partition from the last 50% of the disk:
    parted /dev/vdb mkpart primary 50% 100%
4) Set the owner and group of both partitions to ceph, giving the ceph cluster administrative rights over them:
    chown ceph.ceph /dev/vdb1
    chown ceph.ceph /dev/vdb2
    echo 'chown ceph.ceph /dev/vdb*' >> /etc/rc.d/rc.local
    chmod +x /etc/rc.d/rc.local
Note: these two partitions are used as the journal disks of the storage servers.

Initialize (wipe) the disk data (administrative operations on node51 only):
    cd /root/ceph-cluster/    # the commands must be run from this directory
    ceph-deploy disk zap node51:vdc node51:vdd
    ceph-deploy disk zap node52:vdc node52:vdd
    ceph-deploy disk zap node53:vdc node53:vdd

Create the OSD storage devices (administrative operations on node51 only):
    ceph-deploy osd create node51:vdc:/dev/vdb1 node51:vdd:/dev/vdb2
    >> Host node51 is now ready for osd use.
    ceph-deploy osd create node52:vdc:/dev/vdb1 node52:vdd:/dev/vdb2
    >> Host node52 is now ready for osd use.
    ceph-deploy osd create node53:vdc:/dev/vdb1 node53:vdd:/dev/vdb2
    >> Host node53 is now ready for osd use.
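The deployment summary further down says to "view the ceph status" but never shows how. A quick health check run from any of the nodes might look like this (the comments describe what you would hope to see, not a transcript from this cluster):

    ceph -s          # overall health, mon quorum, OSD count; HEALTH_OK when all is well
    ceph osd tree    # the OSD layout; expect 6 OSDs spread across node51, node52, node53
    ceph df          # raw capacity, roughly 60GB here (3 nodes x 2 OSDs x 10GB)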

Service View

node51: services related to ceph
    ceph-create-keys@node51.service  ceph-mon@node51.service  ceph-osd@0.service  ceph-osd@1.service
    ceph-mds.target  ceph-mon.target  ceph-osd.target  ceph-radosgw.target  ceph.target
node52: services related to ceph
    ceph-create-keys@node52.service  ceph-mon@node52.service  ceph-osd@2.service  ceph-disk@dev-vdd1.service
    ceph-mds.target  ceph-mon.target  ceph-osd.target  ceph-radosgw.target  ceph.target
node53: services related to ceph
    ceph-create-keys@node53.service  ceph-mon@node53.service  ceph-osd@4.service  ceph-disk@dev-vdd1.service
    ceph-mds.target  ceph-mon.target  ceph-osd.target  ceph-radosgw.target  ceph.target

Deploying a ceph cluster, in summary: install the deployment software ceph-deploy >> create the Ceph cluster >> create the OSD storage space >> view the ceph status and verify.

Block storage
Stand-alone block devices: CD-ROMs, disks.
Distributed block storage: ceph, cinder.
Ceph block devices are also called RADOS block devices (RBD). The rbd driver is already well integrated into the Linux kernel. Rbd provides enterprise features such as snapshots and COW (Copy On Write) clones: with COW, when the source is written to, the old data is first copied into the snapshot file; whenever the file is deleted or its content grows or shrinks, the source changes and the old data is copied into the snapshot. Rbd also supports in-memory caching, which can greatly improve performance. The block storage cluster's image pool provides roughly 60GB of capacity here.

Create an image. On node51, node52, node53, view the storage pools (there is an rbd pool by default):
    ceph osd lspools
    0 rbd,
If you do not specify a storage pool, an image belongs to the rbd pool by default:
    rbd create demo-image --image-feature layering --size 10G
    # the image name is demo-image; --image-feature layering selects how the image is built; it lands in the default rbd pool
    rbd create rbd/image --image-feature layering --size 10G
    # rbd/image explicitly creates the image in the rbd pool

View an image:
    rbd info demo-image
    rbd image 'demo-image':
        size 10240 MB in 2560 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.1052238e1f29
        format: 2
        features: layering
        flags:

Delete an image (shown only in case of error; do not actually run it here):
    rbd remove rbd/image
    rbd remove demo-image

Shrink capacity (reset the size to 1G; the image must explicitly be allowed to shrink):
    rbd resize --size 1G image --allow-shrink
Expand capacity (grow to 2G):
    rbd resize --size 2G image
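A consolidated sketch of the image lifecycle commands just described, in the order you might actually run them on node51 (everything here comes from the commands above; the deletion is left commented out because both images are used later):

    ceph osd lspools                                            # confirm the default rbd pool exists
    rbd create demo-image --image-feature layering --size 10G   # image in the default pool
    rbd create rbd/image --image-feature layering --size 10G    # image explicitly in the rbd pool
    rbd info demo-image                                         # verify size, format, features
    rbd resize --size 1G image --allow-shrink                   # shrinking must be allowed explicitly
    rbd resize --size 2G image                                   # growing needs no extra flag
    # rbd remove demo-image                                     # deletion, for reference only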

Access the cluster through rbd from inside the cluster:

1. On node51 itself, map the image to a local disk:

rbd map demo-image
lsblk    # view the local disks
rbd0 251:0    0  10G  0 disk

2. Partition and format it (the partition name is /dev/rbd0p1), then mount it, just like a local disk (see the sketch after step 3).

3. Before removing the image from the local disk, unmount the mount point first, then unmap:

rbd unmap demo-image
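A minimal end-to-end sketch of steps 1 to 3 on node51. The article only says "partition, format, mount"; the mount point /mnt/rbd and the xfs filesystem here are assumptions:

    rbd map demo-image                              # the image appears as /dev/rbd0
    parted /dev/rbd0 mklabel gpt                    # partition it like any local disk
    parted /dev/rbd0 mkpart primary 1M 100%
    mkfs.xfs /dev/rbd0p1                            # xfs is an assumption; any filesystem will do
    mkdir -p /mnt/rbd && mount /dev/rbd0p1 /mnt/rbd
    umount /mnt/rbd                                 # always unmount before unmapping
    rbd unmap demo-image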

Out-of-cluster client client50: access through rbd

1. Install the ceph-common package:
    yum -y install ceph-common.x86_64
2. Copy the configuration file (it tells the client where the cluster lives):
    scp 192.168.4.51:/etc/ceph/ceph.conf /etc/ceph/
3. Copy the connection keyring (it grants permission to connect to and use the cluster):
    scp 192.168.4.51:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
4. View the cluster images:
    rbd list
    demo-image
    image
5. Map an image to a local disk:
    rbd map image
    lsblk
6. Display the local mappings:
    rbd showmapped
    id pool image snap device
    0  rbd  image -    /dev/rbd0
7. Partition and format it (the partition name is /dev/rbd0p1), then mount it, just like a local disk.
8. Undo the disk mapping and remove the image from the local disk (unmount the mount point first; the whole sequence is condensed in the sketch below):
    rbd unmap image
    rbd unmap /dev/rbd/rbd/image    # two equivalent methods

Create image snapshots. Snapshots use COW technology, so even snapshots of large amounts of data are created very quickly: with COW, when the source is written to, the old data is copied into the snapshot file. A snapshot preserves everything as it was at a certain point in time for later use. It occupies no disk space when first created; whenever the source file changes after the snapshot was taken, the old file data is written into the snapshot, and only then does the snapshot begin to occupy disk space, its size being the sum of the changed files.
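Condensing client steps 1 to 8 into one runnable sequence on client50, before moving on to snapshots (the /mnt/rbd mount point and xfs are again assumptions):

    yum -y install ceph-common.x86_64
    scp 192.168.4.51:/etc/ceph/ceph.conf /etc/ceph/                  # where the cluster lives
    scp 192.168.4.51:/etc/ceph/ceph.client.admin.keyring /etc/ceph/  # permission to use it
    rbd list                                                         # expect demo-image and image
    rbd map image && rbd showmapped                                  # image -> /dev/rbd0
    parted /dev/rbd0 mklabel gpt && parted /dev/rbd0 mkpart primary 1M 100%
    mkfs.xfs /dev/rbd0p1 && mkdir -p /mnt/rbd && mount /dev/rbd0p1 /mnt/rbd
    umount /mnt/rbd && rbd unmap image                               # tear-down: unmount, then unmap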

Node51:

View the existing images

rbd list

View the image's snapshots:

rbd snap ls image    # nothing is displayed yet

Create a snapshot (snap)

rbd snap create image --snap image-snap1    # syntax: rbd snap create <image name> --snap <snapshot name>

View the image's snapshots again

rbd snap ls image
SNAPID NAME        SIZE
     4 image-snap1 2048 MB

Use snapshots to restore data

rbd snap rollback image --snap image-snap1
The client unmounts the image first and mounts it again after the rollback to see the recovered data.
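A sketch of the complete rollback sequence across the two hosts (the /mnt/rbd mount point is an assumption):

    # on client50: stop using the image
    umount /mnt/rbd
    # on node51 (or any host with the admin keyring): roll the image back to the snapshot
    rbd snap rollback image --snap image-snap1
    # on client50: mount again; the data is back at the snapshot state
    mount /dev/rbd0p1 /mnt/rbd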

Delete snapshot

rbd snap rm image --snap image-snap1

Snapshot Clone

If you want to restore a snapshot into a new image, you can clone it. Before cloning, the snapshot must be protected; a protected snapshot cannot be deleted until the protection is removed.

Snapshot protection: rbd snap protect image --snap image-snap1

Snapshot Clone

rbd clone image --snap image-snap1 image-clone --image-feature layering

View the clone

rbd info image-clone
rbd image 'image-clone':
    size 2048 MB in 512 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.108a2ae8944a
    format: 2
    features: layering
    flags:
    parent: rbd/image@image-snap1
    overlap: 2048 MB

Recover data by using the cloned image

rbd flatten image-clone
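Putting the clone steps together in one sequence; every command comes from the article, and the second rbd info call simply confirms the effect of the flatten:

    rbd snap create image --snap image-snap1      # take the snapshot
    rbd snap protect image --snap image-snap1     # a clone needs a protected parent snapshot
    rbd clone image --snap image-snap1 image-clone --image-feature layering
    rbd info image-clone                          # shows parent: rbd/image@image-snap1
    rbd flatten image-clone                       # copy the parent data in; the clone becomes independent
    rbd info image-clone                          # the parent line is gone after flattening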

Unprotect:

rbd snap unprotect image --snap image-snap1

The client undoes the disk mapping:

1. Unmount the mount point

2. View the rbd disk mappings

rbd showmapped

id pool image snap device
0  rbd  image -    /dev/rbd0

3. Undo disk mapping

rbd unmap image
rbd unmap /dev/rbd/rbd/image    # two equivalent methods
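The three tear-down steps as one sequence, with a check that the mapping is really gone (the mount point is again assumed to be /mnt/rbd):

    umount /mnt/rbd
    rbd showmapped                  # 0  rbd  image  -  /dev/rbd0
    rbd unmap /dev/rbd/rbd/image    # or simply: rbd unmap image
    rbd showmapped                  # prints nothing once the mapping is removed
    lsblk | grep rbd                # rbd0 should no longer appear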
