CEPH distributed file system
CEPH is a unified, distributed file system designed for excellent performance, reliability and scalability.
CEPH can easily scale to several petabytes of capacity and supports multiple workloads with high performance and high reliability.
CEPH has four components:
Clients: used to access the file system
CMDS (metadata server): used to cache and synchronize the file system metadata
COSD (object storage daemons): used to make up the storage pools
CMON (cluster monitors): used to monitor the file system
CEPH is a unified storage system that supports three interfaces (a command-line taste of each follows this list):
Object: has a native API and is also compatible with the Swift and S3 APIs.
Block: supports thin provisioning, snapshots, and clones.
File: a POSIX API that supports snapshots.
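As a rough illustration of the three interfaces, the commands below use the stock client tools; the pool name test-pool, the image name disk1, and the mount point /mnt are placeholder examples, and a running cluster such as the one deployed later in this article is assumed.
# rados put hello.txt ./hello.txt --pool=test-pool # object interface: store a local file as an object
# rbd create disk1 --size 1024 --pool test-pool # block interface: create a 1 GB thin-provisioned image
# mount -t ceph 1.1.1.20:6789:/ /mnt -o name=admin,secret=<key> # file interface: mount CephFS with the kernel client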
Characteristics of CEPH:
High scalability: runs on ordinary x86 servers, scales from about 10 to 1000 servers and from 1TB to PB-level capacity.
High reliability: no single point of failure, multiple data copies, automatic management, automatic repair.
High performance: balanced data distribution and high degree of parallelism.
CEPH distributed file system architecture:
PG (placement group): can be understood as a group that holds some number of objects (files); placement groups are created inside a storage pool (see the mapping example after these two definitions).
Storage pool (POOL): consists of multiple OSDs.
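To see the pool, placement group and OSDs that a concrete object lands on, the ceph osd map command can be run once the cluster is up; pool1 and myobject below are placeholder names.
# ceph osd map pool1 myobject # shows the PG the object hashes to and the OSDs that store it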
Terminology:
OSD: object storage daemon, stores the objects
MON: monitor
NODE: a node (server) in the cluster
MDS: metadata server
CEPH installation and deployment (YUM version): network access is required
V environment requirements: turn off the firewall, turn off SELinux, synchronize time, modify the hostname, modify the HOSTS file, set up SSH, and create a service user.
# systemctl stop firewalld # turn off the firewall
# setenforce 0 # temporarily disable SELinux
# vi /etc/hosts # modify the HOSTS file
Add content:
1.1.1.19 admin-node
1.1.1.20 node1
1.1.1.21 node2
# vi /etc/sysconfig/network # modify the hostname
Add content:
NETWORKING=yes
HOSTNAME=admin-node
# hostname admin-node # temporarily modify the hostname
# yum -y install ntp ntpdate # install NTP for time synchronization
# vi /etc/ntp.conf # configure this host as an NTP server using its own local clock
Add content:
server 127.127.1.0
fudge 127.127.1.0 stratum 8
# systemctl restart ntpd # restart the NTP server
# ntpdate 1.1.1.19 # on the other nodes, synchronize time from the NTP server
# timedatectl # View time
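Optionally, confirm that the nodes are actually syncing against 1.1.1.19 before moving on; this check is not part of the original procedure but uses the standard NTP query tool.
# ntpq -p # list the NTP peers; 1.1.1.19 should appear with a small offset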
# useradd ceph
# echo 123 | passwd ceph --stdin # create the service user and set its password
# ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa # create a key pair
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node1 # upload the public key to node1
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node2 # upload the public key to node2
# cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys # add the public key to the local authorized_keys
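A quick way to confirm that password-less SSH works before running ceph-deploy (hostnames as defined in /etc/hosts above):
# ssh node1 hostname # should print node1 without asking for a password
# ssh node2 hostname # should print node2 without asking for a password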
V CEPH cluster installation requirements: the YUM repositories, the Deploy tool and the EPEL source (network required) must be set up on each server
# mv CentOS-Base.repo /etc/yum.repos.d/ # restore the backed-up CentOS 7 default YUM repository
# mv ceph.repo /etc/yum.repos.d/ # place the CEPH REPO file in the YUM repository directory
# yum makecache # generate the cache
# yum -y install epel-release # install the EPEL source
# yum -y install ceph-deploy # install the ceph-deploy tool
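For reference, the ceph.repo file placed above typically looks roughly like the sketch below; the baseurl is an assumption (the Jewel release on the official CentOS 7 mirror) and should be adjusted to the release and mirror you actually use.
[ceph]
name=Ceph packages
baseurl=http://download.ceph.com/rpm-jewel/el7/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc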
# ceph-deploy new node1 # configure a Mon node
Note: three files are generated in the current directory: ceph.conf, a monitor keyring, and a log file.
# vi ceph.conf
osd_pool_default_size = 2 # set the default number of replicas for OSD storage pools
Note: if there are multiple network cards, you can also add the parameter public network = {ip-address}/{netmask}
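Put together, the [global] section of ceph.conf after these edits might look something like the sketch below; the fsid, mon_initial_members and mon_host values are generated by ceph-deploy new, and the 1.1.1.0/24 subnet is simply taken from the addresses used in this article.
[global]
fsid = <generated by ceph-deploy new>
mon_initial_members = node1
mon_host = 1.1.1.20
osd_pool_default_size = 2
public network = 1.1.1.0/24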
# ceph-deploy install admin-node node1 node2 # install CEPH on three nodes
# ceph-deploy purgedata admin-node node1 node2
# ceph-deploy forgetkeys
Note: if the installation runs into problems, these two commands clear the configuration and keys so you can redeploy from scratch.
V initialize the Mon monitor and collect all keys
# ceph-deploy mon create-initial
V create a hard disk (OSD) mount directory
# ssh node1
# mkdir / var/local/osd0
# ssh node2
# mkdir / var/local/osd1
V add OSD and manage activation OSD
# ceph-deploy osd prepare node1:/var/local/osd0 node2:/var/local/osd1
# ceph-deploy osd activate node1:/var/local/osd0 node2:/var/local/osd1
V set up administrative access (push the configuration and admin keyring to the nodes)
# ceph-deploy admin admin-node node1 node2
# chmod +r /etc/ceph/ceph.client.admin.keyring # make sure the keyring file has read permission
V check the health of the cluster (deployment completed)
# ceph health
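Besides ceph health, a couple of read-only commands (both also listed in the reference table below) give a fuller picture of the freshly deployed cluster:
# ceph -s # overall status: monitors, OSD count, placement group states
# ceph osd tree # the CRUSH tree; both OSDs should show as up and in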
V start all CEPH daemons through the SYSVINIT mechanism
# /etc/init.d/ceph -a start # start the CEPH daemons on all nodes
Note: SYSVINIT here simply means that the daemons are run as system services through the System V init scripts.
V create a CephFS file system:
# ceph osd pool create pool1 100 100 # a CephFS file system requires two storage pools
# ceph osd pool create pool2 100 100
# ceph fs new ren pool1 pool2 # combine the two storage pools into a file system named ren
# ceph mds stat
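To confirm that the new file system and its pools were registered:
# ceph fs ls # lists the file system together with its metadata and data pools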
V Mount CephFS file system
# ceph auth list # View the list of authentication passwords
# mount -t ceph 1.1.1.20:6789:/ /opt -o name=admin,secret="AQBs2U9Y54/vKRAAgPZxOf6lmbl522mkiJvXRA==" # mount the CephFS file system with password authentication
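If you prefer not to put the key on the command line (it ends up in the shell history), two common alternatives are a secret file for the kernel client or the FUSE client; the path /etc/ceph/admin.secret below is just an example location.
# echo "AQBs2U9Y54/vKRAAgPZxOf6lmbl522mkiJvXRA==" > /etc/ceph/admin.secret
# mount -t ceph 1.1.1.20:6789:/ /opt -o name=admin,secretfile=/etc/ceph/admin.secret # kernel client, key read from a file
# ceph-fuse -m 1.1.1.20:6789 /opt # FUSE client (requires the ceph-fuse package)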
Installation errors:
Fault 1: no EPEL source. Solution: install the EPEL source on each node: # yum -y install epel-release
Fault 2: the CEPH installation packages cannot be found. Solution: add the YUM source path of the CEPH packages to the YUM repository.
Fault 3: the version-check command finds nothing. Solution: manually install CEPH on each node with # yum -y install ceph ceph-radosgw, or check for SSH problems.
Summary of CEPH distributed file system operations
Command | Description
ceph health | View the cluster health status
ceph status | View the cluster status
ceph osd stat | View the OSD status
ceph osd dump | View OSD details
ceph osd tree | View the OSD tree
ceph mon dump | View MON details
ceph quorum_status | View the quorum status
ceph mds stat | View the MDS status
ceph mds dump | View MDS details
ceph osd lspools | View the existing storage pools
ceph osd pool get pool1 pg_num | View the number of placement groups (pg_num) of a pool
ceph osd pool get pool1 pgp_num | View the number of placement groups for placement (pgp_num) of a pool
ceph osd pool create pool1 100 100 | Create a storage pool
ceph osd pool delete pool1 pool1 --yes-i-really-really-mean-it | Delete a storage pool
ceph osd pool rename pool1 pool2 | Rename a storage pool
rados df | View storage pool statistics
ceph osd pool mksnap pool1 poolkz1 | Take a storage pool snapshot
ceph osd pool rmsnap pool1 poolkz1 | Delete a storage pool snapshot
ceph osd pool set pool1 size 2 | Set the number of object replicas
ceph fs new ren pool1 pool2 | Create a CephFS file system
Note: pg_num is the number of placement groups in a pool; pgp_num is the number of placement groups used for data placement.
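If a pool was created with too few placement groups, both values can be raised later (they can only be increased, never decreased); 128 below is just an example target.
# ceph osd pool set pool1 pg_num 128 # increase the number of placement groups
# ceph osd pool set pool1 pgp_num 128 # then raise pgp_num to match so the data actually rebalances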
Add and remove OSD nodes (an OSD stores the objects)
ceph-deploy osd prepare node1:/var/local/osd2 | Prepare a new OSD
ceph-deploy osd activate node1:/var/local/osd2 | Activate the OSD
ceph -w | Watch the cluster and OSD status
ceph osd tree | View the OSD tree
ceph osd out osd.0 | Take the OSD to be removed out of the cluster
/etc/init.d/ceph stop osd.0 | Stop the OSD process that will be removed
ceph osd crush remove osd.0 | Remove the OSD from the CRUSH map
ceph osd rm 0 | Remove the OSD
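One step the table above omits, which the upstream removal procedure also includes, is deleting the removed OSD's authentication key; otherwise ceph auth list keeps showing it.
# ceph auth del osd.0 # remove the OSD's cephx key after removing it from the CRUSH map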
Add a metadata server (MDS)
ceph-deploy mds create node1 | Add an MDS
Note: at least one metadata server is required to use CephFS.
Add and remove monitors (MON)
ceph-deploy mon add node2 | Add a MON
ceph quorum_status --format json-pretty | Check the quorum status
service ceph -a stop mon.0 | Stop the monitor process
ceph mon remove mon.0 | Remove the monitor
ssh node1; service ceph stop mon || stop ceph-mon-all | Connect to the monitor host and stop all monitor-related processes
Add object Gateway (RGW)
ceph-deploy rgw create node1 | Add an RGW
Note: RGW listens on port 7480 by default; using the object gateway is a topic for further study of the CEPH distributed file system.
Use of CEPH object Gateway: http://docs.ceph.org.cn/install/install-ceph-gateway/
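As a quick smoke test that the gateway is up (assuming the default port and no firewall in the way), an unauthenticated request to the node should return a short XML response listing no buckets.
# curl http://node1:7480 # expect an XML ListAllMyBucketsResult reply from the gateway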