This article explains how to build a Ceph cluster. The content is simple and clear and easy to follow; work through the steps below in order.
1. Add the epel-release repository
yum install --nogpgcheck -y epel-release
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
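To confirm the repository was added before moving on, a quick check like the following can be run (assuming a standard CentOS 7 yum setup):
yum repolist enabled | grep -i epel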
2. Add the Ceph repository
vi /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
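After saving the file, it helps to refresh the yum cache and make sure the Ceph packages are visible; something along these lines should work, assuming the Aliyun mirror is reachable:
yum clean all
yum makecache
yum list ceph-deploy # the package should now be available from the new repository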
3. Prepare to install Ceph
Update the installed packages: yum update -y
Install ceph-deploy: yum install ceph-deploy -y
Install the NTP service and related tools: yum install ntp ntpdate ntp-doc openssh-server yum-plugin-priorities -y
Edit /etc/hosts and add an IP-to-hostname mapping, for example: 192.168.1.111 node1
Create a directory to hold the files ceph-deploy generates and change into it: mkdir my-cluster; cd my-cluster
Create a new cluster with ceph-deploy: ceph-deploy new node1 (the last argument is the hostname)
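ceph-deploy drives the target hosts over SSH, so password-less root SSH to every node and a running NTP daemon are usually needed first. A minimal sketch, using the example hostname node1 from above:
systemctl enable ntpd
systemctl start ntpd
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa # skip if a key already exists
ssh-copy-id root@node1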
Edit the ceph.conf configuration file and add the following:
osd pool default size = 3 # keep 3 replicas
public_network = 192.168.1.0/24 # public network
cluster_network = 192.168.1.0/24 # cluster network
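For reference, after ceph-deploy new the [global] section of ceph.conf already contains the fsid and monitor settings it generated, so with the additions above it might look roughly like this (the fsid and addresses are placeholders for whatever your run produced):
[global]
fsid = f453a207-a05c-475b-971d-91ff6c1f6f48
mon_initial_members = node1
mon_host = 192.168.1.111
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 3
public_network = 192.168.1.0/24
cluster_network = 192.168.1.0/24
# osd_crush_chooseleaf_type = 0 # optional: on a single-node cluster this makes the initial CRUSH map place replicas per OSD instead of per host, avoiding the manual CRUSH edit in the last step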
4. Use ceph-deploy to download and install the Ceph packages: ceph-deploy install node1
5. Create three partitions of equal size, each larger than 10 GB, for example with fdisk /dev/sdb
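fdisk is interactive; for a scriptable alternative, parted commands like these would create three roughly equal partitions (this assumes /dev/sdb is an empty disk that can be relabeled):
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 0% 33%
parted -s /dev/sdb mkpart primary 33% 66%
parted -s /dev/sdb mkpart primary 66% 100%
lsblk /dev/sdb # verify that sdb1, sdb2 and sdb3 now exist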
ceph-deploy mon create-initial
ceph-deploy admin node1
chmod +r /etc/ceph/ceph.client.admin.keyring
ceph-disk prepare --cluster node1 --cluster-uuid f453a207-a05c-475b-971d-91ff6c1f6f48 --fs-type xfs /dev/sdb1
Run the same prepare command for the remaining two partitions, changing only the /dev/sdb* device (a loop sketch follows the activate command below).
The UUID above can be read from the output of ceph -s: it is the string after "cluster" on the first line, and it can also be set in the configuration file.
ceph-disk activate /dev/sdb1
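To handle the remaining two partitions in one go, a small loop along these lines could be used (the UUID and device names are just the example values from above):
for part in /dev/sdb2 /dev/sdb3; do
    ceph-disk prepare --cluster node1 --cluster-uuid f453a207-a05c-475b-971d-91ff6c1f6f48 --fs-type xfs $part
    ceph-disk activate $part
done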
ceph osd getcrushmap -o a.map
crushtool -d a.map -o a
vi a
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type osd # the default is host; on a single node change it to osd so all three replicas can be placed
        step emit
}
crushtool -c a -o b.map
ceph osd setcrushmap -i b.map
ceph osd tree
ceph -s
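As a final sanity check, ceph -s should report HEALTH_OK once all three OSDs are up and in; writing a test object is a quick way to confirm the cluster actually stores data (the pool name and PG count here are arbitrary choices):
ceph osd pool create test 64
rados -p test put hello /etc/hosts
rados -p test ls # should list the object "hello"
ceph df # shows usage for the test pool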
Thank you for reading. That concludes "the method of building a Ceph cluster". After working through this article you should have a deeper understanding of how to build a Ceph cluster; the specific steps still need to be verified in practice. More related articles will follow, so stay tuned.