How to Install and Configure Ceph
This article explains how to install and configure Ceph with ceph-deploy, from node preparation through cluster deployment, teardown, and upgrade.
1. References
http://docs.ceph.com/docs/master/
http://docs.ceph.org.cn/
https://blog.csdn.net/wylfengyujiancheng/article/details/78461801
http://www.cnblogs.com/luohaixian/p/8087591.html
https://www.jianshu.com/p/c22ff79c4452
https://blog.csdn.net/dengxiafubi/article/details/72957402
https://q.cnblogs.com/q/75797
https://blog.csdn.net/reblue520/article/details/52039353
http://www.d-kai.me/ceph%E7%A7%91%E6%99%AE/
https://blog.csdn.net/signmem/article/details/78602374
http://www.cnblogs.com/royaljames/p/9807532.html
https://cloud.tencent.com/developer/article/1177975
http://blog.51niux.com/?id=161
2. Add the Ceph yum repository
sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
Edit the Ceph repository file /etc/yum.repos.d/ceph.repo:
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
yum clean all
yum install -y ceph-deploy
3. Install the NTP service
Ceph requires synchronized clocks across all nodes, so install an NTP service on every node before proceeding; a short sketch follows.
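A minimal sketch for CentOS 7, assuming chrony as the NTP implementation (ntpd works just as well):
# install and start chrony on every node #
sudo yum install -y chrony
sudo systemctl enable chronyd
sudo systemctl start chronyd
# verify that time sources are reachable #
chronyc sources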
4. Modify the /etc/hosts file
Edit /etc/hosts to give each server an alias:
10.0.67.15 node1
10.0.67.19 node2
10.0.67.21 node3
You can generally use the machine's hostname as the alias, but do not use the FQDN (the fully qualified domain name, e.g. node1.example.com). For a machine whose FQDN is node1.example.com, hostname -s returns node1. It is recommended that the hostname contain no domain part, e.g. set it to plain node1, as in the sketch below.
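A sketch of setting the short hostname on each node (hostnamectl is standard on CentOS 7; node1 matches the /etc/hosts entry above):
# on 10.0.67.15, set a hostname without a domain part #
sudo hostnamectl set-hostname node1
# both should now print node1 #
hostname
hostname -s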
5. Create a user
# install the ssh service #
yum install openssh-server
# create a new user on each Ceph node #
useradd -d /home/cephuser -m cephuser
# set a password of your own and remember it; you will use it often #
passwd cephuser
# ensure the newly created user on each Ceph node has sudo permissions #
echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
sudo chmod 0440 /etc/sudoers.d/cephuser
Modify the ~/.ssh/config file on the ceph-deploy management node so that ceph-deploy can log in to each Ceph node as the user you created, without specifying --username {username} on every ceph-deploy run. This also simplifies ssh and scp usage. Replace {username} with the user you created.
Host node1
    Hostname node1
    User cephuser
Host node2
    Hostname node2
    User cephuser
Host node3
    Hostname node3
    User cephuser
# after all configuration is complete, you can also install ceph-deploy on another node and copy /root/.ssh and /data/my-cluster to that standby machine, so that losing the management machine does not block administration #
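The steps above assume passwordless SSH from the management node; the article does not show key distribution, so here is a minimal sketch (run as the management user, e.g. cephuser, on the ceph-deploy node):
# generate a key pair with an empty passphrase #
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# push the public key to every node; enter the cephuser password once per node #
ssh-copy-id cephuser@node1
ssh-copy-id cephuser@node2
ssh-copy-id cephuser@node3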
6. Turn off the firewall and SELinux (see the sketch below)
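The article gives no commands for this step; a sketch of the usual CentOS 7 approach (in production, opening Ceph's ports, 6789/tcp for mon and 6800-7300/tcp for osd/mgr, is preferable to disabling the firewall outright):
# stop and disable firewalld on each node #
sudo systemctl stop firewalld
sudo systemctl disable firewalld
# set SELinux to permissive now and across reboots #
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config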
7. Prepare the ceph-deploy working directory
# create a configuration directory; all subsequent ceph-deploy operations are performed in it #
mkdir my-cluster
cd my-cluster
ceph-deploy writes its output files to the current directory, so always run the ceph-deploy commands below from inside the my-cluster directory.
8. Install the Ceph cluster
# create a cluster with the default monitor node (mon) and specify the public network, i.e. the subnet of the Ceph servers #
ceph-deploy new node1 --public-network 10.0.67.0/24
# install ceph on all nodes #
ceph-deploy install node1 node2 node3
# initialize the monitor node #
ceph-deploy mon create-initial
# create a mgr on the default mon node #
ceph-deploy mgr create node1
# create the osds, mapping each osd to a disk or partition #
ceph-deploy osd create --data /dev/vda4 node1
ceph-deploy osd create --data /dev/vda4 node2
ceph-deploy osd create --data /dev/vda4 node3
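Before creating the OSDs it can help to inspect and wipe the target devices; a sketch assuming ceph-deploy 2.x syntax:
# list the disks ceph-deploy sees on a node #
ceph-deploy disk list node1
# destructive: wipe a previously used disk or partition before reuse #
ceph-deploy disk zap node1 /dev/vda4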
# if you want to use cephfs, you need at least one MDS service instance #
ceph-deploy mds create node1
ceph-deploy mds create node2
ceph-deploy mds create node3
# view mds status #
ceph mds stat
# add more monitor nodes #
ceph-deploy mon add node2
ceph-deploy mon add node3
# add mgr instances matching the mon nodes #
ceph-deploy mgr create node2
ceph-deploy mgr create node3
# if anything goes wrong, check public_network in ceph.conf; that is usually the problem #
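For reference, the ceph.conf generated by the ceph-deploy new command above should contain lines roughly like these (the fsid placeholder stands in for the uuid ceph-deploy generates):
[global]
fsid = <generated-uuid>
mon_initial_members = node1
mon_host = 10.0.67.15
public_network = 10.0.67.0/24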
# push the configuration file #
# ceph-deploy --overwrite-conf config push node1 node2 node3
# ceph-deploy admin node1 node2 node3
# cluster status #
ceph -s
# view the osd tree #
ceph osd tree
# view osd disk usage #
ceph osd df
You can view the various cluster maps with commands of the form: ceph osd (mon/pg) dump.
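For example, spelled out (these are standard ceph subcommands):
# dump the OSD map #
ceph osd dump
# dump the monitor map #
ceph mon dump
# dump placement-group statistics (verbose) #
ceph pg dump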
# View the status of cluster mon nodes #
ceph quorum_status --format json-pretty
9. Uninstall the cluster
If you run into trouble and want to start over, you can clear the configuration with the following commands:
ceph-deploy purge node1 node2 node3
ceph-deploy purgedata node1 node2 node3
ceph-deploy forgetkeys
rm -rf ceph*
If you reinstall afterwards, recreate ceph.repo first.
For OSD disks that were already provisioned, Ceph created device-mapper (LVM) mappings, so /dev/vda4 cannot be reused directly: the mapping must be removed with dmsetup remove before the disk can be unmapped and unmounted.
Reference: https://blog.csdn.net/reblue520/article/details/52039353
# clear the GPT information of the disk #
sgdisk --zap-all /dev/vda4
ll /dev/mapper/
dmsetup remove /dev/mapper/ceph--xxxxxx
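If several ceph mappings were left behind, a small loop removes them all; a sketch, and destructive, so check the names with ll /dev/mapper/ first:
# remove every leftover ceph device-mapper mapping #
for dm in /dev/mapper/ceph--*; do
    [ -e "$dm" ] && sudo dmsetup remove "$dm"
done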
10. Upgrade
# upgrade ceph-deploy tool #
yum install ceph-deploy python-pushy
# set noout to prevent data rebalancing during the upgrade; unset it after the upgrade completes #
# noout is a cluster-wide flag, so setting it on any one node is enough #
ceph osd set noout
# upgrade ceph version #
# ceph-deploy install --release {release-name} ceph-node1 [ceph-node2]
ceph-deploy install --release nautilus node1 node2 node3
# cancel noout configuration #
ceph osd unset noout
# restart #
Restart the servers one at a time.
Make sure the cluster is in a healthy state before each restart.
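A quick check before each reboot (assuming the admin keyring is present on the node):
# proceed only when this prints HEALTH_OK #
ceph health
# after the node is back, wait for recovery to finish before the next one #
ceph -s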
# View status #
ceph --version
ceph -s
ceph mon stat
11. Daily operations for pools, CephFS, recovery, and RBD will be covered in a later article.
Thank you for reading this article carefully; I hope this walkthrough of installing and configuring Ceph proves helpful.