2025-01-19 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This article explains a fast installation method for the distributed storage system Ceph. The content is straightforward and easy to follow; work through the steps below to learn how to bring up a small Ceph cluster quickly.
Environmental preparation
System: CentOS 7.3
File system: XFS
Number of cluster nodes: 3
There are three machines in total; every node runs both a mon and an osd. node0 acts as the management node and uses ceph-deploy to deploy itself and the other nodes.
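ceph-deploy refers to the nodes by hostname, so every machine should be able to resolve node0, node1, and node2. A minimal fragment to append to /etc/hosts on each node; the addresses are hypothetical placeholders on the 10.112.101.0/24 network used later in this article, so replace them with your real IPs:

```
10.112.101.10  node0
10.112.101.11  node1
10.112.101.12  node2
```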
Install ceph-deploy
Execute the following commands on all nodes
sudo yum install -y yum-utils && sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && sudo rm /etc/yum.repos.d/dl.fedoraproject.org*
Change the Ceph yum repository to the 163 mirror
sudo vim /etc/yum.repos.d/ceph.repo
Change ceph.repo to the following:
[ceph]
name=ceph
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority=1

[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1
Update the software library and install ceph-deploy
sudo yum update && sudo yum install ceph-deploy
Ceph node installation
All nodes install NTP
sudo yum install ntp ntpdate ntp-doc
Make sure the NTP service is started on each Ceph node and that all nodes use the same NTP server; see the NTP documentation for details.
Install SSH server on all nodes
sudo yum install openssh-server
Create a user who deploys Ceph
The ceph-deploy tool must log in to each Ceph node as a normal user who can use sudo without a password, because it must not be prompted for a password while it installs software and configuration files.
1. Create a new user named work on each Ceph node
sudo useradd -d /home/work -m work
sudo passwd work
2. Ensure that the newly created user on each Ceph node has sudo privileges
echo "work ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/work
sudo chmod 0440 /etc/sudoers.d/work
Allow password-less SSH login
Because ceph-deploy does not support password prompts, you must generate an SSH key on the management node and distribute its public key to each Ceph node. ceph-deploy also attempts to generate an SSH key pair for the initial monitors.
1. Generate an SSH key pair as the work user. When prompted with "Enter passphrase", press Enter directly so the passphrase is empty.
ssh-keygen
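If you prefer to script this step, ssh-keygen can also run non-interactively. A minimal sketch, writing the pair into a temporary directory so it is safe to try; for the real deployment, run plain ssh-keygen as the work user so the key lands in ~/.ssh:

```shell
# -N '' sets an empty passphrase (same effect as pressing Enter at the
# prompt); -q suppresses the banner; -f picks the output path.
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -N '' -f "$tmpdir/id_rsa" -q
ls "$tmpdir"    # lists id_rsa and id_rsa.pub
```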
2. Copy the public key to each Ceph node; here the public key generated on node0 is copied to node1 and node2
ssh-copy-id work@node1
ssh-copy-id work@node2
Open the required port
By default, Ceph Monitors communicate on port 6789, and OSDs use ports in the 6800-7300 range. Ceph OSDs can use multiple network connections for replication, heartbeats, and communication with clients, monitors, and other OSDs.
Turn off the firewall and selinux
systemctl stop firewalld && systemctl disable firewalld && setenforce 0 && sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
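The sed expression above rewrites SELINUX=enforcing to SELINUX=disabled in /etc/selinux/config. A quick way to see exactly what it does, tried on a scratch copy rather than the real config file:

```shell
# Make a scratch file that mimics the relevant line of
# /etc/selinux/config, then apply the same sed expression to it.
printf 'SELINUX=enforcing\n' > /tmp/selinux-check
sed -i '/SELINUX/s/enforcing/disabled/' /tmp/selinux-check
cat /tmp/selinux-check    # SELINUX=disabled
```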
Priority / Preference
sudo yum install yum-plugin-priorities
Create a working folder as the work user
mkdir my-cluster && cd my-cluster
Create the cluster
Create a monitor cluster
ceph-deploy new node0 node1 node2
Here node0, node1, and node2 are all used as monitor cluster nodes. After the command is executed, a Ceph configuration file, a monitor keyring, and a log file are generated in the current directory.
In the Ceph configuration file, osd pool default size defaults to 3 replicas. If you have multiple network cards, you can add a public network line under the [global] section of the Ceph configuration file: public network = {ip-address}/{netmask}. An example follows:
public network = 10.112.101.0/24
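For context, this is where the line sits in ceph.conf: under [global], alongside the options that ceph-deploy new already generated. A sketch using the example subnet above; adjust the subnet (and replica count, if desired) to your own environment:

```
[global]
# ...options generated by ceph-deploy new...
osd pool default size = 3
public network = 10.112.101.0/24
```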
Install Ceph
ceph-deploy install node0 node1 node2
Configure the initial monitor (s) and collect all keys
ceph-deploy mon create-initial
Add OSD
Add 3 OSDs, creating a data directory on each node
sudo mkdir /var/local/osd0
ssh node1
sudo mkdir /var/local/osd1
exit
ssh node2
sudo mkdir /var/local/osd2
exit
Modify the directory permissions (run each command on the node that owns the directory)
chmod 777 /var/local/osd0/ /var/local/osd0/*
chmod 777 /var/local/osd1/ /var/local/osd1/*
chmod 777 /var/local/osd2/ /var/local/osd2/*
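What mode 777 means (read, write, and execute for owner, group, and others) can be checked on a scratch directory. This sketch only illustrates the permission change; the real commands above must still be run on each node:

```shell
# Create a throwaway directory, open it up, and read back its mode.
d=$(mktemp -d)
chmod 777 "$d"
stat -c '%a' "$d"    # prints 777
```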
Execute ceph-deploy from the management node to prepare the OSD
ceph-deploy osd prepare node0:/var/local/osd0 node1:/var/local/osd1 node2:/var/local/osd2
Activate OSD
ceph-deploy osd activate node0:/var/local/osd0 node1:/var/local/osd1 node2:/var/local/osd2
Use ceph-deploy to copy the configuration file and admin key to the management node and the Ceph nodes, so that you do not have to specify the monitor address and ceph.client.admin.keyring every time you run a Ceph command.
ceph-deploy admin node0 node1 node2
Make sure you have the correct permissions on ceph.client.admin.keyring.
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
View cluster status
ceph -s
At this point, the rapid construction of the cluster is complete!
Thank you for reading. The above covers the fast installation method for distributed storage Ceph. After studying this article you should have a deeper understanding of how to install Ceph quickly; the specifics still need to be verified in practice.