2025-01-30 Update | SLTechnology News & Howtos > Servers
Shulou (Shulou.com) 06/02 report
This article is a summary based on teacher Zang Xueyuan's Ceph basics video tutorial; thanks again to teacher Zang Xueyuan.
Preparation of basic environment
Experimental topology:
First, prepare the base environment. Because my laptop has only 4 GB of RAM, all three virtual machines use a minimal installation.
Hosts:

ceph2: 192.168.6.145
ceph3: 192.168.6.146
ceph4: 192.168.6.147

Each host has four disks: sda is the system disk, and the other three (sdb, sdc, sdd) serve as OSD disks for Ceph.

On every node, stop the firewall, disable SELinux, and set the hostname:

```shell
systemctl stop firewalld
systemctl disable firewalld
vim /etc/selinux/config         # set SELINUX=disabled
hostnamectl set-hostname ceph2  # ceph2/ceph3/ceph4 respectively; note that short hostnames are used here
```
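As a quick sanity check, the prep above can be verified on each node before moving on. This is a minimal sketch; the `|| echo` fallbacks are only there so it also runs on machines without `getenforce` or `systemctl`:

```shell
# Sketch: confirm the node prep (SELinux off, firewalld disabled).
# Fallbacks exist only so the snippet runs outside the lab nodes.
selinux_state=$(getenforce 2>/dev/null || echo Disabled)
firewalld_state=$(systemctl is-enabled firewalld 2>/dev/null || echo disabled)
echo "selinux:   ${selinux_state}"    # expect Disabled after a reboot
echo "firewalld: ${firewalld_state}"  # expect disabled
```

Remember that a change to /etc/selinux/config only takes effect after a reboot; `setenforce 0` disables SELinux for the current boot.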
Next, set up environment variables for installing the Ceph cluster.
Create an environment file on all three machines (the tutorial uses /openrc):

```shell
vim /openrc
export username="ceph-admin"    # install as the ceph-admin regular user; a variable makes later calls easier
export passwd="ceph-admin"
export node1="ceph2"            # environment variables for the hostnames
export node2="ceph3"
export node3="ceph4"
export node1_ip="192.168.6.145" # environment variables for the host IP addresses
export node2_ip="192.168.6.146"
export node3_ip="192.168.6.147"
```

Download the rpm source for Ceph:

```shell
wget -O /etc/yum.repos.d/ceph.repo https://raw.githubusercontent.com/aishangwei/ceph-demo/master/ceph-deploy/ceph.repo
```

Configure NTP:

```shell
yum install -y ntp ntpdate
ntpdate cn.ntp.org.cn
systemctl start ntpd
systemctl enable ntpd
```

Create the deployment user:

```shell
useradd ${username}                           # create the ceph-admin user for cluster deployment
echo "${passwd}" | passwd --stdin ${username}
echo "${username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-admin  # give ceph-admin passwordless sudo
chmod 0440 /etc/sudoers.d/ceph-admin
```

Add all three hosts to /etc/hosts:

```shell
vim /etc/hosts
192.168.6.145 ceph2
192.168.6.146 ceph3
192.168.6.147 ceph4
```
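Once the variables from /openrc are exported, the /etc/hosts entries can be generated from them instead of typed by hand. A sketch; the defaults below simply mirror this lab's values in case /openrc has not been sourced:

```shell
# Generate the /etc/hosts lines from the node*/node*_ip variables.
# Defaults mirror the tutorial's addresses if /openrc is not sourced.
node1="${node1:-ceph2}";  node1_ip="${node1_ip:-192.168.6.145}"
node2="${node2:-ceph3}";  node2_ip="${node2_ip:-192.168.6.146}"
node3="${node3:-ceph4}";  node3_ip="${node3_ip:-192.168.6.147}"
hosts_block=$(for i in 1 2 3; do
  eval "echo \"\${node${i}_ip} \${node${i}}\""
done)
echo "$hosts_block"
# On each node:  echo "$hosts_block" | sudo tee -a /etc/hosts
```

Generating the block once and appending it on every node avoids the typo risk of editing three hosts files by hand.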
Configure passwordless SSH login among the three hosts:
```shell
su - ceph-admin
ssh-keygen
ssh-copy-id ceph-admin@ceph2
ssh-copy-id ceph-admin@ceph3
ssh-copy-id ceph-admin@ceph4
```
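A short loop can confirm the key exchange worked on all three hosts. This is a sketch; the ssh line is commented out so the snippet is safe to run outside the lab, and `BatchMode=yes` makes ssh fail fast instead of prompting for a password when a key is missing:

```shell
# Verify passwordless login to every node (run as ceph-admin).
checked=""
for host in ceph2 ceph3 ceph4; do
  echo "checking ${host}"
  # ssh -o BatchMode=yes "ceph-admin@${host}" hostname || echo "FAILED: ${host}"
  checked="${checked} ${host}"
done
```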
Deploying the cluster with ceph-deploy
Install ceph-deploy:

```shell
sudo yum install -y ceph-deploy python-pip  # python-pip comes from the epel repo; configure epel in advance
```

Create the installation directory for the node deployment and initialize the cluster (make sure the three hosts can reach each other over the network):

```shell
mkdir my-cluster
cd my-cluster
ceph-deploy new ceph2 ceph3 ceph4
```

Afterwards three files are generated in the my-cluster directory: ceph.conf, ceph-deploy-ceph.log, and ceph.mon.keyring. Edit ceph.conf and add the network information at the end:

```shell
sudo vim ~/my-cluster/ceph.conf
public network = 192.168.6.0/24
cluster network = 192.168.6.0/24
```

Install the Ceph packages on all three nodes (these two packages need the epel source, so configure epel on the other two nodes as well):

```shell
sudo yum install -y ceph ceph-radosgw
```

Configure the initial monitor and collect all the keys, then copy the configuration to each node:

```shell
ceph-deploy mon create-initial
ceph-deploy admin ceph2 ceph3 ceph4
```

Configure the OSDs with a for loop (or write it into a script and execute that); note the disk names, which you can check with the lsblk command:

```shell
for dev in /dev/sdb /dev/sdc /dev/sdd
do
  ceph-deploy disk zap ceph2 $dev
  ceph-deploy osd create ceph2 --data $dev
  ceph-deploy disk zap ceph3 $dev
  ceph-deploy osd create ceph3 --data $dev
  ceph-deploy disk zap ceph4 $dev
  ceph-deploy osd create ceph4 --data $dev
done
```

After configuring the OSDs, deploy mgr to monitor the whole cluster:

```shell
ceph-deploy mgr create ceph2 ceph3 ceph4
```

Next, open the dashboard module to enable the browser interface. Before doing so, note that because we installed as ceph-admin, that user cannot read the files under /etc/ceph/.
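Because `disk zap` destroys whatever is on the disk, it can help to print the exact commands the OSD loop will run and review them first. A sketch that only echoes; once the list looks right, pipe the output to `sh` on the admin node:

```shell
# Dry run: print every zap/create command the OSD loop would execute.
osd_cmds=$(for dev in /dev/sdb /dev/sdc /dev/sdd; do
  for host in ceph2 ceph3 ceph4; do
    echo "ceph-deploy disk zap ${host} ${dev}"
    echo "ceph-deploy osd create ${host} --data ${dev}"
  done
done)
echo "$osd_cmds"          # review, then:  echo "$osd_cmds" | sh
```

Three hosts times three disks gives nine zap/create pairs, so the review list should contain 18 commands.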
Change the owner of everything under the /etc/ceph directory to ceph-admin, then load the dashboard module:

```shell
sudo chown -R ceph-admin /etc/ceph
ceph mgr module enable dashboard
```

After loading the module, check that port 7000 is listening normally:

```shell
ss -ntl
```

Open a browser at 192.168.6.145:7000 to check the overall status of the Ceph storage cluster.
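Whether the dashboard actually came up can be checked from the shell before opening the browser. A sketch, assuming the active mgr runs on ceph2 and the dashboard uses its pre-Mimic default port 7000:

```shell
# Check whether anything is listening on the dashboard port.
port=7000
if ss -ntl 2>/dev/null | grep -q ":${port} "; then
  dash_status="dashboard listening on port ${port}"
else
  dash_status="nothing on port ${port} - check 'ceph mgr module ls'"
fi
echo "$dash_status"
```

If nothing is listening, `ceph mgr module ls` shows whether the dashboard module is actually enabled, and `ceph -s` shows which node holds the active mgr.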
At this point, the deployment of the three-node cluster using ceph-deploy is complete.
© 2024 shulou.com SLNews company. All rights reserved.