
How to implement HA-NFS file sharing based on ceph rbd + corosync + pacemaker


This article describes how to implement HA-NFS file sharing based on ceph rbd, corosync, and pacemaker. It is shared here as a practical reference; follow along to take a look.

1. Architecture diagram

2. Environment preparation

2.1 IP planning

Two rbd-capable nfs-server hosts: 10.20.18.97 and 10.20.18.111

VIP: 10.20.18.123, on the same network segment
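Before going further, it can help to confirm that the planned VIP is not already in use on that segment. A minimal sketch, run from either node:

# ping -c 1 -W 1 10.20.18.123    # should get no reply while the cluster is down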

2.2 Software installation

# yum install pacemaker corosync cluster-glue resource-agents
# rpm -ivh crmsh-2.1-1.6.x86_64.rpm --nodeps

2.3 ssh configuration

2.4 ntp configuration

2.5 hosts configuration (both nodes)

# vi /etc/hosts
10.20.18.97  SZB-L0005908
10.20.18.111 SZB-L0005469

3. Corosync configuration (both nodes)

3.1 Configure corosync

# mv /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
# vi /etc/corosync/corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank
totem {
    version: 2
    secauth: off
    threads: 0
    interface {
        ringnumber: 0
        bindnetaddr: 10.20.18.111
        mcastaddr: 226.94.1.1
        mcastport: 5405
        ttl: 1
    }
}
logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    to_syslog: yes
    logfile: /var/log/cluster/corosync.log
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}
amf {
    mode: disabled
}
service {
    ver: 0
    name: pacemaker
}
aisexec {
    user: root
    group: root
}

bindnetaddr is the node's own IP address.

mcastaddr can be any valid multicast address.

3.2 Start corosync

# service corosync start

3.3 Parameter settings (with only two nodes, disable STONITH and ignore quorum)

# crm configure property stonith-enabled=false
# crm configure property no-quorum-policy=ignore

3.4 Check node status (both nodes should be online)

# crm_mon -1
Last updated: Fri May 22 15:56:37 2015
Last change: Fri May 22 13:09:33 2015 via crmd on SZB-L0005469
Stack: classic openais (with plugin)
Current DC: SZB-L0005908 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
0 Resources configured
Online: [ SZB-L0005469 SZB-L0005908 ]
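If a node does not show up as online, the corosync ring status can help narrow it down. A couple of checks, assuming the corosync 1.x tooling that matches the plugin-based stack above:

# corosync-cfgtool -s    # ring status; should report ring 0 active with no faults
# crm_mon -1             # cluster membership as seen by pacemaker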

4. Pacemaker resource configuration

Note: Pacemaker's job is to manage resources. In this experiment, to build rbd-backed NFS, it manages the rbd map, filesystem mount, nfs export, and VIP resources, so that the rbd-to-NFS sharing is brought up automatically.
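For orientation, what pacemaker automates on whichever node is active corresponds roughly to the following manual sequence. This is a sketch using the lab's values; the interface name eth0 is an assumption:

# rbd map share/share2                                    # map the rbd image
# mount -t xfs /dev/rbd/share/share2 /mnt/share2          # mount the filesystem
# exportfs -o rw,async,no_subtree_check,no_root_squash 10.20.0.0/24:/mnt/share2    # export it over NFS
# ip addr add 10.20.18.123/24 dev eth0                    # bring up the VIP (interface name assumed)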

4.1 Format the rbd image

(The image created in this lab is share/share2. This only needs to be done once, on one node.)

# rados mkpool share
# rbd create share/share2 --size 1024
# rbd map share/share2
# rbd showmapped
# mkfs.xfs /dev/rbd1
# rbd unmap share/share2
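The Filesystem resource configured later refers to the image by its /dev/rbd/<pool>/<image> path, which udev creates when the image is mapped. A quick sanity check, as a sketch run on one node:

# rbd map share/share2
# ls -l /dev/rbd/share/share2    # should be a symlink to the mapped device, e.g. /dev/rbd1
# rbd unmap share/share2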

4.2 Pacemaker resource configuration

4.2.1 Prepare the rbd.in script

(Copy the src/ocf/rbd.in script from the ceph source tree into the directory below; do this on all nodes.)

# mkdir /usr/lib/ocf/resource.d/ceph
# cd /usr/lib/ocf/resource.d/ceph/
# chmod +x rbd.in
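To confirm pacemaker can see the new resource agent, crmsh can list agents by provider. A minimal check, as a sketch (exact output varies with the crmsh version):

# crm ra list ocf ceph            # should list rbd.in
# crm ra info ocf:ceph:rbd.in     # shows the agent's parameters (user, pool, name, cephconf, ...)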

Note: the following configuration only needs to be done on a single node.
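This works because the pacemaker configuration (the CIB) is replicated cluster-wide: once committed on one node, the other node sees the same configuration. A quick check, as a sketch:

# crm configure show    # run on the other node; the committed resources should appear here too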

4.2.2 Configure rbd map

(You can run crm configure edit and paste the following in directly.)

# primitive p_rbd_map_1 ocf:ceph:rbd.in \
    params user=admin pool=share name=share2 cephconf="/etc/ceph/ceph.conf" \
    op monitor interval=10s timeout=20s

4.2.3 Mount the file system

# primitive p_fs_rbd_1 Filesystem \
    params directory="/mnt/share2" fstype=xfs device="/dev/rbd/share/share2" fast_stop=no \
    op monitor interval=20s timeout=40s \
    op start interval=0 timeout=60s \
    op stop interval=0 timeout=60s

4.2.4 nfs-export

primitive p_export_rbd_1 exportfs \
    params directory="/mnt/share2" clientspec="10.20.0.0/24" options="rw,async,no_subtree_check,no_root_squash" fsid=1 \
    op monitor interval=10s timeout=20s

4.2.5 VIP configuration

primitive p_vip_1 IPaddr \
    params ip=10.20.18.123 cidr_netmask=24 \
    op monitor interval=5

4.2.6 nfs service configuration

primitive p_rpcbind lsb:rpcbind \
    op monitor interval=10s timeout=30s
primitive p_nfs_server lsb:nfs \
    op monitor interval=10s timeout=30s

4.3 Resource group configuration

group g_nfs p_rpcbind p_nfs_server
group g_rbd_share_1 p_rbd_map_1 p_fs_rbd_1 p_export_rbd_1 p_vip_1
clone clo_nfs g_nfs \
    meta globally-unique="false" target-role="Started"

4.4 Resource location rules

location l_g_rbd_share_1 g_rbd_share_1 inf: SZB-L0005469

4.5 View the overall configuration (optional)

# crm configure edit
node SZB-L0005469
node SZB-L0005908
primitive p_export_rbd_1 exportfs \
    params directory="/mnt/share2" clientspec="10.20.0.0/24" options="rw,async,no_subtree_check,no_root_squash" fsid=1 \
    op monitor interval=10s timeout=20s \
    op start interval=0 timeout=40s
primitive p_fs_rbd_1 Filesystem \
    params directory="/mnt/share2" fstype=xfs device="/dev/rbd/share/share2" fast_stop=no \
    op monitor interval=20s timeout=40s \
    op start interval=0 timeout=60s \
    op stop interval=0 timeout=60s
primitive p_nfs_server lsb:nfs \
    op monitor interval=10s timeout=30s
primitive p_rbd_map_1 ocf:ceph:rbd.in \
    params user=admin pool=share name=share2 cephconf="/etc/ceph/ceph.conf" \
    op monitor interval=10s timeout=20s
primitive p_rpcbind lsb:rpcbind \
    op monitor interval=10s timeout=30s
primitive p_vip_1 IPaddr \
    params ip=10.20.18.123 cidr_netmask=24 \
    op monitor interval=5
group g_nfs p_rpcbind p_nfs_server
group g_rbd_share_1 p_rbd_map_1 p_fs_rbd_1 p_export_rbd_1 p_vip_1
clone clo_nfs g_nfs \
    meta globally-unique=false target-role=Started
location l_g_rbd_share_1 g_rbd_share_1 inf: SZB-L0005469
property cib-bootstrap-options: \
    dc-version=1.1.10-14.el6-368c726 \
    cluster-infrastructure="classic openais (with plugin)" \
    symmetric-cluster=true \
    stonith-enabled=false \
    no-quorum-policy=ignore \
    expected-quorum-votes=2
rsc_defaults rsc_defaults-options: \
    resource-stickiness=0 \
    migration-threshold=1

4.6 Restart the corosync service (both nodes)

# service corosync restart
# crm_mon -1
Last updated: Fri May 22 16:55:14 2015
Last change: Fri May 22 16:52:04 2015 via crmd on SZB-L0005469
Stack: classic openais (with plugin)
Current DC: SZB-L0005908 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
8 Resources configured
Online: [ SZB-L0005469 SZB-L0005908 ]
 Resource Group: g_rbd_share_1
     p_rbd_map_1    (ocf::ceph:rbd.in):          Started SZB-L0005469
     p_fs_rbd_1     (ocf::heartbeat:Filesystem): Started SZB-L0005469
     p_export_rbd_1 (ocf::heartbeat:exportfs):   Started SZB-L0005469
     p_vip_1        (ocf::heartbeat:IPaddr):     Started SZB-L0005469
 Clone Set: clo_nfs [g_nfs]
     Started: [ SZB-L0005469 SZB-L0005908 ]

5. Test

5.1 View the mount point (via the virtual IP)

# showmount -e 10.20.18.123
Export list for 10.20.18.123:
/mnt/share2 10.20.0.0/24
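To exercise the export end to end, a client in the 10.20.0.0/24 range can mount the share through the VIP. A sketch; the client-side mount point /mnt/nfs-test is an assumption:

# mkdir -p /mnt/nfs-test
# mount -t nfs 10.20.18.123:/mnt/share2 /mnt/nfs-test
# touch /mnt/nfs-test/testfile && ls -l /mnt/nfs-test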

5.2 Failover testing

# service corosync stop    # run on SZB-L0005469
# crm_mon -1               # run on SZB-L0005908
Last updated: Fri May 22 17:14:31 2015
Last change: Fri May 22 16:52:04 2015 via crmd on SZB-L0005469
Stack: classic openais (with plugin)
Current DC: SZB-L0005908 - partition WITHOUT quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
8 Resources configured
Online: [ SZB-L0005908 ]
OFFLINE: [ SZB-L0005469 ]
 Resource Group: g_rbd_share_1
     p_rbd_map_1    (ocf::ceph:rbd.in):          Started SZB-L0005908
     p_fs_rbd_1     (ocf::heartbeat:Filesystem): Started SZB-L0005908
     p_export_rbd_1 (ocf::heartbeat:exportfs):   Started SZB-L0005908
     p_vip_1        (ocf::heartbeat:IPaddr):     Started SZB-L0005908
 Clone Set: clo_nfs [g_nfs]
     Started: [ SZB-L0005908 ]
     Stopped: [ SZB-L0005469 ]

Thank you for reading! That concludes this article on implementing HA-NFS file sharing based on ceph rbd, corosync, and pacemaker. I hope it has been helpful; if you found it useful, feel free to share it with others.
