
How to configure high availability of NFS with keepalived + Ceph RBD

2025-02-24 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

In this article, the editor shares how to configure high availability of NFS with keepalived and Ceph RBD. Since most people are not familiar with this setup, it is shared here for your reference; hopefully you will gain something from reading it. Let's get started!

1. Create and map rbd block devices

The test machines are the osd1 and osd2 hosts in the Ceph cluster, with IPs 192.168.32.3 and 192.168.32.4 respectively.

First create an rbd block device, then map the same image on both machines:

[root@osd1 keepalived]# rbd ls test
test.img
[root@osd1 keepalived]# rbd showmapped
id pool image    snap device
0  test test.img -    /dev/rbd0
[root@osd2 keepalived]# rbd showmapped
id pool image    snap device
0  test test.img -    /dev/rbd0

Then format /dev/rbd0 and mount it, for example at /mnt.
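The create/map/format steps can be sketched as follows. This is a hedged sketch, not the article's exact commands: the pool name test and the 100 GB image size are assumptions inferred from the rbd ls and df outputs shown in this article.

```shell
# Sketch of the create/format/mount steps (pool "test" and the 100 GB
# size are assumptions inferred from outputs elsewhere in this article).
rbd create test/test.img --size 102400   # size given in MB
rbd map test/test.img                    # run on osd1 and osd2 -> /dev/rbd0
mkfs.ext4 /dev/rbd0                      # format ONCE, on one node only
mount /dev/rbd0 /mnt                     # mount only on the node serving NFS
```

Because ext4 is not a cluster filesystem, the image must never be mounted read-write on both nodes at once; the failover scripts ensure only the active node mounts it.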

2. Configure keepalived

Download keepalived 1.2.15, the latest version at the time of writing, from http://www.keepalived.org/download.html. It is straightforward to install; just follow the INSTALL file for a default installation. To make the keepalived service easier to manage, also do the following:
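For completeness, a default source install under the INSTALL file's instructions looks roughly like this (the tarball name is an assumption; download it from the page linked above first):

```shell
# Hypothetical build sketch for keepalived 1.2.15; assumes the tarball
# has already been downloaded from the keepalived download page.
tar xzf keepalived-1.2.15.tar.gz
cd keepalived-1.2.15
./configure              # default prefix installs under /usr/local
make && make install
```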

cp /usr/local/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/
cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/
mkdir /etc/keepalived
cp /usr/local/etc/keepalived/keepalived.conf /etc/keepalived/
cp /usr/local/sbin/keepalived /usr/sbin/
chkconfig --add keepalived
chkconfig keepalived on
chkconfig --list keepalived

Configure /etc/keepalived/keepalived.conf. The configuration on osd1 is as follows:

[root@osd1 keepalived]# cat keepalived.conf
global_defs {
    notification_email {
    }
    router_id osd1
}
vrrp_instance VI_1 {
    state MASTER
    interface em1
    virtual_router_id 100
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.32.5/24
    }
}

The configuration of osd2 is as follows:

[root@osd2 keepalived]# cat keepalived.conf
global_defs {
    notification_email {
        # admin@example.com
    }
    # notification_email_from admin@example.com
    # smtp_server 127.0.0.1
    # smtp_connect_timeout 30
    router_id osd2
}
vrrp_instance VI_1 {
    state BACKUP
    interface em1
    virtual_router_id 100
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    notify_master "/etc/keepalived/ChangeToMaster.sh"
    notify_backup "/etc/keepalived/ChangeToBackup.sh"
    virtual_ipaddress {
        192.168.32.5/24
    }
}
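For reference, keepalived invokes notify_master / notify_backup scripts with three arguments: the type ("GROUP" or "INSTANCE"), the instance name, and the new state. The two scripts could therefore also be combined into one dispatcher; a minimal dry-run sketch (script name hypothetical, actions echoed rather than executed) is:

```shell
#!/bin/bash
# Dry-run sketch of a combined notify script (hypothetical name notify.sh).
# keepalived invokes notify scripts as:
#   notify.sh "INSTANCE"|"GROUP" <name> "MASTER"|"BACKUP"|"FAULT"
# Actions are echoed instead of executed so the dispatch logic can be tested.
notify_action() {
    case "$3" in
        MASTER)       echo "would run: service nfs start" ;;
        BACKUP|FAULT) echo "would run: service nfs stop" ;;
        *)            echo "unknown state: $3" ;;
    esac
}

notify_action INSTANCE VI_1 MASTER
notify_action INSTANCE VI_1 BACKUP
```

In a real deployment each branch would run the service, mount, and rbd commands that the two scripts below perform.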

Two control scripts are placed on osd2 and executed when osd2's keepalived state changes. ChangeToMaster.sh:

[root@osd2 keepalived]# cat ChangeToMaster.sh
#!/bin/bash
service nfs start
ssh lm "umount -f /mnt"
ssh lm "mount -t nfs 192.168.32.5:/mnt /mnt"

ChangeToBackup.sh:

[root@osd2 keepalived]# cat ChangeToBackup.sh
#!/bin/bash
ssh lm "umount -f /mnt"
ssh osd1 "service nfs stop"
ssh osd1 "umount /mnt"
ssh osd1 "rbd unmap /dev/rbd0"
ssh osd1 "rbd map test/test.img"
ssh osd1 "mount /dev/rbd0 /mnt"
ssh osd1 "service nfs start"
ssh lm "mount -t nfs 192.168.32.5:/mnt /mnt"
service nfs stop
umount /mnt
rbd unmap /dev/rbd0
rbd map test/test.img
mount /dev/rbd0 /mnt

3. Configure nfs

On a Ceph node, map a block device with rbd map, then format it and mount it at a directory such as /mnt. Install the NFS rpm packages on this node:

yum -y install nfs-utils

Set up the export directory:

cat /etc/exports
/mnt 192.168.101.157(rw,async,no_subtree_check,no_root_squash)
/mnt 192.168.108.4(rw,async,no_subtree_check,no_root_squash)

Start and export:

service nfs start
chkconfig nfs on
exportfs -r

Check on the client:

showmount -e mon0
Export list for mon0:
/mnt 192.168.108.4,192.168.101.157

Then mount:

mount -t nfs mon0:/mnt /mnt

Note that NFS uses the UDP protocol by default here; if the network is unstable, you can switch to TCP:

mount -t nfs mon0:/mnt /mnt -o proto=tcp -o nolock

4. Test

Bring down NIC em1 on osd1 and check the result:

[root@osd1 keepalived]# ifdown em1
[root@osd1 keepalived]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1: mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether c8:1f:66:de:5e:65 brd ff:ff:ff:ff:ff:ff

Check the network card of osd2:

[root@osd2 keepalived]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1: mtu 1500 qdisc mq state UP qlen 1000
    link/ether c8:1f:66:f7:61:5d brd ff:ff:ff:ff:ff:ff
    inet 192.168.32.4/24 brd 192.168.32.255 scope global em1
       valid_lft forever preferred_lft forever
    inet 192.168.32.5/24 scope global secondary em1
       valid_lft forever preferred_lft forever
    inet6 fe80::ca1f:66ff:fef7:615d/64 scope link
       valid_lft forever preferred_lft forever

The VIP has drifted to osd2. Check the mount on the client:

[root@lm /]# df -hT
Filesystem        Type   Size  Used Avail Use% Mounted on
/dev/sda1         ext4   454G   79G  353G  19% /
tmpfs             tmpfs  1.7G  4.6M  1.7G   1% /dev/shm
192.168.32.5:/mnt nfs    100G   21G   80G  21% /mnt

Bring NIC em1 on osd1 back up:

[root@osd1 keepalived]# ifup em1
Determining if ip address 192.168.32.3 is already in use for device em1...
[root@osd1 keepalived]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1: mtu 1500 qdisc mq state UP qlen 1000
    link/ether c8:1f:66:de:5e:65 brd ff:ff:ff:ff:ff:ff
    inet 192.168.32.3/24 brd 192.168.32.255 scope global em1
       valid_lft forever preferred_lft forever
    inet 192.168.32.5/24 scope global secondary em1
       valid_lft forever preferred_lft forever
    inet6 fe80::ca1f:66ff:fede:5e65/64 scope link
       valid_lft forever preferred_lft forever

And em1 on osd2:

[root@osd2 keepalived]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1: mtu 1500 qdisc mq state UP qlen 1000
    link/ether c8:1f:66:f7:61:5d brd ff:ff:ff:ff:ff:ff
    inet 192.168.32.4/24 brd 192.168.32.255 scope global em1
       valid_lft forever preferred_lft forever
    inet6 fe80::ca1f:66ff:fef7:615d/64 scope link
       valid_lft forever preferred_lft forever

On the client now:

[root@lm /]# df -hT
Filesystem        Type   Size  Used Avail Use% Mounted on
/dev/sda1         ext4   454G   79G  353G  19% /
tmpfs             tmpfs  1.7G  4.6M  1.7G   1% /dev/shm
192.168.32.5:/mnt nfs    100G   21G   80G  21% /mnt
[root@lm /]# ls /mnt
31.txt  a.txt  b.txt  c.txt  etc  linux-3.17.4  linux-3.17.4.tar  m2.txt  test.img  test.img2

A VIP can also be configured in the same way for iSCSI served from Ceph RBD.

That is all the content of the article "How to configure high availability of NFS with keepalived + Ceph RBD". Thank you for reading! Hopefully sharing this content has been helpful to you.
