Create a Linux high availability cluster using RHCS

2025-04-04 Update From: SLTechnology News&Howtos


Basic environment preparation

Environment topology diagram

Linux basic Service Settings

Close iptables

# /etc/init.d/iptables stop

# chkconfig iptables off

# chkconfig --list | grep iptables

Close selinux

# setenforce 0

# vi /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Only targeted network daemons are protected.
#     strict - Full SELinux protection.
SELINUXTYPE=targeted

Close NetworkManager

# /etc/init.d/NetworkManager stop

# chkconfig NetworkManager off

# chkconfig --list | grep NetworkManager

Two-node SSH mutual trust

# mkdir ~/.ssh

# chmod 700 ~/.ssh

# ssh-keygen -t rsa

(press Enter three times to accept the defaults)

# ssh-keygen -t dsa

(press Enter three times to accept the defaults)

Run on N1PMCSAP01:

# cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys

# ssh N1PMCSAP02 cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys

Type yes at the host-key prompt, then enter the password of N1PMCSAP02.

# scp ~/.ssh/authorized_keys N1PMCSAP02:~/.ssh/authorized_keys

Storage multipath configuration

Refresh the IBM storage paths using the autoscan.sh script

View the underlying WWIDs of the storage LUNs

NAME        WWID                               Capacity  Paths
Dataqdisk   360050763808101269800000000000003  5 GB      4
Data        360050763808101269800000000000001  328 GB    4

Create the multipath configuration file

# vi /etc/multipath.conf

In the blacklist section, the default devnode pattern ^sd[a] must be changed to ^sd[i], because the root partition of this environment is on sdi; this excludes the local hard disk from multipathing.

After configuration, restart the multipathd service

# /etc/init.d/multipathd restart

Refresh the storage paths using the multipath -v2 command
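A minimal /etc/multipath.conf sketch for this environment, combining the blacklist change above with aliases for the two LUNs from the WWID table (the alias names and formatting are illustrative; verify the WWIDs with multipath -ll before applying):

```
blacklist {
    devnode "^sd[i]"              # exclude the local disk (root is on sdi)
}

multipaths {
    multipath {
        wwid  360050763808101269800000000000003
        alias dataqdisk           # 5 GB quorum disk
    }
    multipath {
        wwid  360050763808101269800000000000001
        alias data                # 328 GB data LUN -> /dev/mapper/data
    }
}
```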

Network card bonding

Network card binding topology diagram

Modify /etc/modules.conf:

# vi /etc/modules.conf
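The article does not show the module configuration itself. A minimal sketch follows; note that on RHEL 6 this usually lives in /etc/modprobe.d/bonding.conf rather than /etc/modules.conf, and the mode and miimon values are assumptions (mode=1, active-backup, is the usual choice for a two-switch HA topology like this one):

```
# load the bonding driver for bond0
alias bond0 bonding
# mode=1 = active-backup; miimon=100 = check link state every 100 ms (assumed values)
options bond0 mode=1 miimon=100
```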

Create the network card bonding configuration files

N1PMCSAP01 Bond configuration

[root@N1PMCSAP01 /]# vi /etc/sysconfig/network-scripts/ifcfg-eth2

DEVICE=eth2

# HWADDR=A0:36:9F:DA:DA:CD

TYPE=Ethernet

UUID=3ca5c4fe-44cd-4c50-b3f1-8082e1c1c94d

ONBOOT=yes

NM_CONTROLLED=no

BOOTPROTO=none

MASTER=bond0

SLAVE=yes

[root@N1PMCSAP01 /]# vi /etc/sysconfig/network-scripts/ifcfg-eth4

DEVICE=eth4

# HWADDR=A0:36:9F:DA:DA:CB

TYPE=Ethernet

UUID=1d47913a-b11c-432c-b70f-479a05da2c71

ONBOOT=yes

NM_CONTROLLED=no

BOOTPROTO=none

MASTER=bond0

SLAVE=yes

[root@N1PMCSAP01 /]# vi /etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0

# HWADDR=A0:36:9F:DA:DA:CC

TYPE=Ethernet

UUID=a099350a-8dfa-4d3f-b444-a08f9703cdc2

ONBOOT=yes

NM_CONTROLLED=no

BOOTPROTO=static

IPADDR=10.51.66.11

NETMASK=255.255.248.0

GATEWAY=10.51.71.254

N1PMCSAP02 Bond configuration

[root@N1PMCSAP02 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth2

DEVICE=eth2

# HWADDR=A0:36:9F:DA:DA:D1

TYPE=Ethernet

UUID=8e0abf44-360a-4187-ab65-42859d789f57

ONBOOT=yes

NM_CONTROLLED=no

BOOTPROTO=none

MASTER=bond0

SLAVE=yes

[root@N1PMCSAP02 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth4

DEVICE=eth4

# HWADDR=A0:36:9F:DA:DA:B1

TYPE=Ethernet

UUID=d300f10b-0474-4229-b3a3-50d95e6056c8

ONBOOT=yes

NM_CONTROLLED=no

BOOTPROTO=none

MASTER=bond0

SLAVE=yes

[root@N1PMCSAP02 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0

# HWADDR=A0:36:9F:DA:DA:D0

TYPE=Ethernet

UUID=2288f4e1-6743-4faa-abfb-e83ec4f9443c

ONBOOT=yes

NM_CONTROLLED=no

BOOTPROTO=static

IPADDR=10.51.66.12

NETMASK=255.255.248.0

GATEWAY=10.51.71.254

Hosts file configuration

Configure the hosts file on both N1PMCSAP01 and N1PMCSAP02

# vi /etc/hosts
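The hosts file contents are not shown in the article. A sketch follows, assuming a dedicated heartbeat subnet for the -PRIV names; only the bond0 addresses 10.51.66.11/12 appear elsewhere in this document, so the 192.168.100.x heartbeat addresses are assumptions:

```
10.51.66.11     N1PMCSAP01
10.51.66.12     N1PMCSAP02
192.168.100.11  N1PMCSAP01-PRIV   # heartbeat network (assumed addresses)
192.168.100.12  N1PMCSAP02-PRIV
```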

RHEL local source configuration

# more /etc/yum.repos.d/rhel-source.repo

[rhel_6_iso]
name=local iso
baseurl=file:///media
gpgcheck=1
gpgkey=file:///media/RPM-GPG-KEY-redhat-release

[HighAvailability]
name=HighAvailability
baseurl=file:///media/HighAvailability
gpgcheck=1
gpgkey=file:///media/RPM-GPG-KEY-redhat-release

[LoadBalancer]
name=LoadBalancer
baseurl=file:///media/LoadBalancer
gpgcheck=1
gpgkey=file:///media/RPM-GPG-KEY-redhat-release

[ResilientStorage]
name=ResilientStorage
baseurl=file:///media/ResilientStorage
gpgcheck=1
gpgkey=file:///media/RPM-GPG-KEY-redhat-release

[ScalableFileSystem]
name=ScalableFileSystem
baseurl=file:///media/ScalableFileSystem
gpgcheck=1
gpgkey=file:///media/RPM-GPG-KEY-redhat-release

File system formatting

[root@N1PMCSAP01 /]# pvdisplay

connect() failed on local socket: No such file or directory

Internal cluster locking initialisation failed.

WARNING: Falling back to local file-based locking.

Volume Groups with the clustered attribute will be inaccessible.

--- Physical volume ---

PV Name / dev/sdi2

VG Name VolGroup

PV Size 556.44 GiB / not usable 3.00 MiB

Allocatable yes (but full)

PE Size 4.00 MiB

Total PE 142448

Free PE 0

Allocated PE 142448

PV UUID 0fSZ8Q-Ay1W-ef2n-9ve2-RxzM-t3GV-u4rrQ2

--- Physical volume ---

PV Name / dev/mapper/data

VG Name vg_data

PV Size 328.40 GiB / not usable 1.60 MiB

Allocatable yes (but full)

PE Size 4.00 MiB

Total PE 84070

Free PE 0

Allocated PE 84070

PV UUID kJvd3t-t7V5-MULX-7Kj6-OI2f-vn3r-QXN8tr

[root@N1PMCSAP01 /]# vgdisplay

--- Volume group ---

VG Name VolGroup

System ID

Format lvm2

Metadata Areas 1

Metadata Sequence No 4

VG Access read/write

VG Status resizable

MAX LV 0

Cur LV 3

Open LV 3

Max PV 0

Cur PV 1

Act PV 1

VG Size 556.44 GiB

PE Size 4.00 MiB

Total PE 142448

Alloc PE / Size 142448 / 556.44 GiB

Free PE / Size 0 / 0

VG UUID 6q2td7-AxWX-4K4K-8vy6-ngRs-IIdP-peMpCU

--- Volume group ---

VG Name vg_data

System ID

Format lvm2

Metadata Areas 1

Metadata Sequence No 3

VG Access read/write

VG Status resizable

MAX LV 0

Cur LV 1

Open LV 1

Max PV 0

Cur PV 1

Act PV 1

VG Size 328.40 GiB

PE Size 4.00 MiB

Total PE 84070

Alloc PE / Size 84070 / 328.40 GiB

Free PE / Size 0 / 0

VG UUID GfMy0O-QcmQ-pkt4-zf1i-yKpu-6c2i-JUoSM2

[root@N1PMCSAP01 /] # lvdisplay

--- Logical volume ---

LV Path / dev/vg_data/lv_data

LV Name lv_data

VG Name vg_data

LV UUID 1AMJnu-8UnC-mmGb-7s7N-P0Wg-eeOj-pXrHV6

LV Write Access read/write

LV Creation host, time N1PMCSAP01, 2017-05-26 11:23:04 -0400

LV Status available

# open 1

LV Size 328.40 GiB

Current LE 84070

Segments 1

Allocation inherit

Read ahead sectors auto

- currently set to 256

Block device 253:5

Installation of RHCS software components

# yum -y install cman ricci rgmanager luci

Set the password for the ricci user

# passwd ricci

ricci password: redhat

RHCS cluster setup via the LUCI graphical interface

Log in to https://10.51.56.11:8084/ using a browser

Username: root, password: redhat (the default password has not been changed)

Cluster creation

Click Manage Cluster in the upper left corner to create a cluster

Node Name        Node ID  Votes  Ricci user  Ricci password  Hostname
N1PMCSAP01-PRIV  1        1      ricci       redhat          N1PMCSAP01-PRIV
N1PMCSAP02-PRIV  2        1      ricci       redhat          N1PMCSAP02-PRIV

Fence Device configuration

Since the cluster has only two nodes, split-brain can occur, so when a host fails the Fence mechanism must arbitrate which host leaves the cluster. Best practice is to fence through the server's integrated management port: IBM calls it the IMM module, Dell iDRAC, and HPE iLO. Fencing forces the failed server to restart (POST), thereby releasing its resources. The RJ45 port in the upper left corner is the IMM port.

N1PMCSAP01 Fence device setting parameters, user name: USERID, password: PASSW0RD

N1PMCSAP02 Fence device setting parameters, user name: USERID, password: PASSW0RD

Failover Domain configuration

Cluster resource configuration

Generally speaking, an application service should contain all the resources it depends on, such as an IP address, storage media, and a service script (the application itself).

The attribute parameters of each resource are configured here. In the figure, 10.51.66.1 is the IP resource, which defines the IP address of the cluster application.

lv_data is the cluster disk resource, which defines the mount point of the disk and the location of the physical block device.

McsCluster defines the location of the application's startup script.

Service Groups configuration

A Service Group defines an application (or a group of applications), including all the resources the application requires and their startup priority. In this interface the administrator can manually switch the host on which the service runs and set the recovery policy.

Other advanced settings

To prevent nodes from fencing each other back and forth when joining the cluster, the Post Join Delay is set to 3 seconds.
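In cluster.conf this setting corresponds to the post_join_delay attribute of the fence daemon, e.g.:

```xml
<fence_daemon post_join_delay="3"/>
```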

View the cluster configuration file

Finally, you can view the full cluster configuration with:

# cat /etc/cluster/cluster.conf

[root@N1PMCSAP01 ~]# vi /etc/cluster/cluster.conf

The configuration file must be identical on N1PMCSAP01 and N1PMCSAP02.
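The configuration file itself is not reproduced in the article. The sketch below shows the general structure, reconstructed from names that appear in this document (cluster N1PMCSAP, nodes N1PMCSAP01-PRIV/N1PMCSAP02-PRIV, service APP, IP 10.51.66.1, LV /dev/vg_data/lv_data); the fence devices, mount point, and script path are assumptions and must be replaced with the real values:

```xml
<?xml version="1.0"?>
<!-- Structural sketch only; attribute details are assumptions -->
<cluster config_version="1" name="N1PMCSAP">
  <clusternodes>
    <clusternode name="N1PMCSAP01-PRIV" nodeid="1"/>
    <clusternode name="N1PMCSAP02-PRIV" nodeid="2"/>
  </clusternodes>
  <cman expected_votes="1" two_node="1"/>
  <fence_daemon post_join_delay="3"/>
  <rm>
    <resources>
      <ip address="10.51.66.1" monitor_link="on"/>
      <fs device="/dev/vg_data/lv_data" mountpoint="/data" name="lv_data"/>
      <script file="/etc/init.d/mcscluster" name="McsCluster"/>
    </resources>
    <service name="APP" recovery="relocate">
      <ip ref="10.51.66.1"/>
      <fs ref="lv_data"/>
      <script ref="McsCluster"/>
    </service>
  </rm>
</cluster>
```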

Using the RHCS cluster

Viewing cluster status

Use the clustat command to view the running status:

[root@N1PMCSAP01 /]# clustat -l

Cluster Status for N1PMCSAP @ Fri May 26 14:13:25 2017

Member Status: Quorate

Member Name ID Status

N1PMCSAP01-PRIV 1 Online, Local, rgmanager

N1PMCSAP02-PRIV 2 Online, rgmanager

Service Information

--

Service Name: service:APP

Current State: started

Flags: none (0)

Owner: N1PMCSAP01-PRIV

Last Owner: N1PMCSAP01-PRIV

Last Transition: Fri May 26 13:55:45 2017

Current State is the running state of the service; Owner is the node on which the service is currently running.
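For monitoring scripts, the state and owner fields can be pulled out of clustat output with standard text tools. A sketch using the sample service section shown above, embedded in a variable so the snippet runs anywhere (on a live node you would capture `clustat -l` instead; exact field spacing may differ between clustat versions):

```shell
# Sample 'clustat -l' service section from above
clustat_output=' Service Name : service:APP
 Current State : started
 Flags : none (0)
 Owner : N1PMCSAP01-PRIV
 Last Owner : N1PMCSAP01-PRIV'

# Strip the field labels to get the values
state=$(printf '%s\n' "$clustat_output" | sed -n 's/^ Current State : //p')
owner=$(printf '%s\n' "$clustat_output" | sed -n 's/^ Owner : //p')
echo "Service APP is $state on $owner"
# prints: Service APP is started on N1PMCSAP01-PRIV
```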

Manually switch clusters

[root@N1PMCSAP01 /]# clusvcadm -r APP -m N1PMCSAP02

# manually relocate the service group APP from node 1 to node 2

[root@N1PMCSAP01 /]# clusvcadm -d APP

# manually stop (disable) the APP service

[root@N1PMCSAP01 /]# clusvcadm -e APP

# manually start (enable) the APP service

[root@N1PMCSAP01 /]# clusvcadm -M APP -m N1PMCSAP02

# migrate APP so that N1PMCSAP02 becomes its owner

Automatically switch clusters

When server hardware fails and the heartbeat network becomes unreachable, the RHCS cluster automatically restarts the failed server via fencing and switches its resources to the healthy server. After the hardware is repaired, the administrator can choose whether to fail the service back.

Note:

Do not unplug the heartbeat networks of both servers at the same time; this will cause split-brain.
