
How to build a high availability cluster under CentOS 7

2025-04-09 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article explains how to build a highly available cluster under CentOS 7. The method is simple, fast, and practical; let's walk through it step by step.

1. Install cluster software

The packages pcs, pacemaker, corosync, and fence-agents-all are required. If you need to configure related services, install the corresponding packages as well.
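Assuming yum as the package manager (the default on CentOS 7), the install step above might look like this; run it as root on both nodes:

```shell
# Install the cluster stack named in the text (run as root on node1 and node2)
yum install -y pcs pacemaker corosync fence-agents-all
```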

2. Configure the firewall

2.1 Disable firewalld and SELinux

# systemctl disable firewalld
# systemctl stop firewalld

Modify /etc/sysconfig/selinux to ensure SELINUX=disabled, then run setenforce 0 or reboot the server for the change to take effect.

2.2 Set firewall rules

# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --add-service=high-availability

3. Resolve host names between nodes

Set the two hostnames to node1 and node2. On CentOS 7, edit /etc/hostname directly to set the local hostname, add both hosts to the host table, and then restart the network service.

# vi /etc/hostname
node1
# systemctl restart network.service
# hostname
node1

Configure the host table on both hosts by adding the following to /etc/hosts:

192.168.122.168 node1
192.168.122.169 node2

4. Time synchronization between nodes

Time should be synchronized on node1 and node2; this can be done with ntp.

[root@node1 ~]# ntpdate 172.16.0.1    # 172.16.0.1 is the time server

5. Configure passwordless SSH access between nodes

The following actions need to be done on each node.

# ssh-keygen -t rsa -P ''    # generate a key pair with an empty passphrase; the public key can then be copied to the other node
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node2    # use the other node's hostname and its login user

Both hosts need to reach each other, so each host must generate a key pair and copy its public key to the other. The hosts file on each node must also resolve the peer's hostname:

192.168.122.168 node1
192.168.122.169 node2

# ssh node2 'date'; date    # test whether mutual trust works

6. Manage the highly available cluster through pacemaker

6.1 Create the cluster user

To allow the nodes to communicate with each other and to configure the cluster, create a hacluster user on each node; the password must be the same on every node.

# passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

6.2 Start pcsd and enable it at boot

# systemctl start pcsd.service
# systemctl enable pcsd.service

6.3 Authenticate the nodes in the cluster

# pcs cluster auth node1 node2
Username: hacluster
Password:
node1: Authorized
node2: Authorized

6.4 Create and start the cluster

[root@z1 ~]# pcs cluster setup --start --name my_cluster node1 node2
node1: Succeeded
node1: Starting Cluster...
node2: Succeeded
node2: Starting Cluster...

6.5 Enable the cluster at boot

# pcs cluster enable --all
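The setup command generates /etc/corosync/corosync.conf on both nodes. As an illustration only (the exact file depends on your pcs and corosync versions; this is an assumed sketch, not the file from this article), a two-node corosync 2.x configuration typically looks like:

```
# Assumed sample corosync.conf for a two-node cluster named my_cluster
totem {
    version: 2
    cluster_name: my_cluster
    transport: udpu
}

nodelist {
    node {
        ring0_addr: node1
        nodeid: 1
    }
    node {
        ring0_addr: node2
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

logging {
    to_syslog: yes
}
```

Note the two_node: 1 setting, which corosync uses so a two-node cluster can keep quorum when one node fails.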

6.6 View cluster status information

[root@z1 ~]# pcs cluster status

6.7 Set up a fence device


Corosync enables stonith by default, but the cluster does not yet have a corresponding stonith device, so the default configuration will not validate. You can verify this with the following command:

# crm_verify -L -V

You can disable stonith by using the following command:

# pcs property set stonith-enabled=false    # default is true

6.8 Configure storage

A highly available cluster can be built as a pure software-mirroring cluster using only local disks, or as a large shared-disk cluster using dedicated shared storage, to meet different requirements.

The main shared-disk options are iSCSI and DRBD. Shared disks are not used in this article.

6.9 Configure the floating IP

No matter where the cluster service runs, we need a fixed address at which to provide the service. Here I choose 192.168.122.170 as the floating IP, give it the name VIP, and tell the cluster to check it every 30 seconds.

# pcs resource create VIP ocf:heartbeat:IPaddr2 ip=192.168.122.170 cidr_netmask=24 op monitor interval=30s
# pcs resource update VIP op monitor interval=15s

6.10 Configure the apache service

Install httpd on node1 and node2 and confirm that httpd is disabled at boot (the cluster, not systemd, will manage the service).
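A minimal sketch of that step, assuming yum (run on both nodes):

```shell
yum install -y httpd
systemctl disable httpd.service   # the cluster, not systemd, should start apache
```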

# systemctl status httpd.service

Configure the httpd monitoring page (if it is not configured, the service can apparently also be monitored through systemd); do this on both node1 and node2.

# cat > / etc/httpd/conf.d/status.conf
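The content of status.conf was cut off in the original article. As an assumed sketch: the ocf:heartbeat:apache resource agent polls http://localhost/server-status by default, so such a file typically enables mod_status for local requests only:

```
# Assumed content; the original article's heredoc was truncated
<Location /server-status>
    SetHandler server-status
    Require local
</Location>
```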
