
How to Build a High-Availability Cluster with CentOS7


This article explains how to build a high-availability cluster in CentOS7. The content is straightforward and easy to understand; follow along to learn how it is done.

First, install cluster software

The packages pcs, pacemaker, corosync, and fence-agents-all are required; if you need to configure related services, install the corresponding packages as well.
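
The original does not show the install command; on CentOS7 a minimal sketch with yum would be:

# yum install -y pcs pacemaker corosync fence-agents-all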

Second, configure the firewall

1. Disable the firewall and SELinux

# systemctl disable firewalld
# systemctl stop firewalld
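
The SELinux commands are not shown in the original; on CentOS7, SELinux is typically disabled like this:

# setenforce 0    # disable immediately, until reboot
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # persist across reboots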

2. Set firewall rules

# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --add-service=high-availability

Third, host name resolution between nodes

Set the two hostnames to node1 and node2 respectively. In CentOS 7 you can modify /etc/hostname directly to set the local hostname, add both hosts to the host table, and then restart the network service.

# vi /etc/hostname
node1
# systemctl restart network.service
# hostname
node1
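
Alternatively, CentOS 7 ships hostnamectl, which sets the hostname without editing the file by hand:

# hostnamectl set-hostname node1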

Configure the host table on both hosts by adding the following to /etc/hosts:

192.168.122.168 node1
192.168.122.169 node2

Fourth, time synchronization between nodes

Time must be synchronized on node1 and node2; this can be done with ntp.

[root@node1 ~]# ntpdate 172.16.0.1    # 172.16.0.1 is the time server

Fifth, configure passwordless SSH access between nodes

The following actions need to be done on each node.

# ssh-keygen -t rsa -P ''    # generate a key pair with an empty passphrase; the public key will be copied to the other node
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@node2    # use the other node's hostname and its login user name

Both hosts need to trust each other, so each host must generate a key pair and copy its public key to the other. The hosts file on each node must also resolve the other node's hostname:

192.168.122.168 node1
192.168.122.169 node2

# ssh node2 'date'; date    # test whether mutual trust works

Sixth, manage the high-availability cluster through pacemaker

1. Create cluster users

To make communication between nodes and cluster configuration easier, create a hacluster user on each node; the password must be the same on every node.

# passwd hacluster
Changing password for user hacluster.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
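
When scripting this step, the password is sometimes set non-interactively; a sketch (the example password is a placeholder):

# echo 'yourpassword' | passwd --stdin hacluster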

2. Start pcsd and set it to boot automatically

# systemctl start pcsd.service
# systemctl enable pcsd.service

3. Authenticate the nodes in the cluster

# pcs cluster auth node1 node2
Username: hacluster
Password:
node1: Authorized
node2: Authorized

4. Create and start the cluster

[root@z1 ~]# pcs cluster setup --start --name my_cluster node1 node2
node1: Succeeded
node1: Starting Cluster...
node2: Succeeded
node2: Starting Cluster...

5. Enable the cluster at boot

# pcs cluster enable --all

6. View cluster status information

[root@z1 ~]# pcs cluster status

7. Set up fence devices

Note that stonith is enabled by default in corosync, but the cluster does not yet have a corresponding stonith device, so the default configuration is not yet usable. This can be verified with the following command:

# crm_verify -L -V

You can disable stonith by using the following command:

# pcs property set stonith-enabled=false    # the default is true
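
To confirm the change, list the cluster properties that differ from their defaults:

# pcs property list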

8. Configure storage

A high-availability cluster can be built not only on local disks, as a pure software-mirrored cluster system, but also on dedicated shared disk devices, building large shared-disk cluster systems that fully meet different customer needs. The main shared-disk options are iSCSI and DRBD. Shared disks are not used in this article.

9. Configure a floating IP

No matter where the cluster service runs, we need a fixed address at which to provide the service. Here I choose 192.168.122.170 as the floating IP, give the resource a memorable name, VIP, and tell the cluster to check it every 30 seconds.

# pcs resource create VIP ocf:heartbeat:IPaddr2 ip=192.168.122.170 cidr_netmask=24 op monitor interval=30s
# pcs resource update VIP op monitor interval=15s
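
Once created, you can check where the resource is running; on the active node the floating address should appear in the interface list:

# pcs status resources
# ip addr show    # 192.168.122.170 should be listed on the active node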

10. Configure the apache service

Install httpd on node1 and node2 and confirm that httpd is not enabled at boot.
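
A minimal install sketch (standard CentOS 7 packages):

# yum install -y httpd
# systemctl disable httpd.service    # pacemaker, not systemd, will manage the service

Then check the state: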

# systemctl status httpd.service

Configure the httpd monitoring page (it seems that even without it, monitoring can be done through systemd); execute this on node1 and node2 respectively.

# cat > /etc/httpd/conf.d/status.conf
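
The file contents are cut off above; a typical status.conf for the apache resource agent (following the Pacemaker documentation) looks like this:

# cat > /etc/httpd/conf.d/status.conf <<EOF
<Location /server-status>
    SetHandler server-status
    Require local
</Location>
EOF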

First, let's create a home page for Apache. The default Apache docroot on CentOS is /var/www/html, so set up a home page under that directory. On node1:

[root@node1 ~]# cat <<END > /var/www/html/index.html
Hello node1
END

The node2 node is modified as follows:

[root@node2 ~]# cat <<END > /var/www/html/index.html
Hello node2
END

The following statement adds httpd to the cluster as a resource:

# pcs resource create WEB apache configfile="/etc/httpd/conf/httpd.conf" statusurl="http://127.0.0.1/server-status"

11. Create a group

Bundle the VIP and WEB resources into a group so that they fail over in the cluster as a whole (this configuration is optional).

# pcs resource group add MyGroup VIP
# pcs resource group add MyGroup WEB
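
At this point the service should be reachable through the floating IP; a quick check from any machine that can reach it:

# curl http://192.168.122.170/    # should return the index page of the active node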

12. Configure the service startup sequence

Define the startup order to avoid resource conflicts. (Resources added with pcs resource group add also start in the order they were added, so this configuration is optional.) The syntax is:

# pcs constraint order [action] then [action]
# pcs constraint order start VIP then start WEB

13. Specify the preferred Location (this configuration is optional)

Pacemaker does not require the hardware configuration of your machines to be identical; some machines may be more powerful than others. In that case we want to set a rule so that a resource runs on a particular node whenever it is available. To achieve this we create location constraints. We give the constraint a descriptive name (prefer-node1), name the resource we want to place (WEB), state how strongly we want it to run there (a score of 50 here, though in a two-node cluster any value greater than 0 has the desired effect), and give the name of the target node:

# pcs constraint location WEB prefers node1=50
# pcs constraint location WEB prefers node2=45

The higher the score, the more strongly the resource prefers the corresponding node.
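
The configured constraints can be reviewed at any time:

# pcs constraint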

14. Resource stickiness (this configuration is optional)

In some environments, resources should avoid migrating between nodes as much as possible, because a migration usually means a period without service, and some complex services, such as Oracle databases, may take a long time to move. To achieve this, Pacemaker has a concept called "resource stickiness", which controls how strongly a service (resource) wants to stay on the node where it is currently running. To optimize the distribution of resources, Pacemaker sets this value to 0 by default. We can define a different stickiness value for each resource, but in general changing the default stickiness value is sufficient. Resource stickiness indicates whether a resource tends to stay on its current node: a positive value means it prefers to stay, a negative value means it prefers to leave, -inf means negative infinity (always leave), and inf means positive infinity (always stay).

# pcs resource defaults resource-stickiness=100

Command summary:

View cluster status: # pcs status
View the current cluster configuration: # pcs config
Enable the cluster at boot: # pcs cluster enable --all
Start the cluster: # pcs cluster start --all
Check cluster resource status: # pcs resource show
Verify the cluster configuration: # crm_verify -L -V
Test a resource configuration: # pcs resource debug-start <resource>
Set a node to standby: # pcs cluster standby node1

That is the content of "how to build a high-availability cluster in CentOS7"; the specific usage still needs to be verified in practice.
