For an introduction to RHCS clusters, please see http://11107124.blog.51cto.com/11097124/1884048.
In an RHCS cluster, each cluster must have a unique cluster name and at least one fence device (a manual fence, fence_manual, can be used). A cluster needs at least three nodes; a two-node cluster must have a quorum (arbitration) disk.
Prepare the environment
Node1:192.168.139.2
Node2:192.168.139.4
Node4:192.168.139.8
VIP:192.168.139.10
Install luci on node1 to create and manage the cluster; install ricci on node2 and node4, which act as the cluster nodes.
1. Configure SSH mutual trust between the nodes
2. Set the hostnames to node1, node2, and node4
3. Synchronize time with NTP
For the detailed preparation process, refer to http://11107124.blog.51cto.com/11097124/1872079.
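A minimal sketch of those three preparation steps, using the hostnames and addresses from this article (shown for node1; repeat the hostname, /etc/hosts, and NTP steps on node2 and node4; pool.ntp.org is just an example time server):
[root@node1 ~]# ssh-keygen -t rsa                        # generate a key pair, accepting the defaults
[root@node1 ~]# ssh-copy-id root@node2.zxl.com           # push the public key to the other nodes
[root@node1 ~]# ssh-copy-id root@node4.zxl.com
[root@node1 ~]# hostname node1.zxl.com                   # also set HOSTNAME in /etc/sysconfig/network
[root@node1 ~]# echo "192.168.139.2 node1.zxl.com node1" >> /etc/hosts   # add entries for all three nodes
[root@node1 ~]# ntpdate pool.ntp.org                     # one-off time sync (or configure ntpd)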
I will also share the official Red Hat documentation for the RHCS cluster suite: https://access.redhat.com/site/documentation/zh-CN/Red_Hat_Enterprise_Linux/6/pdf/Cluster_Administration/Red_Hat_Enterprise_Linux-6-Cluster_Administration-zh-CN.pdf. It is a detailed official introduction to the RHCS cluster suite, including detailed usage of luci/ricci, and it is the Chinese edition.
Now let's start the experiment: install luci/ricci and build a web cluster.
Install luci on node1; install ricci on node2 and node4.
[root@node1 yum.repos.d]# yum install luci
[root@node2 yum.repos.d]# yum install ricci -y
[root@node4 yum.repos.d]# yum install ricci -y
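If yum cannot find these packages, make sure the repository that provides the High Availability packages is enabled (on RHEL 6 they come from the High Availability add-on channel). A quick check of availability, as an optional extra step:
[root@node1 yum.repos.d]# yum list luci ricci cman rgmanager   # all four should show as installed or available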
If the yum installation shows the following warning:
Warning: RPMDB altered outside of yum.
execute the following command to clear the yum history information:
[root@node1 yum.repos.d]# rm -rf /var/lib/yum/history/*.sqlite
Start luci
[root@node1 yum.repos.d]# service luci start    ## https://node1.zxl.com:8084 (web access entry)
Point your web browser to https://node1.zxl.com:8084 (or equivalent) to access luci
Set a password for the ricci user on both nodes; it does not have to be the same as the root password.
[root@node2 yum.repos.d]# echo 123456 | passwd --stdin ricci
Changing password for user ricci.
passwd: all authentication tokens updated successfully.
[root@node4 yum.repos.d]# echo 123456 | passwd --stdin ricci
Changing password for user ricci.
passwd: all authentication tokens updated successfully.
Start ricci:
[root@node2 cluster]# service ricci start
[root@node4 cluster]# service ricci start
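Optionally (not part of the original steps), make both daemons start on boot and confirm that luci is listening on its port:
[root@node1 yum.repos.d]# chkconfig luci on
[root@node2 cluster]# chkconfig ricci on                  # repeat on node4
[root@node1 yum.repos.d]# netstat -tnlp | grep 8084       # luci should be listening here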
Then log in to luci in the browser (port 8084).
A warning window pops up.
Click Manage Clusters -> Create to create a cluster and add the cluster nodes.
Click Create Cluster. Clicking it triggers the following actions:
a. If you select the package-download option, the cluster packages are downloaded on each node.
b. The cluster software is installed on each node (or it is confirmed that the correct packages are installed).
c. The cluster configuration file is updated and propagated to each node of the cluster.
d. The added nodes join the cluster. A message indicates that the cluster is being created; when the cluster is ready, the display shows the status of the newly created cluster.
Then the following interface appears: a "please wait" screen while the corresponding cluster packages (including cman and rgmanager) are installed automatically on node2 and node4.
Creation completed
Note: in the normal state, the node names and the cluster name are displayed in green; if an exception occurs, they are displayed in red.
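Once creation completes, luci writes /etc/cluster/cluster.conf to every node. A minimal sketch of what it might contain for this two-node cluster (the cluster name zxl comes from the clustat output later in this article; everything else is illustrative, not copied from the original environment):
[root@node2 cluster]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="1" name="zxl">
  <clusternodes>
    <clusternode name="node2.zxl.com" nodeid="1"/>
    <clusternode name="node4.zxl.com" nodeid="2"/>
  </clusternodes>
  <cman expected_votes="1" two_node="1"/>
  <fencedevices/>
  <rm/>
</cluster>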
Add a fence device. An RHCS cluster must have at least one fence device. Click Fence Devices -> Add.
Create a Failover Domain
A failover domain restricts the failover of cluster services and resources to a specified set of nodes. The following operation creates one.
Click Failover Domains -> Add
Prioritized: whether to enable priority settings for the members of the failover domain. Enable it here.
Restricted: whether to restrict service failover to the members of this failover domain. Enable it here.
No Failback: whether to disable failback in this domain. With failback, when the primary node fails, the standby node automatically takes over its services and resources; when the primary node returns to normal, the cluster's services and resources automatically switch back from the standby node to the primary node.
Then, in the member check boxes, select the nodes to join the domain (node2 and node4), and under Priority set node2 to 1 and node4 to 2.
Note that the node with Priority 1 has the highest priority; the larger the number, the lower the node's priority.
When all the settings are complete, click the Submit button to start creating the Failover domain.
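In cluster.conf, the failover domain shows up as a <failoverdomains> block inside <rm>. A sketch of roughly what luci generates for the settings above (the domain name zxl_domain is made up for illustration):
<rm>
  <failoverdomains>
    <failoverdomain name="zxl_domain" ordered="1" restricted="1" nofailback="0">
      <failoverdomainnode name="node2.zxl.com" priority="1"/>
      <failoverdomainnode name="node4.zxl.com" priority="2"/>
    </failoverdomain>
  </failoverdomains>
</rm>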
Now that the cluster nodes, fence device, and failover domain have been created, let's start adding resources, taking a web service as the example.
Click Resources -> Add to add the VIP (192.168.139.10).
Add httpd as a script resource (all RHCS resources of type script live in the /etc/rc.d/init.d/ directory); see the preparation sketch after the notes below.
Note: Monitor Link: monitor the link status.
Disable Updates to Static Routes: whether to disable updates to static routes.
Number of Seconds to Sleep After Removing an IP Address: how long to wait after removing the IP address.
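Before the httpd script resource can do anything, Apache itself has to be installed on both cluster nodes and must not be started by init, because rgmanager will start and stop it. A minimal sketch of that preparation (not shown in the original steps; repeat on node4):
[root@node2 ~]# yum install -y httpd
[root@node2 ~]# echo "node2 test page" > /var/www/html/index.html   # so you can tell which node serves the page
[root@node2 ~]# chkconfig httpd off                                  # the cluster, not init, controls httpd
[root@node2 ~]# service httpd stop                                   # leave it stopped until the cluster starts it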
OK, now that the resources have been added, the next step is to define the service group (resources must be inside a group in order to run; resources defined within a service group are internal resources).
Automatically Start This Service: start this service automatically.
Run Exclusive: run this service exclusively (do not run other services on the same node).
Failover Domain: the failover domain (select the one created earlier).
Recovery Policy: the recovery policy for the service.
Maximum Number of Failures: the maximum number of failures.
Failure Expire Time: the time window after which failures expire.
Maximum Number of Restarts: the maximum number of restarts.
Restart Expire Time (seconds): the interval after which restart counts expire.
Click Submit to create the cluster service.
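For reference, the resources and service group defined above correspond roughly to this fragment of cluster.conf (values mirror the steps above; zxl_domain is the hypothetical domain name from the earlier sketch):
<rm>
  <resources>
    <ip address="192.168.139.10" monitor_link="1"/>
    <script file="/etc/rc.d/init.d/httpd" name="httpd"/>
  </resources>
  <service autostart="1" domain="zxl_domain" name="Web_Service" recovery="relocate">
    <ip ref="192.168.139.10"/>
    <script ref="httpd"/>
  </service>
</rm>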
Click Web-Service -> Start to start the cluster service.
The web service and the VIP start successfully.
Test access to VIP with a browser
[root@www init.d]# ip addr show    ## you can see that the VIP 192.168.139.10 has already been configured automatically
inet 192.168.139.8/24 brd 192.168.139.255 scope global eth0
inet 192.168.139.10/24 scope global secondary eth0
[root@www init.d]# clustat    ## use this command to view the cluster status
Cluster Status for zxl @ Tue Dec 20 15:43:04 2016
Member Status: Quorate
 Member Name                    ID   Status
 node2.zxl.com                   1   Online, rgmanager
 node4.zxl.com                   2   Online, Local, rgmanager
 Service Name                   Owner (Last)       State
 service:Web_Service            node4.zxl.com      started
Perform service transfer to node2
[root@www init.d]# clusvcadm -r Web_Service -m node2
'node2' not in membership list
Closest match: 'node2.zxl.com'
Trying to relocate service:Web_Service to node2.zxl.com...Success
service:Web_Service is now running on node2.zxl.com
[root@www init.d]# clustat    ## the transfer succeeded
Cluster Status for zxl @ Tue Dec 20 16:21:53 2016
Member Status: Quorate
 Member Name                    ID   Status
 node2.zxl.com                   1   Online, rgmanager
 node4.zxl.com                   2   Online, Local, rgmanager
 Service Name                   Owner (Last)       State
 service:Web_Service            node2.zxl.com      started
Simulate node2 failure to see if the service is transferred automatically
[root@node2 init.d] # halt
Services were successfully transferred to node4
Restart node2 and start ricci; you can see that node2 has already recovered.
And because No Failback was not selected when defining the failover domain (that is, the service is automatically transferred back as soon as the primary node returns to normal), the web service has been transferred back to node2 (node2 has the highest priority, 1, so it is the primary node).
Finally, let me add some details I found about configuring and managing clusters with luci.
Starting, stopping, refreshing, and deleting a cluster: you can start, stop, and restart a cluster by performing the following actions on each node of the cluster. On the specific cluster page, click "Nodes" at the top of the cluster display; the nodes that make up the cluster are displayed.
Performing stop or restart operations on a cluster node or on the whole cluster causes a brief interruption of any cluster service running on a node that is being stopped or restarted, because the service has to be moved to another cluster member.
To stop a cluster, perform the following steps. This shuts down the cluster software on the nodes but does not remove the cluster configuration information from them; the nodes still appear in the cluster node display, but their status is not that of a cluster member.
1. Click the check box next to each node to select all nodes in the cluster.
2. Select "Leave Cluster" from the menu at the top of the page; a message appears at the top of the page indicating that each node is being stopped.
3. Refresh the page to view the updated status of the nodes.
To start the cluster, perform the following steps:
1. Click the check box next to each node to select all nodes in the cluster.
2. Select "Join Cluster" from the menu at the top of the page.
3. Refresh the page to view the updated status of the nodes. To restart a running cluster, first stop all nodes in the cluster and then start them all again, as described above.
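On the command line, "Leave Cluster" and "Join Cluster" correspond roughly to stopping and starting the cluster daemons on each node (RHEL 6 service names; the order matters):
[root@node2 ~]# service rgmanager stop && service cman stop     # leave the cluster
[root@node2 ~]# service cman start && service rgmanager start   # join the cluster again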
To delete an entire cluster, follow these steps. This stops all cluster services, removes the cluster configuration information from the nodes, and removes the nodes from the cluster display. If you later try to add a deleted node back to an existing cluster, luci will indicate that the node is not a member of any cluster.
1. Click the check box next to each node to select all nodes in the cluster.
2. Select "Delete" from the menu at the top of the page.
If you want to remove a cluster from the luci interface without stopping any cluster services or changing the cluster membership, you can use the Delete option on the Manage Clusters page.
You can also use the luci server component of Conga to perform the following administrative functions for high-availability services:
Start a service
Restart a service
Disable a service
Delete a service
Relocate a service
On the specific cluster page, you can manage the cluster's services by clicking "Service Groups" at the top of the cluster display. The services configured for the cluster are displayed.
Start Services: to start any services that are not currently running, click the check box next to each service you want to start, then click Start.
Restart Services: to restart any currently running services, click the check box next to each service you want to restart, then click Restart.
Disable Services: to disable any currently running services, click the check box next to each service you want to disable, then click Disable.
Delete Services: to delete any currently running services, click the check box next to each service you want to delete, then click Delete.
Relocate Services: to relocate a running service, click the name of the service in the services display. The service configuration page appears and shows which node the service is currently running on.
In the "Start on node..." drop-down box, select the node to which you want to relocate the service and click the Start icon. A message appears at the top of the page indicating that the service is being restarted. You can refresh the page to see the new display indicating that the service is running on the node you selected.
Note that if the service you selected is a virtual machine (vm) service, the drop-down box shows a Migrate option instead of Relocate.
You can also start, restart, disable, or delete an individual service by clicking its name on the Services page. The service configuration page appears, with icons in the upper-right corner: Start, Restart, Disable, and Delete.
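The same service operations can also be done from the command line with clusvcadm, as a rough equivalent of the luci icons (service and node names are the ones used earlier in this article):
[root@node2 ~]# clusvcadm -e Web_Service -m node2.zxl.com   # enable (start) the service on a given node
[root@node2 ~]# clusvcadm -R Web_Service                    # restart it in place
[root@node2 ~]# clusvcadm -d Web_Service                    # disable (stop) it
[root@node2 ~]# clusvcadm -r Web_Service -m node4.zxl.com   # relocate it to another node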
Backing up and restoring the luci configuration: starting with Red Hat Enterprise Linux 6.2, you can use the following steps to back up the luci database, which is kept in the file /var/lib/luci/data/luci.db. This is not the cluster configuration itself (that is kept in the cluster.conf file); instead it contains the list of users, clusters, and related properties that luci maintains.
By default, the backup produced by these steps is written to the same directory as the luci.db file.
1. Execute service luci stop.
2. Execute service luci backup-db.
You can optionally specify a file name as an argument to the backup-db command, which writes the luci database to that file. For example, to write the luci database to the file /root/luci.db.backup, execute the command service luci backup-db /root/luci.db.backup. Note, however, that if the backup file is written to a location other than /var/lib/luci/data/, the backup file name you specify with service luci backup-db will not appear in the output of the list-backups command.
3. Execute service luci start.
Use the following procedure to restore the luci database.
1. Execute service luci stop.
2. Execute service luci list-backups and note the file name to restore.
3. Execute service luci restore-db /var/lib/luci/data/lucibackupfile, where lucibackupfile is the backup file to restore.
For example, the following command restores the luci configuration saved in the backup file luci-backup20110923062526.db: [root@luci1 ~]# service luci restore-db /var/lib/luci/data/luci-backup20110923062526.db
4. Execute service luci start.
If you need to restore the luci database but have lost the host.pem file on the machine where the backup was generated (for example, because of a complete reinstall), you will need to add the clusters back to luci manually so that the cluster nodes are re-authenticated. Use the following steps to restore the luci database on a machine other than the one on which the backup was generated. Note that in addition to restoring the database itself, you also need to copy the SSL certificate file so that luci is authenticated to the ricci nodes. In this example, the backup is generated on the luci1 machine and restored on the luci2 machine.
1. Execute the following set of commands on luci1 to generate a luci backup and copy both the SSL certificate file and the luci backup to luci2:
[root@luci1 ~]# service luci stop
[root@luci1 ~]# service luci backup-db
[root@luci1 ~]# service luci list-backups
/var/lib/luci/data/luci-backup20120504134051.db
[root@luci1 ~]# scp /var/lib/luci/certs/host.pem /var/lib/luci/data/luci-backup20120504134051.db root@luci2:
2. On the luci2 machine, make sure that luci is installed and not running. Install the package if it is not already installed.
3. Execute the following set of commands to put the authentication in place and to restore the luci database from luci1 on luci2:
[root@luci2 ~]# cp host.pem /var/lib/luci/certs/
[root@luci2 ~]# chown luci: /var/lib/luci/certs/host.pem
[root@luci2 ~]# /etc/init.d/luci restore-db ~/luci-backup20120504134051.db
[root@luci2 ~]# shred -u ~/host.pem ~/luci-backup20120504134051.db
[root@luci2 ~]# service luci start
Finally, here is a well-written RHCS cluster experiment shared by another blogger: http://blog.chinaunix.net/uid-26931379-id-3558613.html