
Hardware configuration and use of HP cluster


The principle of dual-machine backup: cluster software (such as HP ServiceGuard) is installed on two host nodes, and a floating IP is configured for the clients. "Floating" means that the IP address is bound at any given moment to one of the two nodes, while remaining fixed from the client's point of view. Each node is equipped with three network cards: a data network card, a heartbeat network card, and a backup network card for both data and heartbeat. The data and heartbeat network cards are configured with IP addresses; the backup network card is not. When the data or heartbeat network card fails, the backup network card automatically takes over its IP address. Once ServiceGuard is running, if the primary node becomes abnormal (for example, the primary node stops, a key application process exits, or the network is interrupted), the standby node immediately starts the preconfigured application and binds the floating IP to itself. The whole primary-to-standby switchover takes about 2 minutes. After the switchover, clients connect transparently to the standby node through the floating IP. After the primary node is repaired, there are two strategies for switching back from the standby to the primary node: manual switching and automatic switching; the default is manual switching.
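As a quick illustration of this behaviour, the following commands (a minimal sketch; the floating IP is a placeholder rather than a value taken from this article) can be used to confirm which node currently owns a package and its floating IP before and after a switchover:

# cmviewcl -v          ; shows which node each package is currently running on
# netstat -in          ; on that node, the floating IP appears on a logical interface such as lan1:1
# ping <floating IP>   ; clients reach the same floating IP before and after the switchover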

The composition of ServiceGuard software:

Software component

Package Manager: manages the running packages

Cluster Manager: manages the cluster

Network Manager: manages and monitors the network

Main background processes

cmcld: the node daemon, responsible for sending heartbeats, managing the local network, and managing running packages

cmlogd: responsible for recording information in the system log (syslog)

cmlvmd: monitors the status of all volume groups (VG) under cluster control

cmsrvassistd: responsible for running the package start and stop scripts and the service programs

The structure of the cluster

1. Node: a host that is part of the cluster. A cluster may contain 2 to 16 nodes.

2. Package: a package contains the user application and the resources allocated to that application. A package runs on one node at a time and can be switched between nodes.

Some concepts about package

1. Contains the application

2. Is allocated the appropriate resources: ① volume groups, logical volumes and file systems ② a floating IP address ③ application start and stop scripts ④ service programs

The rules are: resources assigned to one package cannot be allocated to another package, and a package can only run on one node at a time.
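For example, these resources typically map onto variables in the package control script (control.sh). The sketch below is illustrative only: the variable names follow the control-script excerpt shown later in this article and standard ServiceGuard usage, while the values (vgdata, /data, the lv name, the IP address and the service name) are examples, not a real configuration:

VG[0]="vgdata"                    # volume group owned by this package
LV[0]="/dev/vgdata/lvdata"; FS[0]="/data"; FS_MOUNT_OPT[0]="-o rw"
IP[0]="214.216.1.133"             # floating IP of the package
SUBNET[0]="214.216.1.0"           # subnet the floating IP belongs to
SERVICE_NAME[0]="mscp_service"    # monitored service (hypothetical name)
SERVICE_CMD[0]="/etc/cmcluster/mscppkg/mscp_service.sh"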

The concept of floating IP

A floating IP is assigned to each application (running package). To access an application, a client connects to its floating IP address; it only needs this address, regardless of which host or network card the application is actually running on.

The floating IP must be loaded on a network card that has a static IP address in the same network segment. When a local network card switchover occurs, the floating IP is moved to the backup network card together with the static IP.
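A rough sketch of how this looks on a node (the interface name and addresses are illustrative, reusing the 214.216.1.0 data segment used elsewhere in this article):

# netstat -in                                                   ; lan1 carries the static IP of the data segment
# ifconfig lan1:1 inet 214.216.1.133 netmask 255.255.255.0      ; the floating IP rides on lan1 as the logical interface lan1:1
# ifconfig lan1:1                                               ; after a local network card switchover, the same address would appear on the backup card instead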

The hardware conditions that make up the cluster

1. Hosts: several hosts form a cluster, and each host must have its own independent root disk. Mirroring the root disk is strongly recommended.

2. Data disk: because MC/SG cannot respond to data disk failure, you need to use a high-availability disk array or mirror the data disk.

3. Network: network manager in MC/SG can respond to network failure. Redundant network devices (network cables, switches, network cards, etc.) need to be configured.

Cluster configuration-related files

1. /etc/cmcluster/cluster.ascii

The cluster configuration file, containing the node composition, volume group assignments, related parameter settings, and so on

2. /etc/cmcluster/cmclconfig

The cluster binary file, compiled from the configuration file and containing all of the cluster's information

3. /etc/cmcluster/mscppkg/mscppkg.conf

The package configuration file, specifying the nodes the package can run on, the monitored network segment, the switching mode and other parameters

4. /etc/cmcluster/mscppkg/control.sh

The package control script, specifying the various resources owned by the package

5. /etc/cmcluster/mscppkg/control.sh.log

The log recorded while the package is running

6. /etc/cmcluster/mscppkg/start_mscp.sh

The script used to start the application when the package starts

7. /etc/cmcluster/mscppkg/stop_mscp.sh

The script used to stop the application when the package stops

8. /etc/cmcluster/mscppkg/mscp_service.sh

The package service script, which performs process monitoring and other customized functions
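For orientation, a package configuration file such as mscppkg.conf typically contains parameters along the following lines. This is a hedged sketch based on the standard ServiceGuard package parameters, not an excerpt from a real file; the node and service names are hypothetical:

PACKAGE_NAME      mscppkg
NODE_NAME         node1              # primary node (hypothetical)
NODE_NAME         node2              # standby node (hypothetical)
FAILOVER_POLICY   CONFIGURED_NODE
FAILBACK_POLICY   MANUAL             # default: switch back manually
RUN_SCRIPT        /etc/cmcluster/mscppkg/control.sh
HALT_SCRIPT       /etc/cmcluster/mscppkg/control.sh
SUBNET            214.216.1.0        # monitored network segment
SERVICE_NAME      mscp_service       # hypothetical service name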

MC/SG running process-startup process

Start the cluster

1. Start the daemon cmcld on each node

2. All the nodes with cmcld running normally form a cluster

Startup package

1. Activate the volume group, bind the floating IP, and mount the file system

2. Execute the application launcher (start_mscp.sh)

3. Run the service (mscp_service.sh)
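In command terms, the package startup performed by the control script corresponds roughly to the sequence below. This is only a sketch: the volume group, mount point and IP address reuse illustrative values from elsewhere in this article, and the lv name is hypothetical:

# vgchange -a e vgdata                        ; activate the volume group in exclusive mode
# mount /dev/vgdata/lvdata /data              ; mount the file system (lvdata is a hypothetical lv name)
# cmmodnet -a -i 214.216.1.133 214.216.1.0    ; add the floating IP on the monitored subnet
# /etc/cmcluster/mscppkg/start_mscp.sh        ; run the application start script
(mscp_service.sh is then started and monitored as the package service)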

MC/SG running process-stop process

Stop the package

1. Stop the service (mscp_service.sh)

2. Execute the application stop script (stop_mscp.sh)

3. Unmount the file system, unbind the floating IP, and deactivate the volume group

Stop the cluster

1. Stop the daemon cmcld on each node

Cluster related commands

Start the cluster: cmruncl -v

Stop the cluster: cmhaltcl -v (add the -f parameter if a package is running)

Start the cluster on only one node: cmruncl -n <node name>

View the status of the whole cluster: cmviewcl -v

Run package related commands

Start a package: cmrunpkg -v -n <node name> <package name>

Stop a package: cmhaltpkg -v <package name>

Set the automatic switching attribute of a package: cmmodpkg -e <package name> (allows the package to switch automatically between nodes)

cmmodpkg -e -n <node name> <package name> (allows the package to start on this node)

Run package management-manually switch package instances

Switch the package scppkg from mscp1 to mscp2:

Step 1: execute on any host

cmhaltpkg -v scppkg

Step 2: execute on any host

cmrunpkg -v -n mscp2 scppkg

Step 3: execute on any host

cmmodpkg -e scppkg

Log check: syslog

/var/adm/syslog/syslog.log    system log

Log check: package log

/etc/cmcluster/mscppkg/control.sh.log    package run log

Emergency treatment plan

In an emergency, if the application needs to be started directly without using the dual-machine software:

1. Execute vgchange -c n vgdata to take the vg out of MC/SG control

2. Execute vgchange -a y vgdata to activate the vg

3. Execute ifconfig lan1:1 inet 129.168.120 netmask 255.255.255.0 to manually bind the floating IP to the network card

4. Execute the package startup script in /etc/cmcluster/pkg/ or directly execute the relevant commands to start the application and the database

To restore the dual-machine configuration afterwards:

1. Stop the database and the application

2. Execute ifconfig lan1:1 0.0.0.0 to remove the floating IP

3. Execute vgchange -a n vgdata to deactivate the vg

4. Execute cmruncl -v to start the cluster. On the first startup the package will fail to start, because the vg is not yet under MC control.

5. When the cluster is in the running state, execute vgchange -c y vgdata to put the vg back under MC control

6. Execute cmrunpkg -v pkg to start the package
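After step 6, it is worth confirming that the cluster has really taken the resources back; a brief sketch, where vgdata and pkg follow the names used in the steps above:

# cmviewcl -v         ; the cluster, the nodes and the package should all show as up / running
# vgdisplay vgdata    ; the volume group should be active again under cluster (MC) control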


Hardware configuration and use of HP cluster

13.5.1 Power-on and power-off steps

Power on: power on the peripherals (such as disk arrays) -> power on the hosts -> wait about 7 minutes until the system is ready.

Power off: log in as the root user (user: root, passwd: root) -> stop the Cluster by typing cmhaltcl -f -> type shutdown -hy 0 and wait 20 seconds -> power off the hosts -> power off the peripherals.

13.5.2 HP cluster configuration

The basic hardware configuration of an HP cluster is: two hp9000 minicomputers and one disk cabinet, which can use either disk mirroring or an AutoRaid array (in this example the disk array contains two hard disks, /dev/dsk/c0t5d0 and /dev/dsk/c1t5d0). The basic network configuration is three network cards per machine. The first and second network cards are configured with IP addresses, while the third network card is not. In addition, the network connection needs two hubs: the first network card, lan0, is directly connected between the two machines; the second network card, lan1, is connected to the first hub; the third network card is connected to the second hub; and the two hubs are directly connected to each other.
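By way of illustration, this per-host IP layout would be reflected in /etc/rc.config.d/netconf roughly as follows. This is a sketch using the standard HP-UX netconf variables; the heartbeat address is a hypothetical value, the data address reuses the netstat example shown below, and lan2 deliberately gets no entry because the backup card carries no IP:

INTERFACE_NAME[0]="lan0"
IP_ADDRESS[0]="168.1.1.101"      # heartbeat address (hypothetical value)
SUBNET_MASK[0]="255.255.255.0"
INTERFACE_NAME[1]="lan1"
IP_ADDRESS[1]="214.216.1.134"    # data address (as in the netstat example below)
SUBNET_MASK[1]="255.255.255.0"
# lan2 is intentionally left unconfigured; it only serves as the backup card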

1. Check the hardware configuration of both machines:

Use the lanscan and netstat -ni commands to view the physical location, IP address and subnet of each network card. For example:

# lanscan

Hardware Station        Crd Hdw   Net-Interface  NM  MAC    HP-DLPI DLPI
Path     Address         In# State NamePPA        ID  Type   Support Mjr#
0/0/0    0x001083FF0BF7  0   UP    lan0 snap0     1   ETHER  Yes     19
5-0-0    0x001083FBA86D  1   UP    lan1 snap1     2   ETHER  Yes     19
12-0-0   0x001083FB68E9  2   UP    lan2 snap2     3   ETHER  Yes     19

Hardware Path is the hardware path of each network card, and lan0, lan1 and lan2 are the interface names of the three cards. By looking up the path number printed next to each network card slot on the back of the HP server, you can determine which physical card corresponds to lan0, lan1 and lan2, and you can also read each card's link-layer address (Station Address).


Then type netstat -ni; you will see output like the following:

# netstat -ni

Name  Mtu  Network      Address        Ipkts   Opkts
lan2* 1500 none         none           0       0        (backup)
lan1  1500 214.216.1.0  214.216.1.134  155322  19407    (data line)
lan0  1500 168.1.0.0    168.1.101      63392   36547    (heartbeat line)
lo0   4136 127.0.0.0    127.0.0.1      19682   19682

2. Check the software configuration of both machines:

The following software is required on the HP9000 server (use the swlist command to view it):

HPUXENG32(64)RT B.11.0          HP-UX operating system

HPUXSCh42(64)RT B.11.0          HP-UX operating system, Simplified Chinese environment

UXCoreMedia-S B.11.0            HP-UX Simplified Chinese media tool

B3935B(D)A A.11.08              MC/ServiceGuard 11.08 cluster (dual-machine) software

B3919EA_B9U B.11.00             Special Edition HP-UX Unlimited-User Lic

B2491BA B.11.00                 MirrorDisk/UX (only needed if disks are mirrored)

HP C/ANSI C Developer's Bundle for HP-UX 11.00 (S800)    cc compiler

If any of the above software is missing, install it. You can check whether it is present with the swlist | more command.

3. Other preparations

njzx11 and njzx22 should be able to ping each other's addresses on the 168.1.7 network segment and the 214.216 network segment.

Check whether the machine names njzx11 and njzx22 and the corresponding IP addresses of both machines are configured in the /etc/hosts file. The entries should look like this:

<IP of njzx11>   njzx11
<IP of njzx22>   njzx22
214.216.1.133    scp

Chapter 13 introduction to HP Cluster

At this point, pinging each machine by name should succeed on both machines.

Check whether the .rhosts file in the / directory contains the machine names of both machines, njzx11 and njzx22. The configuration should be as follows:

njzx11
njzx22

At this point, the rcp and rlogin commands should work between the two machines.

4. Configure the cluster (with njzx11 as the primary node and njzx22 as the standby node)

Operate on njzx11:

# cd /dev
# mkdir vgsybase                              ; create a vgsybase directory under /dev
# ll /dev/*/group
crw-r--r--   1 root  sys   64 0x000000 Nov 29 19:26 /dev/vg00/group
crw-rw-rw-   1 root  sys   64 0x020000 Dec 21 10:56 /dev/vgsybase/group

Find an unused minor number, such as 0x010000, to serve as the unique identifier of the vg you are creating.

# mknod /dev/vgsybase/group c 64 0x010000     ; create the group device file
# pvcreate -f /dev/rdsk/c0t5d0                ; create a pv on /dev/dsk/c0t5d0
# pvcreate -f /dev/rdsk/c1t5d0                ; create a pv on /dev/dsk/c1t5d0
# vgcreate /dev/vgsybase /dev/dsk/c0t5d0 /dev/dsk/c1t5d0    ; create the vg

With disk mirroring:
# lvcreate -L 100 -n sybdev -m 1 -s y /dev/vgsybase         ; create the lv, named sybdev

Without disk mirroring:
# lvcreate -L 100 -n sybdev /dev/vgsybase                   ; create the lv, named sybdev

# vgchange -a n vgsybase                         ; deactivate vgsybase
# vgexport -p -s -m /tmp/mapfile /dev/vgsybase   ; save the configuration of vgsybase in mapfile
# rcp /tmp/mapfile njzx22:/tmp                   ; copy the mapfile to njzx22


Operate on njzx22:

# cd /dev
# mkdir vgsybase
# mknod /dev/vgsybase/group c 64 0x010000
# vgimport -s -m /tmp/mapfile /dev/vgsybase
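To verify the import on njzx22 (an optional check, not part of the original procedure), the imported volume group can be activated briefly and inspected:

# vgchange -a y vgsybase    ; temporarily activate the imported vg
# vgdisplay -v vgsybase     ; both physical volumes and the sybdev lv should be visible
# vgchange -a n vgsybase    ; deactivate it again before configuring the cluster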

Operate on njzx11:

# cd /etc/cmcluster
# cmquerycl -n njzx11 -n njzx22 -C cmzxin.ascii    ; generate the default cluster configuration file cmzxin.ascii

Edit the cmzxin.ascii file: set the cluster name to zxcluster. The parameter MAX_CONFIGURED_PACKAGES defaults to 0 and should equal the number of applications (packages); here it is changed to 3.

# cmcheckconf -C /etc/cmcluster/cmzxin.ascii    ; check the cluster configuration file
# cmapplyconf -C /etc/cmcluster/cmzxin.ascii    ; apply the cluster configuration and distribute the generated binary configuration file to both machines
# cmruncl        ; start the cluster
# cmviewcl -v    ; observe the status of the cluster
# cmhaltcl -f    ; stop the cluster

# cd /etc/cmcluster/zxin10

Edit the zxin10.conf file:

FAILBACK_POLICY AUTOMATIC
NODE_NAME njzx11
NODE_NAME njzx22
SUBNET 214.216.1.0

Edit the zxin.cntl file; the LV and FS configuration depends on the application:

LV[0]=/dev/vgsybase/sybdev6; FS[0]=/data; FS_MOUNT_OPT[0]="-o rw"
IP[0]=214.216.1.133
SUBNET[0]=214.216.1.0


Here LV[0] corresponds to the previously created logical volume (sybdev6 in this example) and can be modified according to the actual situation. IP[0] is the virtual (floating) IP address, and SUBNET[0] is the corresponding subnet.

# cp zxin10.sh.test zxin10.sh    ; zxin10.sh.test is the test version of the cluster control script; the production version is zxin10.sh.run

# rcp * njzx22:/etc/cmcluster/zxin10
# cmcheckconf -C /etc/cmcluster/cmzxin.ascii -P zxin10.conf
# cmapplyconf -C /etc/cmcluster/cmzxin.ascii -P zxin10.conf
# cmruncl        ; start the cluster
# cmviewcl -v    ; observe the status of the cluster

To test the cluster with the zxin10.sh.test control script, kill the testcluster process on njzx11. If after about a minute testcluster comes up on njzx22, the cluster switchover has succeeded.
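A minimal sketch of that switchover test (the process name testcluster comes from the text above; <pid> is whatever ps reports):

On njzx11:
# ps -ef | grep testcluster    ; find the monitored test process
# kill -9 <pid>                ; kill it to simulate an application failure

On njzx22:
# cmviewcl -v                  ; after about a minute, zxin10pkg should show as running here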

13.5.3 Operation and maintenance

1. Start Cluster

Log in as the root user -> type cmruncl -> wait 10 seconds; the Cluster is ready

2. Close Cluster

Log in as the root user -> type cmhaltcl -f -> wait 10 seconds; the Cluster shuts down

3. Check the running status of Cluster

Log in as the root user -> type cmviewcl -v

Whether the Cluster is healthy mainly depends on the STATE of each NODE: up means the node is running normally, and down means the node is not in the Cluster. Whether the application is running mainly depends on the current STATE of zxin10pkg: running means the program is running normally, and halting means the program is not running.
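For reference, the output of cmviewcl -v has roughly the following form. This is an illustrative sketch, not output captured from this system, and column names may differ slightly between ServiceGuard versions:

CLUSTER        STATUS
zxcluster      up

  NODE         STATUS       STATE
  njzx11       up           running
  njzx22       up           running

    PACKAGE    STATUS       STATE        PKG_SWITCH   NODE
    zxin10pkg  up           running      enabled      njzx22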

4. Switching of application zxin10pkg

In the current Cluster configuration, njzx22 is the host and njzx11 is the standby.

njzx11 -> njzx22:

On njzx11, type: cmmodpkg -e -n njzx22 -n njzx11 -v zxin10pkg

njzx22 -> njzx11:


On njzx22, type: su - zxin10 -c superstop

5. System maintenance is carried out without affecting the normal operation of the program. (take njzx11 as an example)

Check the running status of Cluster to determine the host on which zxin10pkg is running:

1) If zxin10pkg is running on njzx11, first switch zxin10pkg to njzx22. The steps are: type cmmodpkg -e -n njzx22 -v zxin10pkg and wait 20 seconds; on the njzx11 terminal, type cmhaltnode njzx11, then type shutdown -hy 0, wait 20 seconds, and power off the system.

2) If zxin10pkg is running on njzx22, the steps are: on the njzx11 terminal, type cmhaltnode njzx11, then type shutdown -hy 0, wait 20 seconds, and power off the system.

Update the version without affecting the normal operation of the program:

1) Check the running status of the Cluster and determine which host zxin10pkg is running on. On the other host, perform the following steps: copy the version source files into the /home/zxin10/src directory; in the /home/zxin10 directory, type make Install; package the result with tar cvf zxin10.tar *; copy the package to the other host (assuming njzx22) with rcp zxin10.tar njzx22:/home/zxin10.

2) On the host on which zxin10pkg is running, first switch zxin10pkg to the other host, then run tar xvf zxin10.tar on the local machine. This completes the version update on both hosts.
