Purpose
This document provides readers with hands-on instructions on how to install a cluster, install RAC, and launch a cluster database on IBM AIX HACMP/ES (CRM) 4.4.x. For additional explanations or information on any of these steps, please refer to the reference manual listed at the end of the document.
1. Configure cluster hardware
1.1 minimum hardware list / system requirements
1.1.1 hardware
1.1.2 Software
1.1.3 Patch
1.2 install the disk array
1.3 install cluster interconnect and public network hardware
2. Create a cluster
2.1 HACMP/ES software installation
2.2 configure the cluster topology
2.3 synchronize the cluster topology
2.4 configure cluster resources
2.4.1 create a parallel shared volume group on one node
2.4.2 create a shared RAW logical volume
2.4.3 Import volume groups on other nodes
2.4.4 add a parallel cluster resource group
2.4.5 configuring parallel cluster resource groups
2.4.6 create a parallel file system (GPFS)
2.5 synchronize cluster resources
2.6 add nodes to the cluster
2.7 basic cluster management
3. Prepare to install RAC
3.1 Configuring shared disks and UNIX preinstallation tasks
3.1.1 configure shared disks
3.1.2 UNIX preinstallation tasks
3.2 Using OUI to install RAC
3.3 Create a RAC database using DBCA
4. Manage RAC instances
5. Reference manual
1. Configure cluster hardware
1.1 minimum hardware list / system requirements
For a two-node cluster, the following is the recommended minimum hardware list. Check the RAC/IBM AIX certification matrix for the latest hardware / software that currently supports RAC.
1.1.1 hardware
● IBM servers-two IBM servers that can run AIX 4.3.3 or 5L 64-bit
● For IBM or third-party storage products, cluster interconnect, public network, and switch options, refer to the operating system vendor and hardware vendor documentation.
● Memory, swap partition & CPU requirements:
● Each server must have at least 512 MB of memory and a swap partition of at least 1 GB or twice the physical memory, whichever is larger (a quick check is sketched at the end of this list).
To determine the amount of system memory: $ /usr/sbin/lsattr -E -l sys0 -a realmem
To determine the swap (paging) space: $ /usr/sbin/lsps -a
● A 64-bit processor is required.
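The two commands above can be combined into a quick check. The following is a minimal ksh sketch, not from the reference manuals, that assumes the lsattr/lsps output formats shown; adjust the field positions if your AIX level prints them differently:
# ksh sketch (illustrative only): check minimum memory and paging space
realmem_kb=$(/usr/sbin/lsattr -E -l sys0 -a realmem | awk '{print $2}')   # realmem is reported in KB
swap_mb=$(/usr/sbin/lsps -s | tail -1 | awk '{print $1}' | sed 's/MB//')  # total paging space in MB
realmem_mb=$((realmem_kb / 1024))
min_swap=$((realmem_mb * 2 > 1024 ? realmem_mb * 2 : 1024))               # 1 GB or 2 x RAM, whichever is larger
if [ "$realmem_mb" -ge 512 ] && [ "$swap_mb" -ge "$min_swap" ]; then
    echo "OK: ${realmem_mb} MB RAM, ${swap_mb} MB paging space"
else
    echo "Below minimum: ${realmem_mb} MB RAM, ${swap_mb} MB paging space (need >= 512 MB RAM and >= ${min_swap} MB paging)"
fi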
1.1.2 Software
● when using IBM AIX 4.3.3:
◆ HACMP/ES CRM 4.4.x
◆ Database files are supported only on raw logical volumes.
◆ Oracle Server Enterprise Edition 9.0.1 or 9.2.0
● when using IBM AIX 5.1 (5L):
For database files that reside on raw logical volumes:
◆ HACMP/ES CRM 4.4.x
For database files that reside on GPFS:
◆ HACMP/ES 4.4.x (HACMP/CRM not required)
◆ GPFS 1.5
◆ IBM Patch PTF12 and IBM Patch IY34917 or IBM Patch PTF13
◆ Oracle Server Enterprise Edition 9.2.0
◆ Oracle Server Enterprise Edition 9i for AIX 4.3.3 and for 5L come in separate CD packs, and both include RAC.
1.1.3 Patch
IBM cluster nodes may require patches in the following areas:
● IBM AIX operating system environment patches
● Storage firmware patches or microcode upgrades
Patch considerations:
● Ensure that all cluster nodes are at the same patch level.
● Do not install any firmware-related patches without the help of qualified personnel.
● Always obtain the latest patch information.
● Carefully read the README release notes for all patches.
● Check Note:211537.1 for the list of required operating system patches, and contact IBM for additional patch requirements.
Use the following command to view currently installed patches:
% /usr/sbin/instfix -i
Verify the installation of a specific patch:
% /usr/sbin/instfix -ivk
For example:
% /usr/sbin/instfix -ivk IY30927
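To check several APARs in one pass, a small loop over instfix works; the APAR list below is illustrative only — take the authoritative list from Note:211537.1:
# illustrative APAR list; replace with the fixes required for your AIX level
for apar in IY30927 IY34917
do
    /usr/sbin/instfix -ik $apar        # reports whether all filesets for the APAR are installed
done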
1.2 install the disk array
Before installing the IBM AIX operating system environment and HACMP software, initialize the installation of the disk box or array in conjunction with the HACMP for AIX installation manual and the server hardware installation manual.
1.3 install cluster interconnect and public network hardware
The cluster interconnect and the public network interface do not need to be configured before installing HACMP, but must be configured and available before configuring the cluster.
● If not already installed, install the host bus adapters (HBAs) in the cluster nodes first; refer to the relevant documentation for the installation procedure.
● Clusters with more than 2 nodes require 2 cluster transport connectors, which are Ethernet-based switches. After the other hardware is installed, you can install the cluster software and configure the interconnect network.
2. Create a cluster
2.1 IBM HACMP/ES software installation
The HACMP/ES 4.X.X installation and configuration process is accomplished through several main steps. The general process is as follows:
● Install the hardware
● Install the IBM AIX operating system software
● Install the latest IBM AIX maintenance level and required patches
● Install HACMP/ES 4.X.X on each node
● Install the patches required for HACMP/ES
● Configure the cluster topology
● Synchronize the cluster topology
● Configure cluster resources
● Synchronize cluster resources
Follow the HACMP for AIX 4.X.X installation guide for detailed instructions on the HACMP packages you need to install. Required/recommended packages include the following:
● cluster.adt.es.client.demos
● cluster.adt.es.client.include
● cluster.adt.es.server.demos
● cluster.clvm.rte HACMP for AIX Concurrent
● cluster.cspoc.cmds HACMP CSPOC commands
● cluster.cspoc.dsh HACMP CSPOC dsh and perl
● cluster.cspoc.rte HACMP CSPOC Runtime Commands
● cluster.es.client.lib ES Client Libraries
● cluster.es.client.rte ES Client Runtime
● cluster.es.client.utils ES Client Utilities
● cluster.es.clvm.rte ES for AIX Concurrent Access
● cluster.es.cspoc.cmds ES CSPOC Commands
● cluster.es.cspoc.dsh ES CSPOC dsh and perl
● cluster.es.cspoc.rte ES CSPOC Runtime Commands
● cluster.es.hc.rte ES HC Daemon
● cluster.es.server.diag ES Server Diags
● cluster.es.server.events ES Server Events
● cluster.es.server.rte ES Base Server Runtime
● cluster.es.server.utils ES Server Utilities
● cluster.hc.rte HACMP HC Daemon
● cluster.msg.En_US.cspoc HACMP CSPOC Messages.
● cluster.msg.en_US.cspoc HACMP CSPOC Messages.
● cluster.msg.en_US.es.client
● cluster.msg.en_US.es.server
● cluster.msg.en_US.haview HACMP HAView Messages.
● cluster.vsm.es ES VSM Configuration Utility
● cluster.man.en_US.client.data
● cluster.man.en_US.cspoc.data
● cluster.man.en_US.es.data ES Man Pages-class. English
● cluster.man.en_US.server.data
● rsct.basic.hacmp RS/6000 Cluster Technology
● rsct.basic.rte RS/6000 Cluster Technology
● rsct.basic.sp RS/6000 Cluster Technology
● rsct.clients.hacmp RS/6000 Cluster Technology
● rsct.clients.rte RS/6000 Cluster Technology
● rsct.clients.sp RS/6000 Cluster Technology
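To compare what is actually installed against the list above, lslpp can be queried for the HACMP/ES and RSCT filesets; this is a quick sketch and not a replacement for clverify (described next):
# list installed HACMP/ES and RSCT filesets; anything from the list above that is
# missing must still be installed from the HACMP media
lslpp -l "cluster.*" "rsct.*"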
You can use the clverify command to verify the installed HACMP software.
# /usr/sbin/cluster/diag/clverify
At the clverify > prompt, enter software
Then at the clverify.software > prompt, enter lpp
You should see messages similar to the following:
Checking AIX files for HACMP for AIX-specific modifications...
* /etc/inittab not configured for HACMP for AIX.
If IP Address Takeover is configured, or the Cluster Manager is to be started on boot, then /etc/inittab must contain the proper HACMP for AIX entries.
Command completed.
-Hit Return To Continue-
2.2 configure the cluster topology
Use the following command:
# smit hacmp
Note: the following is just an example of a generic HACMP configuration; see the HACMP installation and planning documentation for specific examples. This configuration does not include an IP address takeover network. The smit fast paths shown navigate directly to the smit hacmp configuration menus; each configuration screen can also be reached from smit hacmp. All configuration is done from one node and then synchronized to the other participating cluster nodes.
2.2.1 add a cluster definition:
Smit HACMP-> Cluster Configuration-> Cluster Topology-> Configure Cluster-> Add a Cluster Definition
Fast path:
# smit cm_config_cluster.add
Add a Cluster Definition
Type or select values in entry fields.Press Enter AFTER making all desired changes.
[Entry Fields]
* * NOTE: Cluster Manager MUST BE RESTARTED in order for changes to be acknowledged.**
* Cluster ID [0]
* Cluster Name [cluster1]
"Cluster ID" and "Cluster Name" can be arbitrary, "Cluster ID" must be a valid number between 0mur9999, and "Cluster Name" can be any string of up to 32 letters.
2.2.2 configure nodes:
Smit HACMP-> Cluster Configuration-> Cluster Topology-> Configure Nodes-> Add Cluster Nodes
Fast path:
# smit cm_config_nodes.add
Add Cluster Nodes
Type or select values in entry fields.Press Enter AFTER making all desired changes.
[Entry Fields]
* Node Names [node1 node2]
"Node Names" should be the hostname of the node, they must be alphanumeric and no more than 32 characters. All participating nodes must be entered here and separated by spaces.
2.2.3 add a network adapter
This example uses two Ethernet cards on each node and uses a RS232 serial port as a heartbeat connection:
Node name   IP address / device   IP label     Type
node1       192.168.0.1           node1srvc    Service
node1       192.168.1.1           node1stby    Standby
node1       /dev/tty0             node1_tty    Serial (heartbeat)
node2       192.168.0.2           node2srvc    Service
node2       192.168.1.2           node2stby    Standby
node2       /dev/tty0             node2_tty    Serial (heartbeat)
Configure the network adapter:
Smit HACMP-> Cluster Configuration-> Cluster Topology-> Configure Nodes-> Add an Adapter
Fast path:
# smit cm_config_adapters.add
Add an Adapter
Type or select values in entry fields.Press Enter AFTER making all desired changes.
[Entry Fields]
* Adapter IP Label [node1srvc]
* Network Type [ether] +
* Network Name [ipa] +
* Network Attribute public +
* Adapter Function service +
Adapter Identifier []
Adapter Hardware Address []
Node Name [node1] +
Note that "Adapter IP Label" must match the "/ etc/hosts" file, otherwise adapters cannot be mapped to valid IP addresses, clusters cannot be synchronized, "Network Name" is any name of the network configuration, and adapters in this configuration must have the same "Network Name", which is used to determine which adapters are used in the event of an adapter failure.
Add an Adapter
Type or select values in entry fields.Press Enter AFTER making all desired changes.
[Entry Fields]
* Adapter IP Label [node1stby]
* Network Type [ether] +
* Network Name [ipa] +
* Network Attribute public +
* Adapter Function standby +
Adapter Identifier []
Adapter Hardware Address []
Node Name [node1] +
Add an Adapter
Type or select values in entry fields.Press Enter AFTER making all desired changes.
[Entry Fields]
* Adapter IP Label [node2srvc]
* Network Type [ether] +
* Network Name [ipa] +
* Network Attribute public +
* Adapter Function service +
Adapter Identifier []
Adapter Hardware Address []
Node Name [node2] +
Add an Adapter
Type or select values in entry fields.Press Enter AFTER making all desired changes.
[Entry Fields]
* Adapter IP Label [node2stby]
* Network Type [ether] +
* Network Name [ipa] +
* Network Attribute public +
* Adapter Function standby +
Adapter Identifier []
Adapter Hardware Address []
Node Name [node2] +
The following is the serial port configuration:
Add an Adapter
Type or select values in entry fields.Press Enter AFTER making all desired changes.
[Entry Fields]
* Adapter IP Label [node1_tty]
* Network Type [rs232] +
* Network Name [serial] +
* Network Attribute serial +
* Adapter Function service +
Adapter Identifier [/dev/tty0]
Adapter Hardware Address []
Node Name [node1] +
Add an Adapter
Type or select values in entry fields.Press Enter AFTER making all desired changes.
[Entry Fields]
* Adapter IP Label [node2_tty]
* Network Type [rs232] +
* Network Name [serial] +
* Network Attribute serial +
* Adapter Function service +
Adapter Identifier [/dev/tty0]
Adapter Hardware Address []
Node Name [node2] +
Because this is a serial network rather than an Ethernet network, the "Network Name" is also different.
Configure the RS232 adapter using "smit mktty":
# smit mktty
Add a TTY
Type or select values in entry fields.Press Enter AFTER making all desired changes.
[TOP] [Entry Fields]
TTY type tty
TTY interface rs232
Description Asynchronous
Terminal Parent adapter sa0
* PORT number [0] +
Enable LOGIN disable +
BAUD rate [9600] +
PARITY [none] +
BITS per character [8] +
Number of STOP BITS [1] +
TIME before advancing to next port setting [0] + #
TERMINAL type [dumb]
FLOW CONTROL to be used [xon] +
[MORE...31]
Make sure "Enable LOGIN" is set to the default "disable", and "PORT number" is the "#" used in / dev/tt#, so if you define it as "0", it will be a "/ dev/tty0" device.
2.3 synchronize the cluster topology
After the cluster topology is configured, it needs to be synchronized: an integrity check of the topology is performed and the configuration is then pushed to each node in the cluster. Configuration synchronization requires root access between the nodes. There are several ways to set this up:
One way is to create a /.rhosts file for the root user on each node.
An example .rhosts file:
node1 root
node2 root
Ensure that the permissions for the /.rhosts file are 600:
# chmod 600 /.rhosts
Use remote commands such as rcp to test that the configuration is correct:
From node1:
# rcp /etc/group node2:/tmp
From node2:
# rcp /etc/group node1:/tmp
Synchronize the cluster topology:
Smit HACMP-> Cluster Configuration-> Cluster Topology-> Synchronize Cluster Topology
Fast path:
# smit configchk.dialog
Synchronize Cluster Topology
Type or select values in entry fields.Press Enter AFTER making all desired changes.
[Entry Fields]
Ignore Cluster Verification Errors? [No] +
* Emulate or Actual? [Actual] +
Note: when you emulate a topology synchronization, your changes are kept only in the default configuration file of the local node. To restore the original configuration after running the emulation, run the SMIT command "Restore System Default Configuration from Active Configuration". We recommend taking a snapshot before running the emulation, to guard against uncontrolled cluster events. If the Cluster Manager is active on this node, synchronizing the cluster topology causes the changes to take effect immediately after the synchronization completes successfully.
2.4 configure cluster resources
In a RAC configuration, only one resource group is required. This resource group is a parallel (concurrent) resource group used to share a volume group. Here is the process of adding a parallel resource group for a shared volume group:
First of all, you need a volume group that is shared between the nodes: the two instances of the same cluster database access the same external disks in parallel. This is true concurrent access, unlike sharing in a VSD environment. Because several instances access the same files and data at the same time, locking must be managed; these locks are at the CLVM layer and are managed by HACMP.
1) Check that the target disks are physically connected to the cluster nodes and can be seen by each node. Enter the lspv command on both nodes.
Note: the hdisk numbers can differ, depending on the disk configuration of each node; use the second output field of lspv (the PVID) to ensure that the same physical disk is being handled on both hosts. Although mismatched hdisk numbers may not be a problem, IBM recommends using a "ghost" disk to keep hdisk numbers matched between nodes. A quick way to compare the PVIDs is sketched below.
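This sketch assumes the root /.rhosts trust configured in section 2.3 is already in place (node names are the document's examples):
for n in node1 node2
do
    echo "=== $n"
    rsh $n lspv            # match disks by the PVID in column 2, not by hdisk number
done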
2.4.1 create a parallel volume group to be shared on one node
# smit vg
Select Add a Volume Group
Add a Volume Group
Type or select values in entry fields.Press Enter AFTER making all desired changes.
[Entry Fields]
VOLUME GROUP name [oracle_vg]
Physical partition SIZE in megabytes 32 +
* PHYSICAL VOLUME names [hdisk5] +
Activate volume group AUTOMATICALLY at system restart? No +
Volume Group MAJOR NUMBER [57] +
Create VG Concurrent Capable? Yes +
Auto-varyon in Concurrent Mode? No +
"PHYSICAL VOLUME names" must be a physical disk shared between nodes, and let's not automatically activate the volume group when the system starts, because it is activated by HACMP. And "Auto-varyon in Concurrent Mode?" Should be set to "no" because it is loaded by HACMP in parallel mode. You must select "major number" to make sure that the volume group has the same "major number" on all nodes (note: before you select this number, you must make sure it is free on all nodes). Check all defined major number, enter:
% ls? al / dev/*
Crw-rw---- 1 root system 57, 0 Aug 02 13:39 / dev/oracle_vg
The major number of the oracle_vg volume group is 57.
Ensure that 57 is available on all other nodes and is not used by other devices.
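Two quick ways to check this, using the example number 57 (lvlstmajor availability varies by AIX level):
# list any device already using major number 57 (no output means it is free on this node)
ls -al /dev/* | awk '$5 == "57,"'
# on many AIX levels, lvlstmajor prints the major numbers that are still available
/usr/sbin/lvlstmajor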
On this volume group, create all the logical volumes and file systems you need.
2.4.2 If you are not using GPFS, create the shared raw logical volumes:
mklv -y'db_name_cntrl1_110m'   -w'n' -s'n' -r'n' oracle_vg 4  hdisk5
mklv -y'db_name_cntrl2_110m'   -w'n' -s'n' -r'n' oracle_vg 4  hdisk5
mklv -y'db_name_system_400m'   -w'n' -s'n' -r'n' oracle_vg 13 hdisk5
mklv -y'db_name_users_120m'    -w'n' -s'n' -r'n' oracle_vg 4  hdisk5
mklv -y'db_name_drsys_90m'     -w'n' -s'n' -r'n' oracle_vg 3  hdisk5
mklv -y'db_name_tools_12m'     -w'n' -s'n' -r'n' oracle_vg 1  hdisk5
mklv -y'db_name_temp_100m'     -w'n' -s'n' -r'n' oracle_vg 4  hdisk5
mklv -y'db_name_undotbs1_312m' -w'n' -s'n' -r'n' oracle_vg 10 hdisk5
mklv -y'db_name_undotbs2_312m' -w'n' -s'n' -r'n' oracle_vg 10 hdisk5
mklv -y'db_name_log11_120m'    -w'n' -s'n' -r'n' oracle_vg 4  hdisk5
mklv -y'db_name_log12_120m'    -w'n' -s'n' -r'n' oracle_vg 4  hdisk5
mklv -y'db_name_log21_120m'    -w'n' -s'n' -r'n' oracle_vg 4  hdisk5
mklv -y'db_name_log22_120m'    -w'n' -s'n' -r'n' oracle_vg 4  hdisk5
mklv -y'db_name_indx_70m'      -w'n' -s'n' -r'n' oracle_vg 3  hdisk5
mklv -y'db_name_cwmlite_100m'  -w'n' -s'n' -r'n' oracle_vg 4  hdisk5
mklv -y'db_name_example_160m'  -w'n' -s'n' -r'n' oracle_vg 5  hdisk5
mklv -y'db_name_oemrepo_20m'   -w'n' -s'n' -r'n' oracle_vg 1  hdisk5
mklv -y'db_name_spfile_5m'     -w'n' -s'n' -r'n' oracle_vg 1  hdisk5
mklv -y'db_name_srvmconf_100m' -w'n' -s'n' -r'n' oracle_vg 4  hdisk5
Replace "db_name" with your actual database name, and when the volume group is created with 32 megabytes of partitions, then the seventh field is the number of partitions that make up the file, so for example: if "db_name_cntrl1_110m" needs 110 megabytes, then we need 4 partitions.
The naked partition is created in the "/ dev" directory and is used as a character device
Create 2 files for "mklv-yearly database nameplate cntrl1 hdisk5 110m'- wroomn'-sroomn'-rnsn' usupport_vg 4 file:
/ dev/db_name_cntrl1_110m
/ dev/rdb_name_cntrl1_110m
Change the ownership of the character devices so that the software owner, in this case the oracle user, owns them:
# chown oracle:dba /dev/rdb_name*
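As an aside, the partition counts used in the mklv commands above can be reproduced with a little ksh arithmetic; this sketch simply rounds the requested size up to whole 32 MB physical partitions:
size_mb=110                                     # e.g. db_name_cntrl1_110m
pp_size=32                                      # PP size chosen when oracle_vg was created
echo $(( (size_mb + pp_size - 1) / pp_size ))   # prints 4, the PP count used above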
2.4.3 Import the volume group on the other nodes
Use importvg to import the oracle_vg volume group on all other nodes.
On the first machine, enter:
% varyoffvg oracle_vg
To import the volume group on the other nodes, use smit vg and select Import a Volume Group.
Import a Volume Group
Type or select values in entry fields.Press Enter AFTER making all desired changes.
[Entry Fields]
VOLUME GROUP name [oracle_vg]
* PHYSICAL VOLUME name [hdisk5] +
Volume Group MAJOR NUMBER [57] + #
Make this VG Concurrent Capable? No +
Make default varyon of VG Concurrent? No +
The physical volume name (hdisk) may be different on each node; use lspv to check the PVID and make sure you select the disk whose PVID matches the disk on which the volume group was created on the first node. Ensure that the same major number is used, and that this number is not already in use on any node. "Make default varyon of VG Concurrent?" should be set to "no", and because the volume group was already created as concurrent-capable, "Make this VG Concurrent Capable?" can be left as "no". Alternatively, after varying off the volume group on the node where it was originally created, import it from the command line:
% importvg -V 57 -y oracle_vg hdisk#
% chvg -a n oracle_vg
% varyoffvg oracle_vg
After each node imports the volume group, be sure to change the owner of the character devices to the oracle user:
# chown oracle:dba /dev/rdb_name*
2.4.4 add a parallel cluster resource group
The shared resource in this example is oracle_vg; create a parallel (concurrent) resource group to manage this resource:
Smit HACMP-> Cluster Configuration-> Cluster Resources-> Define Resource Groups-> Add a Resource Group
Fast path:
# smit cm_add_grp
Add a Resource Group
Type or select values in entry fields.Press Enter AFTER making all desired changes.
[Entry Fields]
* Resource Group Name [shared_vg]
* Node Relationship concurrent +
* Participating Node Names [node1 node2] +
"Resource Group Name" is any character and is used to select when configuring a volume group, because we are configuring a shared resource, "Node Relationship" is "concurrent" means a group of nodes will share this resource in parallel, and "Participating Node Names" is a list of nodes that will share this resource separated by spaces.
2.4.5 configuring parallel cluster resource groups
When a resource group is added, it can be configured:
Smit HACMP-> Cluster Configuration-> Cluster Resources-> Change/Show Resources for a Resource Group
Fast path:
# smit cm_cfg_res.select
Configure Resources for a Resource Group
Type or select values in entry fields.Press Enter AFTER making all desired changes.
[Entry Fields]
Resource Group Name concurrent_group
Node Relationship concurrent
Participating Node Names opcbaix1 opcbaix2
Service IP label [] +
Filesystems [] +
Filesystems Consistency Check fsck +
Filesystems Recovery Method sequential +
Filesystems to Export [] +
Filesystems to NFS mount [] +
Volume Groups [] +
Concurrent Volume groups [oracle_vg] +
Raw Disk PVIDs [00041486eb90ebb7] +
AIX Connections Service [] +
AIX Fast Connect Services [] +
Application Servers [] +
Highly Available Communication Links [] +
Miscellaneous Data []
Inactive Takeover Activated false +
9333 Disk Fencing Activated false +
SSA Disk Fencing Activated false +
Filesystems mounted before IP configured false +
Note: the settings for "Resource Group Name", "Node Relationship" and "Participating Node Names" come from the data entered in the previous menu. "Concurrent Volume groups" is the volume group that was pre-created on the shared storage device, and "Raw Disk PVIDs" lists the physical volume IDs of the disks that make up the "Concurrent Volume groups". Note that a resource group can manage multiple concurrent volume groups, in which case the volume groups are separated by spaces, as are the entries in "Raw Disk PVIDs".
2.4.6 create a parallel file system (GPFS)
For AIX 5L (5.1), you can also put the files on GPFS (GPFS does not need raw logical volumes); in that case, create a GPFS file system large enough to hold the database files, control files, and log files.
2.5 synchronize cluster resources
After configuring the resource group, you need to synchronize the resources:
Smit HACMP-> Cluster Configuration-> Cluster Resources-> Synchronize Cluster Resources
Fast path:
# smit clsyncnode.dialog
Type or select values in entry fields.Press Enter AFTER making all desired changes.
[TOP] [Entry Fields]
Ignore Cluster Verification Errors? [No] +
Un/Configure Cluster Resources? [Yes] +
* Emulate or Actual? [Actual] +
2.6 add nodes to the cluster
When the cluster topology and resources are configured, nodes can join the cluster. It is important to start only one node at a time, unless you use the Cluster Single Point of Control (C-SPOC) feature.
Start the cluster service:
Smit HACMP-> Cluster Services-> Start Cluster Services
Fast path:
# smit clstart.dialog
Type or select values in entry fields.Press Enter AFTER making all desired changes.
[Entry Fields]
* Start now, on system restart or both now +
BROADCAST message at startup? False +
Startup Cluster Lock Services? False +
Startup Cluster Information Daemon? True +
Setting "Start now, on system restart or both" to "now" will start the HACMP process immediately, "restart" will update "/ etc/inittab", add an entry to start HACMP, and "both" will start the HACMP process immediately after updating "/ etc/inittab". "BROADCAST message at startup?" can be "true" or "false", and if set to "true", wall information is displayed when the node is joining the cluster. "Startup Cluster Lock Services?" For RAC configuration, it should be set to "false". Setting this parameter to "true" will prevent the cluster from working. If "clstat" is used to monitor the cluster, then "Startup Cluster Information Daemon?" Must be set to true.
Check "/ etc/hacmp.out" for startup information, and when you see a message similar to the following, you can safely start the cluster services of other nodes:
May 23 09:31:43 EVENT COMPLETED: node_up_complete node1
As nodes are added to the cluster, the other nodes report the successful join in their "/tmp/hacmp.out" file:
May 23 09:34:11 EVENT COMPLETED: node_up_complete node1
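A simple way to watch for this event without reading the whole log, using only the file and message shown above:
# follow the log and watch for the node_up_complete event
tail -f /tmp/hacmp.out | grep "EVENT COMPLETED: node_up_complete"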
2.7 basic cluster management
"/ tmp/hacmp.out" is the best place to view cluster information, "clstat" can also be used to verify the health of a cluster, and the "clstat" program can sometimes update the latest cluster information, but sometimes it doesn't work at all. And you must set "Startup Cluster Information Daemon?" when you start the cluster service. For "true", enter the following command to start "clstat":
# /usr/es/sbin/cluster/clstat
clstat - HACMP for AIX Cluster Status Monitor
---------------------------------------------
Cluster: cluster1 (0)                    Tue Jul 2 08:38:06 EDT 2002
State: UP                                Nodes: 2
SubState: STABLE

Node: node1                              State: UP
   Interface: node1 (0)                  Address: 192.168.0.1
                                         State: UP

Node: node2                              State: UP
   Interface: node2 (0)                  Address: 192.168.0.2
                                         State: UP
Another way to check the status of the cluster is to use snmpinfo to query the snmpd process:
# /usr/sbin/snmpinfo -m get -o /usr/es/sbin/cluster/hacmp.defs -v ClusterSubState.0
"32" should be returned:
ClusterSubState.0 = 32
If other values are returned, look for errors.
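The check can be scripted; this minimal ksh sketch assumes the "ClusterSubState.0 = 32" output format shown above:
sub=$(/usr/sbin/snmpinfo -m get -o /usr/es/sbin/cluster/hacmp.defs -v ClusterSubState.0 | awk '{print $NF}')
if [ "$sub" = "32" ]; then
    echo "cluster substate is STABLE (32)"
else
    echo "cluster substate is $sub - investigate"
fi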
You can also quickly view the HACMP subsystems and their status:
Smit HACMP-> Cluster Services-> Show Cluster Services
COMMAND STATUS
Command: OK stdout: yes stderr: no
Before command completion, additional instructions may appear below.

Subsystem         Group            PID      Status
 clstrmgrES       cluster          22000    active
 clinfoES         cluster          21394    active
 clsmuxpdES       cluster          14342    active
 cllockdES        lock                      inoperative
 clresmgrdES                       29720    active
Starting & Stopping Cluster Nodes
To join a node to the cluster, use:
Smit HACMP-> Cluster Services-> Start Cluster Services
To remove a node from the cluster, use:
Smit HACMP-> Cluster Services-> Stop Cluster Services
Fast path:
# smit clstop.dialog
Stop Cluster Services
Type or select values in entry fields.Press Enter AFTER making all desired changes.
[Entry Fields]
* Stop now, on system restart or both now +
BROADCAST cluster shutdown? True +
* Shutdown mode graceful + (graceful or graceful with takeover, forced)
"Shutdown mode" determines whether resources move between nodes when a shutdown occurs.
"forced" is a new feature of HACMP 4.4.1, and when stopped, the application will be controlled by the HACMP event.
"graceful" will stop anything, but cascading and rotating resources will not be switched.
When "graceful with takeover" stops, the resource will be switched.
HACMP/ES log file:
When the cluster starts and stops, all cluster event information is logged to "/tmp/hacmp.out".
3.0 prepare to install RAC
The RAC cluster installation process consists of the following main tasks.
3.1. Configure shared disks and UNIX preinstallation tasks.
3.2. Run OUI to install Oracle9i Enterprise Edition and Oracle9i RAC software.
3.3. Create and configure the database.
3.1 Configuring shared disks and UNIX preinstallation tasks
3.1.1 configure shared disks
If GPFS is not used, RAC requires that each instance can access a set of unformatted devices on a shared disk system. These shared disks are also known as raw devices. If your platform supports an Oracle-certified cluster file system, you can instead store the files needed by RAC directly on the cluster file system.
If you are using GPFS, you can also save the files needed by RAC directly to the cluster file system.
The Oracle instances in a RAC configuration write to raw devices to update the control files, the server parameter file, each data file, and each redo log file, all of which are shared by all instances in the cluster.
The Oracle instances in the RAC configuration write information to the following defined raw devices:
● Control files
● spfile.ora
● Each data file
● Every online redo log file
● Server Manager (SRVM) configuration information
It is therefore necessary to define a raw device for each of these file categories. The Oracle Database Configuration Assistant (DBCA) will create a seed database that expects the following configuration:
Raw volume / file                 Size                       Sample file name
SYSTEM tablespace                 400 MB                     db_name_raw_system_400m
USERS tablespace                  120 MB                     db_name_raw_users_120m
TEMP tablespace                   100 MB                     db_name_raw_temp_100m
UNDOTBS tablespace                312 MB per instance        db_name_raw_undotbsx_312m
CWMLITE tablespace                100 MB                     db_name_raw_cwmlite_100m
EXAMPLE tablespace                160 MB                     db_name_raw_example_160m
OEMREPO tablespace                20 MB                      db_name_raw_oemrepo_20m
INDX tablespace                   70 MB                      db_name_raw_indx_70m
TOOLS tablespace                  12 MB                      db_name_raw_tools_12m
DRSYS tablespace                  90 MB                      db_name_raw_drsys_90m
First control file                110 MB                     db_name_raw_controlfile1_110m
Second control file               110 MB                     db_name_raw_controlfile2_110m
Two online redo log files         120 MB x 2 per instance    db_name_thread_lognumber_120m
spfile.ora                        5 MB                       db_name_raw_spfile_5m
srvmconfig                        100 MB                     db_name_raw_srvmconf_100m
Note: automatic undo management requires one undo tablespace per instance, so you need at least the two described above. Using the naming convention in the table above, the raw volume name identifies the database name and the raw volume type, and the raw volume size is encoded the same way. The string db_name in the sample file names should be replaced with the actual database name, thread is the thread number of the instance, and lognumber is the log number within a thread. On the node from which you run Oracle Universal Installer, create an ASCII file that identifies the raw volume objects shown above; DBCA requires this file to install and create the database.
Name the raw volume objects using the following format:
Database_object=raw_device_file_path
Examples are as follows:
system1=/dev/rdb_name_system_400m
spfile1=/dev/rdb_name_spfile_5m
users1=/dev/rdb_name_users_120m
temp1=/dev/rdb_name_temp_100m
undotbs1=/dev/rdb_name_undotbs1_312m
undotbs2=/dev/rdb_name_undotbs2_312m
example1=/dev/rdb_name_example_160m
cwmlite1=/dev/rdb_name_cwmlite_100m
indx1=/dev/rdb_name_indx_70m
tools1=/dev/rdb_name_tools_12m
drsys1=/dev/rdb_name_drsys_90m
control1=/dev/rdb_name_cntrl1_110m
control2=/dev/rdb_name_cntrl2_110m
redo1_1=/dev/rdb_name_log11_120m
redo1_2=/dev/rdb_name_log12_120m
redo2_1=/dev/rdb_name_log21_120m
redo2_2=/dev/rdb_name_log22_120m
You must tell Oracle to use this file to locate the raw device volumes by setting the following environment variable, where filename is the ASCII file created above:
csh:
setenv DBCA_RAW_CONFIG filename
ksh, bash or sh:
DBCA_RAW_CONFIG=filename; export DBCA_RAW_CONFIG
3.1.2 UNIX pre-installation steps
Note: in addition, you can run the InstallPrep.sh script provided in Note:189256.1 to catch many UNIX environment problems.
After configuring the raw volumes, perform the following steps as the root user before installation:
● Add the oracle user.
● Make sure the OSDBA group is defined in the /etc/group file on all nodes of the cluster. The OSDBA group name, group number, and OSOPER group assigned during installation must be identical on all nodes of the cluster that are part of the Real Application Clusters database. The default OSDBA and OSOPER group name is dba. An oinstall group is also required as the primary group of the software owner. Typical entries look like this:
dba::101:oracle
oinstall::102:root,oracle
Here is an example command to create a dba group with the group number 101:
# mkgroup -'A' id='101' users='oracle' dba
● Create an oracle account on each node such that:
◆ It is a member of the OSDBA group (for example, dba).
◆ Its primary group is oinstall.
◆ It is used only to install and upgrade Oracle software.
◆ It has write permission on the remote directories.
Here is an example command to create an oracle user:
Smit-> Security & Users-> Users-> Add a User
Fast path:
# smit mkuser
Add a User
Type or select values in entry fields.Press Enter AFTER making all desired changes.
[Entry Fields]
* User NAME [oracle]
User ID [101] #
ADMINISTRATIVE USER? False +
Primary GROUP [oinstall] +
Group SET [] +
ADMINISTRATIVE GROUPS [] +
ROLES [] +
Another user can SU TO USER? True +
SU GROUPS [ALL] +
HOME directory [/ home/oracle]
Initial PROGRAM [/ bin/ksh]
User INFORMATION []
EXPIRATION date (MMDDhhmmyy) [0]
Is this user ACCOUNT LOCKED? +
[MORE...36]
Note that the primary group is oinstall, not "dba"; the use of oinstall is optional, but recommended.
● Create a mount point directory on each node to serve as the top level of the Oracle software directory structure, such that:
◆ The mount point name on each node is identical to that on the first node.
◆ The oracle account has read, write and execute permissions on this mount point.
● Establish user equivalence from the node that runs Oracle Universal Installer by adding entries for all nodes to the .rhosts file of the oracle account, or to /etc/hosts.equiv.
● Test the user equivalence as the oracle account (a short test loop is sketched after this list).
● If you are prompted for a password, the oracle account has different passwords on the nodes or equivalence is not set up. You must correct this, or Oracle Universal Installer cannot use the rcp command to copy files to the remote nodes.
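A minimal test loop for the user equivalence described above, run as the oracle user (node names are the examples used in this document); any password prompt means equivalence is not yet set up:
for n in node1 node2
do
    rsh $n hostname                     # should print the remote hostname with no password prompt
    rcp /etc/group $n:/tmp && echo "rcp to $n OK"
done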
Establish the system environment variables:
● Put a local bin directory, such as /usr/local/bin or /opt/bin, in the user's path, with execute permission.
● Set the DISPLAY variable to point to the IP address or name, X server, and screen of the system from which you are running OUI.
● Set a temporary directory TMPDIR with at least 20 MB of free space, to which OUI has write permission.
Establish the Oracle environment variable:
ORACLE_BASE    e.g. /u01/app/oracle
ORACLE_HOME    e.g. /u01/app/oracle/product/901
ORACLE_TERM    xterm
NLS_LANG       e.g. AMERICAN_AMERICA.UTF8
ORA_NLS33      $ORACLE_HOME/ocommon/nls/admin/data
PATH           should contain $ORACLE_HOME/bin
CLASSPATH      $ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
(A sample .profile fragment is sketched at the end of this list.)
● Create the /var/opt/oracle directory and set its owner to the oracle user.
● Verify the existence of the /opt/SUNWcluster/bin/lkmgr file, which is used by OUI to determine that this installation is being performed on a cluster.
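Pulling the variable table above together, a sample fragment for the oracle user's .profile might look like the following; the paths are the example values used in this document and must be adjusted to your own layout:
# sample ~oracle/.profile fragment (illustrative paths)
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/901
export ORACLE_TERM=xterm
export NLS_LANG=AMERICAN_AMERICA.UTF8
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export PATH=$ORACLE_HOME/bin:/usr/local/bin:$PATH
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export TMPDIR=/tmp                      # must have at least 20 MB free and be writable by OUI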
Note: there is a downloadable verification script, InstallPrep.sh, to run before installing Oracle Real Application Clusters. It verifies that the system is configured correctly according to the installation manual, and its output reports any additional tasks that need to be completed before Oracle 9.x DataServer (RDBMS) can be installed successfully. The script performs the following validations:
● ORACLE_HOME directory verification
● UNIX user/umask verification
● UNIX group verification
● Memory/swap verification
● TMP space verification
● Real Application Clusters option verification
● UNIX kernel verification
$ ./InstallPrep.sh
You are currently logged on as oracle
Is oracle the unix user that will be installing Oracle Software? Y or n
Y
Enter the unix group that will be used during the installation
Default: dba
Dba
Enter Location where you will be installing Oracle
Default: / u01/app/oracle/product/oracle9i
/ u01/app/oracle/product/9.2.0.1
Your Operating System is AIX
Gathering information... Please wait
Checking unix user...
User test passed
Checking unix umask...
Umask test passed
Checking unix group...
Unix Group test passed
Checking Memory & Swap...
Memory test passed
/ tmp test passed
Checking for a cluster...
AIX Cluster test
Cluster has been detected
You have 2 cluster members configured and 2 are currently up
No cluster warnings detected
Processing kernel parameters... Please wait
Running Kernel Parameter Report...
Check the report for Kernel parameter verification
Completed.
/ tmp/Oracle_InstallPrep_Report has been generated
Please review this report and resolve all issues before attempting to install the Oracle Database Software
3.2 Using OUI to install RAC
Follow these steps to install the Oracle Enterprise Edition and Real Application Clusters software with Oracle Universal Installer. Oracle9i ships on multiple CDs; during the installation you will need to swap CDs, and OUI manages the CD changes.
To install the Oracle software, execute the following commands:
If you are installing from CD, log in as the root user and mount the first Oracle CD:
# mount -rv cdrfs /dev/cd0 /cdrom
From the CD mount point, or from the Disk1 location when installing from disk, execute the "rootpre.sh" script; see Oracle9i Installation Guide Release 2 (9.X.X.X.0) for UNIX Systems: AIX-Based Systems, Compaq Tru64 UNIX, HP 9000 Series HP-UX, Linux Intel and Sun SPARC Solaris for more information.
# ./rootpre.sh
Log in as oracle and run "runInstaller".
$ ./runInstaller
● In the OUI Welcome screen, click Next.
● You will be prompted for the Inventory Location (if this is the first time OUI has been run on this system); this is the base directory into which OUI installs its files. The Oracle inventory definition can be found in the /etc/oraInst.loc file. Click OK.
● Verify the UNIX group that controls the installation of the Oracle9i software. If you are instructed to run /tmp/orainstRoot.sh, the pre-installation steps were not completed successfully; typically the /var/opt/oracle directory does not exist or is not writable by oracle. Running /tmp/orainstRoot.sh corrects this, otherwise the Oracle inventory files and related data are forced into the ORACLE_HOME directory. Click Next.
● The File Locations window is displayed. Do not change the Source field; the Destination field defaults to the ORACLE_HOME environment variable. Click Next.
● Select the product to install; in this case, select Oracle9i Server, and then click Next.
● Choose the installation type and select Enterprise Edition. The selection on this screen refers to the software installation, not the database configuration; a later screen lets you customize the database configuration. Click Next.
● Choose the configuration type. In this example you select the advanced configuration, which customizes the database and configures the selected server products: select Customized, and click Next.
● Select the other nodes onto which you want to install the Oracle RDBMS software. You do not need to select the node that is running OUI. Click Next.
● Identify the raw partition to which the Oracle9i RAC configuration information will be written. A minimum size of 100 MB is recommended for this raw partition.
● An option to Upgrade or Migrate an existing database is displayed. Do not select it: the Oracle migration tool cannot upgrade a RAC database, and selecting it causes an error.
● A summary screen is displayed to confirm the RAC database software to be installed; click Install.
● When Install is selected, OUI installs the Oracle RAC software on the local node and then copies it to the other nodes selected earlier. This can take some time; during this phase OUI does not display messages indicating that components are being installed on the other nodes, so the OUI activity indicator may be the only sign that the process is still running.
3.3 Create a RAC database using DBCA
DBCA will create a database for you using an optimized structure, which means that DBCA creates your database files, including the default server parameter file, using standard file names and locations. The main steps DBCA performs are:
● Verify that you have correctly configured the shared disks for each tablespace (for non-cluster file system storage).
● Create the database.
● Configure Oracle Net Services.
● Start the database instances and listeners.
Oracle recommends that you use DBCA to create your database, because DBCA preconfigures the database to optimize the environment and to take advantage of Oracle9i features such as the server parameter file and automatic undo management. DBCA also lets you define arbitrary tablespaces, and even if you need data files different from those specified in the DBCA templates, you can run user-specified scripts.
DBCA and Oracle Net Configuration Assistant also accurately configure your RAC environment for various Oracle high availability features and cluster management tools.
● DBCA is launched as part of the installation process, but it can also be run manually by executing the dbca command from the $ORACLE_HOME/bin directory. On the RAC Welcome page, select the Oracle Cluster Database option and click Next.
● The Operations page is displayed; select the Create a Database option and click Next.
● The Node Selection page is displayed; select the nodes you want to configure as part of the RAC database and click Next. If a node is missing from the Node Selection page, run the $ORACLE_HOME/bin/lsnodes -v command to diagnose the cluster software and analyze its output. If the cluster software is not installed correctly, resolve the problem and then restart DBCA.
● The Database Templates page is displayed. Unlike New Database, the other templates include data files; select New Database and then click Next.
● The Show Details button provides information about the selected database template.
● DBCA now displays the Database Identification page; enter the Global Database Name and the Oracle System Identifier (SID). The typical format of a Global Database Name is name.domain, for example mydb.us.oracle.com. The SID uniquely identifies an instance; in a RAC environment the specified SID is combined with the instance number, so for MYDB, instances 1 and 2 become MYDB1 and MYDB2.
● The Database Options page is displayed; select the options you want to configure and click Next. Note: if you did not select New Database on the Database Templates page, you will not see this screen.
● The Additional Database Configurations button displays additional database features. Make sure both options are selected and click OK.
● Choose the connection options you want from the Database Connection Options page and click Next. Note: if you did not select New Database on the Database Templates page, you will not see this screen.
● DBCA now displays the Initialization Parameters page, which consists of several sets of option fields. Modify the memory settings as needed, then select the File Locations option, update the initialization parameter file name and location, and click Next.
● The Create persistent initialization parameter file option is selected by default. If you have a cluster file system, enter a file system path; otherwise enter a raw device name. Then click Next.
● The File Location Variables button displays variable information; click OK.
● The All Initialization Parameters button displays the Initialization Parameters dialog box, which shows the values of all initialization parameters and, via a Y/N selection box, whether each has been created and included in the spfile. Instance-specific parameters have an instance value in the Instance column. When you have finished with the entries on the All Initialization Parameters page, click Close. Note: some parameters cannot be changed through this screen. Click Next.
● DBCA then displays the Database Storage window, which lets you enter a file name for each tablespace in the database.
● The file names are displayed in the Datafiles folder; any name shown here can be changed by selecting the Tablespaces icon and then selecting the tablespace object from the expanded tree. You can use a configuration file specified by the environment variable DBCA_RAW_CONFIG (see 3.1.1). Complete the database storage information and click Next.
● The Creation Options page is displayed; make sure the Create Database option is selected and click Finish.
● The DBCA Summary window is displayed; review the information and click OK.
● After you click OK, the summary screen closes and DBCA begins creating the database with the specified values.
A new database now exists; you can access it through Oracle SQL*Plus or configure other applications to work with the Oracle RAC database.
4.0 Managing RAC instances
Oracle recommends that you use SRVCTL to manage your RAC database environment. SRVCTL manages configuration information that is used by several Oracle tools; for example, Oracle Enterprise Manager and the Intelligent Agent use the configuration information generated by SRVCTL to discover and monitor nodes in the cluster. Before using SRVCTL, make sure the Global Services Daemon (GSD) is running after you have configured the database. To use SRVCTL, you must already have created the configuration information for the database you want to manage, either with DBCA or with the SRVCTL add command.
If this is the first Oracle9i database created on the cluster, you must initialize the clusterwide SRVM configuration. First, create or edit the /var/opt/oracle/srvConfig.loc file and add a srvconfig_loc=path_name entry, where path_name is a small cluster-shared raw volume, for example:
$ vi /var/opt/oracle/srvConfig.loc
srvconfig_loc=/dev/rrac_srvconfig_100m
Then initialize the raw volume by executing the following command (note: it cannot be run while gsd is running; for versions prior to 9i Release 2, you need to kill the ./jre/1.1.8/bin/... process to stop gsd, while for 9i Release 2 you use the gsdctl stop command):
$ srvconfig -init
Before using the SRVCTL tool to create the configuration for the first time, start the Global Services Daemon (GSD) on all nodes so that SRVCTL can access your cluster information, then execute the srvctl add command so that Real Application Clusters knows which instances belong to the cluster. The syntax is as follows:
For Oracle RAC v9.0.1:
$ gsd
Successfully started the daemon on the local node.
$ srvctl add db -p db_name -o oracle_home
Then, enter the following command for each instance:
$ srvctl add instance -p db_name -i sid -n node
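For example, with the database, instance, and node names that appear in the output below, the registration commands would look like this (the ORACLE_HOME value is illustrative):
$ srvctl add db -p racdb1 -o $ORACLE_HOME
$ srvctl add instance -p racdb1 -i racinst1 -n racnode1
$ srvctl add instance -p racdb1 -i racinst2 -n racnode2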
To display detailed configuration information, run:
$ srvctl config
racdb1
racdb2
$ srvctl config -p racdb1
racnode1 racinst1
racnode2 racinst2
$ srvctl config -p racdb1 -n racnode1
racnode1 racinst1
Start and stop RAC as follows:
$ srvctl start -p racdb1
Instance successfully started on node: racnode2
Listeners successfully started on node: racnode2
Instance successfully started on node: racnode1
Listeners successfully started on node: racnode1
$ srvctl stop -p racdb2
Instance successfully stopped on node: racnode2
Instance successfully stopped on node: racnode1
Listener successfully stopped on node: racnode2
Listener successfully stopped on node: racnode1
$ srvctl stop -p racdb1 -i racinst2 -s inst
Instance successfully stopped on node: racnode2
$ srvctl stop -p racdb1 -s inst
PRKO-2035: Instance is already stopped on node: racnode2
Instance successfully stopped on node: racnode1
For Oracle RAC v9.2.0:
$ gsdctl start
Successfully started the daemon on the local node.
$ srvctl add database -d db_name -o oracle_home [-m domain_name] [-s spfile]
Then enter the command for each instance:
$ srvctl add instance -d db_name -i sid -n node
To display detailed configuration information, run:
$ srvctl config
racdb1
racdb2
$ srvctl config -p racdb1 -n racnode1
racnode1 racinst1 /u01/app/oracle/product/9.2.0.1
$ srvctl status database -d racdb1
Instance racinst1 is running on node racnode1
Instance racinst2 is running on node racnode2
Start and stop RAC as follows:
$ srvctl start database -d racdb2
$ srvctl stop database -d racdb2
$ srvctl stop instance -d racdb1 -i racinst2
$ srvctl start instance -d racdb1 -i racinst2
$ gsdctl stat
GSD is running on local node
$ gsdctl stop
For more information about srvctl and gsdctl, see the Oracle9i Real Application Clusters administration manual.
5.0 reference manual
Note:182037.1 - AIX: Quick Start Guide - 9.0.1 RDBMS Installation
Note:201019.1 - AIX: Quick Start Guide - 9.2.0 RDBMS Installation
Note:77346.1 - Overview of HACMP Classic and/or HACMP/ES
Note:137288.1-Database Creation in Oracle9i RAC
Note:183408.1-Raw Devices and Cluster Filesystems With Real Application Clusters
RAC/IBM AIX certification matrix
Oracle9i Real Application Clusters Installation and Configuration Release 1 (9.0.1)
Oracle9i Real Application Clusters Concepts
Oracle9i Real Application Clusters Administration
Oracle9i Real Application Clusters Deployment and Performance
Oracle9i Installation Guide for Compaq Tru64, Hewlett-Packard HPUX, IBM-AIX, Linux, and Sun Solaris-based systems.
Oracle9i Release Notes