Example Analysis of re-adding Node in Oracle 11g RAC

2025-03-18 Update From: SLTechnology News & Howtos > Database

Shulou (Shulou.com) 05/31 report --

This article walks through re-adding a node to an Oracle 11g RAC cluster with a concrete example. The steps are concise and easy to follow, and I hope you will take something away from the detailed walkthrough.

Environment:

SUSE Linux Enterprise Server 11 SP4

Oracle 11.2.0.4.180116 RAC

During the Oracle 11g RAC installation, the operating system on node 1 had to be reinstalled because of a hardware problem on the host. At that point the clusterware had been fully installed, but no database had been created yet.

The detailed operation procedure is as follows:

grid@XXXXXrac2:~> olsnodes -s -t
XXXXXrac1        Inactive        Unpinned
XXXXXrac2        Active          Unpinned

grid@XXXXXrac2:~> crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       XXXXXrac2
ora.DATA.dg
               ONLINE  ONLINE       XXXXXrac2
ora.asm
               ONLINE  ONLINE       XXXXXrac2                Started
ora.gsd
               OFFLINE OFFLINE      XXXXXrac2
ora.net1.network
               ONLINE  ONLINE       XXXXXrac2
ora.ons
               ONLINE  ONLINE       XXXXXrac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       XXXXXrac2
ora.cvu
      1        ONLINE  ONLINE       XXXXXrac2
ora.XXXXXrac1.vip
      1        ONLINE  INTERMEDIATE XXXXXrac2                FAILED OVER
ora.XXXXXrac2.vip
      1        ONLINE  ONLINE       XXXXXrac2
ora.oc4j
      1        ONLINE  ONLINE       XXXXXrac2
ora.scan1.vip
      1        ONLINE  ONLINE       XXXXXrac2

-- Delete the node:

/oracle/xxxx/grid/bin/crsctl delete node -n XXXXXrac1 (run as root on node 2)

XXXXXrac2:~ # /oracle/xxxx/grid/bin/crsctl delete node -n XXXXXrac1
CRS-4661: Node XXXXXrac1 successfully deleted.
XXXXXrac2:~ #

grid@XXXXXrac2:~> olsnodes -s -t
XXXXXrac2        Active          Unpinned

-- Clear the node's VIP information

XXXXXrac2:~ # /oracle/xxxx/grid/bin/srvctl stop vip -i XXXXXrac1 -f
XXXXXrac2:~ # /oracle/xxxx/grid/bin/srvctl remove vip -i XXXXXrac1 -f
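Taken together, the removal steps above can be wrapped in a small helper. This is only a sketch, not an official tool: the grid home path and node name are the ones used in this article, and the DRY_RUN guard (on by default) just records the commands so the sequence can be reviewed before running it for real as root.

```shell
#!/bin/sh
# Sketch of the node-removal sequence from this article (not an official tool).
# GRID_HOME and NODE are the values used in this environment; adjust as needed.
GRID_HOME=/oracle/xxxx/grid
NODE=XXXXXrac1
DRY_RUN=${DRY_RUN:-1}
PLAN=""

# Record each command; only execute it when DRY_RUN is turned off.
run() { PLAN="$PLAN $* ;"; [ "$DRY_RUN" = 1 ] || "$@"; }

# 1. Remove the node from the cluster registry (root, on a surviving node).
run "$GRID_HOME/bin/crsctl" delete node -n "$NODE"
# 2. Stop the failed-over VIP, then remove its configuration.
run "$GRID_HOME/bin/srvctl" stop vip -i "$NODE" -f
run "$GRID_HOME/bin/srvctl" remove vip -i "$NODE" -f
```

The stop-before-remove order mirrors the transcript above; with DRY_RUN=0 the same script executes the commands.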

-- Update the inventory information on node 2

grid@XXXXXrac2:~> /oracle/xxxx/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/oracle/xxxx/grid "CLUSTER_NODES=XXXXXrac2" CRS=TRUE -silent

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 32767 MB Passed

The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/app/oraInventory

'UpdateNodeList' was successful.

oracle@XXXXXrac2:~> /oracle/app/oracle/product/11.2.0/db_1/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/oracle/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES=XXXXXrac2"

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 32767 MB Passed

The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/app/oraInventory

'UpdateNodeList' was successful.
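Both inventory updates can be driven from one sketch. Again a hedged example: the home paths are this article's, each runInstaller call must be run as the owner of the respective home (grid, then oracle), and DRY_RUN=1 only collects the command lines instead of invoking OUI.

```shell
#!/bin/sh
# Sketch: refresh the OUI inventories after node removal (paths from this article).
GRID_HOME=/oracle/xxxx/grid
DB_HOME=/oracle/app/oracle/product/11.2.0/db_1
REMAINING=XXXXXrac2
DRY_RUN=${DRY_RUN:-1}
PLAN=""

run() { PLAN="$PLAN $* ;"; [ "$DRY_RUN" = 1 ] || "$@"; }

# Grid home, as the grid user; CRS=TRUE marks it as the clusterware home.
run "$GRID_HOME/oui/bin/runInstaller" -updateNodeList \
    ORACLE_HOME="$GRID_HOME" "CLUSTER_NODES=$REMAINING" CRS=TRUE -silent
# Database home, as the oracle user.
run "$DB_HOME/oui/bin/runInstaller" -updateNodeList \
    ORACLE_HOME="$DB_HOME" "CLUSTER_NODES=$REMAINING" -silent
```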

-- Verify the deletion

grid@XXXXXrac2:~> cluvfy stage -post nodedel -n XXXXXrac1 -verbose

Performing post-checks for node removal

Checking CRS integrity...

Clusterware version consistency passed

The Oracle Clusterware is healthy on node "XXXXXrac2"

CRS integrity check passed

Result:

Node removal check passed

Post-check for node removal was successful.

= Add the node back

Reinstall the operating system on the host.

Configure the basic environment for cluster installation

-- Configure ssh mutual trust

Configure ssh user equivalence for both the grid and oracle users:

mkdir ~/.ssh
chmod 755 ~/.ssh
/usr/bin/ssh-keygen -t rsa
/usr/bin/ssh-keygen -t dsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh XXXXXrac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh XXXXXrac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys XXXXXrac2:~/.ssh
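The key-exchange lines above can be collected into one script, run once as grid and once as oracle on the reinstalled node. This is a sketch under the same assumptions as this article (surviving node XXXXXrac2); with DRY_RUN=1, the default, it only records the commands instead of touching ~/.ssh or the remote host.

```shell
#!/bin/sh
# Sketch: rebuild ssh user equivalence from the reinstalled node 1.
REMOTE=XXXXXrac2
DRY_RUN=${DRY_RUN:-1}
PLAN=""

# Record the command; eval it only when DRY_RUN is turned off.
run() { PLAN="$PLAN $1 ;"; [ "$DRY_RUN" = 1 ] || eval "$1"; }

run "mkdir -p ~/.ssh && chmod 755 ~/.ssh"
# Generate both key types non-interactively (empty passphrase).
run "ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa"
run "ssh-keygen -t dsa -N '' -f ~/.ssh/id_dsa"
# Merge local and remote public keys into one authorized_keys file.
run "cat ~/.ssh/id_rsa.pub ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys"
run "ssh $REMOTE cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys"
run "ssh $REMOTE cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys"
# Push the combined file back so both nodes trust each other.
run "scp ~/.ssh/authorized_keys $REMOTE:~/.ssh/"
```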

-- Add grid

export IGNORE_PREADDNODE_CHECKS=Y   # optional

grid@XXXXXrac2:~> /oracle/xxxx/grid/oui/bin/addNode.sh "CLUSTER_NEW_NODES={XXXXXrac1}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={XXXXXrac1-vip}"

Performing pre-checks for node addition

Checking node reachability...

Node reachability check passed from node "XXXXXrac2"

Checking user equivalence...

User equivalence check passed for user "grid"

.

Saving inventory on nodes (Monday, March 19, 2018 4:13:26 PM CST)

. 100% Done.

Save inventory complete

WARNING: A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.

To register the new inventory please run the script at '/oracle/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'XXXXXrac1'.

If you do not register the inventory, you may not be able to update or patch the products you installed.

The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.

/oracle/app/oraInventory/orainstRoot.sh # On nodes XXXXXrac1
/oracle/xxxx/grid/root.sh # On nodes XXXXXrac1

To execute the configuration scripts:

1. Open a terminal window

2. Log in as "root"

3. Run the scripts in each cluster node

The Cluster Node Addition of /oracle/xxxx/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
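The installer's warning is why the script order matters: orainstRoot.sh registers the new central inventory first, and only then does root.sh configure the clusterware stack. A minimal sketch of the root-side steps on the new node, with the paths from this article and the same DRY_RUN guard:

```shell
#!/bin/sh
# Sketch: root-owned scripts to run on the newly added node, in order.
# Paths are this article's; DRY_RUN=1 (default) only records the commands.
DRY_RUN=${DRY_RUN:-1}
PLAN=""
run() { PLAN="$PLAN $* ;"; [ "$DRY_RUN" = 1 ] || "$@"; }

# 1. Register the freshly created central inventory.
run /oracle/app/oraInventory/orainstRoot.sh
# 2. Configure Grid Infrastructure and join the existing cluster.
run /oracle/xxxx/grid/root.sh
```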

-- Run the scripts on node 1

XXXXXrac1:~ # /oracle/app/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /oracle/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /oracle/app/oraInventory to oinstall.

The execution of the script is complete.

XXXXXrac1:~ # /oracle/xxxx/grid/root.sh

Performing root user operation for Oracle 11g

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME=  /oracle/xxxx/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /oracle/xxxx/grid/crs/install/crsconfig_params

Creating trace directory

User ignored Prerequisites during installation

Installing Trace File Analyzer

OLR initialization-successful

Adding Clusterware entries to /etc/inittab

CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node XXXXXrac2, number 2, and is terminating

An active cluster was found during exclusive startup, restarting to join the cluster

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

Preparing packages for installation...

cvuqdisk-1.0.9-1

Configure Oracle Grid Infrastructure for a Cluster... Succeeded

-- Check whether grid was added successfully

grid@XXXXXrac2:~> cluvfy stage -post nodeadd -n XXXXXrac1

Performing post-checks for node addition

Checking node reachability...

Node reachability check passed from node "XXXXXrac2"

Checking user equivalence...

User equivalence check passed for user "grid"

....

-- Add the rdbms home

oracle@XXXXXrac2:~> /oracle/app/oracle/product/11.2.0/db_1/oui/bin/addNode.sh "CLUSTER_NEW_NODES={XXXXXrac1}"

Performing pre-checks for node addition

Checking node reachability...

Node reachability check passed from node "XXXXXrac2"

Checking user equivalence...

User equivalence check passed for user "oracle"

WARNING:

Node "XXXXXrac1" already appears to be part of cluster

Pre-check for node addition was successful.

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 32767 MB Passed

Oracle Universal Installer, Version 11.2.0.4.0 Production

Copyright (C) 1999, 2013, Oracle. All rights reserved.

Performing tests to see whether nodes XXXXXrac1 are available

... 100% Done.

.

Cluster Node Addition Summary

Global Settings

Source: /oracle/app/oracle/product/11.2.0/db_1

New Nodes

Space Requirements

New Nodes

.

WARNING:

The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.

/oracle/app/oracle/product/11.2.0/db_1/root.sh # On nodes XXXXXrac1

To execute the configuration scripts:

1. Open a terminal window

2. Log in as "root"

3. Run the scripts in each cluster node

The Cluster Node Addition of /oracle/app/oracle/product/11.2.0/db_1 was successful.
Please check '/tmp/silentInstall.log' for more details.

Switch to root on the new node and execute the root script.
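Once the root script finishes on node 1, it is worth re-running the checks from the start of this article to confirm the cluster is symmetric again. A hedged sketch, under the same assumptions as the transcripts above (grid home path, and cluvfy on the grid user's PATH); DRY_RUN=1 only records the commands:

```shell
#!/bin/sh
# Sketch: post-addition sanity checks, as the grid user on either node.
GRID_HOME=/oracle/xxxx/grid
DRY_RUN=${DRY_RUN:-1}
PLAN=""
run() { PLAN="$PLAN $* ;"; [ "$DRY_RUN" = 1 ] || "$@"; }

run "$GRID_HOME/bin/olsnodes" -s -t       # both nodes should now be Active
run "$GRID_HOME/bin/crsctl" stat res -t   # ora.XXXXXrac1.vip should be back home
run cluvfy stage -post nodeadd -n XXXXXrac1
```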

That is the example analysis of re-adding a node in Oracle 11g RAC. Did you pick up any new knowledge or skills? If you want to learn more, you are welcome to follow the industry information channel.
