Root.sh Fails on the First Node for 11gR2 Grid Infrastructure Installation

2025-01-17 Update From: SLTechnology News&Howtos

Running root.sh on the second node produced the errors below, and following the official fix did not help. Because the test environment used a virtual machine to provide the shared disk, it eventually turned out that the errors were caused by a problem with the shared disk itself; replacing the shared disk resolved the issue.
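In a VM-based test setup like this one, a quick sanity check is to confirm that both nodes really see the same backing device for the "shared" disk. A minimal sketch (the device path, node name, and hashing approach are illustrative assumptions, not from the note):

```shell
#!/bin/sh
# Fingerprint a candidate shared disk by hashing its first megabyte.
# If the two nodes report different fingerprints, the "shared" disk is
# not actually the same backing device (a common VM misconfiguration).
disk_fingerprint() {
  dd if="$1" bs=1M count=1 2>/dev/null | md5sum | awk '{print $1}'
}

# On a real cluster you might compare node 1's view with node 2's, e.g.:
#   f1=$(disk_fingerprint /dev/sdb)    # hypothetical device path
#   f2=$(ssh node2 'dd if=/dev/sdb bs=1M count=1 2>/dev/null | md5sum' | awk '{print $1}')
#   [ "$f1" = "$f2" ] && echo "shared disk looks consistent" || echo "devices differ"
```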

Node2:/u01/app/11.2.0/grid # ./root.sh

Performing root user operation for Oracle 11g

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME= /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

Copying dbhome to /usr/local/bin...

Copying oraenv to /usr/local/bin...

Copying coraenv to /usr/local/bin...

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

Creating trace directory

User ignored Prerequisites during installation

Installing Trace File Analyzer

OLR initialization - successful

Adding Clusterware entries to inittab

CRS-2672: Attempting to start 'ora.mdnsd' on 'node2'

CRS-2676: Start of 'ora.mdnsd' on 'node2' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'node2'

CRS-2676: Start of 'ora.gpnpd' on 'node2' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node2'

CRS-2672: Attempting to start 'ora.gipcd' on 'node2'

CRS-2676: Start of 'ora.cssdmonitor' on 'node2' succeeded

CRS-2676: Start of 'ora.gipcd' on 'node2' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'node2'

CRS-2672: Attempting to start 'ora.diskmon' on 'node2'

CRS-2676: Start of 'ora.diskmon' on 'node2' succeeded

CRS-2676: Start of 'ora.cssd' on 'node2' succeeded

Disk Group OCR_VOTE creation failed with the following message:

ORA-15018: diskgroup cannot be created

ORA-15017: diskgroup "OCR_VOTE" cannot be mounted

ORA-15003: diskgroup "OCR_VOTE" already mounted in another lock name space

Configuration of ASM... Failed

See asmca logs at /u01/app/grid/cfgtoollogs/asmca for details

Did not succssfully configure and start ASM at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 6912.

/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed

The following information is an official solution:

Root.sh Fails on the First Node for 11gR2 Grid Infrastructure Installation (document ID 1191783.1)

APPLIES TO:

Oracle Database-Enterprise Edition-Version 11.2.0.1 and later

Information in this document applies to any platform.

SYMPTOMS

Multiple nodes cluster, installing 11gR2 Grid Infrastructure for the first time, root.sh fails on the first node.

$GRID_HOME/cfgtoollogs/crsconfig/rootcrs_<node>.log shows:

2010-07-24 23:29:36: Configuring ASM via ASMCA

2010-07-24 23:29:36: Executing as oracle: /opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTING_DISK2,ORCL:VOTING_DISK3 -redundancy NORMAL -configureLocalASM

2010-07-24 23:29:36: Running as user oracle: /opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTING_DISK2,ORCL:VOTING_DISK3 -redundancy NORMAL -configureLocalASM

2010-07-24 23:29:36: Invoking "/opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTE_DISK2,ORCL:VOTE_DISK3 -redundancy NORMAL -configureLocalASM" as user "oracle"

2010-07-24 23:29:53: Configuration failed, see logfile for details

$ORACLE_BASE/cfgtoollogs/asmca/asmca-.log shows error:

ORA-15018 diskgroup cannot be created

ORA-15017 diskgroup OCR_VOTING_DG cannot be mounted

ORA-15003 diskgroup OCR_VOTING_DG already mounted in another lock name space

This is a new installation; the disks used by ASM are not shared with any other cluster system.

CHANGES

New installation.

CAUSE

The problem is caused by running root.sh simultaneously on the first node and the remaining node(s), rather than completing root.sh on the first node before running it on the remaining node(s).

On node 2, $GRID_HOME/cfgtoollogs/crsconfig/rootcrs_<node>.log has approximately the same timestamps:

2010-07-24 23:29:39: Configuring ASM via ASMCA

2010-07-24 23:29:39: Executing as oracle: /opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTING_DISK2,ORCL:VOTING_DISK3 -redundancy NORMAL -configureLocalASM

2010-07-24 23:29:39: Running as user oracle: /opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTING_DISK2,ORCL:VOTING_DISK3 -redundancy NORMAL -configureLocalASM

2010-07-24 23:29:39: Invoking "/opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTE_DISK2,ORCL:VOTE_DISK3 -redundancy NORMAL -configureLocalASM" as user "oracle"

2010-07-24 23:29:55: Configuration failed, see logfile for details

The content is nearly identical; the only difference is that it started 3 seconds later than on the first node. This indicates root.sh was running simultaneously on both nodes.

The root.sh on the 2nd node also created a +ASM1 instance (since it, too, behaved as if it were the first node to run root.sh) and mounted the same diskgroup, which led to the +ASM1 on node 1 reporting:

ORA-15003 diskgroup OCR_VOTING_DG already mounted in another lock name space

SOLUTION

1. Deconfigure the Grid Infrastructure without removing the binaries; refer to Document 942166.1, How to Proceed from Failed 11gR2 Grid Infrastructure (CRS) Installation. For the two-node case:

As root, run "$GRID_HOME/crs/install/rootcrs.pl -deconfig -force -verbose" on node 1.

As root, run "$GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode" on node 2.

2. Rerun root.sh on the first node, and only proceed with the remaining node(s) after root.sh completes on the first node.
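The key point in step 2 is serialization. One way to enforce it (a hypothetical marker-file approach, not part of the official note) is to have node 2 wait for a completion marker that node 1 writes to shared storage after its root.sh finishes:

```shell
#!/bin/sh
# Block until node 1's completion marker appears, or give up after a timeout.
# Returns 0 once the marker file exists, 1 on timeout.
wait_for_node1() {
  marker="$1"
  timeout_s="$2"
  waited=0
  while [ ! -f "$marker" ]; do
    [ "$waited" -ge "$timeout_s" ] && return 1
    sleep 1
    waited=$((waited + 1))
  done
  return 0
}

# Hypothetical usage (paths as in this article; marker location is an assumption):
#   node1# /u01/app/11.2.0/grid/root.sh && touch /shared/root_sh_node1.done
#   node2# wait_for_node1 /shared/root_sh_node1.done 900 && /u01/app/11.2.0/grid/root.sh
```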

REFERENCES

NOTE:1050908.1 - Troubleshoot Grid Infrastructure Startup Issues

NOTE:942166.1 - How to Proceed from Failed 11gR2 Grid Infrastructure (CRS) Installation
