
11g RAC cluster startup and shutdown, resource status checks, and configuration viewing: a summary.

2025-03-28 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/01 report:

In brief:

One: starting and stopping the cluster

1. Manually start the RAC cluster

[root@node1 bin]# ./crsctl start cluster -all

2. Check the status of the RAC cluster

[root@node1 bin]# ./crsctl stat res -t

3. Stop the RAC cluster

[root@node1 bin]# ./crsctl stop cluster -all
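The start/stop commands above can be wrapped for review before running them as root. A minimal dry-run sketch (the `rac_ctl` function name and `CRS_BIN` variable are my own; the Grid bin path is the one shown later in this article):

```shell
# Dry-run wrapper for cluster start/stop (illustrative sketch, not an
# Oracle-supplied tool).
CRS_BIN=${CRS_BIN:-/u01/app/11.2.0/grid/bin}  # Grid home bin directory

rac_ctl() {
    case "$1" in
        start|stop) ;;
        *) echo "usage: rac_ctl {start|stop}" >&2; return 1 ;;
    esac
    # Print the command instead of executing it; remove the echo (and run
    # as root) to act on the cluster for real.
    echo "$CRS_BIN/crsctl $1 cluster -all"
}

rac_ctl start
rac_ctl stop
```

Printing first and executing second is a deliberate choice here: `crsctl stop cluster -all` takes down every node's stack, so a dry run costs nothing and avoids surprises.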

-

Two: checking the status of the various cluster resources

1. Check the health of the cluster

[root@node1 bin]# ./crsctl check cluster

2. Check the status of the cluster's database instances

[root@node1 bin]# ./srvctl status database -d orcldb

3. Check the status of the node ASM instances

[root@node1 bin]# ./srvctl status asm

4. Check the status of the node applications

[root@node1 bin]# ./srvctl status nodeapps

5. Check the status of the node listeners

[root@node1 bin]# ./srvctl status listener

6. Check the status of the SCAN listener

[root@node1 bin]# ./srvctl status scan

7. Check clock synchronization across all cluster nodes (run as a non-root user)

[oracle@node1 ~]$ cluvfy comp clocksync -verbose
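The srvctl checks above can be driven in one pass. A sketch that just emits the command list (the `status_cmds` helper is illustrative, not an Oracle tool; pipe its output to `sh` from the Grid bin directory to actually run the checks):

```shell
# Emit the per-resource status checks in one pass (illustrative helper).
status_cmds() {
    db_unique_name=${1:-orcldb}   # database unique name from this article
    echo "./srvctl status database -d $db_unique_name"
    for obj in asm nodeapps listener scan; do
        echo "./srvctl status $obj"
    done
}

status_cmds orcldb
```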

-

Three: viewing the cluster's configuration information

1. View database configuration

[root@node1 bin]# ./srvctl config database -d orcldb

2. View node application configuration

[root@node1 bin]# ./srvctl config nodeapps

3. View ASM configuration

[root@node1 bin]# ./srvctl config asm

4. View listener configuration

[root@node1 bin]# ./srvctl config listener

5. View SCAN configuration

[root@node1 bin]# ./srvctl config scan

6. View the RAC OCR (registry) disk configuration

[root@node1 bin]# ./ocrcheck

7. View the RAC voting (quorum) disk configuration

[root@node1 bin]# ./crsctl query css votedisk

-

The detailed command output follows.

In 11g RAC R2 the cluster starts automatically at boot by default; manual startup and shutdown must be done as the root user.

-- the directory from which the commands are run

[root@node1 bin]# pwd
/u01/app/11.2.0/grid/bin

One: normal startup and shutdown of the 11g RAC cluster.

-- Manually start the RAC cluster

[root@node1 bin]# ./crsctl start cluster -all
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node2'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node1'
CRS-2676: Start of 'ora.cssdmonitor' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node2'
CRS-2672: Attempting to start 'ora.diskmon' on 'node2'
CRS-2676: Start of 'ora.cssdmonitor' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node1'
CRS-2672: Attempting to start 'ora.diskmon' on 'node1'
CRS-2676: Start of 'ora.diskmon' on 'node2' succeeded
CRS-2676: Start of 'ora.diskmon' on 'node1' succeeded
CRS-2676: Start of 'ora.cssd' on 'node2' succeeded
CRS-2676: Start of 'ora.cssd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'node2'
CRS-2672: Attempting to start 'ora.ctssd' on 'node1'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'node1'
CRS-2676: Start of 'ora.ctssd' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'node2'
CRS-2676: Start of 'ora.ctssd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'node1'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'node2'
CRS-2676: Start of 'ora.evmd' on 'node2' succeeded
CRS-2676: Start of 'ora.evmd' on 'node1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'node1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'node2'
CRS-2672: Attempting to start 'ora.asm' on 'node1'
CRS-2676: Start of 'ora.asm' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'node1'
CRS-2676: Start of 'ora.crsd' on 'node1' succeeded
CRS-2676: Start of 'ora.asm' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'node2'
CRS-2676: Start of 'ora.crsd' on 'node2' succeeded

-- View the status of the RAC cluster

[root@node1 bin]# ./crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.FLASH.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.GRIDDG.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.ORCLDB.lsnr
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.asm
               ONLINE  ONLINE       node1                    Started
               ONLINE  ONLINE       node2                    Started
ora.gsd
               OFFLINE OFFLINE      node1
               OFFLINE OFFLINE      node2
ora.net1.network
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.ons
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.registry.acfs
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node1
ora.cvu
      1        ONLINE  ONLINE       node1
ora.node1.vip
      1        ONLINE  ONLINE       node1
ora.node2.vip
      1        ONLINE  ONLINE       node2
ora.oc4j
      1        ONLINE  ONLINE       node1
ora.orcldb.db
      1        ONLINE  ONLINE       node1                    Open
      2        ONLINE  ONLINE       node2                    Open
ora.scan1.vip
      1        ONLINE  ONLINE       node1

-- Stop the RAC cluster (the first attempt below mistypes crsctl as crscrl, hence the shell error)

[root@node1 bin]# ./crscrl stop cluster -all
-bash: ./crscrl: No such file or directory
[root@node1 bin]# ./crsctl stop cluster -all
CRS-2673: Attempting to stop 'ora.crsd' on 'node1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'node1'
CRS-2673: Attempting to stop 'ora.cvu' on 'node1'
CRS-2673: Attempting to stop 'ora.ORCLDB.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.GRIDDG.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'node1'
CRS-2673: Attempting to stop 'ora.orcldb.db' on 'node1'
CRS-2677: Stop of 'ora.cvu' on 'node1' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'node1'
CRS-2677: Stop of 'ora.ORCLDB.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.node1.vip' on 'node1'
CRS-2677: Stop of 'ora.scan1.vip' on 'node1' succeeded
CRS-2677: Stop of 'ora.orcldb.db' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'node1'
CRS-2677: Stop of 'ora.node1.vip' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.crsd' on 'node2'
CRS-2677: Stop of 'ora.registry.acfs' on 'node1' succeeded
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node2'
CRS-2673: Attempting to stop 'ora.GRIDDG.dg' on 'node2'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'node2'
CRS-2673: Attempting to stop 'ora.orcldb.db' on 'node2'
CRS-2673: Attempting to stop 'ora.ORCLDB.lsnr' on 'node2'
CRS-2677: Stop of 'ora.FLASH.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.GRIDDG.dg' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2677: Stop of 'ora.ORCLDB.lsnr' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.node2.vip' on 'node2'
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2677: Stop of 'ora.node2.vip' on 'node2' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'node1' succeeded
CRS-2677: Stop of 'ora.GRIDDG.dg' on 'node2' succeeded
CRS-2677: Stop of 'ora.registry.acfs' on 'node2' succeeded
CRS-2677: Stop of 'ora.orcldb.db' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'node2'
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'node2'
CRS-2677: Stop of 'ora.DATA.dg' on 'node2' succeeded
CRS-2677: Stop of 'ora.FLASH.dg' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node2'
CRS-2677: Stop of 'ora.asm' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'node2'
CRS-2677: Stop of 'ora.ons' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node2'
CRS-2677: Stop of 'ora.net1.network' on 'node2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node2' has completed
CRS-2673: Attempting to stop 'ora.ons' on 'node1'
CRS-2677: Stop of 'ora.ons' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node1'
CRS-2677: Stop of 'ora.net1.network' on 'node1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node1' has completed
CRS-2677: Stop of 'ora.crsd' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'node2'
CRS-2673: Attempting to stop 'ora.evmd' on 'node2'
CRS-2673: Attempting to stop 'ora.asm' on 'node2'
CRS-2677: Stop of 'ora.crsd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'node1'
CRS-2673: Attempting to stop 'ora.evmd' on 'node1'
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2677: Stop of 'ora.evmd' on 'node2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'node1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node1' succeeded
CRS-2677: Stop of 'ora.asm' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node2'
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node2'
CRS-2677: Stop of 'ora.cssd' on 'node1' succeeded
CRS-2677: Stop of 'ora.cssd' on 'node2' succeeded

-

Two: resource status check commands for the 11g RAC cluster.

1. View crsctl help.

[root@node1 bin]# ./crsctl
Usage: crsctl <command> <object> [<options>]
Command: enable | disable | config | stop | relocate | replace | status | add | delete | modify | getperm | setperm | check | set | get | unset | debug | lsmodules | query | pin | unpin | discover | release | request
For complete usage, use:
crsctl [-h | --help]
For detailed help on each command and object and its options use:
crsctl <command> <object> -h   e.g. crsctl relocate resource -h

-- detailed crsctl help

[root@node1 bin]# ./crsctl -h
Usage: crsctl add        - add a resource, type or other entity
       crsctl check      - check a service, resource or other entity
       crsctl config     - output autostart configuration
       crsctl debug      - obtain or modify debug state
       crsctl delete     - delete a resource, type or other entity
       crsctl disable    - disable autostart
       crsctl discover   - discover DHCP server
       crsctl enable     - enable autostart
       crsctl get        - get an entity value
       crsctl getperm    - get entity permissions
       crsctl lsmodules  - list debug modules
       crsctl modify     - modify a resource, type or other entity
       crsctl query      - query service state
       crsctl pin        - pin the nodes in the node list
       crsctl relocate   - relocate a resource, server or other entity
       crsctl replace    - replaces the location of voting files
       crsctl release    - release a DHCP lease
       crsctl request    - request a DHCP lease
       crsctl setperm    - set entity permissions
       crsctl set        - set an entity value
       crsctl start      - start a resource, server or other entity
       crsctl status     - get status of a resource or other entity
       crsctl stop       - stop a resource, server or other entity
       crsctl unpin      - unpin the nodes in the node list
       crsctl unset      - unset an entity value, restoring its default

2. Check the health of the cluster

[root@node1 bin]# ./crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

3. Check the status of the cluster's database instances

[root@node1 bin]# ./srvctl status database -d orcldb
Instance orcldb1 is running on node node1
Instance orcldb2 is running on node node2
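Output like the two "is running on node" lines above is easy to consume from a script. A sketch that counts running instances, fed here from an inline copy of that output so it can be tried without a cluster:

```shell
# Count running instances by parsing `srvctl status database` output.
# The heredoc holds the sample output shown above; replace it with
# `./srvctl status database -d orcldb` in a real environment.
running=$(grep -c 'is running on node' <<'EOF'
Instance orcldb1 is running on node node1
Instance orcldb2 is running on node node2
EOF
)
echo "$running of 2 instances running"
```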

4. Check the status of the node applications

[root@node1 bin]# ./srvctl status nodeapps
VIP node1-vip is enabled
VIP node1-vip is running on node: node1
VIP node2-vip is enabled
VIP node2-vip is running on node: node2
Network is enabled
Network is running on node: node1
Network is running on node: node2
GSD is disabled
GSD is not running on node: node1
GSD is not running on node: node2
ONS is enabled
ONS daemon is running on node: node1
ONS daemon is running on node: node2

5. Check the status of the node ASM instances

[root@node1 bin]# ./srvctl status asm
ASM is running on node2,node1

6. Check the status of the node listeners

[root@node1 bin]# ./srvctl status listener
Listener ORCLDB is enabled
Listener ORCLDB is running on node(s): node2,node1

7. Check the status of the SCAN listener

[root@node1 bin]# ./srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node node1

8. Check clock synchronization across all cluster nodes (run as a non-root user)

[oracle@node1 ~]$ cluvfy comp clocksync -verbose
Verifying Clock Synchronization across the cluster nodes
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name     Status
  ------------  ------------------------
  node1         passed
Result: CTSS resource check passed
Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed
Check CTSS state started...
Check: CTSS state
  Node Name     State
  ------------  ------------------------
  node1         Active
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  node1         0.0                       passed
Time offset is within the specified limits on the following set of nodes:
"[node1]"
Result: Check of clock time offsets passed
Oracle Cluster Time Synchronization Services check passed
Verification of Clock Synchronization across the cluster nodes was successful.
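The check above passes because each node's reported offset stays within the 1000.0 ms reference limit. The comparison itself is simple; a sketch (the `offset_ok` helper is illustrative, with the offset and limit taken from the output above):

```shell
# A node passes the clocksync check when |offset| <= limit (both in msec).
offset_ok() {  # usage: offset_ok <offset_msec> <limit_msec>
    awk -v o="$1" -v l="$2" 'BEGIN { exit (o <= l && o >= -l) ? 0 : 1 }'
}

# node1 reported an offset of 0.0 against a 1000.0 ms limit.
offset_ok 0.0 1000.0 && echo "node1 offset within limit"
```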

-

Three: viewing the cluster's configuration information

1. View srvctl help.

[root@node1 bin]# ./srvctl
Usage: srvctl <command> <object> [<options>]
Commands: enable | disable | start | stop | relocate | status | add | remove | modify | getenv | setenv | unsetenv | config | convert | upgrade
Objects: database | instance | service | nodeapps | vip | network | asm | diskgroup | listener | srvpool | scan | scan_listener | oc4j | home | filesystem | gns | cvu
For detailed help on each command and object and its options use:
srvctl <command> -h or
srvctl <command> <object> -h

-- detailed srvctl usage (note: the argument placeholders after each flag were lost when this page was captured; only the flags themselves survive)

[root@node1 bin]# ./srvctl -h
Usage: srvctl [-V]
Usage: srvctl add database -d -o [-c {RACONENODE | RAC | SINGLE} [-e] [-i] [-w]] [-m] [-p] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s] [-t] [-n] [-y {AUTOMATIC | MANUAL | NORESTART}] [-g ""] [-x] [-a ""] [-j ""]
Usage: srvctl config database [-d [-a]] [-v]
Usage: srvctl start database -d [-o] [-n]
Usage: srvctl stop database -d [-o] [-f]
Usage: srvctl status database -d [-f] [-v]
Usage: srvctl enable database -d [-n]
Usage: srvctl disable database -d [-n]
Usage: srvctl modify database -d [-n] [-o] [-u] [-e] [-w] [-m] [-p] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s] [-t] [-y {AUTOMATIC | MANUAL | NORESTART}] [-g "" [-x]] [-a "" | -z] [-j ""] [-f]
Usage: srvctl remove database -d [-f] [-y]
Usage: srvctl getenv database -d [-t ""]
Usage: srvctl setenv database -d {-t "=[,=,...]" | -T "="}
Usage: srvctl unsetenv database -d -t ""
Usage: srvctl convert database -d -c RAC [-n]
Usage: srvctl convert database -d -c RACONENODE [-i] [-w]
Usage: srvctl relocate database -d {[-n] [-w] | -a [-r]} [-v]
Usage: srvctl upgrade database -d -o
Usage: srvctl downgrade database -d -o -t
Usage: srvctl add instance -d -i -n [-f]
Usage: srvctl start instance -d {-n [-i] | -i} [-o]
Usage: srvctl stop instance -d {-n | -i} [-o] [-f]
Usage: srvctl status instance -d {-n | -i} [-f] [-v]
Usage: srvctl enable instance -d -i ""
Usage: srvctl disable instance -d -i ""
Usage: srvctl modify instance -d -i {-n | -z}
Usage: srvctl remove instance -d -i [-f] [-y]
Usage: srvctl add service -d -s {-r "" [-a ""] [-P {BASIC | NONE | PRECONNECT}] | -g [-c {UNIFORM | SINGLETON}]} [-k] [-l [PRIMARY][, PHYSICAL_STANDBY][, LOGICAL_STANDBY][, SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}] [-q {TRUE | FALSE}] [-x {TRUE | FALSE}] [-j {SHORT | LONG}] [-B {NONE | SERVICE_TIME | THROUGHPUT}] [-e {NONE | SESSION | SELECT}] [-m {NONE | BASIC}] [-z] [-w] [-t] [-f]
Usage: srvctl add service -d -s -u {-r "" | -a ""} [-f]
Usage: srvctl config service -d [-s] [-v]
Usage: srvctl enable service -d -s "" [-i | -n]
Usage: srvctl disable service -d -s "" [-i | -n]
Usage: srvctl status service -d [-s ""] [-f] [-v]
Usage: srvctl modify service -d -s -i -t [-f]
Usage: srvctl modify service -d -s -i -r [-f]
Usage: srvctl modify service -d -s -n -i "" [-a ""] [-f]
Usage: srvctl modify service -d -s [-g] [-c {UNIFORM | SINGLETON}] [-P {BASIC | NONE}] [-l [PRIMARY][, PHYSICAL_STANDBY][, LOGICAL_STANDBY][, SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}] [-q {true | false}] [-x {true | false}] [-j {SHORT | LONG}] [-B {NONE | SERVICE_TIME | THROUGHPUT}] [-e {NONE | SESSION | SELECT}] [-m {NONE | BASIC}] [-z] [-w] [-t]
Usage: srvctl relocate service -d -s {-i -t | -c -n} [-f]
Usage: srvctl remove service -d -s [-i] [-f]
Usage: srvctl start service -d [-s "" [-n | -i]] [-o]
Usage: srvctl stop service -d [-s "" [-n | -i]] [-f]
Usage: srvctl add nodeapps {{-n -A //[if1[|if2...]]} | {-S //[if1[|if2...]]}} [-e] [-l] [-r] [-t [:][,[:]...]] [-v]
Usage: srvctl config nodeapps [-a] [-g] [-s]
Usage: srvctl modify nodeapps {[-n -A /[/if1[|if2|...]]] | [-S /[/if1[|if2|...]]]} [-u {static | dhcp | mixed}] [-e] [-l] [-r] [-t [:][,[:]...]] [-v]
Usage: srvctl start nodeapps [-n] [-g] [-v]
Usage: srvctl stop nodeapps [-n] [-g] [-f] [-r] [-v]
Usage: srvctl status nodeapps
Usage: srvctl enable nodeapps [-g] [-v]
Usage: srvctl disable nodeapps [-g] [-v]
Usage: srvctl remove nodeapps [-f] [-y] [-v]
Usage: srvctl getenv nodeapps [-a] [-g] [-s] [-t ""]
Usage: srvctl setenv nodeapps {-t "=[,=,...]" | -T "="} [-v]
Usage: srvctl unsetenv nodeapps -t "" [-v]
Usage: srvctl add vip -n -k -A //[if1[|if2...]] [-v]
Usage: srvctl config vip {-n | -i}
Usage: srvctl disable vip -i [-v]
Usage: srvctl enable vip -i [-v]
Usage: srvctl remove vip -i "" [-f] [-y] [-v]
Usage: srvctl getenv vip -i [-t ""]
Usage: srvctl start vip {-n | -i} [-v]
Usage: srvctl stop vip {-n | -i} [-f] [-r] [-v]
Usage: srvctl relocate vip -i [-n] [-f] [-v]
Usage: srvctl status vip {-n | -i} [-v]
Usage: srvctl setenv vip -i {-t "=[,=,...]" | -T "="} [-v]
Usage: srvctl unsetenv vip -i -t "" [-v]
Usage: srvctl add network [-k] -S //[if1[|if2...]] [-w] [-v]
Usage: srvctl config network [-k]
Usage: srvctl modify network [-k] [-S /[/if1[|if2...]]] [-w] [-v]
Usage: srvctl remove network {-k | -a} [-f] [-v]
Usage: srvctl add asm [-l]
Usage: srvctl start asm [-n] [-o]
Usage: srvctl stop asm [-n] [-o] [-f]
Usage: srvctl config asm [-a]
Usage: srvctl status asm [-n] [-a] [-v]
Usage: srvctl enable asm [-n]
Usage: srvctl disable asm [-n]
Usage: srvctl modify asm [-l]
Usage: srvctl remove asm [-f]
Usage: srvctl getenv asm [-t [,...]]
Usage: srvctl setenv asm -t "=[,...]" | -T "="
Usage: srvctl unsetenv asm -t "[,...]"
Usage: srvctl start diskgroup -g [-n ""]
Usage: srvctl stop diskgroup -g [-n ""] [-f]
Usage: srvctl status diskgroup -g [-n ""] [-a] [-v]
Usage: srvctl enable diskgroup -g [-n ""]
Usage: srvctl disable diskgroup -g [-n ""]
Usage: srvctl remove diskgroup -g [-f]
Usage: srvctl add listener [-l] [-s] [-p "[TCP:][,...][/IPC:][/NMP:][/TCPS:][/SDP:]"] [-o] [-k]
Usage: srvctl config listener [-l] [-a]
Usage: srvctl start listener [-l] [-n]
Usage: srvctl stop listener [-l] [-n] [-f]
Usage: srvctl status listener [-l] [-n] [-v]
Usage: srvctl enable listener [-l] [-n]
Usage: srvctl disable listener [-l] [-n]
Usage: srvctl modify listener [-l] [-o] [-p "[TCP:][,...][/IPC:][/NMP:][/TCPS:][/SDP:]"] [-u] [-k]
Usage: srvctl remove listener [-l | -a] [-f]
Usage: srvctl getenv listener [-l] [-t [,...]]
Usage: srvctl setenv listener [-l] -t "=[,...]" | -T "="
Usage: srvctl unsetenv listener [-l] -t "[,...]"
Usage: srvctl add scan -n [-k] [-S /[/if1[|if2|...]]]
Usage: srvctl config scan [-i]
Usage: srvctl start scan [-i] [-n]
Usage: srvctl stop scan [-i] [-f]
Usage: srvctl relocate scan -i [-n]
Usage: srvctl status scan [-i] [-v]
Usage: srvctl enable scan [-i]
Usage: srvctl disable scan [-i]
Usage: srvctl modify scan -n
Usage: srvctl remove scan [-f] [-y]
Usage: srvctl add scan_listener [-l] [-s] [-p [TCP:][/IPC:][/NMP:][/TCPS:][/SDP:]]
Usage: srvctl config scan_listener [-i]
Usage: srvctl start scan_listener [-n] [-i]
Usage: srvctl stop scan_listener [-i] [-f]
Usage: srvctl relocate scan_listener -i [-n]
Usage: srvctl status scan_listener [-i] [-v]
Usage: srvctl enable scan_listener [-i]
Usage: srvctl disable scan_listener [-i]
Usage: srvctl modify scan_listener {-u | -p [TCP:][/IPC:][/NMP:][/TCPS:][/SDP:]}
Usage: srvctl remove scan_listener [-f] [-y]
Usage: srvctl add srvpool -g [-l] [-u] [-i] [-n ""] [-f]
Usage: srvctl config srvpool [-g]
Usage: srvctl status srvpool [-g] [-a]
Usage: srvctl status server -n "" [-a]
Usage: srvctl relocate server -n "" -g [-f]
Usage: srvctl modify srvpool -g [-l] [-u] [-i] [-n ""] [-f]
Usage: srvctl remove srvpool -g
Usage: srvctl add oc4j [-v]
Usage: srvctl config oc4j
Usage: srvctl start oc4j [-v]
Usage: srvctl stop oc4j [-f] [-v]
Usage: srvctl relocate oc4j [-n] [-v]
Usage: srvctl status oc4j [-n] [-v]
Usage: srvctl enable oc4j [-n] [-v]
Usage: srvctl disable oc4j [-n] [-v]
Usage: srvctl modify oc4j -p [-v] [-f]
Usage: srvctl remove oc4j [-f] [-v]
Usage: srvctl start home -o -s -n
Usage: srvctl stop home -o -s -n [-t] [-f]
Usage: srvctl status home -o -s -n
Usage: srvctl add filesystem -d -v -g [-m] [-u]
Usage: srvctl config filesystem -d
Usage: srvctl start filesystem -d [-n]
Usage: srvctl stop filesystem -d [-n] [-f]
Usage: srvctl status filesystem -d [-v]
Usage: srvctl enable filesystem -d
Usage: srvctl disable filesystem -d
Usage: srvctl modify filesystem -d -u
Usage: srvctl remove filesystem -d [-f]
Usage: srvctl start gns [-l] [-n] [-v]
Usage: srvctl stop gns [-n] [-f] [-v]
Usage: srvctl config gns [-a] [-d] [-k] [-m] [-n] [-p] [-s] [-V] [-q] [-l] [-v]
Usage: srvctl status gns [-n] [-v]
Usage: srvctl enable gns [-n] [-v]
Usage: srvctl disable gns [-n] [-v]
Usage: srvctl relocate gns [-n] [-v]
Usage: srvctl add gns -d -i [-v]
Usage: srvctl modify gns {-l | [-i] [-N -A] [-D -A] [-c -a] [-u] [-r] [-V] [-p :[,...]] [-F] [-R] [-X] [-v]}
Usage: srvctl remove gns [-f] [-v]
Usage: srvctl add cvu [-t]
Usage: srvctl config cvu
Usage: srvctl start cvu [-n]
Usage: srvctl stop cvu [-f]
Usage: srvctl relocate cvu [-n]
Usage: srvctl status cvu [-n]
Usage: srvctl enable cvu [-n]
Usage: srvctl disable cvu [-n]
Usage: srvctl modify cvu -t
Usage: srvctl remove cvu [-f]

2. View database configuration.

-- config database help

[root@node1 bin]# ./srvctl config database -h
Displays the configuration for the database.
Usage: srvctl config database [-d [-a]] [-v]
    -d    Unique name for the database
    -a    Print detailed configuration information
    -v    Verbose output
    -h    Print usage

-- View database configuration

[root@node1 bin]# ./srvctl config database -d orcldb
Database unique name: orcldb
Database name: orcldb
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DATA/orcldb/spfileorcldb.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcldb
Database instances: orcldb1,orcldb2
Disk Groups: DATA,FLASH
Mount point paths:
Services:
Type: RAC
Database is administrator managed
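Because the config output is a stable list of "Key: value" lines, individual fields can be pulled out with awk. A sketch fed from an inline excerpt of the output above (in a real environment, pipe `./srvctl config database -d orcldb` in instead):

```shell
# Extract one field from `srvctl config database` key/value output.
config_output='Database unique name: orcldb
Database instances: orcldb1,orcldb2
Disk Groups: DATA,FLASH'

disk_groups=$(printf '%s\n' "$config_output" | awk -F': ' '/^Disk Groups:/ { print $2 }')
echo "Disk groups: $disk_groups"
```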

3. View node application configuration

[root@node1 bin]# ./srvctl config nodeapps
Network exists: 1/10.0.0.0/255.0.0.0/eth0, type static
VIP exists: /node1-vip/10.100.25.10/10.0.0.0/255.0.0.0/eth0, hosting node node1
VIP exists: /node2-vip/10.100.25.11/10.0.0.0/255.0.0.0/eth0, hosting node node2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016

4. View ASM configuration

[root@node1 bin]# ./srvctl config asm
ASM home: /u01/app/11.2.0/grid
ASM listener: ORCLDB

5. View listener configuration

[root@node1 bin]# ./srvctl config listener
Name: ORCLDB
Network: 1, Owner: grid
Home:
End points: TCP:1521

6. View SCAN configuration

[root@node1 bin]# ./srvctl config scan
SCAN name: scan-cluster.localdomain, Network: 1/10.0.0.0/255.0.0.0/eth0
SCAN VIP name: scan1, IP: /scan-cluster.localdomain/10.100.25.100

[root@node1 bin]# ./srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521

7. View the RAC OCR (registry) disk configuration

[root@node1 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows:
  Version: 3
  Total space (kbytes): 262120
  Used space (kbytes): 3100
  Available space (kbytes): 259020
  ID: 1970085021
  Device/File Name: +GRIDDG
    Device/File integrity check succeeded
  Device/File not configured
  Device/File not configured
  Device/File not configured
  Device/File not configured
  Cluster registry integrity check succeeded
  Logical corruption check succeeded
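The three space figures above should always reconcile: available space equals total minus used. A quick sanity check using the values reported by ocrcheck:

```shell
# Sanity-check the ocrcheck space figures (all values in kbytes).
total=262120
used=3100
available=259020
if [ $((total - used)) -eq "$available" ]; then
    echo "OCR space figures are consistent"
fi
```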

8. View the RAC voting (quorum) disk configuration

[root@node1 bin]# ./crsctl query css votedisk
##  STATE    File Universal Id                 File Name   Disk group
 1. ONLINE   97b3037ba6684f0bbf04fa53aa7efb37 (ORCL:VOL1)  [GRIDDG]
Located 1 voting disk(s).
