
Managing Oracle 11g RAC: Common Commands


1) Check the cluster status:

[grid@rac02 ~]$ crsctl check cluster

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

2) Check the status of all Oracle instances (database status):

[grid@rac02 ~]$ srvctl status database -d racdb

Instance racdb1 is running on node rac01

Instance racdb2 is running on node rac02

3) Check the status of a single instance:

[grid@rac02 ~]$ srvctl status instance -d racdb -i racdb1

Instance racdb1 is running on node rac01

4) Check the node application status:

[grid@rac02 ~]$ srvctl status nodeapps

VIP rac01-vip is enabled

VIP rac01-vip is running on node: rac01

VIP rac02-vip is enabled

VIP rac02-vip is running on node: rac02

Network is enabled

Network is running on node: rac01

Network is running on node: rac02

GSD is disabled

GSD is not running on node: rac01

GSD is not running on node: rac02

ONS is enabled

ONS daemon is running on node: rac01

ONS daemon is running on node: rac02

EONS is enabled

EONS daemon is running on node: rac01

EONS daemon is running on node: rac02

5) List all configured databases:

[grid@rac02 ~]$ srvctl config database

racdb

6) Show the database configuration:

[grid@rac02 ~]$ srvctl config database -d racdb -a

Database unique name: racdb

Database name: racdb

Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1

Oracle user: oracle

Spfile: +RACDB_DATA/racdb/spfileracdb.ora

Domain: xzxj.edu.cn

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: racdb

Database instances: racdb1,racdb2

Disk Groups: RACDB_DATA,FRA

Services:

Database is enabled

Database is administrator managed

7) Check ASM status and configuration:

[grid@rac02 ~]$ srvctl status asm

ASM is running on rac01,rac02

[grid@rac02 ~]$ srvctl config asm -a

ASM home: /u01/app/11.2.0/grid

ASM listener: LISTENER

ASM is enabled.

8) Check TNS listener status and configuration:

[grid@rac02 ~]$ srvctl status listener

Listener LISTENER is enabled

Listener LISTENER is running on node(s): rac01,rac02

[grid@rac02 ~]$ srvctl config listener -a

Name: LISTENER

Network: 1, Owner: grid

Home:

/u01/app/11.2.0/grid on node(s) rac02,rac01

End points: TCP:1521

9) Check SCAN status and configuration:

[grid@rac02 ~]$ srvctl status scan

SCAN VIP scan1 is enabled

SCAN VIP scan1 is running on node rac02

[grid@rac02 ~]$ srvctl config scan

SCAN name: rac-scan.xzxj.edu.cn, Network: 1/192.168.1.0/255.255.255.0/eth0

SCAN VIP name: scan1, IP: /rac-scan.xzxj.edu.cn/192.168.1.55

10) Check the VIP status and configuration of each node:

[grid@rac02 ~]$ srvctl status vip -n rac01

VIP rac01-vip is enabled

VIP rac01-vip is running on node: rac01

[grid@rac02 ~]$ srvctl status vip -n rac02

VIP rac02-vip is enabled

VIP rac02-vip is running on node: rac02

[grid@rac02 ~]$ srvctl config vip -n rac01

VIP exists.:rac01

VIP exists.: /rac01-vip/192.168.1.53/255.255.255.0/eth0

[grid@rac02 ~]$ srvctl config vip -n rac02

VIP exists.:rac02

VIP exists.: /rac02-vip/192.168.1.54/255.255.255.0/eth0

11) Show the node application configuration (VIP, GSD, ONS, listener):

[grid@rac02 ~]$ srvctl config nodeapps -a -g -s -l

-l option has been deprecated and will be ignored.

VIP exists.:rac01

VIP exists.: /rac01-vip/192.168.1.53/255.255.255.0/eth0

VIP exists.:rac02

VIP exists.: /rac02-vip/192.168.1.54/255.255.255.0/eth0

GSD exists.

ONS daemon exists. Local port 6100, remote port 6200

Name: LISTENER

Network: 1, Owner: grid

Home:

/u01/app/11.2.0/grid on node(s) rac02,rac01

End points: TCP:1521

12) Verify clock synchronization across all cluster nodes:

[grid@rac02 ~]$ cluvfy comp clocksync -verbose

Verifying Clock Synchronization across the cluster nodes

Checking if Clusterware is installed on all nodes...

Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...

Check: CTSS Resource running on all nodes

Node Name                             Status
------------------------------------  ------------------------
rac02                                 passed

Result: CTSS resource check passed

Querying CTSS for time offset on all nodes...

Result: Query of CTSS for time offset passed

Check CTSS state started...

Check: CTSS state

Node Name                             State
------------------------------------  ------------------------
rac02                                 Active

CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...

Reference Time Offset Limit: 1000.0 msecs

Check: Reference Time Offset

Node Name     Time Offset               Status
------------  ------------------------  ----------------------
rac02         0.0                       passed

Time offset is within the specified limits on the following set of nodes:

"[rac02]"

Result: Check of clock time offsets passed

Oracle Cluster Time Synchronization Services check passed

Verification of Clock Synchronization across the cluster nodes was successful.

13) List all running instances in the cluster (SQL):

SELECT inst_id, instance_number inst_no, instance_name inst_name, parallel, status,
       database_status db_status, active_state state, host_name host
FROM gv$instance
ORDER BY inst_id;

14) List all database files and the ASM disk groups in which they reside (SQL):
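
The query itself is missing from the source; a minimal sketch (assuming the standard v$ views) that lists every database file by its full ASM path, where the leading +DISKGROUP component of each path identifies the disk group:

SELECT name FROM v$datafile
UNION
SELECT member FROM v$logfile
UNION
SELECT name FROM v$controlfile
UNION
SELECT name FROM v$tempfile;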

15) List ASM disk volumes (SQL):
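
This query is likewise missing from the source; a minimal sketch, typically run against the ASM instance (a database instance shows only the disks it uses):

SELECT group_number, name, path, total_mb, free_mb FROM v$asm_disk;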

16) Start and stop the cluster:

The following actions need to be performed by the root user.

(1) Stop the Oracle Clusterware stack on the local server:

[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster

Note: after running the "crsctl stop cluster" command, if any resource managed by Oracle Clusterware is still running, the entire command fails. Use the -f option to unconditionally stop all resources and stop the Oracle Clusterware stack.
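
A hedged example of the forced stop (the command line itself is not shown in the source; -f is the documented force flag of crsctl stop cluster):

[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -f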

Also note that Oracle Clusterware can be stopped on all servers in the cluster by specifying the -all option. Stop the Oracle Clusterware stack on rac01 and rac02 as follows:

[root@rac02 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all

(2) Start the Oracle Clusterware stack on the local server:

[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster

Note: you can start the Oracle Clusterware stack on all servers in the cluster by specifying the -all option:

[root@rac02 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -all

You can also start the Oracle Clusterware stack on one or more specified servers in the cluster by listing them, separated by spaces:

[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -n rac01 rac02

(3) Start and stop all database instances using SRVCTL:

[oracle@rac01 ~]$ srvctl stop database -d racdb

[oracle@rac01 ~]$ srvctl start database -d racdb
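
A single instance can be stopped or started the same way; an illustrative variant (not shown in the source) using the standard -d and -i options:

[oracle@rac01 ~]$ srvctl stop instance -d racdb -i racdb1

[oracle@rac01 ~]$ srvctl start instance -d racdb -i racdb1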
