
Oracle 11g RAC Command Reference

2025-02-24 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/01 Report

Use SRVCTL to stop and start all instances of the database:

[oracle@rac01 ~]$ srvctl stop database -d racdb
[oracle@rac01 ~]$ srvctl start database -d racdb

1. Check the cluster status:

[grid@rac02 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

2. Status of all Oracle instances (database status):

[grid@rac02 ~]$ srvctl status database -d racdb
Instance racdb1 is running on node rac01
Instance racdb2 is running on node rac02

3. Check the status of a single instance:

[grid@rac02 ~]$ srvctl status instance -d racdb -i racdb1
Instance racdb1 is running on node rac01

4. Node application status:

[grid@rac02 ~]$ srvctl status nodeapps
VIP rac01-vip is enabled
VIP rac01-vip is running on node: rac01
VIP rac02-vip is enabled
VIP rac02-vip is running on node: rac02
Network is enabled
Network is running on node: rac01
Network is running on node: rac02
GSD is disabled
GSD is not running on node: rac01
GSD is not running on node: rac02
ONS is enabled
ONS daemon is running on node: rac01
ONS daemon is running on node: rac02
eONS is enabled
eONS daemon is running on node: rac01
eONS daemon is running on node: rac02

5. List all configured databases:

[grid@rac02 ~]$ srvctl config database
racdb

6. Database configuration:

[grid@rac02 ~]$ srvctl config database -d racdb -a
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +RACDB_DATA/racdb/spfileracdb.ora
Domain: xzxj.edu.cn
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1,racdb2
Disk Groups: RACDB_DATA,FRA
Services:
Database is enabled
Database is administrator managed

7. ASM status and configuration:

[grid@rac02 ~]$ srvctl status asm
ASM is running on rac01,rac02
[grid@rac02 ~]$ srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.

8. TNS listener status and configuration:

[grid@rac02 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rac01,rac02
[grid@rac02 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: /u01/app/11.2.0/grid on node(s) rac02,rac01
End points: TCP:1521

9. SCAN status and configuration:

[grid@rac02 ~]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node rac02
[grid@rac02 ~]$ srvctl config scan
SCAN name: rac-scan.xzxj.edu.cn, Network: 1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /rac-scan.xzxj.edu.cn/192.168.1.55

10. VIP status and configuration of each node:

[grid@rac02 ~]$ srvctl status vip -n rac01
VIP rac01-vip is enabled
VIP rac01-vip is running on node: rac01
[grid@rac02 ~]$ srvctl status vip -n rac02
VIP rac02-vip is enabled
VIP rac02-vip is running on node: rac02
[grid@rac02 ~]$ srvctl config vip -n rac01
VIP exists.:rac01
VIP exists.: /rac01-vip/192.168.1.53/255.255.255.0/eth0
[grid@rac02 ~]$ srvctl config vip -n rac02
VIP exists.:rac02
VIP exists.: /rac02-vip/192.168.1.54/255.255.255.0/eth0

11. Node application configuration (VIP, GSD, ONS, listener):

[grid@rac02 ~]$ srvctl config nodeapps -a -g -s -l
-l option has been deprecated and will be ignored.
VIP exists.:rac01
VIP exists.: /rac01-vip/192.168.1.53/255.255.255.0/eth0
VIP exists.:rac02
VIP exists.: /rac02-vip/192.168.1.54/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
Name: LISTENER
Network: 1, Owner: grid
Home: /u01/app/11.2.0/grid on node(s) rac02,rac01
End points: TCP:1521

12. Verify clock synchronization between all cluster nodes:

[grid@rac02 ~]$ cluvfy comp clocksync -verbose

Verifying Clock Synchronization across the cluster nodes

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name    Status
  rac02        passed
Result: CTSS resource check passed

Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed

Check CTSS state started...
Check: CTSS state
  Node Name    State
  rac02        Active
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
  Node Name    Time Offset    Status
  rac02                       passed
Time offset is within the specified limits on the following set of nodes: "[rac02]"
Result: Check of clock time offsets passed

Oracle Cluster Time Synchronization Services check passed
Verification of Clock Synchronization across the cluster nodes was successful.

13. All running instances in the cluster (SQL):

SELECT inst_id,
       instance_number inst_no,
       instance_name inst_name,
       parallel,
       status,
       database_status db_status,
       active_state state,
       host_name host
  FROM gv$instance
 ORDER BY inst_id;

14. All database files and their ASM disk groups (SQL):

15. ASM disk volumes (SQL):

16. Start and stop the cluster. The following operations must be performed as root.

(1) Stop the Oracle Clusterware stack on the local server:

[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster

Note: if any resource managed by Oracle Clusterware is still running after you run "crsctl stop cluster", the whole command fails. Use the -f option to unconditionally stop all resources and stop the Oracle Clusterware stack. Oracle Clusterware can also be stopped on all servers in the cluster by specifying the -all option. Stop Oracle Clusterware on rac01 and rac02:

[root@rac02 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all

(2) Start the Oracle Clusterware stack on the local server:

[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster

Note: you can start Oracle Clusterware on all servers in the cluster by specifying the -all option:

[root@rac02 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -all

You can also start Oracle Clusterware on one or more named servers in the cluster by listing them after -n, separated by spaces:

[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -n rac01 rac02
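The source does not reproduce the queries for items 14 and 15. As a sketch, the standard dynamic performance views can answer both; these are common queries for the purpose, not necessarily the ones the original author used:

```sql
-- Item 14 (sketch): every database file. On ASM, each path begins
-- with the "+DISKGROUP" name, which identifies the owning disk group.
SELECT name   AS file_name FROM v$datafile
UNION
SELECT member              FROM v$logfile
UNION
SELECT name                FROM v$controlfile
UNION
SELECT name                FROM v$tempfile;

-- Item 15 (sketch): ASM disk paths, queried from the ASM instance.
SELECT path FROM v$asm_disk;
```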
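When wrapping these commands in monitoring scripts, the `srvctl` output shown above is plain text and easy to post-process. A minimal sketch (the `parse_status` helper is hypothetical, not part of any Oracle tool) that turns `srvctl status database` output into instance/node pairs:

```shell
# Hypothetical helper: extract "instance node" pairs from lines like
# "Instance racdb1 is running on node rac01" (step 2 above).
parse_status() {
  awk '/is running on node/ {print $2, $NF}'
}

# Fed from a captured sample here; in practice pipe srvctl into it:
#   srvctl status database -d racdb | parse_status
printf 'Instance racdb1 is running on node rac01\nInstance racdb2 is running on node rac02\n' | parse_status
# prints:
# racdb1 rac01
# racdb2 rac02
```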
