ORACLE 11gR2 RAC: converting a file-system database to ASM EXTEND RAC, with high availability testing


I had long wanted to experiment with ASM EXTEND RAC, but never had the resources to test it.

Fortune finally smiled on the patient: the resources to play with it arrived at last.

Two storage arrays are available, EMC and HDS, located in different server rooms.

Because the original test system used a file system, it had to be converted to ASM before the ASM EXTEND RAC could be created.

The conversion to ASM EXTEND RAC ran into a series of problems. Solving them took some effort, but once the EXTEND RAC was finally up there was an inexplicable sense of achievement.

Do you feel the same way after cracking a big problem? Hehe.

1. System environment

1.1 OS and DB versions

Host OS version: AIX 7.1 ("7100-02-03-1334")

Oracle version: 11.2.0.3 (PSU 10)

RAC: yes

Number of nodes: 4

Storage: HDS 100G, EMC 50G

ASM or file system: cluster file system built with the Symantec VERITAS volume management tool

1.2 hardware

RAM: 128

SWAP: 13G

1.3 AIX /tmp file system

8GB

1.4 AIX JDK & JRE

IBM JDK 1.6.0.00 (64 BIT)

1.5 Directory details

/oracle 50GB

/oraclelog 30GB

/ocrvote 2GB

/archivelog 400GB

/oradata 850GB

1.6 Host IP configuration information

100.15.64.180 testdb1

100.15.64.181 testdb2

100.15.64.182 testdb3

100.15.64.183 testdb4

100.15.64.184 testdb1-vip

100.15.64.185 testdb2-vip

100.15.64.186 testdb3-vip

100.15.64.187 testdb4-vip

100.15.64.188 testdb-scan

7.154.64.1 testdb1-priv

7.154.64.2 testdb2-priv

7.154.64.3 testdb3-priv

7.154.64.4 testdb4-priv

2. Replace the file system with ASM

2.1 disk permissions and attribute modification

chown grid:asmadmin /dev/vx/rdmp/remc0_04a1
chown grid:asmadmin /dev/vx/rdmp/rhitachi_v0_11cd
chmod 660 /dev/vx/rdmp/remc0_04a1
chmod 660 /dev/vx/rdmp/rhitachi_v0_11cd

(Note: since the test environment uses Symantec's storage multipathing software, the disk attributes do not need to be modified.)

2.2 create an ASM instance

su - grid
export DISPLAY=100.15.70.169:0.0
asmca

(Note: when creating the OCRVOTE disk group, select NORMAL redundancy and create two failure groups with at least three disks; three disks are recommended. If the failure groups of the ASM disk group contain more than three disks, the voting disks will only use three of them when migrated to this disk group, and crsctl query css votedisk will show the votedisk on three disks only. The free space of a disk group is limited by the total size of its smallest failure group.)
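For reference, the disk group could also be created in SQL*Plus on the ASM instance instead of through asmca. This is only a sketch: the second EMC device (remc0_04b1) and the failure group names are hypothetical placeholders and must be replaced with the real device paths.

CREATE DISKGROUP OCRVOTE NORMAL REDUNDANCY
  FAILGROUP fg_emc DISK '/dev/vx/rdmp/remc0_04a1', '/dev/vx/rdmp/remc0_04b1'  -- one failure group per storage array
  FAILGROUP fg_hds DISK '/dev/vx/rdmp/rhitachi_v0_11cd'
  ATTRIBUTE 'compatible.asm' = '11.2', 'compatible.rdbms' = '11.2';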

2.3 Create ASM disk groups SYSDG and DATADG and modify disk group parameters

su - grid
export DISPLAY=100.15.70.169:0.0
asmca

Note: disks from the same storage array (the same site) are placed in the same failure group.

From Oracle 11g onwards, the compatible.rdbms attribute of an ASM disk group should be raised to 11.2.0.0 (it defaults to 10.2.0.0). If it is left at the default and two failure groups are used, then after a failed failure group has been repaired, bringing it back online will report the following error:

ORA-15283: ASM operation requires compatible.rdbms of 11.1.0.0.0 or higher

The commands to modify and verify it:

alter diskgroup SYSDG set attribute 'compatible.rdbms' = '11.2.0.0';

select name, compatibility, database_compatibility from v$asm_diskgroup;

COMPATIBILITY: the ASM compatibility of the disk group (compatible.asm)

DATABASE_COMPATIBILITY: the database compatibility of the disk group (compatible.rdbms)

2.4 migrate file system data files to ASM

Since no database was created in this test, data file migration is not involved. If you do need to migrate, RMAN can be used, as sketched below.
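A minimal RMAN sketch of such a migration, assuming the data files are to be moved into the DATADG disk group; it covers the data files only, and control files, online redo logs, temp files and the spfile would have to be relocated separately:

RMAN> BACKUP AS COPY DATABASE FORMAT '+DATADG';
RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> SWITCH DATABASE TO COPY;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;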

2.5 Migrate the OCR and voting disks to disk group OCRVOTE

1) check ocr and votedisk

root@testdb1:/# /oracle/app/11.2.0/grid/bin/ocrcheck

Status of Oracle Cluster Registry is as follows:

Version: 3

Total space (kbytes): 262120

Used space (kbytes): 3296

Available space (kbytes): 258824

ID: 1187520997

Device/File Name: /ocrvote/ocr1

Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

root@testdb1:/# /oracle/app/11.2.0/grid/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name            Disk group
 1. ONLINE   a948649dc0e14f65bf171ba2ca496962 (/ocrvote/votedisk1) []
 2. ONLINE   a5f290d560684f47bf82eb3d34db5fc7 (/ocrvote/votedisk2) []
 3. ONLINE   49617fb984fc4fcdbf5b7566a9e1778f (/ocrvote/votedisk3) []
Located 3 voting disk(s).

2) View the status of resources

$ crsctl stat res -t

NAME TARGET STATE SERVER STATE_DETAILS

Local Resources

ora.DATADG.dg
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.LISTENER.lsnr
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.OCRVOTE.dg
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.SYSDG.dg
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.asm
    ONLINE  ONLINE  testdb1  Started
    ONLINE  ONLINE  testdb2  Started
    ONLINE  ONLINE  testdb3  Started
    ONLINE  ONLINE  testdb4  Started
ora.gsd
    OFFLINE OFFLINE testdb1
    OFFLINE OFFLINE testdb2
    OFFLINE OFFLINE testdb3
    OFFLINE OFFLINE testdb4
ora.net1.network
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.ons
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.registry.acfs
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4

Cluster Resources

ora.LISTENER_SCAN1.lsnr
    1  ONLINE  ONLINE  testdb1
ora.cvu
    1  ONLINE  ONLINE  testdb1
ora.oc4j
    1  ONLINE  ONLINE  testdb1
ora.scan1.vip
    1  ONLINE  ONLINE  testdb1
ora.testdb1.vip
    1  ONLINE  ONLINE  testdb1
ora.testdb2.vip
    1  ONLINE  ONLINE  testdb2
ora.testdb3.vip
    1  ONLINE  ONLINE  testdb3
ora.testdb4.vip
    1  ONLINE  ONLINE  testdb4

3) Back up the OCR

root@testdb1:/# /oracle/app/11.2.0/grid/bin/ocrconfig -manualbackup

root@testdb1:/# /oracle/app/11.2.0/grid/bin/ocrconfig -showbackup

4) Add the OCR to the disk group and delete the OCR from the original file system

root@testdb1:/# /oracle/app/11.2.0/grid/bin/ocrconfig -add +OCRVOTE

root@testdb1:/# /oracle/app/11.2.0/grid/bin/ocrcheck

Status of Oracle Cluster Registry is as follows:

Version: 3

Total space (kbytes): 262120

Used space (kbytes): 3336

Available space (kbytes): 258784

ID: 1187520997

Device/File Name: /ocrvote/ocr1

Device/File integrity check succeeded

Device/File Name: +OCRVOTE

Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

root@testdb1:/# /oracle/app/11.2.0/grid/bin/ocrconfig -delete /ocrvote/ocr1

root@testdb1:/# /oracle/app/11.2.0/grid/bin/ocrcheck

Status of Oracle Cluster Registry is as follows:

Version: 3

Total space (kbytes): 262120

Used space (kbytes): 3336

Available space (kbytes): 258784

ID: 1187520997

Device/File Name: +OCRVOTE

Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

5) Migrate the voting disks to the disk group

root@testdb1:/# /oracle/app/11.2.0/grid/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name            Disk group
 1. ONLINE   a948649dc0e14f65bf171ba2ca496962 (/ocrvote/votedisk1) []
 2. ONLINE   a5f290d560684f47bf82eb3d34db5fc7 (/ocrvote/votedisk2) []
 3. ONLINE   49617fb984fc4fcdbf5b7566a9e1778f (/ocrvote/votedisk3) []
Located 3 voting disk(s).

root@testdb1:/# /oracle/app/11.2.0/grid/bin/crsctl replace votedisk +OCRVOTE
CRS-4256: Updating the profile
Successful addition of voting disk 3a5e5e8622024f17bf0c1a4594e303f5.
Successful addition of voting disk 92ff4555f7064f70bf3c022bd687dbc5.
Successful addition of voting disk 19a1fed74b7f4fb6bf780d43b5427dc9.
Successful deletion of voting disk a948649dc0e14f65bf171ba2ca496962.
Successful deletion of voting disk a5f290d560684f47bf82eb3d34db5fc7.
Successful deletion of voting disk 49617fb984fc4fcdbf5b7566a9e1778f.
Successfully replaced voting disk group with +OCRVOTE.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced

root@testdb1:/# /oracle/app/11.2.0/grid/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name                       Disk group
 1. ONLINE   3a5e5e8622024f17bf0c1a4594e303f5 (/dev/vx/rdmp/emc0_04a1) [OCRVOTE]
 2. ONLINE   92ff4555f7064f70bf3c022bd687dbc5 (/dev/vx/rdmp/hitachi_vsp0_11cc) [OCRVOTE]
 3. ONLINE   19a1fed74b7f4fb6bf780d43b5427dc9 (/dev/vx/rdmp/emc0_04c1) [OCRVOTE]
Located 3 voting disk(s).

3. Add an NFS disk to disk group OCRVOTE as the third (quorum) failure group

ASM EXTEND RAC requires a Linux PC server located outside the two storage arrays. Create a file system on that server, export it over NFS, mount it on the ASM EXTEND RAC database servers, and use the dd command to generate a disk file on the NFS mount.

3.1 NFS server information

System version: Linux EL5 x86_64

3.2 Create the grid user on the NFS server

groupadd -g 1000 oinstall
groupadd -g 1100 asmadmin
useradd -u 1100 -g oinstall -G oinstall,asmadmin -d /home/grid -c "GRID Software Owner" grid

Note: it is recommended that the user and group IDs on the NFS server match those on the production database hosts.

3.3 Create a directory on the NFS server and set its ownership

cd /oradata
mkdir votedisk
chown 1100:1100 votedisk

3.4 Modify /etc/exports on the NFS server and restart NFS

vi /etc/exports

Add the following line:

/oradata/votedisk *(rw,sync,all_squash,anonuid=1100,anongid=1100)

service nfs stop
service nfs start

3.5 Check that the NFS export list contains the new votedisk directory

[root@ywtcdb]# exportfs -v

/oradata 100.15.64.*(rw,wdelay,no_root_squash,no_subtree_check,anonuid=65534,anongid=65534)

/oradata/votedisk
(rw,wdelay,root_squash,all_squash,no_subtree_check,anonuid=1100,anongid=1100)

(Note: the /oradata/votedisk entry is the newly added export.)

3.6 Modify /etc/filesystems on the database hosts so the directory is mounted automatically at boot (run on each node)

su - root
mkdir /voting_disk
chown grid:asmadmin /voting_disk
vi /etc/filesystems

Add the following stanza:

/voting_disk:
        dev             = "/oradata/votedisk"
        vfs             = nfs
        nodename        = ywtcdb
        mount           = true
        options         = rw,bg,hard,intr,rsize=32768,wsize=32768,timeo=600,vers=3,proto=tcp,noac,sec=sys
        account         = false

(Note: follow the format of the existing entries in /etc/filesystems exactly, including punctuation and spacing. It is recommended to configure NFS with the smit nfs command and then edit the options attribute of the corresponding mount stanza in /etc/filesystems afterwards; the options attribute must be rw,bg,hard,intr,rsize=32768,wsize=32768,timeo=600,vers=3,proto=tcp,noac,sec=sys.)

Set up the automatic NFS mount with the smit nfs command:

# smit nfs

[TOP]                                                   [Entry Fields]
* Pathname of mount point                               [/voting_disk]
* Pathname of remote directory                          [/oradata/votedisk]
* Host where remote directory resides                   [ywtcdb]
  Mount type name                                       []
* Security method                                       [sys]
* Mount now, add entry to /etc/filesystems or both?      both
* /etc/filesystems entry will mount the directory        yes

3.7 manually mount the directory (run on each node)

/usr/sbin/nfso -p -o nfs_use_reserved_ports=1

(or: nfso -p -o nfs_use_reserved_ports=1)

su - root

mount -v nfs -o rw,bg,hard,intr,rsize=32768,wsize=32768,timeo=600,vers=3,proto=tcp,noac,sec=sys 100.15.57.125:/oradata/votedisk /voting_disk

Note: in this command, 100.15.57.125 is the IP address of the NFS server, /oradata/votedisk is the exported directory on the NFS server, and /voting_disk is the mount point on the database host.

3.8 Generate a disk file with the dd command (on any database node)

dd if=/dev/zero of=/voting_disk/vote_disk_nfs bs=1M count=1000

3.9 add the newly generated disk to the disk group OCRVOTE

su - grid
export DISPLAY=100.15.70.169:0.0
asmca

In asmca, first change the Disk Discovery Path.

Before modification:

/dev/vx/rdmp/*

After modification:

/voting_disk/vote_disk_nfs, /dev/vx/rdmp/*

Add disk /voting_disk/vote_disk_nfs to a new failure group in disk group OCRVOTE; once it has been added, disk group OCRVOTE has three failure groups (a roughly equivalent SQL statement is sketched below).
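Oracle 11.2 also supports marking such an NFS voting disk as a QUORUM failure group. A roughly equivalent SQL statement on the ASM instance would be the following sketch; the failure group name nfs_fg is only an illustrative choice:

ALTER DISKGROUP OCRVOTE ADD QUORUM FAILGROUP nfs_fg DISK '/voting_disk/vote_disk_nfs';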

3.10 check whether votedisk is on the new disk.

$ crsctl query css votedisk
##  STATE    File Universal Id                File Name                       Disk group
 1. ONLINE   89210622f0864ff0bf9517205691e679 (/voting_disk/vote_disk_nfs) [OCRVOTE]
 2. ONLINE   55c4ee685a824ff3bf6ce510bf09468e (/dev/vx/rdmp/remc0_04a1) [OCRVOTE]
 3. ONLINE   159234e88fe64f55bf0d4571362c3b07 (/dev/vx/rdmp/rhitachi_v0_11cd) [OCRVOTE]
Located 3 voting disk(s).

3.11 Create the database; once the database has been built, the ASM EXTEND RAC creation is complete (a possible silent dbca invocation is sketched below)
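As an illustration only, a database of this kind might be created non-interactively with dbca in silent mode; the template name, passwords and flags below are placeholders and should be verified against dbca -help for 11.2.0.3 before use:

dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName testdb -sid testdb \
  -sysPassword change_me -systemPassword change_me \
  -storageType ASM -diskGroupName DATADG \
  -nodelist testdb1,testdb2,testdb3,testdb4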

4. ASM EXTEND RAC high availability tests

4.1 Unplug the EMC storage fiber on nodes 1 and 2 to simulate a storage outage

The CSS logs are as follows:

Node 1:

2014-05-20 14:46:44.886:
[cssd(4129042)]CRS-1649:An I/O error occurred for voting file: /dev/remc0_04a5; details at (:CSSNM00060:) in /oracle/app/11.2.0/grid/log/testdb1/cssd/ocssd.log.
2014-05-20 14:46:44.886:
[cssd(4129042)]CRS-1649:An I/O error occurred for voting file: /dev/remc0_04a5; details at (:CSSNM00059:) in /oracle/app/11.2.0/grid/log/testdb1/cssd/ocssd.log.
2014-05-20 14:46:46.051:
[cssd(4129042)]CRS-1626:A Configuration change request completed successfully
2014-05-20 14:46:46.071:
[cssd(4129042)]CRS-1601:CSSD Reconfiguration complete. Active nodes are testdb1 testdb2 testdb3 testdb4.

Node 2:

2014-05-20 14:46:46.053:
[cssd(4195026)]CRS-1604:CSSD voting file is offline: /dev/remc0_04a5; details at (:CSSNM00069:) in /oracle/app/11.2.0/grid/log/testdb2/cssd/ocssd.log.
2014-05-20 14:46:46.053:
[cssd(4195026)]CRS-1626:A Configuration change request completed successfully
2014-05-20 14:46:46.071:
[cssd(4195026)]CRS-1601:CSSD Reconfiguration complete. Active nodes are testdb1 testdb2 testdb3 testdb4.

Node 3:

2014-05-20 14:46:46.053:
[cssd(3604942)]CRS-1604:CSSD voting file is offline: /dev/remc0_04a5; details at (:CSSNM00069:) in /oracle/app/11.2.0/grid/log/testdb3/cssd/ocssd.log.
2014-05-20 14:46:46.053:
[cssd(3604942)]CRS-1626:A Configuration change request completed successfully
2014-05-20 14:46:46.074:
[cssd(3604942)]CRS-1601:CSSD Reconfiguration complete. Active nodes are testdb1 testdb2 testdb3 testdb4.

Node 4:

2014-05-20 14:46:46.053:
[cssd(3015132)]CRS-1604:CSSD voting file is offline: /dev/remc0_04a5; details at (:CSSNM00069:) in /oracle/app/11.2.0/grid/log/testdb4/cssd/ocssd.log.
2014-05-20 14:46:46.053:
[cssd(3015132)]CRS-1626:A Configuration change request completed successfully
2014-05-20 14:46:46.073:
[cssd(3015132)]CRS-1601:CSSD Reconfiguration complete. Active nodes are testdb1 testdb2 testdb3 testdb4.

CRS status is normal:

testdb3:/oracle/app/11.2.0/grid/log/testdb3/cssd (testdb3)$ /oracle/app/11.2.0/grid/bin/crsctl stat res -t

NAME TARGET STATE SERVER STATE_DETAILS

Local Resources

ora.DATADG.dg
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.LISTENER.lsnr
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.OCRVOTE.dg
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.SYSDG.dg
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.asm
    ONLINE  ONLINE  testdb1  Started
    ONLINE  ONLINE  testdb2  Started
    ONLINE  ONLINE  testdb3  Started
    ONLINE  ONLINE  testdb4  Started
ora.gsd
    OFFLINE OFFLINE testdb1
    OFFLINE OFFLINE testdb2
    OFFLINE OFFLINE testdb3
    OFFLINE OFFLINE testdb4
ora.net1.network
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.ons
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.registry.acfs
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4

Cluster Resources

ora.LISTENER_SCAN1.lsnr
    1  ONLINE  ONLINE  testdb4
ora.cvu
    1  ONLINE  ONLINE  testdb3
ora.oc4j
    1  ONLINE  ONLINE  testdb3
ora.scan1.vip
    1  ONLINE  ONLINE  testdb4
ora.testdb.db
    1  ONLINE  ONLINE  testdb1  Open
    2  ONLINE  ONLINE  testdb2  Open
    3  ONLINE  ONLINE  testdb3  Open
    4  ONLINE  ONLINE  testdb4  Open
ora.testdb1.vip
    1  ONLINE  ONLINE  testdb1
ora.testdb2.vip
    1  ONLINE  ONLINE  testdb2
ora.testdb3.vip
    1  ONLINE  ONLINE  testdb3
ora.testdb4.vip
    1  ONLINE  ONLINE  testdb4

View the votedisk as follows:

$ /oracle/app/11.2.0/grid/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name                       Disk group
 1. ONLINE   8a31ddf5013d4fb1bfdbb01d6fc6eb7b (/dev/rhitachi_v0_11cc) [OCRVOTE]
 2. ONLINE   1ef9486d54b24f8cbf07814d2848a009 (/voting_disk/vote_disk_nfs) [OCRVOTE]
Located 2 voting disk(s).

After the storage fiber is plugged back in, bring the disks online manually; the two storage arrays will then resynchronize the data automatically:

alter diskgroup SYSDG online disks in failgroup fail_1;
alter diskgroup DATADG online disks in failgroup fail_1;

Test result

The EMC disks were automatically taken OFFLINE in the ASM disk groups on every node, the HDS disks remained available, and every node's instance stayed up. In a separate test we unplugged the HDS storage fiber, with the same behaviour as unplugging the EMC fiber. The conclusion is that when the storage on one side fails, ASM EXTEND RAC continues to run normally on the storage retained on the other side and all node instances stay up. After the fiber is reconnected and the disks are brought online manually, the two storage arrays resynchronize the data automatically.

Note: the disk group that holds the voting disks brings its disks back online automatically once the storage is reattached; the query sketched below can be used to confirm the disk status.
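One way to verify that the disks are back is to query v$asm_disk from the ASM instance; this is a generic check rather than something specific to this environment:

SELECT group_number, failgroup, path, mode_status, state
FROM v$asm_disk
ORDER BY group_number, failgroup, path;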

4.2 Reboot the node 1 and node 2 hosts to simulate a sudden host outage

While the node 1 and node 2 hosts are rebooting, the CRS resource status is as follows:

$ crsctl stat res -t

NAME TARGET STATE SERVER STATE_DETAILS

Local Resources

ora.ARCHDG.dg
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.DATADG.dg
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.LISTENER.lsnr
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.OCRVOTE.dg
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.SYSDG.dg
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.asm
    ONLINE  ONLINE  testdb3  Started
    ONLINE  ONLINE  testdb4  Started
ora.gsd
    OFFLINE OFFLINE testdb3
    OFFLINE OFFLINE testdb4
ora.net1.network
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.ons
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.registry.acfs
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4

Cluster Resources

ora.LISTENER_SCAN1.lsnr
    1  ONLINE  ONLINE  testdb3
ora.cvu
    1  ONLINE  ONLINE  testdb3
ora.oc4j
    1  ONLINE  ONLINE  testdb3
ora.scan1.vip
    1  ONLINE  ONLINE  testdb3
ora.testdb.db
    1  ONLINE  OFFLINE
    2  ONLINE  OFFLINE
    3  ONLINE  ONLINE  testdb3  Open
    4  ONLINE  ONLINE  testdb4  Open
ora.testdb1.vip
    1  ONLINE  INTERMEDIATE  testdb4  FAILED OVER
ora.testdb2.vip
    1  ONLINE  INTERMEDIATE  testdb3  FAILED OVER
ora.testdb3.vip
    1  ONLINE  ONLINE  testdb3
ora.testdb4.vip
    1  ONLINE  ONLINE  testdb4

After the node 1 and node 2 hosts come back up, the CRS status is as follows:

$ crsctl stat res -t

NAME TARGET STATE SERVER STATE_DETAILS

Local Resources

ora.DATADG.dg
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.LISTENER.lsnr
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.OCRVOTE.dg
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.SYSDG.dg
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.asm
    ONLINE  ONLINE  testdb1  Started
    ONLINE  ONLINE  testdb2  Started
    ONLINE  ONLINE  testdb3  Started
    ONLINE  ONLINE  testdb4  Started
ora.gsd
    OFFLINE OFFLINE testdb1
    OFFLINE OFFLINE testdb2
    OFFLINE OFFLINE testdb3
    OFFLINE OFFLINE testdb4
ora.net1.network
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.ons
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.registry.acfs
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4

Cluster Resources

ora.LISTENER_SCAN1.lsnr
    1  ONLINE  ONLINE  testdb3
ora.cvu
    1  ONLINE  ONLINE  testdb3
ora.oc4j
    1  ONLINE  ONLINE  testdb4
ora.scan1.vip
    1  ONLINE  ONLINE  testdb3
ora.testdb.db
    1  ONLINE  ONLINE  testdb1  Open
    2  ONLINE  ONLINE  testdb2  Open
    3  ONLINE  ONLINE  testdb3  Open
    4  ONLINE  ONLINE  testdb4  Open
ora.testdb1.vip
    1  ONLINE  ONLINE  testdb1
ora.testdb2.vip
    1  ONLINE  ONLINE  testdb2
ora.testdb3.vip
    1  ONLINE  ONLINE  testdb3
ora.testdb4.vip
    1  ONLINE  ONLINE  testdb4

Test result

When one or more nodes go down, their VIPs fail over to the surviving nodes and clients reconnect to the available nodes. When the rebooted hosts come back up, CRS starts automatically and the VIPs fail back normally.

4.3 Simulate a public network outage

Because the hosts are virtualized, the network cable cannot physically be unplugged, so the test is done with ifconfig en1 down against the NIC that carries node 1's public IP.

1) check Node 1 and find that the public IP, VIP and SCAN IP are all on the network card en1.

root@testdb1:/# netstat -in

Name  Mtu    Network    Address           Ipkts     Ierrs  Opkts     Oerrs  Coll
en1   1500   link#2     0.14.5e.79.5c.ca  5153732   0      4066346   2      0
en1   1500   100.15.64  100.15.64.180     5153732   0      4066346   2      0
en1   1500   100.15.64  100.15.64.184     5153732   0      4066346   2      0
en1   1500   100.15.64  100.15.64.188     5153732   0      4066346   2      0
en2   1500   link#3     0.14.5e.79.5b.e6  40305463  0      44224443  2      0
en2   1500   7.154.64   7.154.64.1        40305463  0      44224443  2      0
en2   1500   169.254    169.254.78.30     40305463  0      44224443  2      0
lo0   16896  link#1                       2316784   0      2316787   0      0
lo0   16896  127        127.0.0.1         2316784   0      2316787   0      0
lo0   16896  ::1%1                        2316784   0      2316787   0

2) Use the command ifconfig en1 down to run the test

root@testdb1:/oracle/app/11.2.0/grid/bin# ifconfig en1 down

3) Check the CRS resource status: the VIP and SCAN IP have failed over to the healthy nodes.

testdb3:/home/oracle (testdb3)$ crsctl stat res -t

NAME TARGET STATE SERVER STATE_DETAILS

Local Resources

ora.DATADG.dg
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.LISTENER.lsnr
    ONLINE  OFFLINE testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.OCRVOTE.dg
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.SYSDG.dg
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.asm
    ONLINE  ONLINE  testdb1  Started
    ONLINE  ONLINE  testdb2  Started
    ONLINE  ONLINE  testdb3  Started
    ONLINE  ONLINE  testdb4  Started
ora.gsd
    OFFLINE OFFLINE testdb1
    OFFLINE OFFLINE testdb2
    OFFLINE OFFLINE testdb3
    OFFLINE OFFLINE testdb4
ora.net1.network
    ONLINE  OFFLINE testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.ons
    ONLINE  OFFLINE testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.registry.acfs
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4

Cluster Resources

ora.LISTENER_SCAN1.lsnr
    1  ONLINE  ONLINE  testdb2
ora.cvu
    1  ONLINE  ONLINE  testdb2
ora.oc4j
    1  ONLINE  ONLINE  testdb4
ora.scan1.vip
    1  ONLINE  ONLINE  testdb2
ora.testdb.db
    1  ONLINE  ONLINE  testdb1  Open
    2  ONLINE  ONLINE  testdb2  Open
    3  ONLINE  ONLINE  testdb3  Open
    4  ONLINE  ONLINE  testdb4  Open
ora.testdb1.vip
    1  ONLINE  INTERMEDIATE  testdb4  FAILED OVER
ora.testdb2.vip
    1  ONLINE  ONLINE  testdb2
ora.testdb3.vip
    1  ONLINE  ONLINE  testdb3
ora.testdb4.vip
    1  ONLINE  ONLINE  testdb4

4) activate the en1 network card of node 1

root@testdb1:/# ifconfig en1 up

5) Check the CRS resource status: the VIP has failed back normally.

testdb3:/home/oracle (testdb3)$ crsctl stat res -t

NAME TARGET STATE SERVER STATE_DETAILS

Local Resources

ora.DATADG.dg
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.LISTENER.lsnr
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.OCRVOTE.dg
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.SYSDG.dg
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.asm
    ONLINE  ONLINE  testdb1  Started
    ONLINE  ONLINE  testdb2  Started
    ONLINE  ONLINE  testdb3  Started
    ONLINE  ONLINE  testdb4  Started
ora.gsd
    OFFLINE OFFLINE testdb1
    OFFLINE OFFLINE testdb2
    OFFLINE OFFLINE testdb3
    OFFLINE OFFLINE testdb4
ora.net1.network
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.ons
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4
ora.registry.acfs
    ONLINE  ONLINE  testdb1
    ONLINE  ONLINE  testdb2
    ONLINE  ONLINE  testdb3
    ONLINE  ONLINE  testdb4

Cluster Resources

ora.LISTENER_SCAN1.lsnr
    1  ONLINE  ONLINE  testdb2
ora.cvu
    1  ONLINE  ONLINE  testdb2
ora.oc4j
    1  ONLINE  ONLINE  testdb4
ora.scan1.vip
    1  ONLINE  ONLINE  testdb2
ora.testdb.db
    1  ONLINE  ONLINE  testdb1  Open
    2  ONLINE  ONLINE  testdb2  Open
    3  ONLINE  ONLINE  testdb3  Open
    4  ONLINE  ONLINE  testdb4  Open
ora.testdb1.vip
    1  ONLINE  ONLINE  testdb1
ora.testdb2.vip
    1  ONLINE  ONLINE  testdb2
ora.testdb3.vip
    1  ONLINE  ONLINE  testdb3
ora.testdb4.vip
    1  ONLINE  ONLINE  testdb4

Test result

The listener on the test node (node 1) stops. The SCAN listener, which had been running on that node, fails over to another available node, and so does the node's VIP. When the NIC is brought back up (the public network recovers), the VIP fails back and the node's listener comes back online automatically, but the SCAN listener and SCAN VIP do not fail back. Repeating the test on the public NICs of the other nodes shows that the SCAN listener moves to the node with the smallest instance_number, while the VIP fails over to a random node.

4.4 Listener failure test

Implemented by killing the listener process.

Test result

Existing connections are not affected; new connections cannot reach the instance on that node and are redirected to another node through TAF or by the application reconnecting (a sample TAF configuration is sketched below).

The listener process restarts automatically.
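A minimal client-side tnsnames.ora entry of the kind that enables this reconnection, as a sketch only (the service name testdb and the SCAN host come from this environment, while the alias and failover parameters are illustrative):

TESTDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = testdb-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = testdb)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 30)(DELAY = 5))
    )
  )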

4.5 Database single-instance crash test

Implemented by killing the pmon process (for example, as sketched below).
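For example, on node 1 the pmon process could be located and killed as follows; the instance name testdb1 is an assumption about this environment:

ps -ef | grep ora_pmon_testdb1 | grep -v grep
kill -9 <pid_of_pmon>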

Test result

After the pmon process is killed, the database instance crashes and then restarts automatically; sessions reconnect once the restart completes.

4.6 Simulate a CSSD process crash

Implemented by killing the cssd process.

Test result

After the cssd process is killed, the node reboots and its VIP fails over to the other healthy nodes. When the host comes back up, CRS is started automatically and the cluster is reconfigured.

4.7 Simulate a CRSD process crash

Implemented by killing the crsd process.

Test result

After the crsd.bin process is killed, it is restarted automatically within about a minute. Reason: the crsd crash is detected by orarootagent, which restarts the crsd process (this can be observed as sketched below).
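A simple way to watch this, assuming standard 11.2 tools, is to check the process and the lower-stack resources before and after the kill; ora.crsd should briefly go OFFLINE and return to ONLINE:

ps -ef | grep crsd.bin | grep -v grep
crsctl stat res -t -init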

4.8 Simulate an EVMD process crash

Implemented by killing the evmd process.

Test result

After the evmd.bin process is killed, it is restarted automatically within about a minute. Reason: the evmd crash is detected by the ohasd process, and the evmd, orarootagent and crsd processes are restarted.
