
Capacity expansion of ASM disk Group in Oracle RAC Environment


In a production environment, pay particular attention to the following points (a pre-check sketch follows the list):

1. Add the UDEV disks to the nodes in a rolling fashion

2. ASM instance memory size

3. POWER_LIMIT: do not set it too high
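
Before starting in production, it is worth confirming the ASM instance's memory and default rebalance settings up front. A minimal sketch, run as sysasm on the ASM instance (the values below are illustrative, not recommendations):

SQL > -- current ASM instance memory settings
SQL > show parameter memory_target
SQL > -- default rebalance power used when ALTER DISKGROUP does not specify one
SQL > show parameter asm_power_limit
SQL > -- example only: adjust the default cautiously
SQL > -- alter system set asm_power_limit=2 scope=both sid='*';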

Experimental environment

-- system environment: CentOS 6.10

-- Database: Oracle 11g RAC

-- disk binding: udev

Experimental purpose

-- Storage space is insufficient; disks need to be added to disk group DATA

Preparatory work

1. Check the database, ASM, and Grid Infrastructure logs on each node for error messages (see the sketch below)
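
A quick way to scan the logs for recent errors; the paths below assume a default 11.2 layout (adjust $GRID_HOME, $ORACLE_BASE, instance and node names to your environment):

# Grid Infrastructure alert log, run on each node
tail -100 $GRID_HOME/log/`hostname -s`/alert`hostname -s`.log
# ASM instance alert log
tail -100 $ORACLE_BASE/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log | grep -i ora-
# database alert log
tail -100 $ORACLE_BASE/diag/rdbms/racnode/RacNode1/trace/alert_RacNode1.log | grep -i ora-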

2. Check the service status on each node (switch to the grid user)

-- list the database name

[grid@node2 ~]$ srvctl config database

RacNode

[grid@node2 ~]$

-- instance status

[grid@node2 ~]$ srvctl status database -d RacNode

Instance RacNode1 is running on node node1

Instance RacNode2 is running on node node2

[grid@node2 ~]$

-- Cluster status:

[grid@node2 ~]$ crsctl check cluster -all

**************************************************************

node1:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

**************************************************************

node2:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

**************************************************************

[grid@node2 ~]$

-- Cluster resource status

[grid@node2 ~]$ crsctl status res -t

3. ASM disk group and disk checks

[grid@node1 ~]$ export ORACLE_SID=+ASM1

[grid@node1 ~]$ sqlplus /nolog

SQL > conn / as sysasm

-- display the databases that are using the ASM disk groups

SQL > col INSTANCE_NAME format A20

SQL > col SOFTWARE_VERSION format A20

SQL > select * from gv$asm_client order by 1, 2;

INST_ID GROUP_NUMBER INSTANCE_NAME        DB_NAME  STATUS     SOFTWARE_VERSION COMPATIBLE_VERSION
------- ------------ -------------------- -------- ---------- ---------------- ------------------
      1            1 RacNode1             RacNode  CONNECTED  11.2.0.4.0       11.2.0.4.0
      1            1 +ASM1                +ASM     CONNECTED  11.2.0.4.0       11.2.0.4.0
      1            3 +ASM1                +ASM     CONNECTED  11.2.0.4.0       11.2.0.4.0
      2            1 RacNode2             RacNode  CONNECTED  11.2.0.4.0       11.2.0.4.0
      2            1 +ASM2                +ASM     CONNECTED  11.2.0.4.0       11.2.0.4.0
      2            3 +ASM2                +ASM     CONNECTED  11.2.0.4.0       11.2.0.4.0

SQL >

-- Show disk groups

SQL > select group_number,name,state,type,total_mb,free_mb,usable_file_mb,allocation_unit_size/1024/1024 unit_mb from v$asm_diskgroup order by 1;

GROUP_NUMBER NAME    STATE   TYPE   TOTAL_MB FREE_MB USABLE_FILE_MB UNIT_MB
------------ ------- ------- ------ -------- ------- -------------- -------
           1 DATA    MOUNTED NORMAL     4096     477           -273       1
           2 FLASH   MOUNTED EXTERN     2048    1951           1951       1
           3 OCRVOTE MOUNTED NORMAL     3072    2146            561       1

SQL >

-- display the disks

SQL > col NAME format A25

SQL > col PATH format A40

SQL > col FAILGROUP format A25

SQL > select group_number,name,path,mount_status,state,redundancy,total_mb,free_mb,failgroup,create_date from v$asm_disk order by 1, 2;

GROUP_NUMBER NAME         PATH               MOUNT_S STATE  REDUNDA TOTAL_MB FREE_MB FAILGROUP    CREATE_DA
------------ ------------ ------------------ ------- ------ ------- -------- ------- ------------ ---------
           1 DATA_0000    /dev/asm-data01    CACHED  NORMAL UNKNOWN     1024     118 DATA_0000    11-DEC-18
           1 DATA_0001    /dev/asm-data02    CACHED  NORMAL UNKNOWN     1024     125 DATA_0001    11-DEC-18
           1 DATA_0002    /dev/asm-data03    CACHED  NORMAL UNKNOWN     1024     115 DATA_0002    11-DEC-18
           1 DATA_0003    /dev/asm-data04    CACHED  NORMAL UNKNOWN     1024     119 DATA_0003    11-DEC-18
           2 FLASH_0000   /dev/asm-arch01    CACHED  NORMAL UNKNOWN     1024     975 FLASH_0000   11-DEC-18
           2 FLASH_0001   /dev/asm-arch02    CACHED  NORMAL UNKNOWN     1024     976 FLASH_0001   11-DEC-18
           3 OCRVOTE_0000 /dev/asm-ocrvote01 CACHED  NORMAL UNKNOWN     1024     715 OCRVOTE_0000 11-DEC-18
           3 OCRVOTE_0001 /dev/asm-ocrvote02 CACHED  NORMAL UNKNOWN     1024     715 OCRVOTE_0001 11-DEC-18
           3 OCRVOTE_0002 /dev/asm-ocrvote03 CACHED  NORMAL UNKNOWN     1024     716 OCRVOTE_0002 11-DEC-18

SQL >

4. System-level UDEV information

[root@node2 ~]# cat /etc/udev/rules.d/99-oracle-asmdevices.rules

KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB132e6928-d49d18d4", NAME="asm-ocrvote01", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB58647ee7-b466963a", NAME="asm-ocrvote02", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB4c771d58-f17105b9", NAME="asm-ocrvote03", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB88c685cb-3a4633f4", NAME="asm-data01", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB1b30fd18-af14e003", NAME="asm-data02", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB67a4423a-e151f28b", NAME="asm-data03", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB81f04366-170fc910", NAME="asm-data04", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB9a0f6e52-bcfcb52c", NAME="asm-arch01", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB1f2bac75-512a46ae", NAME="asm-arch02", OWNER="grid", GROUP="asmadmin", MODE="0660"

[root@node2 ~]#

[root@node2 ~]# ll /dev/asm-*

brw-rw---- 1 grid asmadmin 8,  16 Dec 12 17:22 /dev/asm-arch01
brw-rw---- 1 grid asmadmin 8,  32 Dec 12 16:21 /dev/asm-arch02
brw-rw---- 1 grid asmadmin 8,  48 Dec 12 17:22 /dev/asm-data01
brw-rw---- 1 grid asmadmin 8,  64 Dec 12 17:22 /dev/asm-data02
brw-rw---- 1 grid asmadmin 8,  80 Dec 12 17:22 /dev/asm-data03
brw-rw---- 1 grid asmadmin 8,  96 Dec 12 17:22 /dev/asm-data04
brw-rw---- 1 grid asmadmin 8, 112 Dec 12 17:22 /dev/asm-ocrvote01
brw-rw---- 1 grid asmadmin 8, 128 Dec 12 17:22 /dev/asm-ocrvote02
brw-rw---- 1 grid asmadmin 8, 144 Dec 12 17:22 /dev/asm-ocrvote03

[root@node2 ~]#

Experimental procedure

1. Stop the application workload on the database

2. Check database sessions and stop the listeners (a session-check sketch follows)
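
Before stopping anything, a quick look at who is still connected helps confirm that the application really has been stopped. A sketch, run from either instance:

SQL > -- count remaining application sessions per instance and user
SQL > select inst_id, username, count(*) from gv$session where username is not null group by inst_id, username order by 1, 2;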

-- check the listener status on each node

[grid@node1 ~]$ srvctl status listener -n node1

Listener LISTENER is enabled on node(s): node1

Listener LISTENER is running on node(s): node1

[grid@node1 ~]$ srvctl status listener -n node2

Listener LISTENER is enabled on node(s): node2

Listener LISTENER is running on node(s): node2

[grid@node1 ~]$

-- disable listener autostart

[grid@node1 ~]$ srvctl disable listener -n node1

[grid@node1 ~]$ srvctl disable listener -n node2

-- stop the listeners

[grid@node1 ~]$ srvctl stop listener -n node1

[grid@node1 ~]$ srvctl stop listener -n node2

-- check the listener status after disabling and stopping

[grid@node1 ~]$ srvctl status listener -n node1

Listener LISTENER is disabled on node(s): node1

Listener LISTENER is not running on node(s): node1

[grid@node1 ~]$ srvctl status listener -n node2

Listener LISTENER is disabled on node(s): node2

Listener LISTENER is not running on node(s): node2

[grid@node1 ~]$

3. Shut down the database

-- check database configuration

[grid@node1 ~]$ srvctl config database -d RacNode

Database unique name: RacNode

Database name: RacNode

Oracle home: /u01/app/oracle/11.2.0/dbhome_1

Oracle user: oracle

Spfile: +DATA/RacNode/spfileRacNode.ora

Domain:

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: RacNode

Database instances: RacNode1,RacNode2

Disk Groups: DATA

Mount point paths:

Services:

Type: RAC

Database is administrator managed

[grid@node1 ~]$

-- disable database autostart (switch to the root user)

[root@node2 ~]# cd /u01/app/11.2.0/grid/bin

[root@node2 bin]# ./srvctl disable database -d RacNode

[root@node2 bin]#

-- stop the database

[grid@node1 ~]$ srvctl stop database -d RacNode

[grid@node1 ~]$

-- check the database status after shutdown

[grid@node1 ~]$ srvctl status database -d RacNode

Instance RacNode1 is not running on node node1

Instance RacNode2 is not running on node node2

[grid@node1 ~]$

4. Stop the cluster software

-- check whether cluster autostart is enabled on each node

[root@node1 bin]# ./crsctl config has

CRS-4622: Oracle High Availability Services autostart is enabled.

[root@node1 bin]#

[root@node2 bin]# ./crsctl config has

CRS-4622: Oracle High Availability Services autostart is enabled.

-- disable autostart on each node

[root@node1 bin]# ./crsctl disable has

CRS-4621: Oracle High Availability Services autostart is disabled.

[root@node1 bin]#

[root@node2 bin]# ./crsctl disable has

CRS-4621: Oracle High Availability Services autostart is disabled.

[root@node2 bin]#

-- verify that autostart is now disabled on each node

[root@node1 bin]# ./crsctl config has

CRS-4621: Oracle High Availability Services autostart is disabled.

[root@node1 bin]#

[root@node2 bin]# ./crsctl config has

CRS-4621: Oracle High Availability Services autostart is disabled.

[root@node2 bin]#

-- stop the cluster on each node

[root@node1 bin]# ./crsctl stop has

[root@node2 bin]# ./crsctl stop has

5. Add the disks at the system level (with the storage engineer's assistance); a rescan sketch follows
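
If the newly presented LUNs are not visible yet, a SCSI bus rescan usually avoids a reboot. A sketch only; whether this is needed, and which hosts to scan, depends on the storage and HBA setup (assumption: Linux sd devices, as in this lab):

# rescan every SCSI host adapter for newly presented disks
for host in /sys/class/scsi_host/host*
do
echo "- - -" > $host/scan
done
# confirm the new devices appeared
fdisk -l | grep '^Disk /dev/sd'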

6. Add the disks with UDEV on each node

-- identify the new device names

fdisk -l

/dev/sdk

/dev/sdl

/dev/sdm

/dev/sdn

-- generate the binding rules

for i in k l m n
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
done

(The NAME values in the generated lines were then edited to asm-data05 through asm-data08 to follow the existing naming convention.)

KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB00e4d091-5990307e", NAME="asm-data05", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB8bc8125c-72dacc92", NAME="asm-data06", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBc195c913-00f6c68e", NAME="asm-data07", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB720e4cca-32c20936", NAME="asm-data08", OWNER="grid", GROUP="asmadmin", MODE="0660"

-- append the new rules to 99-oracle-asmdevices.rules on each node (see the copy sketch below)
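
To keep the nodes identical, it is easiest to finish the rules file on one node and copy it to the other, then compare checksums. A sketch, run as root from node1 (path as above):

scp /etc/udev/rules.d/99-oracle-asmdevices.rules node2:/etc/udev/rules.d/
# verify both copies match (run on each node)
md5sum /etc/udev/rules.d/99-oracle-asmdevices.rules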

-- restart udev

[root@node1 rules.d]# start_udev

[root@node2 rules.d]# start_udev
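
start_udev is specific to older distributions such as the CentOS 6 used here; on newer systems the equivalent reload is, to the best of my knowledge:

udevadm control --reload-rules
udevadm trigger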

-- confirm the added disks; every node must see the same devices consistently

[root@node2 rules.d]# ll /dev/asm-*
brw-rw---- 1 grid asmadmin 8,  16 Dec 13 16:02 /dev/asm-arch01
brw-rw---- 1 grid asmadmin 8,  32 Dec 13 16:02 /dev/asm-arch02
brw-rw---- 1 grid asmadmin 8,  48 Dec 13 16:02 /dev/asm-data01
brw-rw---- 1 grid asmadmin 8,  64 Dec 13 16:02 /dev/asm-data02
brw-rw---- 1 grid asmadmin 8,  80 Dec 13 16:02 /dev/asm-data03
brw-rw---- 1 grid asmadmin 8,  96 Dec 13 16:02 /dev/asm-data04
brw-rw---- 1 grid asmadmin 8, 160 Dec 13 16:13 /dev/asm-data05
brw-rw---- 1 grid asmadmin 8, 176 Dec 13 16:13 /dev/asm-data06
brw-rw---- 1 grid asmadmin 8, 192 Dec 13 16:13 /dev/asm-data07
brw-rw---- 1 grid asmadmin 8, 208 Dec 13 16:13 /dev/asm-data08
brw-rw---- 1 grid asmadmin 8, 112 Dec 13 16:02 /dev/asm-ocrvote01
brw-rw---- 1 grid asmadmin 8, 128 Dec 13 16:02 /dev/asm-ocrvote02
brw-rw---- 1 grid asmadmin 8, 144 Dec 13 16:02 /dev/asm-ocrvote03

[root@node2 rules.d]#

7. Start the cluster

[root@node1 bin]# ./crsctl start has

CRS-4123: Oracle High Availability Services has been started.

[root@node2 bin]# ./crsctl start has

CRS-4123: Oracle High Availability Services has been started.

[root@node2 bin]#

-- check that the cluster resources start up properly

[grid@node2 ~]$ crsctl status res -t

At this point the listener and database resources are still disabled, as intended.

8. ASM capacity expansion

-- check whether ASM can see the not-yet-added disks (a discovery-string sketch follows)
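
If the new devices do not appear in v$asm_disk at all, check the discovery string first. A sketch; the value shown is an assumption, since defaults vary by installation:

SQL > show parameter asm_diskstring
SQL > -- widen it if necessary, for example:
SQL > -- alter system set asm_diskstring='/dev/asm-*' scope=both sid='*';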

SQL > set line 200

SQL > col NAME format A25

SQL > col PATH format A40

SQL > col FAILGROUP format A25

SQL > select group_number,name,path,mount_status,state,redundancy,total_mb,free_mb,failgroup,create_date from v$asm_disk order by 1, 2;

GROUP_NUMBER NAME         PATH               MOUNT_S STATE  REDUNDA TOTAL_MB FREE_MB FAILGROUP    CREATE_DA
------------ ------------ ------------------ ------- ------ ------- -------- ------- ------------ ---------
           0              /dev/asm-data07    CLOSED  NORMAL UNKNOWN        0       0
           0              /dev/asm-data08    CLOSED  NORMAL UNKNOWN        0       0
           0              /dev/asm-data06    CLOSED  NORMAL UNKNOWN        0       0
           0              /dev/asm-data05    CLOSED  NORMAL UNKNOWN        0       0
           1 DATA_0000    /dev/asm-data01    CACHED  NORMAL UNKNOWN     1024     118 DATA_0000    11-DEC-18
           1 DATA_0001    /dev/asm-data02    CACHED  NORMAL UNKNOWN     1024     125 DATA_0001    11-DEC-18
           1 DATA_0002    /dev/asm-data03    CACHED  NORMAL UNKNOWN     1024     115 DATA_0002    11-DEC-18
           1 DATA_0003    /dev/asm-data04    CACHED  NORMAL UNKNOWN     1024     119 DATA_0003    11-DEC-18
           2 FLASH_0000   /dev/asm-arch01    CACHED  NORMAL UNKNOWN     1024     975 FLASH_0000   11-DEC-18
           2 FLASH_0001   /dev/asm-arch02    CACHED  NORMAL UNKNOWN     1024     976 FLASH_0001   11-DEC-18
           3 OCRVOTE_0000 /dev/asm-ocrvote01 CACHED  NORMAL UNKNOWN     1024     715 OCRVOTE_0000 11-DEC-18
           3 OCRVOTE_0001 /dev/asm-ocrvote02 CACHED  NORMAL UNKNOWN     1024     715 OCRVOTE_0001 11-DEC-18
           3 OCRVOTE_0002 /dev/asm-ocrvote03 CACHED  NORMAL UNKNOWN     1024     716 OCRVOTE_0002 11-DEC-18

SQL >

-- add disks to disk group DATA

SQL > alter diskgroup DATA add disk '/dev/asm-data05' rebalance power 5;

SQL > alter diskgroup DATA add disk '/dev/asm-data06' rebalance power 5;

Note: choose a rebalance power level from 1 to 11; the higher the value, the faster the rebalance but the heavier the I/O impact on the running system. Pick a level appropriate to the business load at the time (an adjustment sketch follows).
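
If the chosen power turns out to be too aggressive for the workload (or too slow), it can be changed while the rebalance is running, for example:

SQL > -- restart the ongoing rebalance of DATA at a lower power
SQL > alter diskgroup DATA rebalance power 2;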

-- monitor the disk group rebalance until it completes

SQL > select * from v$asm_operation;
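
In RAC the global view is safer, since the rebalance may be running on the other instance; EST_MINUTES gives a rough time to completion. A sketch:

SQL > select inst_id, group_number, operation, state, power, sofar, est_work, est_rate, est_minutes from gv$asm_operation;
SQL > -- "no rows selected" means the rebalance has completed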

-- check the result of the addition (two disks are deliberately left unadded in this experiment)

SQL > set line 200

SQL > col name format A20

SQL > col path format A40

SQL > select group_number,name,path,mount_status,state,redundancy,total_mb,free_mb,failgroup,create_date from v$asm_disk order by 1, 2;

GROUP_NUMBER NAME         PATH               MOUNT_S STATE  REDUNDA TOTAL_MB FREE_MB FAILGROUP    CREATE_DA
------------ ------------ ------------------ ------- ------ ------- -------- ------- ------------ ---------
           0              /dev/asm-data08    CLOSED  NORMAL UNKNOWN        0       0
           0              /dev/asm-data06    CLOSED  NORMAL UNKNOWN        0       0
           0              /dev/asm-data07    CLOSED  NORMAL UNKNOWN        0       0
           1 DATA_0000    /dev/asm-data01    CACHED  NORMAL UNKNOWN     1024     300 DATA_0000    11-DEC-18
           1 DATA_0001    /dev/asm-data02    CACHED  NORMAL UNKNOWN     1024     297 DATA_0001    11-DEC-18
           1 DATA_0002    /dev/asm-data03    CACHED  NORMAL UNKNOWN     1024     298 DATA_0002    11-DEC-18
           1 DATA_0003    /dev/asm-data04    CACHED  NORMAL UNKNOWN     1024     298 DATA_0003    11-DEC-18
           1 DATA_0004    /dev/asm-data05    CACHED  NORMAL UNKNOWN     1024     306 DATA_0004    13-DEC-18
           2 FLASH_0000   /dev/asm-arch01    CACHED  NORMAL UNKNOWN     1024     975 FLASH_0000   11-DEC-18
           2 FLASH_0001   /dev/asm-arch02    CACHED  NORMAL UNKNOWN     1024     976 FLASH_0001   11-DEC-18
           3 OCRVOTE_0000 /dev/asm-ocrvote01 CACHED  NORMAL UNKNOWN     1024     715 OCRVOTE_0000 11-DEC-18
           3 OCRVOTE_0001 /dev/asm-ocrvote02 CACHED  NORMAL UNKNOWN     1024     715 OCRVOTE_0001 11-DEC-18
           3 OCRVOTE_0002 /dev/asm-ocrvote03 CACHED  NORMAL UNKNOWN     1024     716 OCRVOTE_0002 11-DEC-18

SQL >

9. Start the database

-- re-enable the database resource (ora.racnode.db); otherwise srvctl cannot start the database

[root@node1 bin]# ./srvctl enable database -d RacNode

[root@node1 bin]#

[grid@node2 ~]$ srvctl start database -d RacNode

[grid@node2 ~]$ srvctl status database -d RacNode

Instance RacNode1 is running on node node1

Instance RacNode2 is running on node node2

[grid@node2 ~]$

10. Start the listeners

-- re-enable the listener resources; otherwise srvctl cannot start the listeners

[grid@node2 ~]$ srvctl enable listener -n node1

[grid@node2 ~]$ srvctl enable listener -n node2

[grid@node2 ~]$ srvctl start listener -n node1

[grid@node2 ~]$ srvctl start listener -n node2
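
Finally, confirm the new capacity and remember that HAS autostart, disabled in step 4, has not been re-enabled yet, unlike the listener and database resources. A closing sketch:

SQL > -- DATA should now show the added space
SQL > select name, state, type, total_mb, free_mb, usable_file_mb from v$asm_diskgroup where name='DATA';

-- re-enable cluster autostart on each node (as root)
[root@node1 bin]# ./crsctl enable has
[root@node2 bin]# ./crsctl enable has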
