
Migrate the database from a failed ASM disk group to a new disk group

2025-04-01 Update From: SLTechnology News&Howtos shulou NAV: SLTechnology News&Howtos > Database >


A 10g RAC system on RHEL lost a disk six months ago. The original disk group was built from three 150 GB disks with NORMAL redundancy, and the missing disk was only noticed during an inspection a few days ago (200 words omitted here). The environment uses both ASMLIB and raw-device bindings, with 8 disks presented over 64 paths. The database currently holds about 200 GB of data. According to asmcmd, the two surviving disks total 300 GB with only 3 GB free, and the storage array has only 300 GB of unallocated space left. To be safe, the database in the damaged disk group is migrated directly to a new 300 GB disk group with EXTERNAL redundancy.

1 Environment Information

1.1 Hardware Environment

Operating system   RHEL 5.5 x86_64
Host               HP Blade
Storage            HP HSV400

1.2 Database Software

Cluster software version   Oracle Clusterware 10.2.0.1 x86_64
Database version           Oracle Database 10.2.0.1 Enterprise Edition x86_64
Database patch             10.2.0.5 x86_64

1.3 Storage Information

Multipath device   Capacity (GB)   Use                    Raw device
ocr1               1               OCR disk               raw1
ocr2               1               OCR mirrored disk      raw2
votingdisk1        1               Voting disk            raw3
votingdisk2        1               Voting disk mirror 1   raw4
votingdisk3        1               Voting disk mirror 2   raw5
asmdisk1           150             ASM data disk          raw6
asmdisk2           150             ASM data disk          raw7
asmdisk3           150             ASM data disk          raw8

MOUNT_S  HEADER_STATU  MODE_ST  STATE   TOTAL   FREE   NAME             FAILGROUP        PATH
CLOSED   MEMBER        ONLINE   NORMAL  153597      0                                    /dev/raw/raw7
CLOSED   FOREIGN       ONLINE   NORMAL    1019      0                                    /dev/raw/raw5
CLOSED   FOREIGN       ONLINE   NORMAL    1019      0                                    /dev/raw/raw4
CLOSED   FOREIGN       ONLINE   NORMAL    1019      0                                    /dev/raw/raw3
CLOSED   FOREIGN       ONLINE   NORMAL    1019      0                                    /dev/raw/raw2
CLOSED   FOREIGN       ONLINE   NORMAL    1019      0                                    /dev/raw/raw1
CACHED   MEMBER        ONLINE   NORMAL  153597   3229  BHPSDB_ASM_0000  BHPSDB_ASM_0000  /dev/raw/raw6
MISSING  UNKNOWN       OFFLINE  HUNG    153597  13761  BHPSDB_ASM_0001  BHPSDB_ASM_0001
CACHED   MEMBER        ONLINE   NORMAL  153597   3230  BHPSDB_ASM_0002  BHPSDB_ASM_0002  /dev/raw/raw8

/dev/raw/raw7 is the failed disk, and its status in the disk group is already MISSING.

1.4 Migration Planning

Because the original disk group uses NORMAL redundancy and has only 3 GB free, while the database occupies about 200 GB, the group can no longer safely hold the database, and the only new storage available is a single 300 GB disk. To avoid a rebalance failure on the degraded group, the 300 GB disk is built into an EXTERNAL-redundancy disk group and the database is migrated into it with RMAN COPY. After the migration succeeds, the three original 150 GB disks are dropped and rebuilt as a spare EXTERNAL disk group.
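The space constraint behind this plan can be checked with quick arithmetic: NORMAL redundancy mirrors every extent, so three 150 GB disks yield only about half their raw capacity as usable space, which a 200 GB database nearly fills. A minimal sketch, using the sizes quoted above:

```shell
#!/bin/sh
# Space arithmetic for the migration decision (sizes in GB, from the text above).
DISKS=3; DISK_GB=150; DB_GB=200

RAW=$((DISKS * DISK_GB))      # total raw capacity of the old group: 450 GB
NORMAL_USABLE=$((RAW / 2))    # NORMAL redundancy = two-way mirror: ~225 GB usable
EXTERNAL_USABLE=300           # one 300 GB disk, EXTERNAL redundancy: no mirroring

echo "raw=${RAW} normal_usable=${NORMAL_USABLE} external_usable=${EXTERNAL_USABLE}"

# The 200 GB database barely fit the old group and fits the new one comfortably.
[ "$DB_GB" -lt "$NORMAL_USABLE" ] && echo "old group: tight fit"
[ "$DB_GB" -lt "$EXTERNAL_USABLE" ] && echo "new group: fits"
```

This also shows why a rebalance on the degraded group is risky: with one mirror disk gone, free space is nowhere near enough to re-mirror 200 GB of data.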

2 database backup

This step is performed by the backup vendor.

3 Identify the new disk for ASM

3.1 Stop the cluster environment

The original environment uses multipathing, raw-device bindings, and ASMLIB. Adding a disk to ASM therefore requires configuring both the multipath and raw-device layers. To be safe, stop the database and the cluster before operating.

[oracle@bhpsdb1 ~]$ crs_stat -t

[oracle@bhpsdb1 ~]$ srvctl stop database -d bhspdb

[root@bhpsdb1 ~]# crsctl stop crs

3.2 The operating system recognizes the new disk

Because the system has many disk paths, many disk devices appear. The original disks are all partitioned, so a disk that carries no partition information is the newly added disk.

[root@bhpsdb1 ~]# ls -la /dev/sd* | grep -v '[0-9]$'
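The `grep -v '[0-9]$'` filter works because Linux names whole SCSI disks `sdX` and their partitions `sdXN`: dropping names that end in a digit leaves only whole disks, and any whole disk with no partition entries is a candidate new disk. A self-contained illustration of the filter (the device names here are invented for the example, not from the real system):

```shell
#!/bin/sh
# Illustrate the partition filter: sde is partitioned (sde1 exists), sdf is not.
devices="/dev/sde
/dev/sde1
/dev/sdf"

# Names ending in a digit are partitions; drop them to see whole disks only.
whole=$(printf '%s\n' "$devices" | grep -v '[0-9]$')
echo "$whole"

# /dev/sdf appears with no matching partition entry, so it is the new, unpartitioned disk.
```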

3.3 configure disk multipathing (all nodes)

The WWID of the new disk can be obtained with `scsi_id -gus /block/sdnew` or from `ls -la /dev/disk/by-id`.

[root@bhpsdb1 ~]# scsi_id -gus /block/sdf

[root@bhpsdb1 ~]# ls -la /dev/disk/by-id

Add the following entry for the new disk.

[root@bhpsdb1 ~]# vi /etc/multipath.conf

multipaths {
    ……
    multipath {
        wwid 36001438005deda610000600001040000
        alias asmdisk4
    }
}

3.4 restart multipath service (all nodes)

[root@bhpsdb1 ~]# service multipathd restart

3.5 detect multipath device status (all nodes)

Make sure that the new disk asmdisk4 configuration is available.

[root@bhpsdb1 ~]# multipath -ll

asmdisk4 (36001438005deda610000600000f80000) dm-0 HP,HSV400
[size=300G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=200][active]
 \_ sdaa 65:160 [active][ready]
 \_ sdaq 66:160 [active][ready]
 \_ sdbg 67:160 [active][ready]
 \_ sdk  8:160  [active][ready]
\_ round-robin 0 [prio=40][enabled]
 \_ sdai 66:32  [active][ready]
 \_ sday 67:32  [active][ready]
 \_ sdc  8:32   [active][ready]
 \_ sds  65:32  [active][ready]
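With eight SAN paths per LUN it is easy to miscount by eye; the `[active][ready]` markers in `multipath -ll` output can be counted mechanically instead. A small sketch over a few sample lines (the sample text is illustrative, not the real listing):

```shell
#!/bin/sh
# Count ready paths in (sample) multipath -ll output.
sample='\_ 0:0:3:3 sdaa 65:160 [active][ready]
\_ 1:0:1:3 sdaq 66:160 [active][ready]
\_ 1:0:3:3 sdbg 67:160 [faulty][failed]'

ready=$(printf '%s\n' "$sample" | grep -c '\[active\]\[ready\]')
echo "ready paths: $ready"
```

On the real system the same `grep -c` over `multipath -ll asmdisk4` output should report all eight paths as ready before proceeding.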

3.6 Partition the multipath device

The device is partitioned so that it matches the original disk devices: once the multipath device carries partition information, it can be distinguished from any newly attached, unpartitioned sd disk.

A single node performs formatting operations:

[root@bhpsdb1 ~]# ls -la /dev/mapper/*

[root@bhpsdb1 ~]# kpartx -a /dev/mapper/asmdisk4

Other nodes perform partition device discovery operations:

[root@bhpsdb2 ~]# partprobe

3.7 Raw device binding (all nodes)

Edit the raw device configuration file and add an entry for the new disk.

3.7.1 Configure the raw device binding file

[root@bhpsdb1 ~]# vi /etc/sysconfig/rawdevices

/dev/raw/raw9 /dev/mapper/asmdisk4p1

3.7.2 Restart the raw device service

[root@bhpsdb1 ~]# service rawdevices restart

3.7.3 Modify raw device permissions

[root@bhpsdb1 ~]# chown oracle:oinstall /dev/raw/raw9

[root@bhpsdb1 ~]# chmod 644 /dev/raw/raw9

3.7.4 Modify the system startup file

Add the ownership and permission commands for the new device so they persist across reboots.

[root@bhpsdb1 ~]# vi /etc/rc.local

/bin/chown oracle:oinstall /dev/raw/raw9
/bin/chmod 644 /dev/raw/raw9

3.7.5 Verify the new raw device

Confirm that the raw device is owned by oracle:oinstall with permission 644.

[root@bhpsdb1 ~]# ls -la /dev/raw/raw*
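The ownership and permission check can be scripted rather than eyeballed. A hedged sketch using `stat` on an ordinary temp file as a stand-in (the real check would target `/dev/raw/raw*` and also verify `oracle:oinstall` ownership, which requires root):

```shell
#!/bin/sh
# Verify a device node has mode 644; a temp file stands in for /dev/raw/raw9 here.
f=$(mktemp)
chmod 644 "$f"

mode=$(stat -c '%a' "$f")   # GNU stat; on BSD/macOS use: stat -f '%Lp'
if [ "$mode" = "644" ]; then
    echo "OK: mode is 644"
else
    echo "WRONG: mode is $mode" >&2
fi
rm -f "$f"
```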

4 Create a new disk group

4.1 Check disk status information

[oracle@bhpsdb1 ~]$ kfod disks=all op=all status=true

4.2 create an ASM disk group

[oracle@bhpsdb1 ~]$ export ORACLE_SID=+ASM1

[oracle@bhpsdb1 ~]$ sqlplus / as sysdba

SQL > CREATE DISKGROUP CRMDGNEW EXTERNAL REDUNDANCY DISK '/dev/asm-data3';

4.3 Mount disk groups on other nodes

[oracle@bhpsdb2 ~]$ sqlplus / as sysdba

SQL > alter diskgroup CRMDGNEW mount;

4.4 detect disk group information

[oracle@bhpsdb1 ~]$ asmcmd lsdg

[oracle@bhpsdb2 ~]$ asmcmd lsdg

5 Migrate the database to the new disk group

5.1 Start a single instance to NOMOUNT

SQL > startup nomount

SQL > create pfile='/home/oracle/crmdb.pfile' from spfile;

5.1.1 Adjust the parameter file: point the control files at the new disk group

SQL > set linesize 120

SQL > show parameter control

SQL > alter system set control_files='+CRMDGNEW/crmdb/controlfile/control01.ctl' scope=spfile;

5.1.2 restart the instance to NOMOUNT status

SQL > startup force nomount

5.2 restore control files

[oracle@bhpsdb1 ~]$ rman target /

RMAN > RESTORE CONTROLFILE FROM '+CRMDG/crmdb/controlfile/Current.256.901636495';

5.3 copy the database

Mount the database instance for the copy operation.

RMAN > ALTER DATABASE MOUNT;

RMAN >

RUN
{
ALLOCATE CHANNEL dev1 DEVICE TYPE DISK;
ALLOCATE CHANNEL dev2 DEVICE TYPE DISK;
ALLOCATE CHANNEL dev3 DEVICE TYPE DISK;
ALLOCATE CHANNEL dev4 DEVICE TYPE DISK;
BACKUP AS COPY DATABASE FORMAT '+CRMDGNEW';
}

5.4 switch database to copy

RMAN > SWITCH DATABASE TO COPY;

5.5 Open the database

SQL > alter database open;

5.6 Open other instances and check

Perform this step on the other database instances.

SQL > STARTUP

SQL > SELECT INST_ID, STATUS FROM GV$INSTANCE;

5.7 create temporary tablespaces to a new disk group

RMAN > REPORT SCHEMA;

SQL > ALTER TABLESPACE TEMP ADD TEMPFILE '+CRMDGNEW' SIZE 1G;

SQL > ALTER TABLESPACE TEMP DROP TEMPFILE '+CRMDG/crmdb/tempfile/temp.262.901636511';

5.8 Create redo logs in the new disk group

5.8.1 Query the current log group information

SQL > SET LINESIZE 120

SQL > COL MEMBER FOR A80

SQL > SELECT DISTINCT 'ALTER DATABASE DROP LOGFILE GROUP ' || GROUP# || ';' FROM V$LOGFILE;
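The concatenation trick above uses SQL to emit SQL: each distinct GROUP# becomes one DROP LOGFILE statement. The same generation step can be sketched outside the database with plain shell over a list of group numbers (the numbers below are the four groups used in this article):

```shell
#!/bin/sh
# Generate DROP LOGFILE statements from redo group numbers, mimicking the
# SELECT DISTINCT ... || GROUP# || ';' query above.
groups="1
2
2
3
4"   # duplicates appear because V$LOGFILE has one row per log member

ddl=$(printf '%s\n' "$groups" | sort -nu | \
      while read -r g; do
          echo "ALTER DATABASE DROP LOGFILE GROUP $g;"
      done)
echo "$ddl"
```

`sort -nu` plays the role of DISTINCT, collapsing the per-member duplicate rows into one statement per group.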

5.8.2 add a new log group

ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 ('+CRMDGNEW') SIZE 100M;

ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 6 ('+CRMDGNEW') SIZE 100M;

ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 7 ('+CRMDGNEW') SIZE 100M;

ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 8 ('+CRMDGNEW') SIZE 100M;

5.8.3 Delete an old log group

Before deleting the old log group, perform multiple log switches to ensure that the old log group state is INACTIVE.

SQL > SET LINESIZE 120

SQL > ALTER SYSTEM ARCHIVE LOG CURRENT;

SQL > ALTER SYSTEM CHECKPOINT GLOBAL;

SQL > SELECT * FROM V$LOG ORDER BY 1;

The log query above generates the following statements:

ALTER DATABASE DROP LOGFILE GROUP 1;

ALTER DATABASE DROP LOGFILE GROUP 2;

ALTER DATABASE DROP LOGFILE GROUP 3;

ALTER DATABASE DROP LOGFILE GROUP 4;

5.8.4 Re-create the original log groups in the new disk group

ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 1 ('+CRMDGNEW') SIZE 100M;

ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 2 ('+CRMDGNEW') SIZE 100M;

ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 3 ('+CRMDGNEW') SIZE 100M;

ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 4 ('+CRMDGNEW') SIZE 100M;

5.9 Modify the parameter file

5.9.1 Back up the SPFILE

SQL > show parameter spfile

SQL > create pfile='/home/oracle/crmdb.pfile' from spfile;

5.9.2 generate a new SPFILE file

SQL > create spfile='+CRMDGNEW/spfilecrmdb.ora' from pfile='/home/oracle/crmdb.pfile';

5.9.3 modify the PFILE file in the dbs directory

[oracle@bhpsdb1 ~]$ cd $ORACLE_HOME/dbs

[oracle@bhpsdb1 dbs]$ vi initcrmdb1.ora

SPFILE='+CRMDGNEW/spfilecrmdb.ora'

5.9.4 restart database verification

SQL > shutdown immediate

SQL > startup

6. Delete faulty disk

The following operations need to be done in the context of the ASM instance.

6.1 Dismount the legacy disk group

Dismount the CRMDG disk group on the other instances.

SQL > alter diskgroup CRMDG dismount;

6.2 Delete files under the disk group

ASMCMD > cd CRMDG

ASMCMD > rm -r ./crmdb

6.3 Delete old disk group

SQL > DROP DISKGROUP CRMDG;

6.4 check disk status

[oracle@bhpsdb1 ~]$ kfod disks=all op=all status=true
