Oracle RAC: Replacing Storage and Migrating Data
We use the ASM rebalance feature to replace the storage and migrate the data with essentially zero downtime. Summary of the operation steps:
1) Ensure connectivity between the new storage and the current RAC nodes
2) Partition the LUNs on the new storage; this is also an opportunity to re-plan the storage layout
3) Migrate the OCR and voting disks
4) Add the ASM disks allocated from the new storage to the existing ASM disk groups, making full use of ASM rebalance
5) Drop the ASM disks that reside on the original storage
6) Observation period
1. Current storage information
The following shows the current ASM disk group, OCR, and voting disk information.
ASM disk groups:
ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED NORMAL N 512 4096 1048576 3071982 3071091 298 1535396 0 N BACK/
MOUNTED NORMAL N 512 4096 1048576 4095976 1561759 633568 464095 0 N DATA/
MOUNTED NORMAL N 512 4096 1048576 102396 101470 326 50572 0 N OCR/
Currently, ASM has three disk groups, BACK, DATA, and OCR, with a total size of about 7 TB. The disk groups mainly store data files, archived log files, and the OCR. The following is the disk information for each disk group:
SQL> select NAME,PATH,total_mb,free_mb from v$asm_disk
NAME PATH TOTAL_MB FREE_MB
-
BACK_VOL1 ORCL:BACK_VOL1 1023994 390436
DATA_VOL1 ORCL:DATA_VOL1 1023994 390450
DATA_VOL2 ORCL:DATA_VOL2 1023994 390447
DATA_VOL3 ORCL:DATA_VOL3 1023994 390426
DATA_VOL4 ORCL:DATA_VOL4 1023994 1023697
DATA_VOL5 ORCL:DATA_VOL5 1023994 1023698
DATA_VOL6 ORCL:DATA_VOL6 1023994 1023696
OCR_VOL1 ORCL:OCR_VOL1 31376 31075
OCR_VOL2 ORCL:OCR_VOL2 31376 31077
OCR_VOL3 ORCL:OCR_VOL3 39644 39318
10 rows selected.
OCR&VOTE Information:
[grid@oracle1 bin]$ ./ocrcheck
Status of Oracle Cluster Registry is as follows:
Version: 3
Total space (kbytes): 262120
Used space (kbytes): 2720
Available space (kbytes): 259400
ID: 2006438789
Device/File Name: +OCR
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Since the OCR and the voting disks are on the same (old) storage as the ASM data, they also need to be migrated to the new storage.
2 Partition the new storage
Requirements (performed by the storage engineer):
2.1. Shared storage: both servers must be able to see the disk space allocated from the new storage.
2.2. As before, the sizes and number of the partitions for each ASM disk group remain the same.
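Before going further, it is worth confirming that both RAC nodes see the new LUNs with the same sizes. A minimal check (a sketch only; the device names are examples and will differ per environment), run on each node and compare the output:
[root@oracle1 ~]# fdisk -l 2>/dev/null | grep "^Disk /dev/sd"
[root@oracle2 ~]# fdisk -l 2>/dev/null | grep "^Disk /dev/sd"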
3 Disk partitions after partitioning
[root@oracle1 sbin]# fdisk -l
Disk / dev/cciss/c0d0: 1000.1 GB, 1000171331584 bytes
255 heads, 63 sectors/track, 121597 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/ dev/cciss/c0d0p1 * 1 13 104391 83 Linux
/ dev/cciss/c0d0p2 14 121597 976623480 8e Linux LVM
Disk / dev/sda: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/ dev/sda1 1 130541 1048570551 83 Linux
Disk / dev/sdb: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/ dev/sdb1 1 130541 1048570551 83 Linux
Disk / dev/sdc: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/ dev/sdc1 1 130541 1048570551 83 Linux
Disk / dev/sdd: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/ dev/sdd1 1 130541 1048570551 83 Linux
Disk / dev/sde: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/ dev/sde1 1 130541 1048570551 83 Linux
Disk / dev/sdf: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/ dev/sdf1 1 130541 1048570551 83 Linux
Disk / dev/sdg: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/ dev/sdg1 1 130541 1048570551 83 Linux
Disk / dev/sdh: 107.3 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/ dev/sdh2 1 4000 32129968 + 83 Linux
/ dev/sdh3 4001 8000 32130000 83 Linux
/ dev/sdh4 8001 13054 40596255 83 Linux
WARNING: The size of this disk is 2.9 TB (2919504019456 bytes).
DOS partition table format can not be used on drives for volumes
larger than 2.2 TB (2199023255040 bytes). Use parted(1) and GUID
partition table format (GPT).
Disk / dev/sdi: 2919.5 GB, 2919504019456 bytes
255 heads, 63 sectors/track, 354942 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/ dev/sdi1 1 130000 104422468 + 83 Linux
/ dev/sdi2 130001 267349 1103255842 + 83 Linux
Disk /dev/sdj: 1073.7 GB, 1073741824000 bytes (newly added disk)
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk / dev/sdj doesn't contain a valid partition table
Disk / dev/sdk: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk / dev/sdk doesn't contain a valid partition table
Disk / dev/sdl: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk / dev/sdl doesn't contain a valid partition table
Disk / dev/sdm: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk / dev/sdm doesn't contain a valid partition table
Disk / dev/sdn: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdn doesn't contain a valid partition table
Disk / dev/sdo: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdo doesn't contain a valid partition table
Disk / dev/sdp: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk / dev/sdp doesn't contain a valid partition table
Disk / dev/sdq: 107.3 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk / dev/sdq doesn't contain a valid partition table
Disk / dev/sdr: 2919.5 GB, 2919504019456 bytes
255 heads, 63 sectors/track, 354942 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk / dev/sdr doesn't contain a valid partition table
Disk / dev/sds: 322.1 GB, 322122547200 bytes
255 heads, 63 sectors/track, 39162 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/ dev/sds1 1 39162 314568733 + 8e Linux LVM
Disk / dev/sdt: 322.1 GB, 322122547200 bytes
255 heads, 63 sectors/track, 39162 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/ dev/sdt1 1 39162 314568733 + 8e Linux LVM
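Note that the newly presented LUNs (/dev/sdj through /dev/sdq in this listing) do not yet contain partition tables, so a partition has to be created on each of them before they can be labelled for ASM; for /dev/sdq, three partitions (sdq1 to sdq3) are needed for the OCR volumes. A minimal sketch for a single disk, assuming an interactive fdisk session:
[root@oracle1 ~]# fdisk /dev/sdj      # n (new partition), p (primary), 1, accept defaults, w (write)
[root@oracle1 ~]# partprobe /dev/sdj  # re-read the partition table without a reboot; run on the second node as well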
4 Configure the new ASM disks
/etc/init.d/oracleasm createdisk DATA_VOL01 /dev/sdj1
/etc/init.d/oracleasm createdisk DATA_VOL02 /dev/sdk1
/etc/init.d/oracleasm createdisk DATA_VOL03 /dev/sdl1
/etc/init.d/oracleasm createdisk DATA_VOL04 /dev/sdm1
/etc/init.d/oracleasm createdisk DATA_VOL05 /dev/sdn1
/etc/init.d/oracleasm createdisk DATA_VOL06 /dev/sdo1
/etc/init.d/oracleasm createdisk BACK_VOL01 /dev/sdp1
/etc/init.d/oracleasm createdisk OCR_VOL4 /dev/sdq1
/etc/init.d/oracleasm createdisk OCR_VOL5 /dev/sdq2
/etc/init.d/oracleasm createdisk OCR_VOL6 /dev/sdq3
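The createdisk commands only need to be run on one node. On the other node, a rescan is enough to make the new disk labels visible (a minimal sketch):
[root@oracle2 ~]# /etc/init.d/oracleasm scandisks
[root@oracle2 ~]# /etc/init.d/oracleasm listdisks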
5 create a new OCRNEW disk group
su - grid
sqlplus / as sysasm
CREATE DISKGROUP OCRNEW NORMAL REDUNDANCY
  DISK 'ORCL:OCR_VOL4' NAME VOL4,
       'ORCL:OCR_VOL5' NAME VOL5,
       'ORCL:OCR_VOL6' NAME VOL6
  ATTRIBUTE 'compatible.asm'='11.2';
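A disk group created through SQL*Plus is mounted only on the ASM instance where the statement was issued. If OCRNEW is not automatically mounted on the second node, mount it there before registering it with OCR (a sketch, run as sysasm on the other node's ASM instance):
SQL> alter diskgroup OCRNEW mount;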
6 add OCR information to OCRNEW
[root@oracle1 bin]# ./ocrconfig -add +OCRNEW
[root@oracle1 bin]# ./ocrcheck -config
Oracle Cluster Registry configuration is:
Device/File Name: +OCR
Device/File Name: +OCRNEW
[root@oracle1 bin]# more /etc/oracle/ocr.loc
#Device/file getting replaced by device +OCRNEW
ocrconfig_loc=+OCR
ocrmirrorconfig_loc=+OCRNEW
local_only=false
[root@oracle1 bin]#
You can see that the OCRNEW disk group has been successfully added to the OCR configuration.
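Optionally, OCR integrity on all nodes can also be verified with the cluster verification utility (a sketch; cluvfy ships in the Grid Infrastructure home's bin directory):
[grid@oracle1 ~]$ cluvfy comp ocr -n all -verbose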
Migrate the voting files
Current voting disk information:
[grid@oracle1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
 1. ONLINE   14f694d9d4414f9ebf85d3ce6b9aef0b (ORCL:OCR_VOL1) [OCR]
 2. ONLINE   9f9ee7281c954f8abfcc6e88c33257ac (ORCL:OCR_VOL2) [OCR]
 3. ONLINE   38114fd602194fa9bf4d05655b3d89b7 (ORCL:OCR_VOL3) [OCR]
Located 3 voting disk(s).
[grid@oracle1 ~]$ crsctl replace votedisk +OCRNEW
Successful addition of voting disk 00634ef593ee4f92bf48e8c089cb5565.
Successful addition of voting disk 232159722de04f67bf03a78b757e3bec.
Successful addition of voting disk a340d5b23aac4f6fbf9f7b1d59088fa5.
Successful deletion of voting disk 14f694d9d4414f9ebf85d3ce6b9aef0b.
Successful deletion of voting disk 9f9ee7281c954f8abfcc6e88c33257ac.
Successful deletion of voting disk 38114fd602194fa9bf4d05655b3d89b7.
Successfully replaced voting disk group with +OCRNEW.
CRS-4266: Voting file(s) successfully replaced
7 Create the ASM instance spfile in OCRNEW
Create the ASM instance spfile in the newly created OCRNEW disk group (execute on one node, logged in to the ASM instance as the grid user):
SQL> create pfile='/home/grid/asmpfile.ora' from spfile
File created.
SQL> create spfile='+OCRNEW' from pfile='/home/grid/asmpfile.ora'
File created.
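After the spfile is created in the new disk group, the location the cluster will actually use can be double-checked with asmcmd (a sketch; spget reads the ASM spfile location registered in the GPnP profile, and spset can be used to point it at the new spfile if it still shows the old location):
[grid@oracle1 ~]$ asmcmd spget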
8 Delete ASM disk group OCR
[root@oracle1 bin]# ./ocrconfig -delete +OCR
View the new status and location of the OCR and voting disks:
[root@oracle1 bin]# ./ocrcheck && ./crsctl query css votedisk
Status of Oracle Cluster Registry is as follows:
Version: 3
Total space (kbytes): 262120
Used space (kbytes): 2768
Available space (kbytes): 259352
ID: 2006438789
Device/File Name: +OCRNEW
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
##  STATE    File Universal Id                File Name Disk group
 1. ONLINE   00634ef593ee4f92bf48e8c089cb5565 (ORCL:OCR_VOL4) [OCRNEW]
 2. ONLINE   232159722de04f67bf03a78b757e3bec (ORCL:OCR_VOL5) [OCRNEW]
 3. ONLINE   a340d5b23aac4f6fbf9f7b1d59088fa5 (ORCL:OCR_VOL6) [OCRNEW]
Located 3 voting disk(s).
SYS@+ASM1> alter diskgroup OCR dismount
Diskgroup altered.
SYS@+ASM2> drop diskgroup OCR including contents
Diskgroup dropped.
SYS@+ASM2> select GROUP_NUMBER,NAME,STATE,type,TOTAL_MB,free_mb,VOTING_FILES,COMPATIBILITY from v$asm_diskgroup
GROUP_NUMBER NAME STATE TYPE TOTAL_MB FREE_MB V COMPATIBILITY
- -
1 BACK MOUNTED NORMAL 3071982 3070675 N 11.2.0.0.0
2 DATA MOUNTED NORMAL 4095976 1561759 N 11.2.0.0.0
3 OCRNEW MOUNTED NORMAL 102396 101470 N 11.2.0.0.0
SYS@+ASM2> select GROUP_NUMBER,DISK_NUMBER,STATE,REDUNDANCY,TOTAL_MB,FREE_MB,name,path,failgroup from v$asm_disk order by GROUP_NUMBER
GROUP_NUMBER DISK_NUMBER STATE REDUNDA TOTAL_MB FREE_MB NAME PATH FAILGROUP
-
0 0 NORMAL UNKNOWN 0 0 ORCL:OCR_VOL1
0 1 NORMAL UNKNOWN 0 0 ORCL:OCR_VOL2
0 2 NORMAL UNKNOWN 0 0 ORCL:OCR_VOL3
1 1 NORMAL UNKNOWN 1023994 1023559 DATA_VOL5 ORCL:DATA_VOL5 DATA_VOL5
1 0 NORMAL UNKNOWN 1023994 1023559 DATA_VOL4 ORCL:DATA_VOL4 DATA_VOL4
1 2 NORMAL UNKNOWN 1023994 1023557 DATA_VOL6 ORCL:DATA_VOL6 DATA_VOL6
2 2 NORMAL UNKNOWN 1023994 390447 DATA_VOL2 ORCL:DATA_VOL2 DATA_VOL2
2 1 NORMAL UNKNOWN 1023994 390450 DATA_VOL1 ORCL:DATA_VOL1 DATA_VOL1
2 0 NORMAL UNKNOWN 1023994 390436 BACK_VOL1 ORCL:BACK_VOL1 BACK_VOL1
2 3 NORMAL UNKNOWN 1023994 390426 DATA_VOL3 ORCL:DATA_VOL3 DATA_VOL3
3 0 NORMAL UNKNOWN 31376 31075 VOL4 ORCL:OCR_VOL4 VOL4
GROUP_NUMBER DISK_NUMBER STATE REDUNDA TOTAL_MB FREE_MB NAME PATH FAILGROUP
-
3 1 NORMAL UNKNOWN 31376 31077 VOL5 ORCL:OCR_VOL5 VOL5
3 2 NORMAL UNKNOWN 39644 39318 VOL6 ORCL:OCR_VOL6 VOL6
13 rows selected.
This completes the OCR and voting disk migration.
9. Restart the cluster (CRS) to verify that the OCR and voting disks have been migrated successfully. You can choose not to restart, but a restart is recommended.
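A rolling restart keeps the databases available: stop and start the clusterware on one node at a time, as root, and only move on once that node is healthy again (a minimal sketch; the Grid home bin path is assumed):
[root@oracle1 bin]# ./crsctl stop crs
[root@oracle1 bin]# ./crsctl start crs
[root@oracle1 bin]# ./crsctl check cluster -all
Then repeat the same steps on the second node.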
After the restart, view the OCR and voting disk locations and the ASM instance spfile location:
[root@oracle1 bin]# ./ocrcheck && ./crsctl query css votedisk
Status of Oracle Cluster Registry is as follows:
Version: 3
Total space (kbytes): 262120
Used space (kbytes): 2768
Available space (kbytes): 259352
ID: 2006438789
Device/File Name: +OCRNEW
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
##  STATE    File Universal Id                File Name Disk group
 1. ONLINE   00634ef593ee4f92bf48e8c089cb5565 (ORCL:OCR_VOL4) [OCRNEW]
 2. ONLINE   232159722de04f67bf03a78b757e3bec (ORCL:OCR_VOL5) [OCRNEW]
 3. ONLINE   a340d5b23aac4f6fbf9f7b1d59088fa5 (ORCL:OCR_VOL6) [OCRNEW]
Located 3 voting disk(s).
SQL> show parameter spfile
NAME      TYPE        VALUE
--------- ----------- --------------------------------------------------------------
spfile    string      +OCRNEW/oracle-cluster/asmparameterfile/registry.253.845691887
SQL>
10 Migrate the disk group data
SQL> alter diskgroup DATA add disk 'ORCL:DATA_VOL01' rebalance power 11
Diskgroup altered.
SQL> alter diskgroup DATA add disk 'ORCL:DATA_VOL02' rebalance power 11
Diskgroup altered.
SQL> alter diskgroup DATA add disk 'ORCL:DATA_VOL03' rebalance power 11
Diskgroup altered.
SQL> alter diskgroup DATA add disk 'ORCL:DATA_VOL04' rebalance power 11
Diskgroup altered.
SQL> alter diskgroup BACK add disk 'ORCL:DATA_VOL05' rebalance power 11
Diskgroup altered.
SQL> alter diskgroup BACK add disk 'ORCL:DATA_VOL06' rebalance power 11
SQL> alter diskgroup BACK add disk 'ORCL:BACK_VOL01' rebalance power 11
Diskgroup altered.
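Each ADD DISK statement above triggers its own rebalance. An alternative that triggers only a single rebalance per disk group is to add all of the new disks for a disk group in one statement (a sketch, equivalent in effect):
SQL> alter diskgroup DATA add disk 'ORCL:DATA_VOL01','ORCL:DATA_VOL02','ORCL:DATA_VOL03','ORCL:DATA_VOL04' rebalance power 11;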
Because rebalance power 11 is specified, ASM automatically rebalances the data stored in the disk groups across all of their ASM disks.
When the rebalance is finished, querying the V$ASM_OPERATION view returns no rows.
SQL> select * from V$ASM_OPERATION
no rows selected
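While a rebalance is still running, the same view reports its progress; a sketch of a progress query (EST_MINUTES is ASM's estimate of the remaining time):
SQL> select group_number, operation, state, power, sofar, est_work, est_minutes from v$asm_operation;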
11 Drop the old disks from the disk groups
alter diskgroup data drop disk BACK_VOL1 rebalance power 11
alter diskgroup data drop disk DATA_VOL1 rebalance power 11
alter diskgroup data drop disk DATA_VOL2 rebalance power 11
alter diskgroup data drop disk DATA_VOL3 rebalance power 11
alter diskgroup back drop disk DATA_VOL4 rebalance power 11
alter diskgroup back drop disk DATA_VOL5 rebalance power 11
alter diskgroup back drop disk DATA_VOL6 rebalance power 11
ASM rebalances not only when a new disk is added to a disk group, but also when a disk is dropped: the data on the dropped disk is redistributed to the remaining disks of the disk group.
After the old ASM disks have been dropped in this way, all of the ASM data resides on the new storage.
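Before the old disk labels are removed at the operating system level, it is worth confirming that ASM no longer regards them as members of any disk group (a sketch; dropped disks should show GROUP_NUMBER 0 and a HEADER_STATUS of FORMER):
SQL> select name, path, group_number, header_status from v$asm_disk order by group_number;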
09:40:38 SQL> select a.NAME GROUP_NAME,a.TOTAL_MB,a.FREE_MB GROUP_FREE_MB,b.OS_MB,b.FREE_MB,b.name,b.path from v$asm_diskgroup a, v$asm_disk b where a.GROUP_NUMBER=b.GROUP_NUMBER
GROUP_NAME TOTAL_MB GROUP_FREE_MB OS_MB FREE_MB NAME PATH
- -
BACK 3071982 3070868 1023994 1023622 DATA_VOL05 ORCL:DATA_VOL05
BACK 3071982 3070868 1023994 1023624 DATA_VOL06 ORCL:DATA_VOL06
OCRNEW 102396 101470 31376 31075 VOL4 ORCL:OCR_VOL4
OCRNEW 102396 101470 31376 31077 VOL5 ORCL:OCR_VOL5
OCRNEW 102396 101470 39644 39318 VOL6 ORCL:OCR_VOL6
DATA 4095976 1561759 1023994 390437 DATA_VOL01 ORCL:DATA_VOL01
DATA 4095976 1561759 1023994 390440 DATA_VOL02 ORCL:DATA_VOL02
DATA 4095976 1561759 1023994 390443 DATA_VOL03 ORCL:DATA_VOL03
DATA 4095976 1561759 1023994 390439 DATA_VOL04 ORCL:DATA_VOL04
BACK 3071982 3070868 1023994 1023622 BACK_VOL01 ORCL:BACK_VOL01
12 Delete the old ASM disk labels
[root@oracle1 bin]# oracleasm listdisks
BACK_VOL01
BACK_VOL1
DATA_VOL01
DATA_VOL02
DATA_VOL03
DATA_VOL04
DATA_VOL05
DATA_VOL06
DATA_VOL1
DATA_VOL2
DATA_VOL3
DATA_VOL4
DATA_VOL5
DATA_VOL6
OCR_VOL4
OCR_VOL5
OCR_VOL6
oracleasm deletedisk DATA_VOL1
oracleasm deletedisk DATA_VOL2
oracleasm deletedisk DATA_VOL3
oracleasm deletedisk DATA_VOL4
oracleasm deletedisk DATA_VOL5
oracleasm deletedisk DATA_VOL6
oracleasm deletedisk BACK_VOL1
oracleasm deletedisk OCR_VOL1
oracleasm deletedisk OCR_VOL2
oracleasm deletedisk OCR_VOL3
[root@oracle2 bin]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Cleaning disk "BACK_VOL1"
Cleaning disk "DATA_VOL1"
Cleaning disk "DATA_VOL2"
Cleaning disk "DATA_VOL3"
Cleaning disk "DATA_VOL4"
Cleaning disk "DATA_VOL5"
Cleaning disk "DATA_VOL6"
Scanning system for ASM disks...
[root@oracle2 bin]# oracleasm listdisks
BACK_VOL01
DATA_VOL01
DATA_VOL02
DATA_VOL03
DATA_VOL04
DATA_VOL05
DATA_VOL06
OCR_VOL4
OCR_VOL5
OCR_VOL6
At this point, all of the data on the old storage has been migrated to the new storage.