Shulou (Shulou.com), SLTechnology News & Howtos > Database. 05/31 report, updated 2025-01-17.
This article explains how to move the OCR, voting file, and ASM spfile to a new disk group in Oracle. The method is simple, fast, and practical; let's walk through it step by step.
In an 11gR2 RAC environment, migrate the cluster's OCR, voting file, and ASM spfile to a new disk group.
Current disk status:
[root@rac1 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
 1. ONLINE   0c9ec99614ed4fe4bfdba4cb520dd00e (/dev/raw/raw1) [OCRVOTING]
Located 1 voting disk(s).
[root@rac1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows:
Version: 3
Total space (kbytes): 262120
Used space (kbytes): 2544
Available space (kbytes): 259576
ID: 827782161
Device/File Name: +OCRVOTING
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
SQL> set line 300
SQL> col failgroup for a40
SQL> col name for a30
SQL> select disk_number, redundancy, name, failgroup, voting_file from v$asm_disk;

DISK_NUMBER REDUNDANCY NAME            FAILGROUP       VOT
----------- ---------- --------------- --------------- ---
          1 UNKNOWN    OCRVOTING_0001  OCRVOTING_0001  N
          0 UNKNOWN    RAC_DATA_0000   RAC_DATA_0000   N
          0 UNKNOWN    OCRVOTING_0000  OCRVOTING_0000  Y
          2 UNKNOWN    OCRVOTING_0002  OCRVOTING_0002  N

SQL> select name, total_mb, free_mb, usable_file_mb from v$asm_diskgroup;

NAME       TOTAL_MB FREE_MB USABLE_FILE_MB
---------- -------- ------- --------------
OCRVOTING      2997    2597           2597
RAC_DATA       9993    8077           8077
As you can see, the OCRVOTING disk group has external redundancy and sits on raw devices, so we will move the OCR to a disk group with normal redundancy.
First, allocate three disks. Since this is just a test environment they do not need to be large; 1 GB each is enough.
Create the shared disks. Cmd commands on the VMware host:
vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 F:\RAC\sharedisk\ocrdisk01.vmdk
vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 F:\RAC\sharedisk\ocrdisk02.vmdk
vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 F:\RAC\sharedisk\ocrdisk03.vmdk
Append the following to the configuration (.vmx) files of both virtual machines:
scsi1:5.present = "TRUE"
scsi1:5.mode = "independent-persistent"
scsi1:5.filename = "F:\RAC\sharedisk\ocrdisk01.vmdk"
scsi1:5.deviceType = "plainDisk"
scsi1:6.present = "TRUE"
scsi1:6.mode = "independent-persistent"
scsi1:6.filename = "F:\RAC\sharedisk\ocrdisk02.vmdk"
scsi1:6.deviceType = "plainDisk"
scsi1:8.present = "TRUE"
scsi1:8.mode = "independent-persistent"
scsi1:8.filename = "F:\RAC\sharedisk\ocrdisk03.vmdk"
scsi1:8.deviceType = "plainDisk"
Note that SCSI ID 7 cannot be used here: it is reserved for the SCSI controller itself and is not available.
View the new disk:
[root@rac1 ~]# fdisk -l | grep sd
Disk /dev/sda: 19.3 GB, 19327352832 bytes
/dev/sda1 * 1 64 512000 83 Linux
/dev/sda2 64 1306 9972736 8e Linux LVM
/dev/sda3 1306 1566 2093135 8e Linux LVM
/dev/sda4 1567 2349 6289447+ 8e Linux LVM
Disk /dev/sdb: 1048 MB, 1048576000 bytes
/dev/sdb1 1 1000 1023984 83 Linux
Disk /dev/sdc: 1048 MB, 1048576000 bytes
/dev/sdc1 1 1000 1023984 83 Linux
Disk /dev/sdd: 10.5 GB, 10485760000 bytes
/dev/sdd1 1 1274 10233373+ 83 Linux
Disk /dev/sde: 1048 MB, 1048576000 bytes
/dev/sde1 1 1000 1023984 83 Linux
Disk /dev/sdf: 1048 MB, 1048576000 bytes
Disk /dev/sdg: 1048 MB, 1048576000 bytes
Disk /dev/sdh: 1048 MB, 1048576000 bytes
sdf, sdg, and sdh are the three newly allocated disks.
Edit the 60-raw.rules file:
[root@rac1 rules.d]# more 60-raw.rules
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdf", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="sdg", RUN+="/bin/raw /dev/raw/raw6 %N"
ACTION=="add", KERNEL=="sdh", RUN+="/bin/raw /dev/raw/raw7 %N"
ACTION=="add", KERNEL=="raw[1-7]", OWNER="grid", GROUP="oinstall", MODE="660"
Restart udev and check:
[root@rac1 ~]# start_udev
Starting udev: [  OK  ]
[grid@rac1 ~]$ cd /dev/raw
[grid@rac1 raw]$ ll
total 0
crw-rw---- 1 grid oinstall 162, 1 Jun 10 23:44 raw1
crw-rw---- 1 grid oinstall 162, 2 Jun 10 23:43 raw2
crw-rw---- 1 grid oinstall 162, 3 Jun 10 23:43 raw3
crw-rw---- 1 grid oinstall 162, 4 Jun 10 23:43 raw4
crw-rw---- 1 grid oinstall 162, 5 Jun 10 23:43 raw5
crw-rw---- 1 grid oinstall 162, 6 Jun 10 23:43 raw6
crw-rw---- 1 grid oinstall 162, 7 Jun 10 23:43 raw7
crw-rw---- 1 root disk     162, 0 Jun 10 23:43 rawctl
Create the OCRDG disk group through the graphical interface; this is straightforward, so the screenshots are omitted:
[grid@rac1 raw]$ asmca
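For reference, the same disk group can also be created without the GUI. A minimal SQL*Plus sketch, assuming the three new raw devices are the ones bound above and that each disk becomes its own failure group (the `compatible.asm` attribute is an assumption for an 11.2 stack):

```sql
-- Sketch of the asmca step done by hand (connected as SYSASM).
-- NORMAL redundancy two-way mirrors extents across failure groups,
-- so three 1 GB disks yield roughly half their raw space for files.
CREATE DISKGROUP OCRDG NORMAL REDUNDANCY
  DISK '/dev/raw/raw5', '/dev/raw/raw6', '/dev/raw/raw7'
  ATTRIBUTE 'compatible.asm' = '11.2';
```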
After creation succeeds, click "Mount All", then verify that the newly created disk group is mounted:
[grid@rac1 raw]$ sqlplus / as sysasm
SQL> col name for a20
SQL> select name, group_number, state, type, total_mb, free_mb, usable_file_mb, voting_files from v$asm_diskgroup;

NAME       GROUP_NUMBER STATE   TYPE   TOTAL_MB FREE_MB USABLE_FILE_MB VOT
---------- ------------ ------- ------ -------- ------- -------------- ---
OCRVOTING             1 MOUNTED EXTERN     2997    2597           2597 N
RAC_DATA              2 MOUNTED EXTERN     9993    8077           8077 N
OCRDG                 3 MOUNTED NORMAL     3000    2715           1310 N
You can see that the newly created OCRDG disk group is MOUNTED and its redundancy type is NORMAL.
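Why does a 3000 MB group show only 1310 MB usable? For a NORMAL redundancy group, USABLE_FILE_MB is the free space minus the reserve needed to re-mirror after losing a failure group, divided by the mirroring factor of 2. A sketch query to check that arithmetic against the view itself (the `REQUIRED_MIRROR_FREE_MB` column carries the reserve; here it would be about 95 MB, inferred from the figures above rather than shown in the earlier output):

```sql
-- Hypothetical check of the capacity arithmetic for OCRDG:
-- usable_file_mb should equal (free_mb - required_mirror_free_mb) / 2.
SELECT name, free_mb, required_mirror_free_mb,
       (free_mb - required_mirror_free_mb) / 2 AS computed_usable,
       usable_file_mb
  FROM v$asm_diskgroup
 WHERE name = 'OCRDG';
```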
Disk status:
SQL> select group_number, disk_number, mount_status, name, voting_file from v$asm_disk;

GROUP_NUMBER DISK_NUMBER MOUNT_STATUS NAME            VOT
------------ ----------- ------------ --------------- ---
           1           2 CACHED       OCRVOTING_0002  N
           1           1 CACHED       OCRVOTING_0001  N
           1           0 CACHED       OCRVOTING_0000  Y
           2           0 CACHED       RAC_DATA_0000   N
           3           2 CACHED       OCRDG_0002      N
           3           1 CACHED       OCRDG_0001      N
           3           0 CACHED       OCRDG_0000      N

7 rows selected.
You can also verify it with the following command:
[grid@rac1 raw]$ asmcmd lsdg
Current ASM spfile location:
[grid@rac1 ~]$ asmcmd spget
+OCRVOTING/rac-cluster/asmparameterfile/registry.253.952478315
Back up the OCR and OLR first, on a single node:
[root@rac1 rules.d]# ocrconfig -manualbackup
[root@rac1 rules.d]# ocrconfig -local -manualbackup
Add the new disk group to the OCR, as root:
[root@rac1 rules.d]# /tpsys/app/11.2.0/grid/bin/ocrconfig -add +OCRDG
Confirm in the CRS log:
2018-06-12 06:19:30.517: [OCRRAW][2996766464]propriowv_bootbuf: Vote information on disk 1 [+OCRDG] is adjusted from [0] to [1]
2018-06-12 06:19:30.546: [OCRRAW][2996766464]propriowv_bootbuf: Vote information on disk 0 [+OCRVOTING] is adjusted from [2/2] to [1/2]
[root@rac1 rules.d]# ocrcheck
Status of Oracle Cluster Registry is as follows:
Version: 3
Total space (kbytes): 262120
Used space (kbytes): 2596
Available space (kbytes): 259524
ID: 827782161
Device/File Name: +OCRVOTING
Device/File integrity check succeeded
Device/File Name: +OCRDG
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
Remove the old disk group from the OCR configuration:
[root@rac1 rules.d]# /tpsys/app/11.2.0/grid/bin/ocrconfig -delete +OCRVOTING
Confirm in the CRS log:
2018-06-12 06:28:07.526: [OCRRAW][2986276608]propriowv_bootbuf: Vote information on disk 1 [] is adjusted from [1/2] to [2/2]
2018-06-12 06:28:07.614: [OCRASM][2986276608]proprasmo: ASM cache size is [5MB]
2018-06-12 06:28:07.647: [OCRASM][2986276608]proprasmo: ASM cache [5MB] enabled for disk group [OCRDG]
2018-06-12 ... proprioo: for disk 0 (+OCRDG), id match (1), total id sets (2), need recover (0), my votes (2), total votes (2), commit_lsn (68), lsn (68)
[root@rac1 rules.d]# ocrcheck
Status of Oracle Cluster Registry is as follows:
Version: 3
Total space (kbytes): 262120
Used space (kbytes): 2596
Available space (kbytes): 259524
ID: 827782161
Device/File Name: +OCRDG
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
Replace the voting disk:
[root@rac1 rules.d]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
 1. ONLINE   0c9ec99614ed4fe4bfdba4cb520dd00e (/dev/raw/raw1) [OCRVOTING]
Located 1 voting disk(s).
[root@rac1 rules.d]# /tpsys/app/11.2.0/grid/bin/crsctl replace votedisk +OCRDG
Successful addition of voting disk 0a80756b9eb44f9abfae577e0e8ed2dd.
Successful addition of voting disk 000808a364544f11bf48969149e2bf2a.
Successful addition of voting disk 8714d5dd04634f70bf3bbd8ae1a467ac.
Successful deletion of voting disk 0c9ec99614ed4fe4bfdba4cb520dd00e.
Successfully replaced voting disk group with +OCRDG.
CRS-4266: Voting file(s) successfully replaced
Verify, as root:
[root@rac1 rules.d]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
 1. ONLINE   0a80756b9eb44f9abfae577e0e8ed2dd (/dev/raw/raw5) [OCRDG]
 2. ONLINE   000808a364544f11bf48969149e2bf2a (/dev/raw/raw6) [OCRDG]
 3. ONLINE   8714d5dd04634f70bf3bbd8ae1a467ac (/dev/raw/raw7) [OCRDG]
Located 3 voting disk(s).
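The three voting files should now also be visible from the ASM side. A quick cross-check, a sketch reusing the same v$asm_disk columns queried earlier:

```sql
-- The three OCRDG disks should now report VOTING_FILE = 'Y',
-- while the old OCRVOTING disks report 'N'.
SET LINESIZE 200
COL name FOR A20
SELECT group_number, disk_number, name, voting_file
  FROM v$asm_disk
 WHERE voting_file = 'Y';
```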
Move the ASM spfile to the new disk group:
[grid@rac1 ~]$ sqlplus / as sysasm
SQL> create pfile='/tmp/asmpfile.ora' from spfile;
File created.
SQL> create spfile='+OCRDG' from pfile='/tmp/asmpfile.ora';
File created.
[grid@rac1 ~]$ asmcmd spget
+OCRDG/rac-cluster/asmparameterfile/registry.253.978589927
Restart the cluster so it picks up the new spfile (as root, on each node):
crsctl stop crs
crsctl start crs
[grid@rac2 ~]$ crsctl stat res -t
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.OCRDG.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.OCRVOTING.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.RAC_DATA.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
ora.eons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        OFFLINE OFFLINE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.ractest.db
      1        ONLINE  ONLINE       rac1                     Open
      2        ONLINE  ONLINE       rac2                     Open
ora.scan1.vip
      1        ONLINE  ONLINE       rac1
Successfully completed!
At this point you should have a solid understanding of how to move the OCR, voting file, and ASM spfile to a new disk group in Oracle. Try the procedure yourself in a test environment.