Background: a customer has two fairly large databases, a 10T one (database A) and a 20T one (database B), and wants to migrate their storage. Both are version 12.2.0.1, single instance, with disks managed by ASM; the operating systems are Linux 6 and Linux 7 respectively. The plan is to move database B onto the slower storage and keep it there for read-only use. Once database B's storage is vacated, the better-performing storage it occupied will be allocated to database A, and database A's data will then be migrated onto that newly allocated, faster storage.
Database A's storage is a mix of better and slower storage, so it has to be migrated twice: the first migration frees up storage space, and the second moves the data onto the newly reclaimed storage. That makes three migrations in total. Neither database had a backup, because both are large and the disk space a backup would require was not available.
On April 17, the customer told me the leadership wanted both databases migrated by the following Tuesday (with me writing the migration plan). There was no time to write a plan at all, and both databases had to be migrated at the same time. So the colleague in charge of storage rushed over and carved out storage for the two hosts, while I did the multipath mapping and created the ASM disk groups. We then stopped the databases and started the migration.
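For reference, that preparation boils down to aliasing the new LUNs in multipath and building a disk group on top of them. A minimal sketch, with a placeholder WWID and the NEWDATA disk group name that shows up later (the redundancy level here is an assumption):

# /etc/multipath.conf -- give each new LUN a stable alias (WWID is a placeholder)
multipaths {
    multipath {
        wwid   36xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
        alias  newdisk01
    }
}

# reload the multipath maps and verify the aliases
multipath -r
multipath -ll

-- then, from the ASM instance as the grid owner:
SQL> CREATE DISKGROUP NEWDATA EXTERNAL REDUNDANCY
  2    DISK '/dev/mapper/newdisk01', '/dev/mapper/newdisk02',
  3         '/dev/mapper/newdisk03', '/dev/mapper/newdisk04';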
On the evening of April 18, the copy of database A's datafiles was finished. On April 19 I adjusted the OCR, log files, temporary files and so on, and once database A's adjustments were complete, the operating system was rebooted to verify that the changes were correct. Meanwhile, database B's RMAN COPY had completed nearly 16T, with about 4T of datafiles still to go.
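A datafile copy like that is typically an RMAN image-copy run. A sketch of the general shape of such a script (not the customer's actual script; the channel names, channel count and +NEWDATA destination are assumptions):

RMAN> run {
  allocate channel c1 device type disk;
  allocate channel c2 device type disk;
  backup as copy database format '+NEWDATA';
}
# once the copy finishes (with the database mounted), the control file
# can be repointed at the copies:
# RMAN> switch database to copy;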
After the operating system came back up, database A would not start: the newly added ASM disk group stayed offline. A manual mount attempt failed with an error:
ORA-15032: not all alterations performed
ORA-15038: disk '/dev/mapper/newdisk01' mismatch on 'Time Stamp' with target disk group [2434962992] [2434985720]
ORA-15038: disk '/dev/mapper/newdisk02' mismatch on 'Time Stamp' with target disk group [2434962992] [2434985720]
ORA-15038: disk '/dev/mapper/newdisk03' mismatch on 'Time Stamp' with target disk group [2434962992] [2434985720]
ORA-15038: disk '/dev/mapper/newdisk04' mismatch on 'Time Stamp' with target disk group [2434962992] [2434985720]
ORA-15038: di...
Searching Google and MOS suggested that either the disks had been used by another ASM instance, or there was a multipath configuration problem. I modified asm_diskstring and tried the manual mount again; this time the error changed:
WARNING: Disk Group NEWDATA containing spfile for this instance is not mounted
ORA-15032: not all alterations performed
ORA-15038: disk '/dev/mapper/3600c0ff0003af211de24985e01000000' mismatch on 'Time Stamp' with target disk group [2434985720] [2434962992]
Notice that the disk in the message has changed. That disk was never added to the ASM disk group, so why is it being reported at all? My guess is that it is related to the original asm_diskstring='/dev/mapper/*': even though the disk was never added to the disk group, its header gets scanned when the ASM instance starts. Scanning the ASM disks with kfed, amdu and similar tools turned up no problems, and amdu could still read the datafiles, so at least there was no risk of losing data (though by this point I was already scared out of my wits).
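For anyone retracing these checks, they look roughly like this (device paths are the ones from this post; whether the asm_diskstring change can be persisted depends on where the ASM spfile lives):

# read an ASM disk header; the type, group name and disk name identify ownership
kfed read /dev/mapper/newdisk01 | grep -E 'kfbh.type|kfdhdb.grpname|kfdhdb.dskname'

# dump whatever amdu can reach for the disk group, without mounting it
amdu -diskstring '/dev/mapper/*' -dump 'NEWDATA'

-- narrow the discovery string so ASM stops scanning unrelated devices
SQL> ALTER SYSTEM SET asm_diskstring='/dev/mapper/newdisk*' SCOPE=MEMORY;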
Searching MOS turned up Doc ID 2643105.1, which describes a similar symptom: a disk that was never added to the ASM disk group prevents the disk group from mounting normally, and the problem likewise shows up after a host reboot:
SQL> alter diskgroup ACFS mount;
alter diskgroup ACFS mount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15017: diskgroup "ACFS" cannot be mounted
ORA-15040: diskgroup is incomplete
ORA-15038: disk '/dev//asm2' mismatch on 'Time Stamp' with target disk group [2026420570] [2026683750]
ORA-15038: disk '/dev//asm1' mismatch on 'Time Stamp' with target disk group [2026420570] [2026683750]
ORA-15038: disk '/dev//asm0' mismatch on 'Time Stamp' with target disk group [2026420570] [2026683750]
A disk at the OS level has this diskgroup information in its header incorrectly:
$GI_HOME/bin:> kfed read
kfdhdb.dskname:  ACFS_4 ; 0x028: length=11
kfdhdb.grpname:  ACFS   ; 0x048: length=6    > Here
kfdhdb.fgname:   ACFS_4 ; 0x068: length=11
Disks should not be updated manually at the OS level after they have been added to a diskgroup; doing so causes the diskgroup to fail to mount. In this case the bad disk was present on only one node of the cluster, and the diskgroup mounted successfully on the other nodes.
ASM tried to mount the diskgroup with the bad OS disk:
SQL> ALTER DISKGROUP ACFS MOUNT /* asm agent */ /* {1:52280:1676} */
NOTE: cache registered group ACFS number=1 incarn=0x47016be4
NOTE: cache began mount (not first) of group ACFS number=1 incarn=0x47016be4
NOTE: Assigning number (1,4) to disk (/dev//asm04)    > Here
However, only three disks at the OS level are associated with this diskgroup in ASM:
0 30 CLOSED MEMBER ONLINE NORMAL 245760 0 0 /dev//asm0
0 31 CLOSED MEMBER ONLINE NORMAL 245760 0 0 /dev//asm1
0 32 CLOSED MEMBER ONLINE NORMAL 245760 0 0 /dev//asm2
The device /dev//asm04 is not listed.
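(That listing has the shape of a V$ASM_DISK query; something like the following reproduces it, though the exact column list is a guess:)

SQL> select group_number, disk_number, mount_status, header_status,
  2         mode_status, state, total_mb, free_mb, path
  3  from v$asm_disk
  4  order by group_number, disk_number;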
The fix the MOS note gives is to dd over the header of the problem disk. And indeed, after zeroing the disk header with dd I was able to mount the disk group manually.
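The wipe itself is a single command, which is exactly what makes it dangerous. It is something of this shape (the device path is hypothetical, and the amount to wipe is situational; it irreversibly destroys the ASM disk header, so verify the device several times first):

# WARNING: zeroes the ASM disk header -- make absolutely sure of the device
dd if=/dev/zero of=/dev/mapper/rogue_disk bs=1M count=10 oflag=direct

But tragically, the RMAN copy script for database B then errored out: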
RMAN-03009: failure of backup command on C1 channel at 04/19/2020 19:13:28
ORA-19502: write error on file "+NEWDATA/TESTDB/DATAFILE/tbs_pdata.697.1038107047", block number 165696 (block size=16384)
ORA-15079: ASM file is closed
ORA-15079: ASM file is closed
ASM log:
NOTE: AMDU dump of disk group NEWDATA initiated at /u01/app/grid/diag/asm/+asm/+ASM/trace
Errors in file /u01/app/grid/diag/asm/+asm/+ASM/trace/+ASM_arb0_29651.trc (incident=14697):
ORA-15335: ASM metadata corruption detected in disk group 'NEWDATA'
ORA-15130: diskgroup "NEWDATA" is being dismounted
ORA-15066: offlining disk "NEWDATA_0011" in group "NEWDATA" may result in a data loss
ORA-15196: invalid ASM block header [kfc.c:29757] [endian_kfbh] [2147483659] [1] [0 != 1]
ORA-15196: invalid ASM block header [kfc.c:29757] [endian_kfbh] [2147483659] [1] [0 != 1]
Checking on the database B host confirmed it: the disk whose header I had dd'd away was, of all things, a member of the new disk group that had just been added for database B.
There was nothing for it but to have the storage engineer remove that disk's mapping from the database A host, and to check whether any other disks were mapped to multiple hosts.
As for database B's migration, the disk group had to be recreated and the copy started over from the beginning.
Lessons:
1. An urgent, complicated migration like this is an accident waiting to happen. Learn to say no.
2. After storage is mapped, verify that no disk is mapped to both hosts (a quick way to check is sketched below).
3. A backup must be taken. Slow recovery is one thing, but at least the data will not be lost, and there is far less fear.
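For lesson 2, one quick cross-host check is to collect the WWID of every LUN on each host and intersect the lists; any WWID that shows up on both hosts is shared storage and had better be intentional. A sketch (scsi_id lives under /lib/udev on older releases):

# on each host: record the WWID of every SCSI disk
for d in /dev/sd*; do
    /usr/lib/udev/scsi_id -g -u -d "$d"
done | sort -u > /tmp/$(hostname)_wwids.txt

# bring both lists onto one machine and intersect them;
# any common WWID is a disk visible to both hosts
comm -12 /tmp/hostA_wwids.txt /tmp/hostB_wwids.txt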