I recently helped a friend work through an Oracle database whose data had fallen out of sync. The symptom sounds simple, but the possible causes are varied.
When you look at databases that are not maintained by a professional DBA, you will find many latent problems. Some are merely non-standard and do no direct harm, while others send a chill down your spine; as the lyric goes, once missed, it is gone for good, and here what is gone is the data. I hope these cases offer some inspiration and reference, so that you can avoid repeating them in your own systems.
First, a note: I did all of this at the Oracle command line, and the complete commands and train of thought are laid out below. We should build a solid foundation in our daily work and not let ourselves be held hostage by graphical and other high-end tools; when something goes wrong, what counts is being able to pick up the Swiss Army knife in your own hand.
The problem I was asked to look at showed the usual symptoms: the data was out of sync, and at times the database could not be logged into or started. There are indeed many possible causes for this kind of problem: insufficient space at the OS level, insufficient space in the flash recovery area, insufficient tablespace, and so on.
Of course, merely confirming "there is a data synchronization problem" is not enough; faced with all these possibilities, only the logs can point the way.
This was a one-primary, one-standby environment on 11gR2 with ADG enabled. A quick check of the primary showed that business processing was normal, and the database alert log contained no space-related errors, so OS-level space and tablespace problems on the primary were quickly ruled out.
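As a quick sanity check on the primary, a query like the following shows whether redo transport is reporting errors. This is my own sketch, assuming the standby is archive destination 2, which the case itself does not state:
select dest_id, status, error from v$archive_dest_status where dest_id in (1, 2);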
That left physical or logical space exhaustion on the standby side, so I logged into the standby to confirm, and found that the flash recovery area had indeed overflowed.
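To confirm an overflow like this, two standard queries against the recovery area views are enough; a minimal sketch:
select name, space_limit/1024/1024/1024 limit_gb, space_used/1024/1024/1024 used_gb, number_of_files from v$recovery_file_dest;
select file_type, percent_space_used, percent_space_reclaimable from v$flash_recovery_area_usage;
If the space used is close to the limit and little of it is reclaimable, the area has effectively overflowed.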
Oracle's flash recovery area can be a bit of a tangle. In many cases the flash recovery area on a standby is not reclaimed automatically, so it slowly fills up and sets off a string of serious problems. That is what happened with this database: the problem had dragged on long enough that the control file's record retention period had been exceeded.
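The retention window in question is presumably the one governed by the control_file_record_keep_time parameter; checking it is a one-liner in SQL*Plus:
show parameter control_file_record_keep_time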
And the odd thing is that there also seemed to have been a small change in the network between the primary and the standby, which made the problem worse.
Faced with this situation, how do you deal with it? A direct fix is to delete the redundant archive files from the flash recovery area, or to enlarge the area. To be on the safe side, if there is enough space, I recommend enlarging it: if some archives have not yet been applied, deleting them would leave us in a very passive position.
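Both remedies are short commands; the size and retention window below are illustrative values, not the ones used in this case. Enlarging the area, in SQL*Plus:
alter system set db_recovery_file_dest_size=200G scope=both;
And if you do delete archives, doing it through RMAN keeps the control file consistent:
delete archivelog all completed before 'sysdate-3';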
Of course, after I enlarged the flash recovery area, a new problem surfaced: the archive sequence was broken. For example, if the archive sequence runs from 7000 to 10000 and archive 7213 is lost, then the archives after 7213 cannot be applied directly. And if, to make matters worse, the unapplied archive files get deleted, the trouble compounds.
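On the standby, v$archive_gap reports the gap that managed recovery is currently stuck on:
select thread#, low_sequence#, high_sequence# from v$archive_gap;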
So, hoping to get lucky, I compared the archived logs in the gap's time range between the primary and the standby, and found that the missing archives still existed on the primary, so they could be copied straight over to the standby. Copying alone does not trigger automatic apply, however, because the primary and the standby use different archive log naming formats.
For example, the primary names an archive 1_7213_8980808sa.dbf while the standby names it 1_7213_20180308_89131231.dbf, so the log has to be registered and applied manually:
alter database register logfile 'xxxxx/xxx.dbf';
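Once registered, managed recovery should pick the file up on its own. One way to verify, using an illustrative sequence range:
select sequence#, applied from v$archived_log where sequence# between 7213 and 7215;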
Just as I was feeling pleased with myself, the problem turned out to be even worse than I thought: although that breakpoint was fixed, a series of further gaps turned up, and a large number of archives were still missing.
At this point, getting the standby fixed took far higher priority than working out why the archives had been lost in the first place, so I made a quick assessment.
There were thousands of missing archives. Unless I wrote a script to automate the copying and registration of the archived log files, and made that script robust enough, I was looking at at least an hour of writing and debugging.
Doing subtraction instead and simply rebuilding the standby would make the whole process smoother.
I made an estimate based on the data volume: with guaranteed bandwidth, it should be done within an hour. So I confirmed the implementation steps and started the operation.
The first step is to stop the standby.
Unexpectedly, this simple operation hung the standby. Of course, I had looked at the protection mode in advance: it was maximum availability, the tradeoff point between maximum protection and maximum performance. Had it been maximum protection mode, I would have been in real trouble, because this operation would take the primary down with it.
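Checking the role and protection mode ahead of time is a single query:
select database_role, protection_mode, protection_level from v$database;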
Because I had been repeatedly confirming roles and statuses, all of this was already in mind, and since the data was going to be rebuilt anyway, a direct shutdown abort was acceptable.
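Sketched out, the stop on the standby looks roughly like this in SQL*Plus; cancelling managed recovery first is a tidy habit, though the abort makes it moot:
alter database recover managed standby database cancel;
shutdown abort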
As for rebuilding the standby, doing it with duplicate is simply a pleasure.
rman target sys/xxxx@test01 auxiliary sys/xxx@test02 nocatalog
duplicate target database for standby from active database nofilenamecheck;
The whole process was smooth. For configuring the primary/standby relationship I again turned to my old friend, DG Broker; a few simple commands get Data Guard running normally.
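For reference, a minimal 11g broker setup runs roughly like this. The configuration name 'dgconf' is made up, test01/test02 reuse the connect identifiers from the duplicate command above, and I am assuming the db_unique_name values match them. First, in SQL*Plus on both sides:
alter system set dg_broker_start=true;
Then in dgmgrl, connected as sys:
create configuration 'dgconf' as primary database is 'test01' connect identifier is test01;
add database 'test02' as connect identifier is test02 maintained as physical;
enable configuration;
show configuration;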
Looking at the clock, less than an hour had passed from the decision to the finish, and the task was completed as expected.
After some supplementary tests and fixing a few latent issues, I could finally relax.
The idea behind this case looks simple, but in actual operation we were facing a trading system, so the more important considerations were to repair the data as quickly as possible without affecting the existing business flow, and to avoid the unlucky case where triggering a bug takes the database down and the loss outweighs the gain.
Stability matters when handling problems, too. Faced with a standby that has lost archives, I could also have considered solutions such as restoring from an incremental backup; but starting from a simple, clear train of thought, rebuilding is the most stable and clearest path. If anything went wrong during the incremental recovery, or the incremental backup itself had problems, the pressure would be considerable.
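For completeness, the SCN-based incremental roll-forward I decided against goes roughly as follows; the SCN value and paths are placeholders. On the standby, in SQL*Plus, find the starting point:
select current_scn from v$database;
On the primary, in RMAN:
backup incremental from scn 123456 database format '/tmp/forstandby_%U';
Copy the backup pieces to the standby, then in RMAN there:
catalog start with '/tmp/forstandby_';
recover database noredo;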
In short, if you solve the problem quickly, you are the expert; otherwise, no amount of explanation helps.