The COMMIT statement ends the current transaction, making its changes permanent and visible to other users.
A COMMIT simply triggers LGWR to write the contents of the log buffer to the online redo log; the commit SCN of the transaction is recorded with the commit record in the redo. Note that the SCN in the data files does not change at this point, and COMMIT does not trigger a checkpoint.
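To make this concrete, the behaviour can be observed from the instance statistics. This is only a minimal sketch: it assumes access to the v$sysstat view, a throwaway table t with a column val (both hypothetical), and statistic names that may vary slightly between Oracle versions.

-- Snapshot a few instance statistics before the commit
-- (statistic names are version-dependent; adjust as needed).
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('redo writes', 'user commits', 'DBWR checkpoints');

-- Hypothetical work: modify some rows in a scratch table t.
UPDATE t SET val = val + 1 WHERE ROWNUM <= 1000;
COMMIT;

-- Snapshot again: 'redo writes' and 'user commits' increase, while
-- 'DBWR checkpoints' normally stays the same, because COMMIT only asks
-- LGWR to flush the log buffer and does not request a checkpoint.
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('redo writes', 'user commits', 'DBWR checkpoints');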
The DBWn process writes dirty buffers to disk under the following conditions:
-- When a server process cannot find a clean reusable buffer after scanning a threshold number of buffers, it signals DBWn to write. DBWn writes dirty buffers to disk asynchronously if possible while performing other processing.
-- DBWn periodically writes buffers to advance the checkpoint
So COMMIT has nothing to do with DBWn.
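One way to see that COMMIT and DBWn are decoupled is to count dirty buffers in the cache around a commit. A rough sketch, assuming SYSDBA access to v$bh; the scratch table t and column val are again hypothetical.

-- Count dirty (modified, not yet written) buffers in the buffer cache.
SELECT COUNT(*) AS dirty_buffers FROM v$bh WHERE dirty = 'Y';

UPDATE t SET val = val + 1;   -- dirties some buffers
COMMIT;                       -- flushes redo, does NOT write the dirty buffers

-- The dirty buffer count does not drop because of the COMMIT itself;
-- it only falls later, when DBWn writes (buffer shortage or checkpoint).
SELECT COUNT(*) AS dirty_buffers FROM v$bh WHERE dirty = 'Y';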
Some data blocks belong to transactions that have not yet committed, and yet DBWn has already written them back to the data files; their redo, of course, was written first. (This is write-ahead logging: before DBWn can write a dirty block, the redo describing that change must already be on disk, so DBWn may have to wait for LGWR. A commit is only one of several events that cause redo to be written.) Whether DBWn flushes a dirty block has nothing to do with whether its transaction has committed: as soon as free buffers run short, dirty blocks are flushed to the data files, committed or not. Uncommitted changes can therefore end up in your data files.
During transaction recovery, that data is rolled back as needed, and after recovery the uncommitted contents on disk are effectively garbage. (When a transaction modifies data, a copy of the pre-modification data and the SCN at the start of the transaction are saved in the rollback segment. While the transaction is uncommitted, its ITL entry in the block header remains open; once it commits, the commit SCN is written into the block, and recovery decides what to roll back based on this commit information. For an uncommitted transaction, rollback goes straight to the session's start SCN and before-images in undo and applies them directly; it does not undo the blocks one by one. Otherwise, with 10 GB already written into the data files, how long would the rollback take?)
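The before-images that make this rollback possible live in undo, and an open transaction's undo usage can be watched directly. A sketch, assuming access to v$transaction and v$session, run from a second session while the first session holds a large uncommitted change.

-- In session 1, run a large change without committing, e.g.:
--   UPDATE t SET val = val + 1;      (no COMMIT)

-- From session 2, see how much undo the open transaction is holding:
SELECT s.sid,
       t.xidusn    AS undo_segment,
       t.used_ublk AS undo_blocks,
       t.used_urec AS undo_records
  FROM v$transaction t
  JOIN v$session     s ON s.taddr = t.addr
 WHERE t.status = 'ACTIVE';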
In one experiment, a bulk insert of roughly 10 GB ran for about 30 minutes without a commit. During that time the redo logs kept switching and generating archive logs, and the data files kept growing. A SHUTDOWN ABORT followed by STARTUP then completed very quickly, nowhere near 30 minutes. The recovery process is roughly this: the database knows the latest SCN, the SCN of the incremental checkpoint, and the highest SCN in the redo logs; starting from the incremental checkpoint SCN, it applies redo up to the highest redo SCN (and thus the latest SCN), which completes the roll forward. For the rollback, it reads the session's start SCN and before-images directly from undo rather than undoing the blocks one by one; otherwise, with 10 GB already written into the data files, how long would the rollback take?
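If the instance is aborted with such a large uncommitted transaction outstanding, the background rollback after STARTUP can be watched. A sketch, assuming access to the v$fast_start_transactions view (populated while SMON recovers dead transactions; availability and column names may vary by version).

-- After SHUTDOWN ABORT followed by STARTUP, SMON rolls the
-- uncommitted transaction back in the background. Progress:
SELECT usn, slt, seq, state,
       undoblocksdone, undoblockstotal
  FROM v$fast_start_transactions;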
When you read a data block, it comes either from the Buffer Cache or from the data file. If it comes from the data file, it must first be read into the Buffer Cache in the SGA, so every physical read also produces a logical read. The block also carries the commit SCN of the last committed change made to it.
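That commit SCN on the block can be inspected from SQL through the ORA_ROWSCN pseudocolumn. A small sketch against the example table testdb used below; unless the table was created with ROWDEPENDENCIES, the value is tracked per block rather than per row.

-- ORA_ROWSCN exposes the SCN of the last committed change recorded
-- for the block (or row, with ROWDEPENDENCIES).
-- SCN_TO_TIMESTAMP only works for reasonably recent SCNs.
SELECT ORA_ROWSCN,
       SCN_TO_TIMESTAMP(ORA_ROWSCN) AS approx_commit_time,
       t.*
  FROM testdb t
 WHERE ROWNUM <= 5;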
When a transaction needs to modify data in a block, it first saves a copy of the pre-modification data, together with the SCN at the start of the transaction, in the rollback segment, and then updates the block in the Buffer Cache. While the transaction is uncommitted, its ITL entry in the block header stays open; once it commits, the commit SCN is written into the block. When another process reads the block, it compares the SCN on the block with its own read SCN: if the block's SCN is less than or equal to its own, it reads the block directly; if the block's SCN is greater, it reconstructs the data it needs from the before-image in the rollback segment.
Oracle's consistent read, in short: if a statement finds, while reading, that a data block has been modified since the read began (the SCN on the block is greater than the read SCN of the statement), it does not read the block directly but reads the data from the corresponding undo. This guarantees that the result is a snapshot as of the start of the read, unaffected by other transactions running during it. Of course, if the needed data in undo has been overwritten, the read fails with ORA-01555: snapshot too old.
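The same undo-based read mechanism is exposed explicitly by flashback queries, and exhausting undo there fails in the same way. A sketch against the example table testdb; the SCN value is hypothetical.

-- Read the table as of an earlier SCN; Oracle rebuilds the old image of
-- each block from undo, exactly as it does for a long-running query.
SELECT COUNT(*)
  FROM testdb AS OF SCN 1234567;        -- hypothetical SCN

-- Or as of a point in time:
SELECT COUNT(*)
  FROM testdb AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '30' MINUTE);

-- If the required undo has already been overwritten, both queries fail
-- with ORA-01555: snapshot too old.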
Understanding when Oracle raises the ORA-01555 "snapshot too old" error
Suppose you have a table testdb with 60 million rows, and a full table scan of testdb takes about 2 hours. The sequence of events is as follows (a sketch for provoking the error on a test system appears after the list):
1. At 01:00, user A issues select * from testdb;. No matter how testdb changes afterwards, the correct result is that user A sees the contents as of 01:00.
2. At 01:30, user B runs an update against row 41,000,000 of testdb. At this point user A's full table scan has not yet reached row 41,000,000. The pre-update version of that row is written to a rollback segment, say UNDOTS1, and when user A's scan reaches row 41,000,000 it should correctly read the 01:00 contents from UNDOTS1.
3. User B then commits. The system can still give user A the correct data, because the old contents of row 41,000,000 are still in UNDOTS1 and can be located there by SCN. But note the important change: because the transaction has committed, the before-image of row 41,000,000 in UNDOTS1 may now be overwritten at any time.
4. User A's query runs for a long time while the business keeps going, so UNDOTS1 is being used by several other transactions. The extents of the rollback segment wrap around to the extent holding the before-image of row 41,000,000; since that undo is marked as committed, the extent can be reused by other transactions.
5. At 01:45, user A's scan finally reaches row 41,000,000, but by now the situation described in step 4 has already happened. The query needs to read the before-image from UNDOTS1, but it has been overwritten, and user A gets ORA-01555.
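A rough way to provoke the error deliberately is to shrink undo retention and churn the undo while a long query runs. This is only a sketch, assuming a disposable test instance and a scratch table testdb with columns id and val (all illustrative); parameter values are for demonstration only.

-- Session A (DBA): make undo eligible for reuse almost immediately.
ALTER SYSTEM SET undo_retention = 10;          -- seconds, test system only

-- Session A: start a slow query that reads the whole table.
SELECT /*+ FULL(t) */ COUNT(*) FROM testdb t;  -- long-running scan

-- Session B: meanwhile, update and commit repeatedly so the undo for
-- rows session A has not yet reached gets overwritten.
BEGIN
  FOR i IN 1 .. 1000 LOOP
    UPDATE testdb SET val = val + 1 WHERE MOD(id, 1000) = MOD(i, 1000);
    COMMIT;
  END LOOP;
END;
/

-- When session A finally reaches blocks whose undo is gone, it fails with
-- ORA-01555: snapshot too old: rollback segment number ... too small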