2025-02-04 Update From: SLTechnology News&Howtos (shulou)
Today I would like to share some knowledge about understanding the Oracle database transaction log. The content is detailed and the logic is clear. Since many people know too little about this topic, I am sharing this article for your reference; I hope you get something out of it. Let's take a look.
Redo
Redo log files are critical to an Oracle database: they are the database's transaction log. Oracle maintains two types of redo log files: online redo log files and archived redo log files. Both types are used for recovery; they come into play when an instance or the media fails.
If the host the database runs on loses power and the instance fails, Oracle uses the online redo log to recover the system to exactly the point in time before the power failure.
If a disk drive fails (a media failure), Oracle uses the archived redo logs together with the online redo logs to restore a backup of the data files to the appropriate point in time.
An archived redo log file is simply a copy of a "filled" online redo log file. When the system fills an online log file, the ARCH process copies it to another location, and it can make additional copies at local and remote destinations. If a failure occurs because of disk damage or some other physical fault, these archived redo log files are used to perform media recovery: Oracle applies them to backups of the data files to bring those files in line with the rest of the database.
The archived redo log is the transaction history of the database.
Measure redo
Redo management is a serialization point in the database. Every Oracle instance has a single LGWR process, and every transaction ultimately funnels through it: LGWR must write each transaction's redo and handle its COMMIT, so the busier LGWR becomes, the slower the system gets. When testing several solutions to a problem, measuring how much redo each operation generates can help you find the best one.
Views related to redo
V$MYSTAT contains the statistics for your current session, including the redo it has generated.
V$STATNAME tells us what each row in V$MYSTAT represents (the name of each statistic).
Query to check the current session's redo size:
SELECT a.name,
       b.value cur_size_byte,
       round(b.value / 1024, 3) || 'KB' cur_size_kb
  FROM v$statname a, v$mystat b
 WHERE a.statistic# = b.statistic#
   AND lower(a.name) LIKE '%' || lower('redo size') || '%';
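In practice you run this query before and after the operation under test and subtract the two values. A minimal sketch of that pattern (big_table is a hypothetical example table; all three steps must run in the same session, since V$MYSTAT only reflects your own session's statistics):

```sql
-- Sketch: measure the redo generated by one operation in the current session.
SELECT b.value AS redo_before
  FROM v$statname a, v$mystat b
 WHERE a.statistic# = b.statistic#
   AND a.name = 'redo size';

UPDATE big_table SET status = 'X';   -- the operation under test (hypothetical)
COMMIT;

SELECT b.value AS redo_after         -- redo generated = redo_after - redo_before
  FROM v$statname a, v$mystat b
 WHERE a.statistic# = b.statistic#
   AND a.name = 'redo size';
```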
Database archiving mode
Archiving saves copies of filled redo log files. Archiving mode is generally not enabled by default at installation time.
In a NOARCHIVELOG-mode database, CREATE TABLE is not logged except for its changes to the data dictionary. If you want to see a difference in a NOARCHIVELOG database, replace the DROP TABLE and CREATE TABLE of table T with DROP INDEX and CREATE INDEX: by default those operations generate logs regardless of the mode the database runs in.
The mode matters because different modes lead to different behavior. Your production system probably runs in ARCHIVELOG mode; if many of the operations you perform generate redo in ARCHIVELOG mode but not in NOARCHIVELOG mode, you want to discover that during testing, not after the system has been delivered to users!
Check whether archiving is enabled
Check whether the database has archiving enabled:
select name, log_mode from v$database;
Enable archiving:
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
Disable archiving:
shutdown immediate;
startup mount;
alter database noarchivelog;
alter database open;
Force logging mode
If force logging mode is on, Oracle writes redo no matter what it does.
View force logging mode
Run
select force_logging from v$database;
to see whether force logging is currently enabled.
Turn on force logging mode
If force logging is not turned on (it is off by default), enable it with
alter database force logging;
After that, Oracle writes redo for every operation, regardless of other factors such as the database's archiving mode.
Turn off force logging mode
If force logging is already on, turn it off with
alter database no force logging;
This restores the previous behavior: whether redo is written is again determined by other factors such as the archiving mode of the database and the table's logging attribute.
_disable_logging
Oracle also has a hidden parameter, _disable_logging, which defaults to false. Setting it to true makes Oracle skip redo generation entirely when modifying rows in a table. This parameter should normally be used only for performance testing.
View:    show parameter disable_logging
Enable:  alter system set "_disable_logging" = true scope=both;
Disable: alter system set "_disable_logging" = false;
Logging mode of a table
View a table's logging mode
Check whether a table is in LOGGING mode with the following SQL:
select table_name, logging from dba_tables where table_name = 'TABLENAME';
Modify a table's logging mode
SQL to change a table's logging attribute:
alter table table_name nologging;   -- or: alter table table_name logging;
Reduce redo writes
This section discusses what happens when force logging mode is NOT enabled.
Certain operations can be made to generate far less redo than usual (that is, than without the NOLOGGING clause). Note that this is far less redo, not "no redo at all": every operation generates some redo regardless of the database's logging mode, because all data dictionary operations are logged.
How to reduce redo
Reducing redo with CREATE TABLE
When creating a table, add the NOLOGGING option to CREATE TABLE ... AS, in the following format:
create table [table_name] nologging as [select expression];
Reducing redo with INSERT INTO
Add the /*+ append */ hint when inserting bulk data, in the following format:
insert /*+ append */ into [table_name] [select expression];
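Putting the two techniques together, here is a hedged sketch of a low-redo bulk copy (target_tab and source_tab are hypothetical names; whether NOLOGGING actually takes effect depends on the archiving and force-logging rules covered in the sections that follow):

```sql
-- Sketch: combine a NOLOGGING table with a direct-path (APPEND) insert.
-- Caution: after a NOLOGGING direct-path load, the new blocks cannot be
-- recovered from archived redo, so back up the affected datafiles afterwards.
ALTER TABLE target_tab NOLOGGING;

INSERT /*+ APPEND */ INTO target_tab
SELECT * FROM source_tab;
COMMIT;   -- a direct-path insert must be committed before the same
          -- session can query target_tab again

ALTER TABLE target_tab LOGGING;   -- restore the normal logging attribute
```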
Redo generation rules in ARCHIVELOG mode
Effect of NOLOGGING in CREATE TABLE
Tables created in archive mode default to LOGGING mode.
Adding the NOLOGGING option to CREATE TABLE ... AS noticeably reduces redo writes.
Verification
Compare the redo generated by the two forms of CREATE TABLE ... AS:
1. Run the redo size query from earlier and record the value.
2. create table test_1 as select * from test;
3. Run the redo size query again; the difference from the recorded value is the redo generated by this operation. Call it redo_1.
4. Record the current redo size once more.
5. create table test_2 nologging as select * from test;
6. Run the query again; the new difference is redo_2.
Comparing redo_1 with redo_2 shows the difference CREATE TABLE ... AS makes with and without NOLOGGING.
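In SQL*Plus, this before/after bookkeeping can be scripted with a substitution variable. A sketch for the first measurement (assumes the test table from the walkthrough exists; values will vary by system):

```sql
-- Capture the redo counter into a SQL*Plus substitution variable,
-- run the operation, then print the delta.
COLUMN value NEW_VALUE redo_before NOPRINT
SELECT b.value
  FROM v$statname a, v$mystat b
 WHERE a.statistic# = b.statistic# AND a.name = 'redo size';

CREATE TABLE test_1 AS SELECT * FROM test;

SELECT b.value - &redo_before AS redo_1
  FROM v$statname a, v$mystat b
 WHERE a.statistic# = b.statistic# AND a.name = 'redo size';
```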
Effect of the APPEND hint in INSERT INTO
Table in LOGGING mode
When the table is in LOGGING mode, redo is generated whether or not the APPEND hint is used; in other words, the APPEND hint has no effect here.
Verification
Compare the redo generated by the following two inserts (measured the same way as above); the amounts are about the same.
1. insert /*+ append */ into test_1 select * from test;
   commit;
2. insert into test_1 select * from test;
   commit;
Table in NOLOGGING mode
When the table is in NOLOGGING mode, only the insert with the APPEND hint significantly reduces redo generation.
Verification
1. insert /*+ append */ into test_1 select * from test;
   commit;
2. insert into test_1 select * from test;
   commit;
Redo generation rules in NOARCHIVELOG mode
Effect of NOLOGGING in CREATE TABLE
Tables created in non-archive mode default to NOLOGGING mode.
In a NOARCHIVELOG-mode database, CREATE TABLE is not logged except for its changes to the data dictionary.
Therefore, adding the NOLOGGING option to CREATE TABLE ... AS makes little difference here: the amount of redo is about the same with or without it.
Verification
Compare the redo generated by the following two statements:
create table test_1 as select * from test;
create table test_2 nologging as select * from test;
Effect of the APPEND hint in INSERT INTO
Table in LOGGING mode
When the table is in LOGGING mode, the insert with the APPEND hint significantly reduces redo generation, while the insert without it does not.
Verification
insert /*+ append */ into test_1 select * from test;
commit;
insert into test_1 select * from test;
commit;
Table in NOLOGGING mode
When the table is in NOLOGGING mode, the APPEND insert likewise reduces redo generation, while the plain insert does not.
Verification
insert /*+ append */ into test_1 select * from test;
commit;
insert into test_1 select * from test;
commit;
This is really the difference between direct-path loading and conventional loading.
Advantages of direct-path loading
Direct-path loading is more efficient than conventional loading:
It does not scan the existing empty data blocks.
No SQL parsing is required, reducing the load on the system.
It bypasses the SGA (buffer cache).
It does not go through the DBWR process; the loading process writes the blocks itself, which is why it is fast.
Limits of direct-path loading
It cannot load clustered tables.
It locks the entire table, so it cannot run while there are active transactions on the table.
Characteristics of direct-path loading
Direct-path loading formats new data blocks after all existing blocks and raises the high-water mark; it does not scan the existing empty blocks.
It generates only a small amount of administrative redo, for the required data dictionary changes (or, with NOLOGGING, almost none).
If the load fails, the newly allocated blocks are simply rolled back and discarded.
No SGA, no SQL parsing, no DBWR process.
Experiment
We start with a table named test.
SQL> select count(*) from test;    -- the table has no rows yet
COUNT(*)
----------
0
SQL> select segment_name, extent_id, bytes from user_extents where segment_name = 'TEST';    -- one extent is allocated
SEGMENT_NAME EXTENT_ID BYTES
------------ ---------- ----------
TEST 0 65536
[oracle@secdb1 ~]$ sqlldr userid=ls/ls control=leo_test.ctl data=leo_test.data log=leo_test.log    # conventional-path load
LS@LEO> select count(*) from test;    -- 1,000,000 rows loaded successfully
COUNT(*)
----------
1000000
SQL> select segment_name, extent_id, bytes from user_extents where segment_name = 'TEST';    -- the 1,000,000 rows occupy 28 extents
SEGMENT_NAME EXTENT_ID BYTES
------------ ---------- ----------
TEST 0 65536
TEST 1 65536
TEST 2 65536
TEST 3 65536
TEST 4 65536
TEST 5 65536
TEST 6 65536
TEST 7 65536
TEST 8 65536
TEST 9 65536
TEST 10 65536
TEST 11 65536
TEST 12 65536
TEST 13 65536
TEST 14 65536
TEST 15 65536
TEST 16 1048576
TEST 17 1048576
TEST 18 1048576
TEST 19 1048576
SEGMENT_NAME EXTENT_ID BYTES
------------ ---------- ----------
TEST 20 1048576
TEST 21 1048576
TEST 22 1048576
TEST 23 1048576
TEST 24 1048576
TEST 25 1048576
TEST 26 1048576
TEST 27 1048576
28 rows selected
SQL> delete from test;    -- delete the 1,000,000 rows
1000000 rows deleted.
SQL> commit;
Commit complete.
SQL> select segment_name, extent_id, bytes from user_extents where segment_name = 'TEST';
SEGMENT_NAME EXTENT_ID BYTES
------------ ---------- ----------
TEST 0 65536
TEST 1 65536
TEST 2 65536
TEST 3 65536
TEST 4 65536
TEST 5 65536
TEST 6 65536
TEST 7 65536
TEST 8 65536
TEST 9 65536
TEST 10 65536
TEST 11 65536
TEST 12 65536
TEST 13 65536
TEST 14 65536
TEST 15 65536
TEST 16 1048576
TEST 17 1048576
TEST 18 1048576
TEST 19 1048576
SEGMENT_NAME EXTENT_ID BYTES
------------ ---------- ----------
TEST 20 1048576
TEST 21 1048576
TEST 22 1048576
TEST 23 1048576
TEST 24 1048576
TEST 25 1048576
TEST 26 1048576
TEST 27 1048576
28 rows selected
All the data is deleted, yet the space is still occupied. Oracle's DELETE does not reclaim space: it only marks the rows as deleted, and the space they occupied is not actually released.
[oracle@secdb1 ~]$ sqlldr userid=ls/ls control=leo_test.ctl data=leo_test.data log=leo_test.log    # second conventional-path load
SQL> select count(*) from test;    -- 1,000,000 rows loaded successfully
COUNT(*)
----------
1000000
SQL> select segment_name, extent_id, bytes from user_extents where segment_name = 'TEST';
SEGMENT_NAME EXTENT_ID BYTES
------------ ---------- ----------
TEST 0 65536
TEST 1 65536
TEST 2 65536
TEST 3 65536
TEST 4 65536
TEST 5 65536
TEST 6 65536
TEST 7 65536
TEST 8 65536
TEST 9 65536
TEST 10 65536
TEST 11 65536
TEST 12 65536
TEST 13 65536
TEST 14 65536
TEST 15 65536
TEST 16 1048576
TEST 17 1048576
TEST 18 1048576
TEST 19 1048576
SEGMENT_NAME EXTENT_ID BYTES
------------ ---------- ----------
TEST 20 1048576
TEST 21 1048576
TEST 22 1048576
TEST 23 1048576
TEST 24 1048576
TEST 25 1048576
TEST 26 1048576
TEST 27 1048576
28 rows selected
With a conventional-path load, the existing empty blocks are scanned and the newly loaded rows are inserted into them. Let's check whether we are still using the original 28 extents.
SQL> delete from test;    -- delete the 1,000,000 rows for the second time
1000000 rows deleted.
SQL> commit;
Commit complete.
SQL> select segment_name, extent_id, bytes from user_extents where segment_name = 'TEST';
SEGMENT_NAME EXTENT_ID BYTES
------------ ---------- ----------
TEST 0 65536
TEST 1 65536
TEST 2 65536
TEST 3 65536
TEST 4 65536
TEST 5 65536
TEST 6 65536
TEST 7 65536
TEST 8 65536
TEST 9 65536
TEST 10 65536
TEST 11 65536
TEST 12 65536
TEST 13 65536
TEST 14 65536
TEST 15 65536
TEST 16 1048576
TEST 17 1048576
TEST 18 1048576
TEST 19 1048576
SEGMENT_NAME EXTENT_ID BYTES
------------ ---------- ----------
TEST 20 1048576
TEST 21 1048576
TEST 22 1048576
TEST 23 1048576
TEST 24 1048576
TEST 25 1048576
TEST 26 1048576
TEST 27 1048576
28 rows selected
DELETE still does not reclaim the space: we still occupy 28 extents.
[oracle@secdb1 ~]$ sqlldr userid=ls/ls control=leo_test.ctl data=leo_test.data log=leo_test.log direct=true    # direct-path load
SQL> select segment_name, extent_id, bytes from user_extents where segment_name = 'TEST';
SEGMENT_NAME EXTENT_ID BYTES
------------ ---------- ----------
TEST 0 65536
TEST 1 65536
TEST 2 65536
TEST 3 65536
TEST 4 65536
TEST 5 65536
TEST 6 65536
TEST 7 65536
TEST 8 65536
TEST 9 65536
TEST 10 65536
TEST 11 65536
TEST 12 65536
TEST 13 65536
TEST 14 65536
TEST 15 65536
TEST 16 1048576
TEST 17 1048576
TEST 18 1048576
TEST 19 1048576
SEGMENT_NAME EXTENT_ID BYTES
------------ ---------- ----------
TEST 20 1048576
TEST 21 1048576
TEST 22 1048576
TEST 23 1048576
TEST 24 1048576
TEST 25 1048576
TEST 26 1048576
TEST 27 1048576
TEST 28 1048576
TEST 29 1048576
TEST 30 1048576
TEST 31 1048576
TEST 32 1048576
TEST 33 1048576
TEST 34 1048576
TEST 35 1048576
TEST 36 1048576
TEST 37 1048576
TEST 38 1048576
TEST 39 1048576
TEST 40 1048576
TEST 41 1048576
TEST 42 1048576
TEST 43 1048576
TEST 44 1048576
TEST 45 1048576
TEST 46 1048576
TEST 47 1048576
48 rows selected
The same 1,000,000 rows now occupy 48 extents, while conventional loading used only 28: the direct-path load allocated 20 additional extents of new blocks. This confirms that direct-path loading does not scan the existing empty blocks; it loads new blocks after all existing blocks and then raises the high-water mark (HWM).
Only when the transaction commits is the HWM moved past the new data, making it visible to other users.
Next, compare the redo generated by conventional and direct-path loading (a direct-path load can be simulated with the /*+ append */ hint).
To be clear: direct-path loading combined with LOGGING does not significantly reduce the amount of redo, while direct-path loading combined with NOLOGGING greatly reduces it.
SQL> create table leo_t1 as select * from test where 1=2;    -- create an empty leo_t1 table
Table created.
SQL> alter table leo_t1 logging;    -- set leo_t1 to LOGGING mode
Table altered.
SQL> set autotrace traceonly
SQL> insert into leo_t1 select * from leo_test_sqlload where rownum <= ...;
SQL> rollback;    -- the rollback uses the undo tablespace
Rollback complete.
SQL> insert /*+ append */ into leo_t1 select * from leo_test_sqlload where rownum <= ...;
SQL> alter database no force logging;
SQL> select force_logging from v$database;    -- the FORCE_LOGGING option is now off
FOR
---
NO
SQL> alter table leo_t1 nologging;    -- set leo_t1 to NOLOGGING mode
Table altered.
SQL> select logging from user_tables where table_name = 'LEO_T1';
LOG
---
NO
SQL> select count(*) from leo_t1;    -- 0 rows
COUNT(*)
----------
0
SQL> select index_name from user_indexes where table_name = 'LEO_T1';    -- there is no index on the table
no rows selected
SQL> insert /*+ append */ into leo_t1 select * from leo_test_sqlload where rownum <= ...;
SQL> rollback;
Rollback complete.
LS@LEO> insert into leo_t1 select * from leo_test_sqlload where rownum <= ...;