
The execution process of SQL statement in Oracle Database

2025-01-15 Update | SLTechnology News&Howtos > Database | Shulou (Shulou.com) 05/31 Report

This article explains the execution process of an SQL statement in an Oracle database. The content is straightforward and easy to follow; let's work through it step by step.

1. When a user process executes an SQL statement on the client, the client sends the statement to the server and lets a process on the server handle it. In other words, the Oracle client does very little itself; its main task is to send the SQL statements generated on the client to the server.

2. After the server process receives the information from the user process, it allocates the required memory in the PGA and stores the relevant information there, for example login information in the session memory. Although there is a database process on the client side as well, its role differs from that of the process on the server: it is the server-side process that actually handles the SQL statement. Client and server processes correspond one to one; that is, after the client connects to the server, a process exists on each side, called the client process on the client and the server process on the server.

3. When the client transmits the SQL statement to the server, the server process parses the statement. This parsing is carried out on the server side and can be broken down into the following steps.

1) Query the cache (library cache)

When the server process receives the SQL statement sent by the client, it does not go straight to the database to run the query. The server process converts the characters of the SQL statement into their ASCII numeric codes, passes them to a hash function to obtain a hash value, and then looks in the library cache in the shared pool to see whether the same hash value already exists. If it does, the server process reuses the parsed version of the statement already cached in the shared pool's library cache, skipping the remaining parsing work; this is called a soft parse. If the statement is not in the cache, the next step is required: a hard parse. A hard parse is usually an expensive operation, accounting for roughly 70% of the total execution time of the SQL statement; it generates the parse tree, the execution plan, and so on.
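The soft-parse versus hard-parse decision above can be sketched with a toy library cache. This is a simplified model: Oracle's real hash function and cached structures are internal, and the names below are illustrative.

```python
import hashlib

library_cache = {}  # hash value -> cached "parsed" representation

def parse(sql_text):
    """Hash the statement text and decide between a soft parse (cache hit)
    and a hard parse (cache miss), caching the hard-parse result."""
    h = hashlib.sha256(sql_text.encode("ascii")).hexdigest()
    if h in library_cache:
        return "soft parse", library_cache[h]     # reuse the cached plan
    plan = f"execution plan for: {sql_text}"      # stand-in for the costly work
    library_cache[h] = plan
    return "hard parse", plan

kind1, _ = parse("SELECT * FROM emp WHERE deptno = 10")
kind2, _ = parse("SELECT * FROM emp WHERE deptno = 10")  # identical text: soft parse
kind3, _ = parse("select * from emp where deptno = 10")  # differs in case: hard parse
```

Note that because the lookup is by hash of the raw text, two statements that differ only in case or whitespace do not share a cached parse.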

Using the cache therefore improves the query efficiency of SQL statements, for two reasons: reading data from memory is faster than reading it from data files on disk, and reusing a cached parse avoids the cost of parsing the statement again.

Note, however, that this cache is different from the data caching done by some client software. To improve query efficiency, some client applications set up their own data cache on the client side. Such caches do speed up the client application, but if someone else changes the data on the server, the modified data may not be reflected on the client in time precisely because of the application-level cache. Clearly, the application's data cache is not the same thing as the database server's cache.

2) Statement validity check (syntax check)

If the statement is not found in the cache, the server process starts to check its validity. The main purpose here is to check the syntax of the SQL statement, that is, whether it conforms to the grammar rules. If the server process decides the statement does not conform to the syntax rules, it returns an error message to the client. During this syntax check, the table names, column names, and so on contained in the statement are not examined; only the syntax is checked.

3) Semantic check (data dictionary cache)

If the SQL statement passes the syntax check, the server process then resolves the tables, indexes, views, and other objects involved in the statement, checking their names and structures against the data dictionary to verify that these columns, tables, views, and so on exist in the database. If a table name or column name is wrong, the database returns an error message to the client.

This is why, when we write a SELECT statement that contains both a syntax error and a wrong table or column name, the system first reports the syntax error, and only after the syntax is completely correct does it report the table or column name error.

4) Acquire the object parse lock (control structure)

Once syntax and semantics are correct, the system locks the objects the query needs. This is mainly to guarantee data consistency and to prevent other users from changing the structure of these objects while the query is running.

5) Data access permission check (data dictionary cache)

Even when syntax and semantics pass the checks, the client may still not be able to get the data: the server process also checks whether the connected user has the privilege to access it. If the user lacks the privilege, the client cannot obtain the data. Note that the server process checks syntax and semantics before it checks access privileges.

6) Determine the best execution plan

When syntax and semantics are correct and privileges match, the server process still does not query the database files directly; it first optimizes the statement according to certain rules. Before the execution plan is developed there is a query transformation step, which includes operations such as view merging, subquery unnesting, predicate pushing, and materialized view query rewrite. To decide which execution plan to adopt, Oracle also gathers statistics to determine table access and join methods, and finally selects the execution plan with the lowest possible cost.

Note, however, that this optimization is limited. In general, tuning the database SQL statements during application development brings far greater gains than the server process's automatic optimization.

Once the optimizer in the server process has determined the best execution plan for the query, the SQL statement and its execution plan are saved in the library cache. When the same query arrives again, the syntax, semantic, and privilege checks above are skipped and the statement is executed directly, improving SQL processing efficiency.

4. Bind variable assignment

If the SQL statement uses bind variables, the server scans the bind variable declarations, assigns a value to each bind variable, and substitutes the values into the execution plan. If the statement was found in the cache during the first parsing step, execution jumps directly to this step.
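One reason bind variables matter follows directly from the hash-based cache lookup in step 1: with literals, every distinct value produces a different statement text, and therefore a different hash and a hard parse, while a bind placeholder keeps the text constant. A minimal illustration (the table and column names are only examples):

```python
# Three lookups written with literals produce three distinct statement
# texts, each of which would be hashed and hard parsed separately.
literal_texts = {f"SELECT ename FROM emp WHERE empno = {n}"
                 for n in (7369, 7499, 7521)}

# The bind-variable form is a single text shared by all three executions,
# so every execution after the first can be soft parsed.
bind_text = "SELECT ename FROM emp WHERE empno = :empno"
```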

5. Statement execution

Statement parsing only interprets the syntax of the SQL statement so that the server knows exactly what the statement means; the server process does not actually execute the statement until parsing is complete.

For SELECT statements:

1) First, the server process checks whether the required data is already in the db buffer cache. If it is present and usable, the data is fetched directly rather than read from the data files, and the block is promoted in the cache according to the LRU algorithm.

2) If the data is not in the buffer, the server process reads the relevant data from the data files and places it into the db buffer cache.

If the data does exist in the db buffer, its usability is checked as follows: look at the header of the db buffer block for an active transaction. If there is one, read the data from the rollback segment. If there is none, compare the SCN of the SELECT with the SCN in the block header: if the former is smaller than the latter, the data is still read from the rollback segment; if the former is greater, the block is a non-dirty, usable cache block, and its contents can be read directly.
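The usability check just described can be written as a small decision function. This is a sketch of the rule only, not Oracle's implementation; the block is modeled here as a plain dictionary.

```python
def read_source(block, query_scn):
    """Decide where a query reads a buffered block from: an active
    transaction in the block header, or a block SCN newer than the query
    SCN, forces a consistent read from undo; otherwise the db buffer
    block can be read directly."""
    if block["active_transaction"]:
        return "undo (rollback segment)"
    if query_scn < block["scn"]:
        return "undo (rollback segment)"   # block changed after the query began
    return "db buffer block"
```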

For DML statements (INSERT, DELETE, UPDATE):

1) Check whether the required data blocks have already been read into the buffer cache; if so, go directly to step 3.

2) If the required blocks are not in the buffer cache, the server reads them from the data files into the buffer cache.

3) Take a row exclusive (RX) lock on the table being modified, then acquire exclusive locks on the rows that need to be changed.

4) Copy the redo records for the change into the redo log buffer.

5) Generate undo data for the modification.

6) Modify the blocks in the db buffer.

7) DBWR writes the modifications to the data files.

In step 2, the server reads the data from the data files into the db buffer through the following steps:

A) First, the server process requests a TM lock on the table (to ensure that no other user can modify the table's structure during the transaction), and, if the TM lock is obtained, then requests the necessary row-level TX locks. Only when both the TM and TX locks are obtained successfully is the data read from the data file.

B) Before reading the data, buffer space must be prepared for it. The server process scans the LRU list looking for free db buffers, and registers any modified db buffers it finds on the dirty list. If free buffers and non-dirty buffers are insufficient, DBWR is triggered to write the buffer blocks on the dirty list to the data files and clean those buffers, making room to cache the newly read data.

C) Once enough free buffers are found, the server process reads from the data file each data block (db block) in which the target rows reside. The db block is Oracle's minimum unit of I/O: even if the data you want is only one row among the many rows in a block, Oracle reads the entire block into the db buffer. The blocks are placed into free buffers, or overwrite non-dirty buffers that have been aged off the LRU list, and are arranged at the head of the LRU list. Before a block can be placed into the db buffer, the process must acquire a latch on the db buffer; only after the latch is obtained can the data be read in.
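Steps B and C can be sketched as a toy buffer cache with an LRU list and a dirty list. This is a deliberately simplified model: real db buffer management involves latches, touch counts, and multiple lists, and the class below is illustrative only.

```python
from collections import OrderedDict

class BufferCache:
    """Toy db buffer cache: an LRU-ordered dict of blocks plus a dirty
    set; when no clean buffer can be evicted, a stand-in "DBWR" writes
    the dirty buffers out first."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lru = OrderedDict()      # block_id -> data, most recent last
        self.dirty = set()
        self.dbwr_writes = 0

    def _make_room(self):
        # Evict the coldest clean buffer; if every buffer is dirty,
        # trigger DBWR to write them out and clean them, then evict.
        for block_id in self.lru:
            if block_id not in self.dirty:
                del self.lru[block_id]
                return
        self.dbwr_writes += 1         # "write dirty buffers to data files"
        self.dirty.clear()
        self.lru.popitem(last=False)  # now everything is clean; evict coldest

    def read(self, block_id):
        if block_id in self.lru:
            self.lru.move_to_end(block_id)   # cache hit: promote in the LRU
            return self.lru[block_id]
        if len(self.lru) >= self.capacity:
            self._make_room()
        self.lru[block_id] = f"data of block {block_id}"  # read from data file
        return self.lru[block_id]

    def modify(self, block_id):
        self.read(block_id)
        self.dirty.add(block_id)      # register the buffer on the dirty list
```

For example, with a two-buffer cache, a read that finds no clean buffer to evict forces the stand-in DBWR to flush the dirty buffers before the new block can be cached.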

If the data block already exists in the db buffer cache (sometimes called the db buffer or db cache), then even if a non-dirty block with no active transaction and an SCN smaller than the query's is found there, the server process must still request a lock on the record via the table header; only after the lock is obtained can the subsequent actions be carried out. If locking fails, the process must wait for the earlier process to release the lock before proceeding (the blocking at this point is TX lock blocking).

When writing the redo log, the specific steps are as follows:

1) After the data has been read into the db buffer, the server process writes into the redo log buffer, one by one from the PGA, the rowids of the rows affected by the statement, the original values, the new values to be written, the SCN, and other information. Before writing it must request the redo log buffer's latch; writing starts only after the latch is obtained.

2) When the writes reach one third of the redo log buffer's size, or the volume written reaches 1 MB, or more than three seconds have elapsed, or a checkpoint occurs, or before DBWR writes, the LGWR process is triggered to write the redo log buffer's data to the redo log file on disk (a log file sync wait event occurs at this point).

3) The latches held on the portions of the redo log buffer already written to the redo file are released, and that space can be overwritten by subsequent writes; the redo log buffer is used cyclically. The redo files are also used cyclically: when one redo file fills up, the LGWR process automatically switches to the next one (a log file switch (checkpoint incomplete) wait event may occur at this point). In archivelog mode, the archiver process also writes the contents of the previous full redo file to an archived log file (a log file switch (archiving needed) wait may occur at this point).
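The LGWR trigger conditions listed above can be collected into one predicate. This is an illustrative sketch (the 8 MB buffer size is an assumption, and the commit trigger, discussed later in this article, is included for completeness):

```python
REDO_BUFFER_SIZE = 8 * 1024 * 1024  # assumed redo log buffer size, 8 MB

def lgwr_should_flush(used_bytes, seconds_since_flush,
                      commit=False, checkpoint=False, before_dbwr=False):
    """True when any LGWR trigger fires: a third of the redo log buffer
    used, 1 MB of redo accumulated, 3 seconds elapsed, a commit, a
    checkpoint, or DBWR about to write dirty buffers."""
    return (used_bytes >= REDO_BUFFER_SIZE / 3
            or used_bytes >= 1024 * 1024
            or seconds_since_flush >= 3
            or commit or checkpoint or before_dbwr)
```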

When establishing undo information for a transaction, the specific steps are as follows:

1) After completing the redo entries for the transaction, the server process begins to rewrite the transaction list in the header of the db buffer block and records the SCN (at first, the SCN is written in the redo log buffer, not in the db buffer).

2) It then copies the block header's transaction list and a copy of the data, including the SCN information, into the rollback segment. The information in the rollback segment is called the "pre-image" of the data block and is used for later rollback, recovery, and consistent reads. (Rollback segments can be stored in a dedicated rollback tablespace, made up of one or more physical files reserved for rollback; rollback segments can also be created in the data files of other tablespaces.)

When the modification information is written to the data file, the specific steps are as follows:

1) Overwrite the data contents of the db buffer block and write the address of the rollback segment into the block header.

2) Put a pointer to the db buffer block on the dirty list. If a row is updated several times without a commit, there will be multiple "pre-images" in the rollback segment. Apart from the first pre-image, which contains the SCN information, the header of each later pre-image holds the SCN and the rollback segment address of the previous pre-image; each update corresponds to exactly one SCN. The server process then puts a pointer to the modified block on the dirty list (so that the DBWR process can find the dirty blocks and write them to the data files). After that, the server process reads the next data block from the data file and repeats the same actions: read the block, log the redo, create the rollback entry, modify the block, and place it on the dirty list.

3) When the length of the dirty list reaches a threshold (usually 25%), the server process notifies DBWR to write out the dirty data, releasing the latches on those db buffers and freeing more buffers. Although the description above reads one block at a time, Oracle can in fact read several blocks in one I/O (db_file_multiblock_read_count sets the number of blocks read at a time).

When performing commit, the specific steps are as follows:

1) A commit triggers the LGWR process but does not force DBWR to release the locks on all the corresponding db buffer blocks immediately. In other words, even though the transaction has committed, DBWR may still be writing the data blocks touched by the statement for some time afterwards. The row locks in the table header are not released at the moment of commit but only when the DBWR process has finished, which is why one user's request for a resource another user has already committed can fail.

2) The interval between the end of the commit and the end of DBWR's work is very short. If power is lost after the commit but before DBWR finishes, the committed data already logically belongs to the data files even though part of it has not yet been physically written to them. Because the commit has already triggered LGWR, after the instance restarts the SMON process rolls these unwritten changes forward using the redo log files, completing the work the commit left unfinished (that is, writing the changes into the data files).

3) If power is lost before a commit, the data changed in the db buffer does not belong to the data files. Since LGWR is triggered before DBWR, that is, logging always precedes data writes, any change DBWR made to the data files has already been recorded in the redo log files; after the instance restarts, the SMON process rolls those uncommitted changes back according to the redo log files.

In fact, SMON's roll forward and roll back are based on checkpoints. When a full checkpoint occurs, the LGWR process first writes everything in the redo log buffer (including uncommitted redo information) to the redo log files, and then the DBWR process writes the committed buffers of the db buffer to the data files (it is not forced to write uncommitted ones). The SCNs in the control file and the data file headers are then updated to indicate that the database is consistent at that point; between two adjacent checkpoints there may be many transactions, both committed and uncommitted.

When performing rollback, the specific steps are as follows:

The server process uses the transaction list and SCN in the block header (in the data file and the db buffer) together with the rollback segment address to find the corresponding pre-modification copy in the rollback segment, and uses those original values to undo the modified but uncommitted changes in the current data files. If there is more than one "pre-image", the server process follows the rollback segment address stored in the header of each pre-image until it finds the earliest pre-image of the same transaction. Once a commit has been issued, the user can no longer roll back; this is what guarantees the follow-up work DBWR has not yet finished after the commit.
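The walk through the chain of pre-images can be sketched as follows. This is an illustrative model: each undo record stores the index of the previous pre-image, standing in for the rollback segment address in the pre-image header, and the values are made up.

```python
# Two uncommitted updates to the same row: each update pushes a pre-image
# whose header records the location of the previous one.
undo_segment = [
    {"old_value": "SMITH",   "prev": None},  # earliest pre-image (original value)
    {"old_value": "SMITH_2", "prev": 0},     # pre-image taken before the 2nd update
]

def rollback(head, undo_segment):
    """Follow each pre-image's pointer back to the earliest pre-image of
    the transaction and return the original value to restore."""
    idx = head
    while undo_segment[idx]["prev"] is not None:
        idx = undo_segment[idx]["prev"]
    return undo_segment[idx]["old_value"]
```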

Thank you for reading. That concludes "The execution process of SQL statement in Oracle Database". You should now have a deeper understanding of how Oracle executes SQL statements; the details are best verified in practice.
