2025-01-15 Update From: SLTechnology News & Howtos > Servers
Shulou (Shulou.com) 06/02 Report
I. Disk tuning
1. A standard RAID 10 disk array suits OLTP systems better than RAID 5. RAID 10 mirrors the disks first and then stripes them; because OLTP workloads make frequent small-scale accesses, RAID 10 fits them well. The advantage of RAID 5 is that it makes full use of disk space and lowers the total cost of the array. However, whenever the array services a write, it must read the parity block from disk, compute a new parity block from the modified block, and then write both back, which limits throughput and hurts performance. RAID 5 is therefore better suited to OLAP systems.
2. Data file distribution: separate the following to avoid disk contention
· SYSTEM tablespace
· TEMPORARY tablespace
· UNDO tablespace
· Online redo logs (put them on the fastest disks)
· The operating system disk
· The ORACLE installation directory
· Frequently accessed data files
· Index tablespaces
· The archive area (should always be separated from the data to be recovered)
Example:
· /: operating system
· /u01: Oracle software
· /u02: temporary tablespace, control file 1
· /u03: undo segments, control file 2
· /u04: redo logs, archive logs, control file 4
· /u05: SYSTEM and SYSAUX tablespaces
· /u06: Data1, control file 3
· /u07: index tablespace
· /u08: Data2
Identify IO problems with the following query:
SELECT b.name, a.phyrds, a.phywrts, a.readtim, a.writetim
FROM v$filestat a, v$datafile b
WHERE a.file# = b.file#
ORDER BY a.readtim DESC;
3. Increase the redo log files
· Increase the size of the log files, which helps large INSERT, DELETE, and UPDATE operations
Query log file status:
SELECT a.member, b.*
FROM v$logfile a, v$log b
WHERE a.group# = b.group#;
Query log switch times:
SELECT b.recid,
       TO_CHAR(b.first_time, 'mm/dd/yy hh24:mi:ss') start_time,
       a.recid,
       TO_CHAR(a.first_time, 'mm/dd/yy hh24:mi:ss') end_time,
       ROUND(((a.first_time - b.first_time) * 24) * 60, 2) minutes
FROM v$log_history a, v$log_history b
WHERE a.recid = b.recid + 1
ORDER BY a.first_time DESC;
Increase the log file size, and add a log file member to each group (one primary file plus one multiplexed copy).
· Increasing the LOG_CHECKPOINT_INTERVAL parameter is no longer recommended.
If logs switch more often than every half hour, increase the online redo log size; if switching is frequent during large batch jobs, increase the number of online redo logs.
ALTER DATABASE ADD LOGFILE MEMBER '/log.ora' TO GROUP 1;
ALTER DATABASE DROP LOGFILE MEMBER '/log.ora';
4. Set three initialization parameters for the UNDO tablespace:
UNDO_MANAGEMENT=AUTO
UNDO_TABLESPACE=CLOUDSEA_UNDO
UNDO_RETENTION=
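These can be applied with ALTER SYSTEM; a minimal sketch, reusing the CLOUDSEA_UNDO tablespace named above and an assumed retention of 900 seconds (the article leaves the retention value blank):

```sql
-- UNDO_MANAGEMENT is static: set it in the spfile and restart
ALTER SYSTEM SET undo_management = AUTO SCOPE = SPFILE;
-- Switch to the undo tablespace named in the article
ALTER SYSTEM SET undo_tablespace = CLOUDSEA_UNDO;
-- Retention in seconds; 900 is an illustrative value, not from the article
ALTER SYSTEM SET undo_retention = 900;
```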
5. Do not perform sorting in the SYSTEM tablespace.
II. Initialization parameter tuning
The maximum 32-bit address space is 2^32 bytes, i.e. 4 GB. In practice, a 32-bit system (32-bit Windows such as XP or Windows 2003, or 32-bit Linux such as Ubuntu) that wants to use a full 4 GB of memory needs memory-remapping support from both the motherboard and the operating system. If the remapping function is turned off in the motherboard BIOS, the system cannot use all 4 GB, at most about 3.5 GB; under Windows you generally see 3.25 GB. So set the SGA to about 40% of memory, but not more than 3.25 GB.
1. Important initialization parameters
· SGA_MAX_SIZE
· SGA_TARGET
· PGA_AGGREGATE_TARGET
· DB_CACHE_SIZE
· SHARED_POOL_SIZE
2. Adjust DB_CACHE_SIZE to improve performance. It sets the size of the SGA area used to store and process data in memory; fetching data from memory can be more than 10,000 times faster than fetching it from disk.
Find the data cache hit ratio with the following query:
SELECT SUM(DECODE(name, 'physical reads', value, 0)) phys,
       SUM(DECODE(name, 'db block gets', value, 0)) gets,
       SUM(DECODE(name, 'consistent gets', value, 0)) con_gets,
       (1 - (SUM(DECODE(name, 'physical reads', value, 0)) /
             (SUM(DECODE(name, 'db block gets', value, 0)) +
              SUM(DECODE(name, 'consistent gets', value, 0))))) * 100 hitratio
FROM v$sysstat;
A transaction-processing system should keep the hit ratio above 95%; raising it from 90% to 98% may improve performance by as much as 500%. Oracle now analyzes system performance through CPU (service) time and wait time and pays little attention to hit ratios, but the library cache and dictionary cache still use the hit ratio as the basic tuning metric.
Use V$DB_CACHE_ADVICE when adjusting DB_CACHE_SIZE:
SELECT size_for_estimate, estd_physical_read_factor, estd_physical_reads
FROM v$db_cache_advice
WHERE name = 'DEFAULT';
If a query's hit ratio is too low, an index may be missing or unusable; find slow-executing SQL through the V$SQLAREA view.
3. Set DB_BLOCK_SIZE to reflect the amount of data read: OLTP systems generally use 8K; OLAP systems usually use 16K or 32K.
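DB_BLOCK_SIZE is fixed at database creation, so in practice you can only verify it; a simple check against V$PARAMETER:

```sql
-- Show the database default block size in bytes
SELECT value AS block_size_bytes
FROM v$parameter
WHERE name = 'db_block_size';
```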
4. Tune SHARED_POOL_SIZE to optimize performance
Setting this parameter correctly makes it possible to share SQL statements, so that previously used SQL statements can be found in memory. To reduce hard parsing and make the best use of the shared SQL area, use stored procedures and bind variables as much as possible.
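As a sketch of the bind-variable advice, reusing tbl_store and store_id from the article's later hint example (the bind syntax shown is standard SQL*Plus):

```sql
-- A literal value forces a new hard parse for each distinct value:
--   SELECT * FROM tbl_store WHERE store_id = 42;
-- With a bind variable, all executions share one cursor:
VARIABLE v_id NUMBER
EXEC :v_id := 42
SELECT * FROM tbl_store WHERE store_id = :v_id;
```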
Ensure the data dictionary cache hit ratio is above 95%:
SELECT (1 - SUM(getmisses) / (SUM(gets) + SUM(getmisses))) * 100 hitratio
FROM v$rowcache
WHERE gets + getmisses <> 0;
If the library cache hit ratio is below 99%, consider increasing the shared pool:
SELECT SUM(pins) "EXECUTIONS",
       SUM(reloads) "CACHE MISSES WHILE EXECUTING",
       1 - SUM(reloads) / SUM(pins)
FROM v$librarycache;
A general rule is to set it to 50%-150% of DB_CACHE_SIZE. In systems that use many stored procedures or packages but have limited memory, allocate toward 150%; in systems that use no stored procedures but give DB_CACHE_SIZE a large amount of memory, 10%-20% is enough.
5. Adjust PGA_AGGREGATE_TARGET to optimize memory use
· OLTP: total memory * 80% * 20%
· DSS: total memory * 80% * 50%
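For example, on a hypothetical server with 16 GB of RAM, the OLTP formula gives 16 GB * 80% * 20% = 2.56 GB:

```sql
-- OLTP sizing for an assumed 16 GB server: 16 GB * 0.8 * 0.2 ≈ 2.56 GB
ALTER SYSTEM SET pga_aggregate_target = 2600M;
```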
6. 25 important initialization parameters
· DB_CACHE_SIZE: initial memory allocated to the data cache
· SGA_TARGET: set this parameter to use automatic memory management; set it to 0 to disable it
· PGA_AGGREGATE_TARGET: soft upper limit on PGA memory for all users
· SHARED_POOL_SIZE: memory allocated to the data dictionary, SQL, and PL/SQL
· SGA_MAX_SIZE: maximum size to which the SGA can dynamically grow
· OPTIMIZER_MODE:
· CURSOR_SHARING: converts literal SQL to SQL with bind variables, reducing hard-parse overhead
· OPTIMIZER_INDEX_COST_ADJ: adjusts index scan cost against full table scan cost; setting it between 1 and 10 forces frequent index use, ensuring indexes are favored
· QUERY_REWRITE_ENABLED: enables materialized views and function-based indexes
· DB_FILE_MULTIBLOCK_READ_COUNT: number of blocks read in a single IO during full table scans, for more efficient IO
· LOG_BUFFER: buffer for uncommitted transactions in memory (not a dynamic parameter)
· DB_KEEP_CACHE_SIZE: memory allocated to the KEEP pool, an additional data cache
· DB_RECYCLE_CACHE_SIZE:
· DBWR_IO_SLAVES: if there is no asynchronous IO, this parameter sets the number of writers that simulate asynchronous IO from the SGA to disk; if asynchronous IO is available, use DB_WRITER_PROCESSES instead to set multiple DBWR writers that flush dirty blocks faster
· LARGE_POOL_SIZE: memory allocated to the large pool for large PL/SQL and other rarely used Oracle options
· STATISTICS_LEVEL: enables advisor information and optionally more OS statistics to improve optimizer decisions. Default: TYPICAL
· JAVA_POOL_SIZE: memory allocated to the JVM for Java stored procedures
· JAVA_MAX_SESSIONSPACE_SIZE: upper limit on memory used to track a user session's Java class state
· MAX_SHARED_SERVERS: upper limit on shared servers when using shared servers
· WORKAREA_SIZE_POLICY: enables automatic PGA size management
· FAST_START_MTTR_TARGET: approximate time, in seconds, to complete a crash recovery
· LOG_CHECKPOINT_INTERVAL: checkpoint frequency
· OPEN_CURSORS: size of the private area holding user statements; setting this too high can cause ORA-4031
· DB_BLOCK_SIZE: database default block size
· OPTIMIZER_DYNAMIC_SAMPLING: controls the number of blocks read by dynamic-sampling queries; useful for systems using global temporary tables
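To review how a running instance has these set, a simple check against V$PARAMETER (the parameter subset chosen here is illustrative):

```sql
-- List current values of a few of the key parameters discussed above
SELECT name, value, isdefault
FROM v$parameter
WHERE name IN ('db_cache_size', 'sga_target', 'pga_aggregate_target',
               'shared_pool_size', 'sga_max_size', 'db_block_size')
ORDER BY name;
```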
III. SQL tuning
1. Using hints
1.1 Hints that change the execution path
The OPTIMIZER_MODE parameter specifies how the optimizer behaves; the default is ALL_ROWS.
· ALL_ROWS: executes the query for all rows and gives the best throughput
· FIRST_ROWS(n): makes the optimizer return the first rows as quickly as possible:
SELECT /*+ FIRST_ROWS(1) */ store_id, … FROM tbl_store
1.2 Access-method hints
These let developers change how the query actually accesses data; the INDEX hint is the most commonly used.
· CLUSTER: forces use of a cluster
· FULL
· HASH
· INDEX syntax: /*+ INDEX (table index1, index2 ...) */ column1, …
When no index is specified, the optimizer selects the best one:
SELECT /*+ INDEX */ store_id FROM tbl_store
· INDEX_ASC: since 8i the default index scan is ascending, so this is the same as INDEX
· INDEX_DESC
· INDEX_COMBINE: specifies multiple bitmap indexes rather than letting the optimizer pick the best one
· INDEX_JOIN: accesses only the named indexes, saving a re-read of the table
· INDEX_FFS: performs a fast full scan of the index, touching only the index and never the table
· INDEX_SS
· INDEX_SS_ASC
· INDEX_SS_DESC
· NO_INDEX
· NO_INDEX_FFS
· NO_INDEX_SS
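A sketch of INDEX_FFS, assuming a hypothetical index idx_store_id on tbl_store(store_id):

```sql
-- Fast full scan: answers the count from the index alone,
-- never touching the table blocks
SELECT /*+ INDEX_FFS(s idx_store_id) */ COUNT(store_id)
FROM tbl_store s;
```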
1.3 Query-transformation hints
These are very helpful for data warehouses.
· FACT
· MERGE
· NO_EXPAND syntax: /*+ NO_EXPAND */ column1, …
Ensures that an IN list assembled from OR conditions does not cause trouble, e.g. /*+ FIRST_ROWS NO_EXPAND */
· NO_FACT
· NO_MERGE
· NO_QUERY_TRANSFORMATION
· NO_REWRITE
· NO_STAR_TRANSFORMATION
· NO_UNNEST
· REWRITE
· STAR_TRANSFORMATION
· UNNEST
· USE_CONCAT
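A sketch of USE_CONCAT, reusing the article's tbl_store; the hint asks the optimizer to expand the OR into a concatenation of simpler queries:

```sql
-- Each OR branch can then use its own index access path
SELECT /*+ USE_CONCAT */ store_id
FROM tbl_store
WHERE store_id = 1 OR store_id = 2;
```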
1.4 Join-operation hints
These control how data from joined tables is merged. Two hints directly affect the join order: LEADING specifies the table to use first in the join order, and ORDERED tells the optimizer to join the tables in the order they appear in the FROM clause, using the first table as the driving table (the table with the most row accesses).
· ORDERED syntax: /*+ ORDERED */ column1, …
Tables are accessed in the order they appear after FROM
· LEADING syntax: /*+ LEADING (table1) */ column1, …
Similar to ORDERED, but only specifies the driving table
· NO_USE_HASH
· NO_USE_MERGE
· NO_USE_NL
· USE_HASH: given a sufficient HASH_AREA_SIZE or PGA_AGGREGATE_TARGET, usually provides the best response time for larger result sets
· USE_MERGE
· USE_NL: can usually return the first row as quickly as possible
· USE_NL_WITH_INDEX
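A sketch combining ORDERED and USE_NL, reusing the emp and dept tables that appear in the article's PUSH_SUBQ example:

```sql
-- emp, listed first in FROM, becomes the driving table;
-- dept is probed via nested loops
SELECT /*+ ORDERED USE_NL(d) */ e.empno, e.ename, d.deptno
FROM emp e, dept d
WHERE e.deptno = d.deptno;
```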
1.5 Parallel-execution hints
· NO_PARALLEL
· NO_PARALLEL_INDEX
· PARALLEL
· PARALLEL_INDEX
· PQ_DISTRIBUTE
1.6 Other hints
· APPEND: instead of checking whether there is free space in currently used blocks, inserts the data directly into new blocks at the end of the table
· CACHE: keeps the blocks of a full table scan in memory, so the data can be found in memory instead of being read again from disk
· CURSOR_SHARING_EXACT
· DRIVING_SITE
· DYNAMIC_SAMPLING
· MODEL_MIN_ANALYSIS
· NOAPPEND
· NOCACHE
· NO_PUSH_PRED
· NO_PUSH_SUBQ
· NO_PX_JOIN_FILTER
· PUSH_PRED
· PUSH_SUBQ: forces subqueries to execute first; when a subquery quickly returns a small number of rows, those rows can limit the rows returned by the outer query, which can greatly improve performance.
Example:
SELECT /*+ PUSH_SUBQ */ emp.empno, emp.ename
FROM emp, orders
WHERE emp.deptno = (SELECT deptno FROM dept WHERE loc = '1')
· PX_JOIN_FILTER
· QB_NAME
2. Tuning queries
2.1 Find the most resource-intensive queries in V$SQLAREA
· HASH_VALUE: hash value of the SQL statement
· ADDRESS: address of the SQL statement in the SGA
· PARSING_USER_ID: user who parsed the first cursor for the statement
· VERSION_COUNT: number of cursors for the statement
· KEPT_VERSIONS:
· SHARABLE_MEMORY: total shared memory used by the cursors
· PERSISTENT_MEMORY: total persistent memory used by the cursors
· RUNTIME_MEMORY: total runtime memory used by the cursors
· SQL_TEXT: text of the SQL statement (up to the first 1000 characters)
· MODULE, ACTION: information recorded via DBMS_APPLICATION_INFO when the session parsed the first cursor
· SORTS: number of sorts performed by the statement
· CPU_TIME: CPU time spent parsing and executing the statement
· ELAPSED_TIME: elapsed time spent parsing and executing the statement
· PARSE_CALLS: number of parse calls (soft and hard) for the statement
· EXECUTIONS: number of times the statement was executed
· INVALIDATIONS: number of cursor invalidations for the statement
· LOADS: number of times the statement was loaded (and unloaded)
· ROWS_PROCESSED: total number of rows returned by the statement
SELECT b.username, a.disk_reads, a.executions,
       a.disk_reads / DECODE(a.executions, 0, 1, a.executions) rds_exec_ratio,
       a.sql_text
FROM v$sqlarea a, dba_users b
WHERE a.parsing_user_id = b.user_id
  AND a.disk_reads > 100
ORDER BY a.disk_reads DESC
2.2 Find the most resource-intensive queries in V$SQL
Similar to V$SQLAREA:
SELECT * FROM
  (SELECT sql_text,
          RANK() OVER (ORDER BY buffer_gets DESC) AS rank_buffers,
          TO_CHAR(100 * RATIO_TO_REPORT(buffer_gets) OVER (), '999.99') pct_bufgets
   FROM v$sql)
WHERE rank_buffers