2025-01-16 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
Many newcomers are not clear about Oracle database optimization ideas or the nine typical problems involved. To help, the following walks through each of them in detail; readers who need this material can follow along and hopefully take something away from it.
Before a system goes live, the technical review of its infrastructure is usually accompanied by a round of rectification and optimization, and the database layer is the most important part of that work.
1. At installation time (document ID 1525820.1)
In fact, Oracle officially provides very detailed general best practices, with separate sets for different platforms, versions, uses, and so on.
For example:
1) RAC and Oracle Clusterware best practices and beginner's guide (platform independent section)
Document 810394.1
RAC and Oracle Clusterware Best Practices and Starter Kit (Platform Independent)
2) detailed best practices for a specific platform
Document 811306.1
RAC and Oracle Clusterware Best Practices and Starter Kit (Linux)
3) operating system configuration considerations
4) Virtualization considerations
5) Storage considerations
6) Network considerations
7) specific hardware considerations
2. Before testing and going live
During this phase, based on the specific application, the test results, and our varying understanding of the database, we accumulate experience and practices shaped by the industry background.
Typical questions:
1) How should redo logs be configured and optimized?
First, anyone who has worked with databases is familiar with this concept. When the database performs a SQL update, the transaction must first be recorded in the redo log, and the log is then flushed to disk before the data change is considered persistent; the criterion for a successful commit is that the log reaches disk, not the actual data blocks. The configuration of the logs (size and number) therefore directly affects the read/write performance of the database.

If the log files are very large, archive switching takes a very long time; should an unrecoverable database disaster occur at that moment, the amount of data lost, or the RPO achievable through backup recovery, will be large. If the log files are very small, log switches inevitably become frequent, the AWR report fills with log-switch wait events, and database performance suffers greatly. So use the log-switch wait events and switch frequency in the performance-test AWR report to decide whether the number and size of the logs need adjusting. For general OLTP workloads, 10 groups of 500 MB each are recommended.
Next, we need to consider the parameter settings associated with it.
For example, "_use_adaptive_log_file_sync" directly determines how sessions learn that their redo has been written. 11g added a new mechanism, polling, alongside the traditional post/wait, and the Oracle kernel automatically decides when and how the lgwr write is signalled as complete. With post/wait, after the client commits it waits on the event; as soon as lgwr finishes, the user process is posted and becomes aware immediately, but that post consumes considerable CPU. With polling, the foreground process periodically checks whether the background lgwr write has completed; this costs less CPU, but the user process may not notice completion immediately. Each method has its advantages. The key point is that the adaptive switching between the two itself consumes system resources, and during busy periods frequent switching can degrade database performance, showing up as a large number of 'log file sync' waits in AWR. See Bug 13707904.
For example, "archive_lag_target" determines whether forced log switching is enabled. To reduce data loss during a failure, set ARCHIVE_LAG_TARGET to force a log switch after a given interval. The default value is 0, meaning the feature is disabled; a value of 1800 (seconds) is recommended.
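A minimal sketch of the redo changes discussed above. The group numbers, the '+DATA' disk group, and the per-hour check are illustrative assumptions; they must be adapted to your own storage layout and verified against your AWR data.

```sql
-- Add larger redo groups per the general OLTP recommendation (10 groups, 500 MB);
-- repeat for each new group, then drop the old groups once they are INACTIVE.
ALTER DATABASE ADD LOGFILE GROUP 11 ('+DATA') SIZE 500M;
-- ALTER DATABASE DROP LOGFILE GROUP 1;

-- Check the switch frequency per hour before and after resizing:
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS switches
  FROM v$log_history
 GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
 ORDER BY 1;

-- Force a log switch at least every 1800 seconds, as recommended above:
ALTER SYSTEM SET archive_lag_target = 1800 SCOPE = BOTH SID = '*';
```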
2) What should we pay attention to in Oracle memory management, and how should it be configured?
First, Oracle has two general memory management modes: AMM and ASMM. Starting with Oracle 11g, AMM (Automatic Memory Management) is the default: the database fully manages the sizes of the SGA and PGA, and the administrator only sets a total (memory_target); the database then dynamically adjusts the SGA, the PGA, and the components they contain, such as the database buffer cache and the shared pool. The feature is designed to avoid the performance problems caused by badly chosen SGA and PGA sizes. But is it necessarily a good fit in practice? Under AMM, the SGA and PGA are initially allocated in fixed proportions at startup: sga_target = memory_target * 60%, pga_aggregate_target = memory_target * 40%.

However, when concurrency is high and the database is very busy, automatic adjustment may not keep pace with the memory requests of a large number of sessions. Moreover, as session counts climb and PGA demand surges, the PGA preempts the SGA, leading to database performance failures. AMM is therefore not recommended for highly concurrent databases. Instead, adopt the more mature 10g-style Automatic Shared Memory Management (ASMM) together with automatic PGA management, and set the memory parameters manually. For example:
# Turn off automatic memory management
memory_target=0
memory_max_target=0
# Fix the SGA size, following the AWR recommendations from performance testing
sga_max_size=XG
sga_target=XG
# Set the PGA and related pools
pga_aggregate_target=XG
large_pool_size=256M
Another important parameter is "_shared_pool_reserved_pct": if it is set too small, ORA-04031 errors may result. See TROUBLESHOOTING ORA-4031 - Simple Guide and Basic Approach to Solve the issue (document ID 1416817.1).
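One way to watch for the pressure that precedes ORA-04031 is the reserved-pool statistics view; a steadily growing failure count is an early warning sign. The threshold for acting, and any change to the hidden parameter, are judgment calls best made with Oracle Support.

```sql
-- Reserved shared-pool health check; rising REQUEST_FAILURES hints at ORA-04031.
SELECT requests, request_misses, request_failures, free_space
  FROM v$shared_pool_reserved;

-- "_shared_pool_reserved_pct" is a hidden parameter; the value 10 below is only
-- an illustration, to be changed under Oracle Support guidance.
-- ALTER SYSTEM SET "_shared_pool_reserved_pct" = 10 SCOPE = SPFILE;
```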
3) How should HugePages be configured on Linux?
Implementing HugePages in a Linux environment can greatly improve kernel performance, and the effect is especially pronounced on systems with large memory: in general, the more RAM a system has, the more it gains from enabling HugePages. This is because the work the kernel must do to map and maintain memory page tables grows with system memory; HugePages significantly reduces the number of pages the kernel has to manage and improves system efficiency. Experience shows that without HugePages it is common for kernel paging to crowd out critical Oracle Clusterware or Real Application Clusters daemons, which can lead to instance or node evictions. For configuration details, refer to: HugePages on Linux: What It Is... and What It Is Not... (document ID 361323.1)
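The sizing arithmetic can be sketched as below. It mirrors the approach of Oracle's hugepages_settings.sh script from the MOS note above; the 8 GB SGA and the 2 MB Hugepagesize are illustrative assumptions, not values from this article.

```shell
#!/bin/sh
# HugePages sizing sketch: pages needed = shared-memory (SGA) bytes / Hugepagesize.
# SGA_BYTES is a hypothetical example; on a real host you would sum the shared
# segments from `ipcs -m` with all instances up, and read the actual
# Hugepagesize from /proc/meminfo.

SGA_BYTES=$((8 * 1024 * 1024 * 1024))   # assume an 8 GB SGA for illustration
HPG_KB=2048                             # typical x86_64 Hugepagesize, in KB

# Round up: a partial page still needs a whole huge page.
NUM_PG=$(( (SGA_BYTES / 1024 + HPG_KB - 1) / HPG_KB ))

echo "vm.nr_hugepages = $NUM_PG"        # the line to place in /etc/sysctl.conf
```

With 2 MB pages, an 8 GB SGA comes out to 4096 huge pages. Remember that the oracle user's memlock limit in /etc/security/limits.conf must also be raised to cover the SGA.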
4) Which parameters related to SQL parsing should be optimized?
First, every SQL statement in Oracle must be parsed before it executes, and parsing is either soft or hard. Oracle statements fall into two classes: DDL statements (data definition language), which are never shared and are therefore hard parsed on every execution, and DML statements (data manipulation language), which are hard or soft parsed depending on the situation.
Generally we want the AWR report to show few hard parses and many soft parses, because hard parsing is very expensive. To reduce parse time for SQL that uses bind variables, Oracle 9i introduced bind variable peeking: the same cursor is reused when the same SQL is executed with different bind values, saving the cost of re-parsing. Have you actually compared the soft and hard parse counts in AWR with the feature on and off? Bind variable peeking, and the later adaptive cursor sharing, are features Oracle added to find the optimal execution plan, but in practice they have produced plenty of bugs across business scenarios of different scales and characteristics.
See Understanding and Diagnosing ORA-00600 [12333] / ORA-3137 [12333] Errors (document ID 389713.1).
Based on the characteristics of your own business system, run extensive performance and business tests with the features on and off, compare the soft/hard parse ratios shown in the AWR reports and the execution-plan data, and then decide whether to enable the corresponding features. The relevant parameters are:
"_optim_peek_user_binds"
"_optimizer_adaptive_cursor_sharing"
"_optimizer_extended_cursor_sharing"
"_optimizer_extended_cursor_sharing_rel"
"_optimizer_use_feedback"
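If testing does lead you to disable these features, the changes would look roughly like the following. These are hidden parameters, so the values shown are illustrative; the decision and the exact settings should come from your own AWR comparisons and, ideally, Oracle Support advice for your version.

```sql
-- Disable bind peeking and the adaptive cursor-sharing features (illustrative).
ALTER SYSTEM SET "_optim_peek_user_binds"                 = FALSE  SCOPE = SPFILE;
ALTER SYSTEM SET "_optimizer_adaptive_cursor_sharing"     = FALSE  SCOPE = SPFILE;
ALTER SYSTEM SET "_optimizer_extended_cursor_sharing"     = 'NONE' SCOPE = SPFILE;
ALTER SYSTEM SET "_optimizer_extended_cursor_sharing_rel" = 'NONE' SCOPE = SPFILE;
ALTER SYSTEM SET "_optimizer_use_feedback"                = FALSE  SCOPE = SPFILE;
```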
Next, two related parameters, open_cursors and session_cached_cursors, control how many cursors a session can open and cache. If the limits are too low, SQL parsing performance suffers. Adjust them according to the values in the v$resource_limit view to avoid performance problems caused by unreasonable resource settings.
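A quick way to apply this advice: check current versus maximum utilization first, then raise the limits only if they are actually being approached. The numeric values below are examples, not recommendations.

```sql
-- How close are we to the configured resource limits?
SELECT resource_name, current_utilization, max_utilization, limit_value
  FROM v$resource_limit;

ALTER SYSTEM SET open_cursors = 1000 SCOPE = BOTH;            -- example value
ALTER SYSTEM SET session_cached_cursors = 200 SCOPE = SPFILE; -- static; needs a restart
```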
Also relevant to execution plans is _b_tree_bitmap_plans: the optimizer sometimes applies a BITMAP conversion to B-tree indexes during SQL execution, which tends to produce extremely bad plans and drive CPU to 100%. See Select Fails With ORA-600 [20022] (document ID 1202646.1). It is recommended that this parameter be turned off.
5) How can fierce contention between database cluster nodes be avoided?
There are many kinds of contention between database nodes, including lock contention (at various granularities) and data-block transfer. Avoiding contention completely would defeat the purpose of RAC, which is precisely to let tasks run in parallel on multiple nodes. Taken to an extreme, parallelism causes serious performance problems; banned entirely, the cluster loses its original meaning. So we can only strike a balance:
First, regarding Oracle's DRM (Dynamic Resource Mastering) feature: in theory it avoids shipping data blocks between nodes and the frequent inter-node lock waits, its ultimate goal being to make the requesting node and the master node the same. In practice, however, the feature has triggered many bugs and has itself caused performance failures under inter-node contention; see Bug 6018125 - Instance crash during dynamic remastering or instance reconfiguration (Doc ID 6018125.8). It is therefore recommended to disable it.
Then there is the parameter "parallel_force_local". Oracle RAC pays a high price for multi-node parallel processing. Suppose a cluster has three nodes: for a given data block there is a master, a requester, and an owner. The requester asks the master for the latest version of the block, the master forwards the request to the owner, and the owner ships the block to the requester, which then takes the appropriate lock to read or write it. This involves a great deal of data transfer and contention; once it becomes the majority of activity, inter-node communication load soars, lock waits pile up, and overall database performance suffers severely, especially in cross-data-center high-availability deployments. What we actually need is concurrency at the business level, not absolute parallelism at the level of a single SQL statement; pushed to the extreme, things turn into their opposite. So enable this parameter, keeping parallel slave processes local to one node instead of spanning nodes. It is one of the parameters that official document ID 1536272.1 says must be optimized.
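The two node-contention settings discussed above might be applied as follows. "_gc_policy_time" = 0 is the hidden switch commonly associated with disabling DRM; that mapping is an assumption here and should be confirmed with Oracle Support for your version before touching a hidden parameter.

```sql
-- Disable DRM (hidden parameter; assumption, verify with Oracle Support first):
ALTER SYSTEM SET "_gc_policy_time" = 0 SCOPE = SPFILE SID = '*';

-- Keep parallel slaves on the node where the query coordinator runs:
ALTER SYSTEM SET parallel_force_local = TRUE SCOPE = BOTH SID = '*';
```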
6) What about the database's automatic maintenance tasks?
The Oracle 11g database has three predefined automatic maintenance tasks:
Automatic Optimizer Statistics Collection:
Collects optimizer statistics for all schema objects in the database that have no statistics or only stale statistics. The SQL query optimizer uses the statistics collected by this task to improve SQL execution performance.
Automatic Segment Advisor:
Identifies segments with reclaimable space and makes recommendations on how to defragment them. You can also run the Segment Advisor manually to obtain more up-to-date recommendations, or recommendations for segments the automatic run did not examine but that are likely candidates for space reclamation.
Automatic SQL Tuning Advisor: examines the performance of high-load SQL statements and recommends how to optimize them. You can configure this advisor to automatically apply the recommended SQL profiles.
As for statistics collection, the database has a default start time; in 11g the window runs from 22:00 to 02:00. If that conflicts with our batch window, we can change when the task runs, but the task itself must be kept.
As for the other two advisors, it depends on how often we actually use them and whether their recommendations provide valuable guidance in practice. In general, if you are not making good use of them, you may as well disable them.
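If you decide to keep statistics collection but drop the two advisors, the standard DBMS_AUTO_TASK_ADMIN package does this; a sketch:

```sql
-- Keep 'auto optimizer stats collection' enabled; disable the two advisors.
BEGIN
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'auto space advisor',
    operation   => NULL,
    window_name => NULL);
END;
/

-- Verify:
SELECT client_name, status FROM dba_autotask_client;
```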
7) What security-related configuration optimizations are there?
First, should the database keep auditing enabled, and if so, how? If we do not enable it, we will have no trail to follow when a security problem arises later; if we do, the audit logs may consume a great deal of storage and even hurt database I/O performance. In general we should still audit basic login behavior, but we can redirect the log destination to the operating system level to relieve the database layer, and dump the files regularly to avoid the extreme case where so many small files accumulate that the filesystem runs out of inodes. This optimization is made by adjusting the "AUDIT_TRAIL" parameter and the adump destination.
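Concretely, moving the audit trail to OS files might look like this; the adump path is illustrative, and both parameters are static, so the change takes effect at the next restart.

```sql
-- Write audit records to OS files instead of the database (static parameters):
ALTER SYSTEM SET audit_trail = OS SCOPE = SPFILE;
ALTER SYSTEM SET audit_file_dest = '/u01/app/oracle/admin/orcl/adump' SCOPE = SPFILE;
```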
Next come the parameters controlling the alert log and trace files. "MAX_DUMP_FILE_SIZE" limits the size of these files and is UNLIMITED by default; if a very large file is generated, it can run into the OS file-size limit and cause writes to fail.
Finally, all of this output can be redirected to the OS, or can rely on OS log and audit files. Be sure to watch how it consumes OS inode resources: `df -h` can look normal while `df -i` does not. This is an easily overlooked point; you need a clear grasp of both the monitoring and the OS limits on the user's resource parameters.
8) What should be watched with ADG?
ADG is itself a disaster-recovery mechanism, so there are many points to monitor: the status of the primary and standby databases, log shipping and apply status, whether a GAP has opened between them, and so on. But what I want to highlight here is something easily overlooked.
First, the RMAN deletion-policy setting on the standby database:
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
This setting protects archived logs that have not yet been applied from being deleted. It is actually no longer needed in later 11g releases, but lower versions should take note. For details, refer to document ID 1577382.1.
9) What else deserves attention in day-to-day database management?
For example:
Do the tablespace data files auto-extend?
Are all tablespace data files on ASM?
Is the ASM redundancy level consistent?
Has the default 180-day password expiry policy been lifted for users, and so on?
Are the database monitoring metrics fully covered (cluster, services, listeners, ASM, tablespaces, performance, etc.)?
Is OS-level monitoring in place, especially for private-network communication, CPU, and memory? Is it nmon or OSWatcher, and do their logs rotate regularly or grow forever, and so on?
Is the database inspection regime complete? Are the daily and monthly inspection contents carefully designed? Has it been automated? It is strongly recommended that daily inspection be automated: run the tasks on a schedule, consolidate the logs into a shared filesystem, and if possible load them into a database so they can be pulled in and out as needed, according to your own inspection mechanism and system.