
Performance Analysis of a MySQL Server

2025-01-19 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article walks through performance analysis of a MySQL server. The notes are fairly detailed and should make a useful reference; if the topic interests you, read on!

3.3.3 Performance analysis has its limits

3.4 Diagnosing intermittent problems

If the system occasionally pauses, runs slow queries, or throws intermittent errors, resist the temptation to fix it by trial and error: the risk is high.

3.4.1 Single-query problem or server-wide problem? Use SHOW GLOBAL STATUS

Run the command at a fairly high frequency, such as once per second; problems show up as spikes or dips in the counters.
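The polling idea can be sketched as a one-liner that pipes `mysqladmin ext -i1` into awk and prints the per-second delta of a counter. Since that needs a live server, the demo below feeds the same awk program two canned `SHOW GLOBAL STATUS`-style rows; the sample values are made up for illustration.

```shell
# Live usage (requires a running server):
#   mysqladmin ext -i1 | awk '$2 == "Questions" { if (prev) print "QPS:", $4 - prev; prev = $4 }'
# Self-contained demo with canned status output:
printf '%s\n' \
  '| Questions | 100 |' \
  '| Questions | 175 |' |
awk '$2 == "Questions" { if (prev) print "QPS:", $4 - prev; prev = $4 }'
# prints: QPS: 75
```

A dip or spike in the printed rate at the moment of a stall is exactly the signal the text describes.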

Use SHOW PROCESSLIST [reference] to see which threads are running and in what state.

Use the query log

Enable the slow query log and set the global long_query_time=0, then confirm that all connections have picked up the new setting (you may need to reset all connections for it to take effect).
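A sketch of the settings just described, assuming a user with privileges to change global variables; note that existing sessions keep their old long_query_time until they reconnect:

```sql
-- Capture every query in the slow query log
-- (default log_output and slow_query_log_file are assumed):
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 0;
-- Verify the setting took effect:
SHOW GLOBAL VARIABLES LIKE 'long_query_time';
```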

Watch the log during periods when throughput suddenly drops, and remember that a query is written to the slow query log only when it completes.

Good tools get twice the result with half the effort: tcpdump, pt-query-digest, Percona Server.

Understand the problems you find

Visualize the data: gnuplot / R (plotting tools)

Gnuplot:

Installation and basic usage: see the getting-started tutorial "Common skills 2: Gnuplot data visualization".
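A minimal gnuplot sketch of the visualization step; the data file name and column layout are assumptions (one line per second with elapsed time and QPS, as produced by a polling script):

```gnuplot
# qps.dat is assumed to hold two columns: elapsed seconds, queries per second
set xlabel "time (s)"
set ylabel "queries per second"
plot "qps.dat" using 1:2 with lines title "QPS"
```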

Suggestion: try the first two methods first; they have low overhead and can collect data interactively through simple shell scripts or repeated queries.

3.4.2 Capturing diagnostic data

Since the problem is intermittent, collect as much diagnostic data as possible while it is actually happening, which means you need to know when that is.

Two things to figure out: 1) a way to tell when something has gone wrong (a trigger); 2) tools to collect the diagnostic data.

Diagnostic triggers

False positives: collecting piles of diagnostic data during periods when nothing is wrong wastes time (this does not contradict the previous point; read it carefully).

Missed detection: capturing no data when the problem occurred means a lost opportunity; confirm that the trigger can really identify the problem before you start collecting.

Good triggers:

Find an indicator that can be compared against a normal threshold.

Choose an appropriate threshold: high enough not to fire during normal operation, but not so high that it misses the problem when it occurs.

Recommended tool: pt-stalk [reference] [2]. In its configuration you specify the variable to monitor, the threshold that triggers collection, and how often to check it.
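A minimal sketch of the trigger idea pt-stalk implements: watch one counter and start collecting only when it crosses a threshold. The threshold value and the function name are assumptions for illustration; in practice the reading would come from `SHOW GLOBAL STATUS LIKE 'Threads_running'`.

```shell
# Hypothetical trigger sketch (the idea behind pt-stalk's variable/threshold check):
THRESHOLD=25   # assumed value: normal load should never trip it

should_collect() {
  # $1 = current Threads_running reading
  [ "$1" -gt "$THRESHOLD" ]
}

# In a real loop the reading would come from SHOW GLOBAL STATUS once per check interval.
if should_collect 40; then
  echo "trigger fired: start collecting diagnostics"
else
  echo "normal: keep watching"
fi
# prints: trigger fired: start collecting diagnostics
```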

What data should be collected?

Execution time: time spent working and time spent waiting.

Collect all the data you can within the required time window.

When the cause is unknown, it is usually one of two things: 1) the server is doing a lot of work, consuming a lot of CPU; 2) it is stuck waiting for resources to be released.

Use different collection methods to identify which it is:

1. Profiling the workload: confirm whether there is too much work. Tools: tcpdump to capture the TCP traffic, or turning the slow query log on and off.
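A command sketch of the tcpdump approach just mentioned: capture the MySQL traffic and let pt-query-digest aggregate it, so no server-side logging is needed. It assumes the default port 3306, sufficient privileges to capture packets, and a bounded capture (`-c`):

```shell
# Capture 1000 packets of MySQL traffic, then summarize the queries they contain.
tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt
pt-query-digest --type tcpdump mysql.tcp.txt
```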

2. Wait analysis: confirm whether there is a lot of waiting. Use GDB stack traces, SHOW PROCESSLIST, and SHOW ENGINE INNODB STATUS to observe thread and transaction states.
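For the GDB stack-trace side of wait analysis, a common sketch (the "poor man's profiler" idea) is to snapshot every mysqld thread's stack a few times; stacks that keep reappearing at the same wait point to the bottleneck. It assumes gdb is installed and you have permission to attach, and note that attaching briefly pauses the server:

```shell
# Snapshot all thread stacks of the running mysqld process.
gdb -p "$(pidof mysqld)" -batch \
    -ex "set pagination 0" \
    -ex "thread apply all bt" > mysqld-stacks.txt
```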

Interpreting the resulting data

Goals: 1) confirm whether the problem has really occurred; 2) look for an obvious abrupt change.

Tools:

Oprofile uses the performance counters provided by the CPU hardware to help find the CPU "culprit" at the process, function, and code level. Example: [reference]

The opreport command shows CPU usage at the process and function level.

Its columns are: samples (the number of samples that fell in the image), % (that count as a percentage of total samples), and the image name.

The opannotate command shows CPU usage annotated at the source-code level.

GDB: in Linux application development, the most commonly used debugger is gdb (it debugs an executable file). It can set breakpoints in a program, inspect variable values, step through the program's execution (data and source code), and examine memory and stack information. With these features you can conveniently track down non-syntactic errors in a program. [reference] syntax and examples

3.4.3 A diagnostic case

An intermittent performance problem; diagnosing it requires knowledge of MySQL, InnoDB, and GNU/Linux.

Be clear about: 1) what the problem is, with a crisp description; 2) what has already been done to try to solve it.

Start by: 1) understanding the server's behavior; 2) going through the server's status parameters and its software and hardware configuration (pt-summary, pt-mysql-summary).

Don't get distracted by tangents that stray too far from the topic. Write questions on a note, check them one by one, and cross them off.

Is it the cause or the effect?

Possible reasons a resource becomes inefficient:

(1) the resource is overused and has no capacity left; (2) the resource is not configured correctly; (3) the resource is damaged or malfunctioning.

3.5 Other profiling tools

USER_STATISTICS: a set of tables for measuring and auditing database activity.

strace: investigates system calls. It measures wall-clock time, which is unpredictable, and it adds overhead; oprofile, by contrast, measures CPU cycles.

Summary:

The most effective way to define performance is response time

If it cannot be measured, it cannot be effectively optimized; performance optimization work must be based on high-quality, comprehensive, and complete response-time measurements.

The best place to start measuring is the application. Even if the problem lies in the underlying database, good measurements make it much easier to find.

Most systems cannot be measured completely, and measurements sometimes give wrong results. Try to work around the limitations, and stay aware of the method's shortcomings and uncertainty.

Complete measurements produce a lot of data to analyze, so you need a profiler (the best tool for the job).

Profile reports summarize information: they cover up and discard a lot of detail, they will not tell you what is missing, and they cannot be relied on completely.

Time is spent in one of two ways: working or waiting. Most profilers can only measure time spent working, so wait analysis is sometimes a useful supplement, especially when CPU utilization is low yet the work is not getting done.

Optimization and improvement are two different things: stop optimizing when the rising cost exceeds the benefit.

Be wary of your intuition; base your thinking and decisions on data as much as possible.

In a word: clarify the problem first, choose the right technique, make good use of tools, be meticulous, stay logical and persistent, don't confuse cause with effect, and don't change the system before you have identified the problem.
