2025-02-24 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 05/31 Report --
Does MySQL Stmt preprocessing (prepared statements) improve efficiency? This article examines the question in detail and presents the analysis and results, in the hope of helping readers facing the same problem find a simple, workable answer.
Oracle supports bind variables, a technique many people know well: it can improve database efficiency and helps cope with high concurrency. I was not among those people. When a colleague asked me whether MySQL had a similar construct, I was at a loss, so I searched online and found the following approach:
SET @stmt = 'SELECT userid, username FROM myuser WHERE userid BETWEEN ? AND ?';
PREPARE s1 FROM @stmt;
SET @s1 = 2;
SET @s2 = 100;
EXECUTE s1 USING @s1, @s2;
DEALLOCATE PREPARE s1;
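The same bind-variable pattern exists in most client libraries. As a minimal illustration of the idea, here is a sketch using Python's sqlite3 module rather than MySQL, with a toy stand-in for the myuser table (the table contents are invented; only the pattern of one fixed query text with varying bound parameters is the point):

```python
import sqlite3

# Toy in-memory database standing in for the article's myuser table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myuser (userid INTEGER, username TEXT)")
conn.executemany(
    "INSERT INTO myuser VALUES (?, ?)",
    [(i, f"user{i}") for i in range(1, 201)],
)

# One fixed query text; only the bound values change between executions,
# mirroring PREPARE ... EXECUTE USING @s1, @s2 above.
query = "SELECT userid, username FROM myuser WHERE userid BETWEEN ? AND ?"
rows = conn.execute(query, (2, 100)).fetchall()
print(len(rows))  # 99 rows: userid 2 through 100
```

Whether this kind of binding actually saves server-side work is exactly the question the experiment below tries to answer.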
With a query written in this form, you can substitute parameters freely; the person who posted the code called it preprocessing, and I take it to be MySQL's form of variable binding. While researching, however, I encountered two opposing views. One holds that MySQL's analogue of Oracle's variable binding has no practical effect: it is merely convenient to write and does not improve efficiency. This claim appears in several posts from 2009.
The other view is that variable binding in MySQL really can improve efficiency. Rather than take either claim on faith, let's test it ourselves.
The experiment was run on my local machine. The data set is small, so the absolute numbers have no practical significance, but they are enough to illustrate the point. The database version is the mysql-5.1.57-win32 no-install edition.
Since I am not very familiar with the database ^_^, the experiment took many detours; this article focuses on the conclusions rather than the design process. The writing may be a bit dry, and I welcome corrections, because the conclusion I reached is that, with or without the query cache, prepared statements are no more efficient than direct execution, a result I find hard to accept. If the purpose of preprocessing is merely to normalize queries and improve the cache hit rate, that strikes me as overkill. I hope someone more knowledgeable can point out what the facts really are. -- NewSilen
Experimental preparation
The first file, NormalQuery.sql
SET profiling = 1;
SELECT * FROM MyTable WHERE DictID = 100601000004;
SELECT DictID FROM MyTable LIMIT 1,100;
SELECT DictID FROM MyTable LIMIT 2,100;
/* ... repeated statements from LIMIT 1,100 through LIMIT 100,100 omitted ... */
SELECT DictID FROM MyTable LIMIT 100,100;
SELECT query_id, seq, STATE, 10000*DURATION FROM information_schema.profiling
  INTO OUTFILE 'd:/NormalResults.csv' FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';
The second file, StmtQuery.sql:
SET profiling = 1;
SELECT * FROM MyTable WHERE DictID = 100601000004;
SET @stmt = 'SELECT DictID FROM MyTable LIMIT ?,?';
PREPARE s1 FROM @stmt;
SET @s = 100;
SET @s1 = 101;
SET @s2 = 102;
/* ... assignments omitted, up to SET @s100 = 200 ... */
SET @s100 = 200;
EXECUTE s1 USING @s1, @s2;
/* ... the remaining EXECUTE statements are omitted ... */
SELECT query_id, seq, STATE, 10000*DURATION FROM information_schema.profiling
  INTO OUTFILE 'd:/StmtResults.csv' FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';
A few notes:
1. After SET profiling=1; is executed, the details of each statement's execution can be read from the information_schema.profiling table, which contains a great deal of information, including the timing data I need. It is a session-level temporary table: each new session must set the profiling property again before data can be read from it.
2. SELECT * FROM MyTable WHERE DictID = 100601000004; -- this line seems to have nothing to do with the experiment, and I originally thought so too. I added it because, in earlier exploration, I found that one step of execution is opening the table, and opening a table for the first time takes quite a long time. So before running the statements that follow, I execute this query once to open the experimental table.
3. By default, MySQL keeps 15 query histories in information_schema.profiling; this can be adjusted via the profiling_history_size variable. I wanted it larger so I could pull out enough data at once, but the maximum is 100. Although I set it to 150, only 100 entries were available in the end, which is enough.
4. I have not listed all the SQL, since the query statements are similar; the code above uses ellipses. The end result is two CSV files. Saving to files is a personal habit; you could also load the results into a database for analysis.
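The hundred near-identical statements elided above are tedious to type by hand; a small script can generate them. A sketch, assuming the statement shapes shown earlier (MyTable, DictID, and the LIMIT ranges come from the article; the exact file contents are my reconstruction, not the author's original files):

```python
def normal_query_sql(start=1, end=100, count=100):
    """Generate the repetitive body of NormalQuery.sql:
    one SELECT ... LIMIT offset,count statement per offset."""
    lines = [
        "SET profiling = 1;",
        "SELECT * FROM MyTable WHERE DictID = 100601000004;",
    ]
    for offset in range(start, end + 1):
        lines.append(f"SELECT DictID FROM MyTable LIMIT {offset},{count};")
    return "\n".join(lines)

sql = normal_query_sql()
print(sql.splitlines()[2])    # first generated query
print(len(sql.splitlines()))  # 2 header lines + 100 queries = 102
```

The final INTO OUTFILE export statement would be appended after these queries, as in the listing above.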
Experimental procedure
Restart the database, execute NormalQuery.sql, then StmtQuery.sql, producing two result files.
Restart the database again, execute StmtQuery.sql first, then NormalQuery.sql, producing two more result files.
Analysis of experimental results
Each SQL file executes a hundred query statements. There are no duplicate queries, so the query cache is not involved. The average execution time per SQL statement is calculated as follows.
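The averaging step can be reproduced from the exported CSV files. A sketch assuming the four-column layout written by the INTO OUTFILE statements above (query_id, seq, state, 10000*duration); the sample numbers here are invented for illustration, not the article's measurements:

```python
import csv
import io
from collections import defaultdict

# Invented sample rows in the OUTFILE layout:
# query_id, seq, state, 10000*duration.
sample = """1,2,starting,4
1,3,executing,12
2,2,starting,5
2,3,executing,11
"""

# Sum the per-step durations for each query_id.
totals = defaultdict(float)
for query_id, seq, state, dur in csv.reader(io.StringIO(sample)):
    totals[query_id] += float(dur)

per_query = dict(totals)
avg = sum(per_query.values()) / len(per_query)
print(per_query)  # total time per query
print(avg)        # mean over all queries
```

With real data, `sample` would be replaced by `open('d:/NormalResults.csv')` or the Stmt equivalent.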
As the results show, whether executed first or second, the statements in NormalQuery are faster than those using prepared statements.
Now look at the details of each query. The Normal and Stmt queries were each executed 200 times; the per-step details are as follows:
Two things stand out. First, a normal query has one step fewer than a stmt query. Second, although stmt beats normal on many individual steps, it loses so much time in the "executing" step that it loses overall.
Finally, here are the results with the query cache enabled (the specific steps are not listed).
With the query cache, Normal wins again.
This concludes the investigation into whether MySQL Stmt preprocessing improves efficiency. I hope the content above is of some help to you.