2025-02-24 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/03 Report--
This article explains how to use a database connection pool correctly. The right pool size is counterintuitive, so we will walk through a stress-test case study, the underlying theory, and a practical sizing formula.
Connection pool configuration is a spot where developers often stumble. When configuring a database connection pool, there are several principles that can fairly be called counterintuitive, and they need to be made clear.
10,000 concurrent users
Imagine you have a website that, while not under Facebook-level pressure, serves about 10,000 concurrent users, roughly 20,000 TPS. How big should this site's database connection pool be? The answer may surprise you, because the correct way to ask the question is:
"How small should this site's database connection pool be?"
In the video, an Oracle database is stress tested: 9,600 concurrent threads perform database operations, sleeping 550 ms between every two operations. The test starts with a middleware connection pool size of 2048:
Initial configuration
After the stress test runs, it looks like this:
Performance data with 2048 connections
Each request waits 33 ms in the connection pool queue, and executing the SQL takes 77 ms after a connection is obtained.
At this point, the database wait events look like this:
All kinds of buffer busy waits
All kinds of buffer busy waits; database CPU is at about 95% (the CPU chart is not captured in this screenshot).
Next, the middleware connection pool is reduced to 1024 (concurrency unchanged), and the performance data looks like this:
After the connection pool has dropped to 1024
The wait time to obtain a connection has not changed much, but the time to execute SQL has dropped.
In the chart below, the top half is wait, and the bottom half is throughput.
Wait and Throughput
As you can see, after the middleware connection pool was halved from 2048 to 1024, throughput stayed the same but wait events were halved.
Next, the connection pool is reduced to 96, with the number of concurrent threads unchanged at 9,600.
Performance data for 96 connections
Queue wait is 1 ms on average, and SQL execution takes 2 ms on average.
Wait events are almost gone, and throughput is up.
Nothing else was adjusted. Just reducing the connection pool in the middleware layer cut the request response time from about 100 ms to 3 ms.
But why?
Why does nginx, with just 4 threads, so dramatically outperform an Apache HTTPD running 100 processes? Look back at the basics of computer science and the answer is actually obvious.
Even a single-core computer can "run" hundreds of threads at the same time. But we all know this is just a trick the operating system plays with time slicing. A CPU core can execute only one thread at a time; the operating system then performs a context switch, the core starts executing another thread's code, and so on. Given a single CPU core, executing A and B sequentially is always faster than executing A and B "simultaneously" through time slicing. That is a basic law of computer science. Once the number of threads exceeds the number of CPU cores, adding more threads only makes things slower, not faster.
This is almost the truth.
Limited resources
The statement above is close to the truth, but it is not quite that simple; a few other factors need to be added. When we look for database performance bottlenecks, we can always classify them into three categories: CPU, disk, and network. There would be nothing wrong with adding memory, but memory bandwidth is several orders of magnitude higher than disk or network bandwidth, so we leave it out.
If we ignore disk and network, the conclusion is very simple: on an 8-core server, setting the number of connections/threads to 8 gives optimal performance, and adding more connections only degrades performance through context-switch overhead.
Databases usually store data on disk. A traditional disk is made up of rotating metal platters and read/write heads mounted on a stepper-motor-driven arm. The head can be in only one place at a time; to perform a read or write elsewhere, it must "seek" to the new location. So there is seek time, and there is also rotational latency: the head must wait for the target data on the platter to "rotate into place" before it can operate. Caching certainly helps, but the principle still holds.
During this time, the thread is blocked waiting for the disk, and the operating system can use that idle CPU core to serve other threads. So, since threads are always blocking on I/O, we can have more threads/connections than CPU cores and still get more work done in the same amount of time.
How many more? It depends on the disk. Newer SSDs have no seek time and no rotating platters. Don't jump to "SSD is faster, so we should increase the number of threads." On the contrary: no seek time and no rotational latency mean less blocking, so fewer threads (closer to the number of CPU cores) will perform better. More threads help only when blocking creates extra opportunities to execute.
Networks are similar to disks: reading and writing data through an Ethernet interface also blocks. A 10G link blocks less than a 1G link, and a 1G link less than 100M. But the network is usually the third-priority concern, and some people omit it from their performance calculations entirely.
The figure above shows benchmark data for PostgreSQL: TPS growth starts to flatten out at around 50 connections. And in the Oracle video above, they reduced the number of connections from 2048 to 96, yet even 96 is probably too high unless the server has 16 or 32 cores.
Calculation formula
The following formula comes from PostgreSQL, but we believe it applies broadly to most database products. You should simulate your expected traffic and test your application around this formula to find the most appropriate pool size.
connections = ((core_count * 2) + effective_spindle_count)
The core count should not include hyperthreaded "cores," even if hyperthreading is enabled. If all the active data is cached, the effective spindle count is 0; as the cache hit ratio falls, it approaches the actual number of spindles. How this formula applies to SSDs has not been analyzed.
According to this formula, the pool size for your 4-core i7 database server should be ((4 * 2) + 1) = 9. Round it up and call it 10. Think that's too small? Run a performance test: we are confident a pool of that size can easily handle 3,000 users executing simple queries concurrently at 6,000 TPS. If the pool size goes above 10, you will see response times start to climb and TPS start to fall.
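The formula is easy to capture in a small helper. Here is a sketch in Python (the function name and rounding are my own, not part of the PostgreSQL guidance):

```python
def pool_size(core_count: int, effective_spindle_count: int) -> int:
    """PostgreSQL's suggested starting point for connection pool size.

    core_count should exclude hyperthreaded "cores"; effective_spindle_count
    approaches 0 as the cache hit ratio approaches 100%.
    """
    return (core_count * 2) + effective_spindle_count

# The 4-core i7 example from the text: (4 * 2) + 1 = 9
print(pool_size(4, 1))  # → 9
```

Remember this is only a starting point; the article's advice is to tune around it with a load test, not to treat the number as final.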
The author's note:
In fact, this formula applies not only to database connection pools: for most programs that mix computation and I/O, the thread count can be set by reference to it. When I stress-tested a messaging service written in Netty, I found that the optimal number of threads was exactly twice the number of CPU cores.
Axiom: you need a small connection pool, with a queue of threads waiting for connections
If you have 10,000 concurrent users, a pool of 10,000 connections is basically insane. 1,000 is still scary; even 100 is too many. You want a small pool of roughly 10 connections, and then let the remaining business threads wait in a queue. The pool size should equal the number of queries your database can effectively execute at the same time (usually no more than 2 × the number of CPU cores).
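The "small pool plus a queue of waiting threads" shape can be sketched with a blocking queue standing in for the pool. The `ConnectionPool` class and all the numbers here are illustrative, not from the article:

```python
import queue
import threading

class ConnectionPool:
    """A tiny fixed-size pool: threads beyond the pool size block in acquire()."""

    def __init__(self, size: int):
        self._pool: "queue.Queue[str]" = queue.Queue()
        for i in range(size):
            self._pool.put(f"conn-{i}")  # stand-in for a real DB connection

    def acquire(self) -> str:
        return self._pool.get()  # blocks until a connection is free

    def release(self, conn: str) -> None:
        self._pool.put(conn)

pool = ConnectionPool(size=10)
in_use = 0
peak = 0
lock = threading.Lock()

def worker() -> None:
    global in_use, peak
    conn = pool.acquire()
    with lock:
        in_use += 1
        peak = max(peak, in_use)  # track how many connections are out at once
    # ... run the query on `conn` here ...
    with lock:
        in_use -= 1
    pool.release(conn)

# 100 "business threads" contend for 10 connections; the rest simply wait.
threads = [threading.Thread(target=worker) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds the pool size of 10
```

The point of the sketch is that concurrency at the database is capped by the pool, not by the number of business threads: the extra 90 threads queue cheaply in `acquire()` instead of piling load onto the database.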
We often see small web applications that serve a dozen or so concurrent users yet use a pool of 100 connections. That places a wildly unnecessary burden on the database.
A note of caution
The right pool size ultimately depends on the characteristics of the system.
For example, a system mixing long and short transactions is hard for any single connection pool to tune. The best approach is to create two pools: one for long transactions and one for short transactions.
For another example, in a system that executes a task queue allowing only a fixed number of tasks to run at once, the task concurrency should be adapted to the pool size, not the other way around.
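The two-pool idea can be sketched with queue-backed pools sized per workload. All names and sizes here are hypothetical, purely for illustration:

```python
import queue

def make_pool(name: str, size: int) -> "queue.Queue[str]":
    """Illustrative stand-in for a real connection pool of the given size."""
    pool: "queue.Queue[str]" = queue.Queue()
    for i in range(size):
        pool.put(f"{name}-conn-{i}")
    return pool

# Size each pool for its workload: short transactions get most connections,
# long-running reports get a few of their own so they can't starve the rest.
short_tx_pool = make_pool("short", size=8)
long_tx_pool = make_pool("long", size=2)

def pool_for(transaction_kind: str) -> "queue.Queue[str]":
    return long_tx_pool if transaction_kind == "report" else short_tx_pool

conn = pool_for("report").get()  # blocks if both long-tx connections are busy
# ... run the long-running report on `conn` ...
pool_for("report").put(conn)
```

The design choice is isolation: a burst of slow reports can exhaust only its own two connections, while the short-transaction pool stays responsive.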