2025-02-27 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 06/01 Report
This article explains how to configure a high-performance sql.DB connection pool in Go. It covers the SetMaxOpenConns, SetMaxIdleConns and SetConnMaxLifetime methods, uses benchmarks to show their impact, and ends with practical recommendations.
Open and idle connections
A sql.DB object is a database connection pool containing both "in-use" and "idle" connections. A connection is in use while you are performing a database task with it, such as executing an SQL statement or querying rows. When the task completes, the connection becomes idle.
When you ask sql.DB to perform a database task, it first checks whether an idle connection is available in the pool. If one is available, Go reuses it and marks it as in use for the duration of the task. If no idle connection is available when one is needed, Go creates a new connection.
SetMaxOpenConns method
By default, there is no limit on the number of connections that may be open at the same time (in use + idle). However, you can impose a limit with the SetMaxOpenConns() method, as shown below:
```go
// Initialise a new connection pool.
db, err := sql.Open("postgres", "postgres://user:pass@localhost/db")
if err != nil {
	log.Fatal(err)
}

// Set the maximum number of concurrently open connections (idle + in use)
// to 5. Setting this to 0 means no limit, which is also the default.
db.SetMaxOpenConns(5)
```
In this sample code, the connection pool now has a maximum of five concurrently open connections. If all five are in use when a new connection is needed, the application will block until one of them becomes idle.
To illustrate the impact of changing MaxOpenConns, I ran a benchmark with the maximum number of open connections set to 1, 2, 5, 10 and unlimited. The benchmark executes parallel INSERT statements against a PostgreSQL database. The results:
```
BenchmarkMaxOpenConns1-8                 500    3129633 ns/op    478 B/op    10 allocs/op
BenchmarkMaxOpenConns2-8                1000    2181641 ns/op    470 B/op    10 allocs/op
BenchmarkMaxOpenConns5-8                2000     859654 ns/op    493 B/op    10 allocs/op
BenchmarkMaxOpenConns10-8               2000     545394 ns/op    510 B/op    10 allocs/op
BenchmarkMaxOpenConnsUnlimited-8        2000     531030 ns/op    479 B/op     9 allocs/op
PASS
```
For this benchmark, we can see that the more connections are allowed to be open, the less time the INSERT operations take: with 1 open connection the benchmark ran at 3129633 ns/op, while with unlimited connections it ran at 531030 ns/op, about six times faster. This is because the more open connections you allow, the more database queries can execute concurrently.
SetMaxIdleConns method
By default, sql.DB allows up to 2 idle connections to be reserved in the connection pool. You can change it through the SetMaxIdleConns () method, as follows:
```go
// Initialise a new connection pool.
db, err := sql.Open("postgres", "postgres://user:pass@localhost/db")
if err != nil {
	log.Fatal(err)
}

// Set the maximum number of idle connections to 5. Setting this to
// less than or equal to 0 means that no idle connections are retained.
db.SetMaxIdleConns(5)
```
In theory, allowing more idle connections in the pool improves performance, because it makes it less likely that a new connection will need to be established from scratch.
Let's look at the same benchmark with the maximum number of idle connections set to none, 1, 2, 5 and 10:
```
BenchmarkMaxIdleConnsNone-8      300    4567245 ns/op    58174 B/op    625 allocs/op
BenchmarkMaxIdleConns1-8        2000     568765 ns/op     2596 B/op     32 allocs/op
BenchmarkMaxIdleConns2-8        2000     529359 ns/op      596 B/op     11 allocs/op
BenchmarkMaxIdleConns5-8        2000     506207 ns/op      451 B/op      9 allocs/op
BenchmarkMaxIdleConns10-8       2000     501639 ns/op      450 B/op      9 allocs/op
PASS
```
When MaxIdleConns is set to none, a new connection must be created from scratch for each INSERT, and we can see from the benchmark that the average runtime and memory usage are relatively high.
Allowing just one idle connection to be retained and reused makes a particularly large difference to this benchmark: it cuts average runtime by about 8x and memory usage by about 20x. Increasing the idle pool further improves performance, but the gains diminish.
So, should you maintain a large pool of idle connections? The answer depends on the application. It is important to realize that keeping connections idle comes at a cost: it takes up memory that could otherwise be used by your application and the database.
There is also the possibility that a connection left idle for too long becomes unusable. For example, MySQL's wait_timeout setting automatically closes any connection that has been unused for 8 hours (the default).
When this happens, sql.DB handles it gracefully: a bad connection is retried twice before being abandoned, at which point Go removes it from the pool and creates a new connection. So setting MaxIdleConns too high may actually cause connections to become unusable and consume more resources than a smaller idle pool would, in which fewer connections are used more frequently. You should only keep a connection idle if you are likely to use it again soon.
Finally, note that MaxIdleConns should always be less than or equal to MaxOpenConns. Go enforces this and will automatically reduce MaxIdleConns if necessary.
SetConnMaxLifetime method
Now let's look at the SetConnMaxLifetime() method, which sets the maximum length of time that a connection may be reused for. This is useful if your database itself enforces a maximum connection lifetime, or if, for example, you want to facilitate gracefully swapping databases behind a load balancer.
You can use it like this:
```go
// Initialise a new connection pool.
db, err := sql.Open("postgres", "postgres://user:pass@localhost/db")
if err != nil {
	log.Fatal(err)
}

// Set the maximum lifetime of a connection to 1 hour. Setting it to 0
// means there is no maximum lifetime and the connection is reused
// forever (which is the default behavior).
db.SetConnMaxLifetime(time.Hour)
```
In this example, all connections will "expire" 1 hour after creation and cannot be reused after expiration. But note:
This does not guarantee that a connection will remain in the pool for a full hour; it may well become unusable for some reason and be closed automatically before then.
A connection can still be in use more than an hour after it was created (it just cannot start being reused after that point).
This is not an idle timeout. A connection expires 1 hour after it was first created, not 1 hour after it last became idle.
Go automatically runs a cleanup operation once a second to remove "expired" connections from the pool.
In theory, the shorter the ConnMaxLifetime, the more often connections expire, and therefore the more often they need to be created from scratch. To illustrate this, I ran a benchmark with ConnMaxLifetime set to 100ms, 200ms, 500ms, 1000ms and unlimited (always reused), with the default settings of unlimited open connections and 2 idle connections.
These time periods are obviously much shorter than the time you use in most applications, but they help to illustrate the behavior well.
```
BenchmarkConnMaxLifetime100-8          2000    637902 ns/op    2770 B/op    34 allocs/op
BenchmarkConnMaxLifetime200-8          2000    576053 ns/op    1612 B/op    21 allocs/op
BenchmarkConnMaxLifetime500-8          2000    558297 ns/op     913 B/op    14 allocs/op
BenchmarkConnMaxLifetime1000-8         2000    543601 ns/op     740 B/op    12 allocs/op
BenchmarkConnMaxLifetimeUnlimited-8    3000    532789 ns/op     412 B/op     9 allocs/op
PASS
```
In these particular benchmarks, memory usage with a 100ms lifetime is more than three times higher than with an unlimited lifetime, and the average runtime per INSERT is also slightly longer.
If you set ConnMaxLifetime in your code, be sure to keep in mind how often the connection will expire (and then recreate). For example, if you have a total of 100 connections and the ConnMaxLifetime is 1 minute, your application may kill and recreate 1.67 connections per second (average). You don't want this frequency to be too high, which will eventually hinder performance rather than improve it.
Exceeding connection limits
Finally, this article would be incomplete without explaining what happens if you exceed your database's hard limit on the number of connections. To illustrate, I modified the postgresql.conf file to allow a total of 5 connections (the default is 100):
max_connections = 5
Then I reran the benchmark with unlimited open connections:
```
BenchmarkMaxOpenConnsUnlimited-8    --- FAIL: BenchmarkMaxOpenConnsUnlimited-8
    main_test.go:14: pq: sorry, too many clients already
    main_test.go:14: pq: sorry, too many clients already
    main_test.go:14: pq: sorry, too many clients already
FAIL
```
As soon as the hard limit of 5 connections is hit, the database driver (pq) returns a "sorry, too many clients already" error and the INSERT fails. To prevent this error, we need to cap the total number of open connections (in use + idle) in sql.DB below 5. Like so:
```go
// Initialise a new connection pool.
db, err := sql.Open("postgres", "postgres://user:pass@localhost/db")
if err != nil {
	log.Fatal(err)
}

// Set the total maximum of open connections (in use + idle) to 3.
db.SetMaxOpenConns(3)
```
Now, sql.DB can only create a maximum of three connections at any time, and there should be no errors when the benchmark runs. However, it is important to note that when the open connection limit is reached and all connections are in use, any new database tasks that the application needs to perform will be forced to wait until the connection is marked as idle. For example, in the context of a web application, a user's HTTP request appears to "hang" and may even time out while waiting for the database task to run.
To mitigate this, you should always call the database using context-aware methods such as ExecContext(), passing in a context with a fixed, fast timeout.
Summary
1. As a rule of thumb, you should explicitly set a MaxOpenConns value. It should be comfortably below any hard limit on connections imposed by your database and infrastructure.
2. In general, higher MaxOpenConns and MaxIdleConns values lead to better performance. But be aware that the benefits diminish, and that keeping too many idle connections in the pool (where they go unreused and eventually go bad) can actually hurt performance.
3. To mitigate the risk from point 2 above, you may want to set a relatively short ConnMaxLifetime. But you don't want this to be so short that connections are killed and recreated unnecessarily often.
4. MaxIdleConns should always be less than or equal to MaxOpenConns.
For small and medium-sized web applications, I usually use the following settings as a starting point and then tune them based on load-test results at realistic levels of throughput.
```go
db.SetMaxOpenConns(25)
db.SetMaxIdleConns(25)
db.SetConnMaxLifetime(5 * time.Minute)
```

This concludes "how to configure a high-performance sql.DB in Go". Thank you for reading.