This article, part of a PostgreSQL DBA series, introduces the work_mem parameter: where its memory comes from and how its size affects query performance. Many people run into exactly these questions in practice, so the examples below walk through the behavior step by step. I hope you read them carefully and get something out of them!
Where is the memory allocated for work_mem?
work_mem is private, per-session (per-backend-process) memory, similar in spirit to Oracle's PGA; it is requested and managed by the backend process itself. When an operation needs more than work_mem, PostgreSQL spills the excess data to temporary files. If the OS has plenty of free memory, those temporary files are likely to stay in the OS page cache as they are written, which means the data may effectively still be in memory.
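A simple way to observe this spill is EXPLAIN (ANALYZE, BUFFERS): a sort that fits in work_mem reports an in-memory quicksort, while one that does not reports an external merge on disk. Below is a minimal sketch; the table big_sort and the exact sizes are only illustrative and are not part of the test case used later in this article.

-- illustrative table: one million rows of random-looking text
CREATE TABLE big_sort AS
SELECT g AS id, md5(random()::text) AS val
FROM generate_series(1, 1000000) AS g;

SET work_mem = '1MB';
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM big_sort ORDER BY val;
-- expect something like: Sort Method: external merge  Disk: ...kB

SET work_mem = '256MB';
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM big_sort ORDER BY val;
-- expect something like: Sort Method: quicksort  Memory: ...kB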
The influence of work_mem on sort and hash join performance
Let's look at the effect of the work_mem size on execution performance.
Test table:
CREATE TABLE test (id serial PRIMARY KEY, random_text text);

\! perl -e '@c = ("a".."z", "A".."Z", 0..9); print join("", map { $c[rand @c] } 10..20+rand(40)) . "\n" for 1..1000000' > /tmp/random_strings

copy test (random_text) FROM '/tmp/random_strings';
analyze test;

[local:/data/run/pg12]:5120 pg12@testdb=# \d test
                                  Table "public.test"
   Column    |  Type   | Collation | Nullable |             Default
-------------+---------+-----------+----------+----------------------------------
 id          | integer |           | not null | nextval('test_id_seq'::regclass)
 random_text | text    |           |          |
Indexes:
    "test_pkey" PRIMARY KEY, btree (id)

[local:/data/run/pg12]:5120 pg12@testdb=# select count(*) from test;
  count
---------
 1000000
(1 row)

[local:/data/run/pg12]:5120 pg12@testdb=# select * from test limit 5;
 id |                 random_text
----+----------------------------------------------
  1 | 82nXOCCqPYxsOCGf3sXHTi51hG720
  2 | wsYU8uZhanrFoPwJneIvqJYcYDAnKrKVo
  3 | mTD4bJr83asYTRCtgdn
  4 | xqrw1QoGouIOa0vlxW9t
  5 | VbWuf4p3jhrsAOoMKQrwrBBPZib7ZMAUA387EhSO1qsU
(5 rows)
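As an aside, if perl is not handy, similar random text can be generated in pure SQL. This is only an alternative sketch, not the method used above, so the sample rows will differ:

-- alternative data generator (sketch): random-looking strings of varying length
INSERT INTO test (random_text)
SELECT left(md5(random()::text) || md5(random()::text), 10 + (random() * 40)::int)
FROM generate_series(1, 1000000);
ANALYZE test;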
The test table has two columns: id is the primary key, and random_text holds a random string. The table contains 1,000,000 rows.
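To get a feeling for how this volume of data compares with the work_mem values tested below, the on-disk size of the table can be checked; a small sketch:

-- table size on disk, with and without the primary-key index
SELECT pg_size_pretty(pg_relation_size('test'))       AS heap_size,
       pg_size_pretty(pg_total_relation_size('test')) AS total_size;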
work_mem is first set to 1MB and then compared with 100MB:
[local:/data/run/pg12]:5120 pg12@testdb=# set work_mem='1MB';
SET
[local:/data/run/pg12]:5120 pg12@testdb=# EXPLAIN analyze SELECT * FROM test WHERE id
 Index Scan using test_pkey on test  (cost=0.42..9.98 rows=89 width=35) (actual time=0.051..0.165 rows=100 loops=1)
   Index Cond: (id

[local:/data/run/pg12]:5120 pg12@testdb=# EXPLAIN analyze SELECT * FROM test a JOIN test b USING (random_text) WHERE a.id < 10000;
   ->  Hash Join  (cost=443.47..14442.58 rows=3695 width=39) (actual time=22.259..182.177 rows=3333 loops=3)
         Hash Cond: (b.random_text = a.random_text)
         ->  Parallel Seq Scan on test b  (cost=0.00..12399.67 rows=416667 width=35) (actual time=0.021..61.790 rows=333333 loops=3)
         ->  Hash  (cost=332.62..332.62 rows=8868 width=35) (actual time=21.900..21.900 rows=9999 loops=3)
               Buckets: 16384  Batches: 1  Memory Usage: 801kB
               ->  Index Scan using test_pkey on test a  (cost=0.42..332.62 rows=8868 width=35) (actual time=0.069..12.185 rows=9999 loops=3)
                     Index Cond: (id < 10000)
 Planning Time: 0.786 ms
 Execution Time: 189.854 ms
(12 rows)

[local:/data/run/pg12]:5120 pg12@testdb=# set work_mem='100MB';
SET
[local:/data/run/pg12]:5120 pg12@testdb=# EXPLAIN analyze SELECT * FROM test a JOIN test b USING (random_text) WHERE a.id < 10000;
                                                                  QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=1443.47..16329.38 rows=8868 width=39) (actual time=7.854..157.510 rows=9999 loops=1)
   Workers Planned: 2
   Workers Launched: 2
   ->  Hash Join  (cost=443.47..14442.58 rows=3695 width=39) (actual time=8.019..152.570 rows=3333 loops=3)
         Hash Cond: (b.random_text = a.random_text)
         ->  Parallel Seq Scan on test b  (cost=0.00..12399.67 rows=416667 width=35) (actual time=0.010..55.844 rows=333333 loops=3)
         ->  Hash  (cost=332.62..332.62 rows=8868 width=35) (actual time=7.869..7.869 rows=9999 loops=3)
               Buckets: 16384  Batches: 1  Memory Usage: 801kB
               ->  Index Scan using test_pkey on test a  (cost=0.42..332.62 rows=8868 width=35) (actual time=0.031..4.434 rows=9999 loops=3)
                     Index Cond: (id < 10000)
 Planning Time: 0.410 ms
 Execution Time: 158.160 ms
(12 rows)

[local:/data/run/pg12]:5120 pg12@testdb=# set work_mem='1MB';
SET
[local:/data/run/pg12]:5120 pg12@testdb=# EXPLAIN analyze SELECT * FROM test a JOIN test b USING (random_text) WHERE a.id < 100000;
                                                                       QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=4999.50..36132.51 rows=101516 width=39) (actual time=235.147..405.768 rows=99999 loops=1)
   Workers Planned: 2
   Workers Launched: 2
   ->  Parallel Hash Join  (cost=3999.50..24980.91 rows=42298 width=39) (actual time=222.076..293.543 rows=33333 loops=3)
         Hash Cond: (b.random_text = a.random_text)
         ->  Parallel Seq Scan on test b  (cost=0.00..12399.67 rows=416667 width=35) (actual time=0.013..52.181 rows=333333 loops=3)
         ->  Parallel Hash  (cost=3139.78..3139.78 rows=42298 width=35) (actual time=57.009..57.010 rows=33333 loops=3)
               Buckets: 16384  Batches: 16  Memory Usage: 608kB
               ->  Parallel Index Scan using test_pkey on test a  (cost=0.42..3139.78 rows=42298 width=35) (actual time=0.139..29.482 rows=33333 loops=3)
                     Index Cond: (id < 100000)
 Planning Time: 1.389 ms
 Execution Time: 410.420 ms
(12 rows)

[local:/data/run/pg12]:5120 pg12@testdb=# set work_mem='100MB';
SET
[local:/data/run/pg12]:5120 pg12@testdb=# EXPLAIN analyze SELECT * FROM test a JOIN test b USING (random_text) WHERE a.id < 100000;
                                                                 QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=5000.90..27999.06 rows=101516 width=39) (actual time=77.269..509.484 rows=99999 loops=1)
   Hash Cond: (b.random_text = a.random_text)
   ->  Seq Scan on test b  (cost=0.00..18233.00 rows=1000000 width=35) (actual time=0.014..129.504 rows=1000000 loops=1)
   ->  Hash  (cost=3731.95..3731.95 rows=101516 width=35) (actual time=77.152..77.152 rows=99999 loops=1)
         Buckets: 131072  Batches: 1  Memory Usage: 7760kB
         ->  Index Scan using test_pkey on test a  (cost=0.42..3731.95 rows=101516 width=35) (actual time=0.031..41.401 rows=99999 loops=1)
               Index Cond: (id < 100000)
 Planning Time: 0.311 ms
 Execution Time: 513.957 ms
(9 rows)

[local:/data/run/pg12]:5120 pg12@testdb=# set work_mem='1MB';
SET
[local:/data/run/pg12]:5120 pg12@testdb=# EXPLAIN analyze SELECT * FROM test a JOIN test b USING (random_text) WHERE a.id < 1000000;
                                                           QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=38546.00..96467.99 rows=999999 width=39) (actual time=483.527..1982.466 rows=999999 loops=1)
   Hash Cond: (a.random_text = b.random_text)
   ->  Seq Scan on test a  (cost=0.00..20733.00 rows=999999 width=35) (actual time=0.051..286.223 rows=999999 loops=1)
         Filter: (id < 1000000)
         Rows Removed by Filter: 1
   ->  Hash  (cost=18233.00..18233.00 rows=1000000 width=35) (actual time=482.952..482.952 rows=1000000 loops=1)
         Buckets: 16384  Batches: 128  Memory Usage: 644kB
         ->  Seq Scan on test b  (cost=0.00..18233.00 rows=1000000 width=35) (actual time=0.042..136.794 rows=1000000 loops=1)
 Planning Time: 1.413 ms
 Execution Time: 2023.608 ms
(10 rows)

[local:/data/run/pg12]:5120 pg12@testdb=# set work_mem='100MB';
SET
[local:/data/run/pg12]:5120 pg12@testdb=# EXPLAIN analyze SELECT * FROM test a JOIN test b USING (random_text) WHERE a.id < 1000000;
                                                           QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=30733.00..65215.99 rows=999999 width=39) (actual time=495.932..1368.250 rows=999999 loops=1)
   Hash Cond: (a.random_text = b.random_text)
   ->  Seq Scan on test a  (cost=0.00..20733.00 rows=999999 width=35) (actual time=0.023..204.935 rows=999999 loops=1)
         Filter: (id < 1000000)
         Rows Removed by Filter: 1
   ->  Hash  (cost=18233.00..18233.00 rows=1000000 width=35) (actual time=495.148..495.149 rows=1000000 loops=1)
         Buckets: 1048576  Batches: 1  Memory Usage: 74114kB
         ->  Seq Scan on test b  (cost=0.00..18233.00 rows=1000000 width=35) (actual time=0.011..130.569 rows=1000000 loops=1)
 Planning Time: 0.295 ms
 Execution Time: 1417.372 ms
(10 rows)
Judging from the plan output, with work_mem = 100MB the hash table is built in a single batch (Batches: 1), while with 1MB it has to be split into many batches (16 and even 128). Because all of the data can then be processed in memory, the execution time of the largest join drops from about 2024 ms to about 1417 ms, roughly 30% faster.
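Spills like these can also be tracked outside of EXPLAIN. As a rough sketch, pg_stat_database keeps cluster-wide counters of temporary files written, and log_temp_files can log each spill above a size threshold (the 10MB threshold below is just an example):

-- temporary files written so far, per database
SELECT datname, temp_files, temp_bytes
FROM pg_stat_database
WHERE datname = current_database();

-- optionally log every temporary file larger than 10MB (takes effect after a reload)
ALTER SYSTEM SET log_temp_files = '10MB';
SELECT pg_reload_conf();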
Increasing work_mem has diminishing returns, and the memory is allocated per sort or hash operation in every backend. Unless memory is plentiful, it is best to find a suitable value through testing, and the number of concurrent connections must be taken into account.
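One common pattern, rather than raising work_mem globally, is to raise it only where it is needed; a sketch (the role name reporting is purely hypothetical):

-- for the current session only
SET work_mem = '256MB';

-- for a single transaction (reverts automatically at COMMIT/ROLLBACK)
BEGIN;
SET LOCAL work_mem = '256MB';
-- ... run the memory-hungry query here ...
COMMIT;

-- for one role, e.g. a hypothetical reporting role
ALTER ROLE reporting SET work_mem = '256MB';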
This is the end of this look at how PostgreSQL uses work_mem. Thank you for reading, and I hope you got something useful out of it!