2025-03-26 Update From: SLTechnology News&Howtos
Error description
(1) The original SQL executed by Postgres:

SELECT COALESCE(m1.place_id, m2.place_id, m3.place_id) AS place_id,
       concat_ws('|', m1.rich_attributes, m2.rich_attributes, m3.rich_attributes) AS rich_attributes
FROM (SELECT place_id,
             string_agg(concat(metadata_dictionary_id, '@-@', value), '|'::text) AS rich_attributes
      FROM place_external_points
      WHERE metadata_dictionary_id = ANY (ARRAY[61, 62])
      GROUP BY place_id) m1
FULL JOIN (SELECT place_id,
             string_agg(concat(metadata_dictionary_id, '@-@', value), '|'::text) AS rich_attributes
      FROM place_geocoded
      WHERE metadata_dictionary_id = ANY (ARRAY[70, 71, 72, 73, 74, 75, 76, 77, 78])
      GROUP BY place_id) m2 ON m1.place_id = m2.place_id
FULL JOIN (SELECT place_id,
             string_agg(concat(metadata_dictionary_id, '@-@', value), '|'::text) AS rich_attributes
      FROM place_attributes
      WHERE metadata_dictionary_id = ANY (ARRAY[5, 7, 9, 10, 11, 12, 13, 33, 35, 46, 51, 52, 53, 54, 55, 57, 58, 59])
      GROUP BY place_id) m3 ON m2.place_id = m3.place_id;
(2) Syslog message:
Dec 27 10:39:13 shb-postgresql-01 kernel: Out of memory: Kill process 9116 (postgres) score 823 or sacrifice child
Dec 27 10:39:13 shb-postgresql-01 kernel: Killed process 9116, UID 501, (postgres) total-vm:40440476kB, anon-rss:28320224kB, file-rss:2154596kB
Dec 27 10:39:13 shb-postgresql-01 kernel: postgres: page allocation failure. Order:0, mode:0x280da
Dec 27 10:39:13 shb-postgresql-01 kernel: Pid: 9116, comm: postgres Not tainted 2.6.32-431.el6.x86_64 # 1
(3) Database log:
less /usr/local/pgsql/data/pg_log/postgresql-2017-12-29_000000.log
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT: In a moment you should be able to reconnect to the database and repeat your command.
FATAL: the database system is in recovery mode
LOG: all server processes terminated; reinitializing
LOG: database system was interrupted; last known up at 2017-12-29 09:20:36 CST
FATAL: the database system is in recovery mode
FATAL: the database system is in recovery mode
LOG: database system was not properly shut down; automatic recovery in progress
LOG: invalid record length at 94A/478309C0
LOG: redo is not required
LOG: MultiXact member wraparound protections are now enabled
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
(4) Memory usage while the query was running:

# free -g
             total       used       free     shared    buffers     cached
Mem:            31         31          0          0          0
-/+ buffers/cache:         31          0
Swap:            7          3          4
Cause: from the information above we can see that the Postgres query exhausted physical memory and began consuming swap, which eventually triggered the kernel's oom-killer. The database process was killed, and the instance then restarted and went through crash recovery.
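The kernel's "Killed process" line already contains the numbers needed to confirm this diagnosis. A minimal Python sketch, using the exact log line from the syslog excerpt above (the parsing helper is our own illustration, not part of any PostgreSQL or kernel tooling):

```python
import re

def parse_oom_kill(line):
    """Extract the PID and memory counters (in kB) from a kernel
    'Killed process' log line as seen in /var/log/messages."""
    m = re.search(
        r"Killed process (\d+).*?"
        r"total-vm:(\d+)kB, anon-rss:(\d+)kB, file-rss:(\d+)kB",
        line,
    )
    if not m:
        return None
    pid, total_vm, anon_rss, file_rss = map(int, m.groups())
    return {"pid": pid, "total_vm_kb": total_vm,
            "anon_rss_kb": anon_rss, "file_rss_kb": file_rss}

line = ("Dec 27 10:39:13 shb-postgresql-01 kernel: Killed process 9116, "
        "UID 501, (postgres) total-vm:40440476kB, anon-rss:28320224kB, "
        "file-rss:2154596kB")
info = parse_oom_kill(line)
# anon-rss alone is ~27 GB, close to the host's 31 GB of RAM
print(info["pid"], round(info["anon_rss_kb"] / 1024 / 1024, 1))  # → 9116 27.0
```

The anonymous resident set (~27 GB) against 31 GB of RAM matches the `free -g` snapshot showing memory fully consumed and swap in use.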
Analysis process
(1) Tables place_external_points and place_geocoded are both empty, while table place_attributes contains more than 100 million rows, so the problem lies in the last SELECT ... FROM place_attributes subquery. The metadata_dictionary_id column of this table is covered by an index.
# \d place_attributes
            Table "denali_search_cn_17q3_20171226_epl.place_attributes"
         Column         |              Type              |   Modifiers
------------------------+--------------------------------+---------------
 place_id               | integer                        |
 metadata_dictionary_id | integer                        |
 value                  | text                           |
 lang                   | character varying(50)          |
 source                 | character varying(50)          |
 create_time            | timestamp(6) without time zone | default now()
 update_time            | timestamp(6) without time zone |
Indexes:
    "place_attributes_metadata_dictionary_id_idx" btree (metadata_dictionary_id)
    "place_attributes_place_id_metadata_dictionary_id_idx" btree (place_id, metadata_dictionary_id)
    "place_attributes_value_idx" btree (value)
(2) Running EXPLAIN shows that the plan uses hash aggregation (HashAggregate). The SQL is a plain GROUP BY, yet the planner chose hash aggregation rather than group (sort-based) aggregation, so we can preliminarily conclude that the execution plan is not optimal:
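To make the distinction concrete, here is a toy Python sketch of the two strategies (illustrative only, not PostgreSQL internals). The key point is memory behavior: a hash aggregate keeps state for every distinct group in memory at once, and in PostgreSQL versions of that era the hash table could not spill to disk, whereas a sort-based aggregate streams one group at a time (and the sort itself can spill):

```python
from itertools import groupby
from operator import itemgetter

rows = [(2, "b"), (1, "a"), (2, "c"), (1, "d"), (3, "e")]

# HashAggregate-style: one hash-table entry per group; all groups are
# held in memory at once, so memory grows with the number of distinct keys.
def hash_aggregate(rows):
    groups = {}
    for key, value in rows:
        groups.setdefault(key, []).append(value)
    return {k: "@-@".join(v) for k, v in groups.items()}

# GroupAggregate-style: sort first, then stream; only the current
# group's state is held, so aggregation memory stays bounded.
def group_aggregate(rows):
    out = {}
    for key, grp in groupby(sorted(rows, key=itemgetter(0)), key=itemgetter(0)):
        out[key] = "@-@".join(v for _, v in grp)
    return out

assert hash_aggregate(rows) == group_aggregate(rows)  # same answer, different memory profile
```

With ~35 million matching rows, a badly underestimated group count makes the hash-table route far more dangerous than the sort-based one.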
# explain SELECT place_id, string_agg(concat(metadata_dictionary_id, '@-@', value), '|'::text) AS rich_attributes
  FROM place_attributes
  WHERE metadata_dictionary_id = ANY (ARRAY[5, 7, 9, 10, 11, 12, 13, 33, 35, 46, 51, 52, 53, 54, 55, 57, 58, 59])
  GROUP BY place_attributes.place_id;
                                              QUERY PLAN
------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=7622781.80..7985216.03 rows=323575 width=62)
   Group Key: place_id
   ->  Bitmap Heap Scan on place_attributes  (cost=450351.86..2452151.90 rows=35838955 width=62)
         Recheck Cond: (metadata_dictionary_id = ANY ('{5,7,9,10,11,12,13,33,35,46,51,52,53,54,55,57,58,59}'::integer[]))
         ->  Bitmap Index Scan on place_attributes_metadata_dictionary_id_idx  (cost=0.00..441392.12 rows=35838955 width=0)
               Index Cond: (metadata_dictionary_id = ANY ('{5,7,9,10,11,12,13,33,35,46,51,52,53,54,55,57,58,59}'::integer[]))
Solution
Update the statistics so the planner can select the optimal execution plan:
(Updates statistics used by the planner to determine the most efficient way to execute a query.)
postgres=# vacuum analyze place_attributes;
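Why does refreshing statistics help? The planner chooses HashAggregate only when the estimated hash table (roughly, the estimated number of groups times per-group state) fits in work_mem, so a stale, far-too-low group estimate can make a doomed hash plan look cheap. A deliberately simplified, hypothetical model of that choice (the function, the 62-byte state size, and the 32 MB work_mem figure are all assumptions for illustration; real PostgreSQL costing is far more detailed):

```python
# Hypothetical, simplified model of the planner's aggregate choice.
# est_groups is the kind of estimate that VACUUM ANALYZE refreshes
# (n_distinct statistics on the GROUP BY column).
def choose_aggregate(est_groups, group_state_bytes, work_mem_kb, hashagg_enabled=True):
    hash_table_kb = est_groups * group_state_bytes / 1024
    if hashagg_enabled and hash_table_kb <= work_mem_kb:
        return "HashAggregate"   # whole hash table assumed to fit in memory
    return "GroupAggregate"      # sort-based, streaming aggregation

# Stale estimate (the 323,575 groups from the bad plan): the table "fits".
print(choose_aggregate(est_groups=323_575, group_state_bytes=62, work_mem_kb=32_768))
# An estimate closer to the 35M scanned rows does not fit...
print(choose_aggregate(est_groups=35_000_000, group_state_bytes=62, work_mem_kb=32_768))
# ...and enable_hashagg = off rules hash aggregation out entirely.
print(choose_aggregate(est_groups=323_575, group_state_bytes=62, work_mem_kb=32_768,
                       hashagg_enabled=False))
```

Both remedies below attack the same decision: fresh statistics correct the group estimate, while disabling enable_hashagg removes the hash option outright.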
Or temporarily disable enable_hashagg:
(Enables or disables the query planner's use of hashed aggregation plan types. The default is on.)
postgres=# set enable_hashagg = off;
Checking the execution plan again shows that it no longer uses HashAggregate:
yangzi=# explain SELECT place_id, string_agg(concat(metadata_dictionary_id, '@-@', value), '|'::text) AS rich_attributes
  FROM place_attributes
  WHERE metadata_dictionary_id = ANY (ARRAY[5, 7, 9, 10, 11, 12, 13, 33, 35, 46, 51, 52, 53, 54, 55, 57, 58, 59])
  GROUP BY place_attributes.place_id;
                                              QUERY PLAN
------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=7622781.80..7985216.03 rows=323575 width=62)
   Group Key: place_id
   ->  Sort  (cost=7622781.80..7712379.18 rows=35838955 width=62)
         Sort Key: place_id
         ->  Bitmap Heap Scan on place_attributes  (cost=450351.86..2452151.90 rows=35838955 width=62)
               Recheck Cond: (metadata_dictionary_id = ANY ('{5,7,9,10,11,12,13,33,35,46,51,52,53,54,55,57,58,59}'::integer[]))
               ->  Bitmap Index Scan on place_attributes_metadata_dictionary_id_idx  (cost=0.00..441392.12 rows=35838955 width=0)
                     Index Cond: (metadata_dictionary_id = ANY ('{5,7,9,10,11,12,13,33,35,46,51,52,53,54,55,57,58,59}'::integer[]))
Finally, the original SQL was executed again: no error was reported, and it completed quickly.