1. pg_stat_database
yzs=# select * from pg_stat_database;
-[ RECORD 1 ]--+------------------------------
datid          | 13156      # database OID
datname        | postgres   # database name
numbackends    | 0          # number of connections currently open to this database
xact_commit    | 2357       # total transactions committed in this database; together with xact_rollback it feeds TPS statistics
xact_rollback  | 17         # total transactions rolled back in this database; if this is high, check whether something is wrong in the application
blks_read      | 1946       # total blocks read from "disk"; these reads may actually be served by the page cache, so combine with blk_read_time to see whether real disk reads are happening
blks_hit       | 103625     # blocks found in shared buffers
tup_returned   | 1413113    # for tables, rows read by sequential (full-table) scans; for indexes, index entries returned; if this is much larger than tup_fetched, the database runs many full table scans, so check the execution plans (this is a database-level counter)
tup_fetched    | 36041      # rows fetched via index
tup_inserted   |            # rows inserted
tup_updated    | 0          # rows updated
tup_deleted    | 19         # rows deleted
conflicts      | 0          # queries cancelled due to conflicts with recovery; only occurs on standbys
temp_files     | 0          # number of temporary files created; if this is high, increase work_mem
temp_bytes     | 0          # total size of temporary files
deadlocks      | 0          # number of deadlocks; if this is high, something is wrong in the business logic
blk_read_time  | 0          # time spent reading data files; a high value means memory is small and data files must be read from disk frequently
blk_write_time | 0          # time spent writing data files; dirty pages in PostgreSQL are normally written to the page cache, so a high value means the cache is small and the operating-system cache is flushed more aggressively
stats_reset    | 2019-02-11 23:42:37.526743-08   # time the statistics were last reset
Through pg_stat_database you can get a general picture of the database's historical activity.
For example, if tup_returned is significantly larger than tup_fetched, many historical SQL statements performed full table scans, i.e. there is SQL running without suitable indexes. You can find the slow SQL with pg_stat_statements, or find the tables with the most full table scans and scanned rows with pg_stat_user_tables.
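For reference, assuming the pg_stat_statements extension has been installed (added to shared_preload_libraries and created with CREATE EXTENSION pg_stat_statements), a query along these lines lists the statements that consumed the most time; on PostgreSQL 13 and later the timing columns are named total_exec_time and mean_exec_time instead of total_time and mean_time:

-- top 10 statements by total execution time (column names as on PostgreSQL 12 and earlier)
SELECT query,
       calls,
       total_time,   -- total_exec_time on PostgreSQL 13+
       mean_time,    -- mean_exec_time on PostgreSQL 13+
       rows
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;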
A very high tup_updated shows that the database is updated frequently. In that case pay attention to vacuum-related indicators and to long transactions: if garbage collection does not keep up, the tables will bloat.
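As a minimal sketch (using pg_stat_activity columns available since PostgreSQL 9.6), a query like this exposes long-running transactions that keep vacuum from removing dead rows:

-- oldest open transactions; long ones block garbage collection
SELECT pid,
       now() - xact_start AS xact_age,
       state,
       wait_event_type,
       left(query, 60) AS current_query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 10;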
A high temp_files value indicates many sorting, hashing, or aggregation operations spilling to disk; increasing work_mem reduces the creation of temporary files and greatly improves the performance of those operations.
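Putting the counters above together, a rough health query over pg_stat_database (the ratio names are only illustrative) might look like this:

-- cache hit ratio, rollback ratio and temporary-file usage per database
SELECT datname,
       xact_commit,
       xact_rollback,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2)             AS cache_hit_pct,
       round(100.0 * xact_rollback / nullif(xact_commit + xact_rollback, 0), 2) AS rollback_pct,
       temp_files,
       pg_size_pretty(temp_bytes) AS temp_size,
       deadlocks
FROM pg_stat_database
WHERE datname NOT LIKE 'template%'
ORDER BY xact_commit DESC;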
2. pg_stat_user_tables
yzs=# select * from pg_stat_user_tables;
-[ RECORD 1 ]-------+------------------------------
relid               | 16440     # table OID
schemaname          | public    # schema name
relname             | t1        # table name
seq_scan            | 50        # number of sequential (full-table) scans on this table
seq_tup_read        | 1867763   # rows read by sequential scans; if this is large, the SQL hitting this table is probably doing full table scans, so analyze it together with the execution plan
idx_scan            |           # number of index scans
idx_tup_fetch       |           # rows fetched by index scans
n_tup_ins           | 1130502   # rows inserted
n_tup_upd           | 0         # rows updated
n_tup_del           | 81920     # rows deleted
n_tup_hot_upd       | 0         # rows HOT-updated; if this is close to n_tup_upd, update performance is good because the indexes do not need to be updated
n_live_tup          | 655366    # live rows
n_dead_tup          | 0         # dead rows
n_mod_since_analyze | 6         # rows modified since the last analyze
last_vacuum         | 2019-04-07 00:22:00.955542-07   # time of the last manual vacuum
last_autovacuum     |           # time of the last autovacuum
last_analyze        |           # time of the last manual analyze
last_autoanalyze    | 2019-04-07 00:26:…              # time of the last autoanalyze
vacuum_count        | 2         # number of manual vacuums
autovacuum_count    | 0         # number of autovacuums
analyze_count       | 0         # number of manual analyzes
autoanalyze_count   | 10        # number of autoanalyzes
By querying pg_stat_user_tables you can see which tables get the most full table scans, which tables receive the most DML, and how much dead (garbage) data each table holds.
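For example, a query of this shape (purely illustrative) surfaces the tables with the heaviest sequential scans and the most dead rows:

-- tables read most by full table scans, with their dead-row counts
SELECT schemaname,
       relname,
       seq_scan,
       seq_tup_read,
       idx_scan,
       n_live_tup,
       n_dead_tup
FROM pg_stat_user_tables
ORDER BY seq_tup_read DESC
LIMIT 10;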
3. pg_stat_user_indexes
yzs=# select * from pg_stat_user_indexes;
-[ RECORD 1 ]-+------------------------------
relid         | 16447      # OID of the table the index belongs to
indexrelid    | 16450      # OID of the index
schemaname    | public     # schema name
relname       | t3         # table name
indexrelname  | t3_id_idx  # index name
idx_scan      | 0          # number of scans using this index; if it stays very small, the index is rarely used and can be considered for removal
idx_tup_read  | 0          # index entries returned by scans of this index
idx_tup_fetch | 0          # table rows fetched by scans of this index
This view tells you which indexes are used frequently and which are never used. Unused indexes can be dropped to reclaim disk space and to improve the performance of INSERT, UPDATE, and DELETE.
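A commonly used pattern, sketched here, is to list indexes that have never been scanned together with their size; before dropping anything, make sure the index is not backing a unique or primary-key constraint and is not needed on a standby:

-- candidate unused indexes, largest first
SELECT schemaname,
       relname,
       indexrelname,
       idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC
LIMIT 10;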
4. pg_statio_user_tables
yzs=# select * from pg_statio_user_tables;
-[ RECORD 1 ]---+------------------------------
relid           | 16447
schemaname      | public
relname         | t3
heap_blks_read  | 1    # table blocks read from the page cache or disk
heap_blks_hit   | 1    # table blocks found in shared buffers
idx_blks_read   | 0    # index blocks read from the page cache or disk
idx_blks_hit    | 0    # index blocks found in shared buffers
toast_blks_read |      # TOAST table blocks read from the page cache or disk
toast_blks_hit  |      # TOAST table blocks found in shared buffers
tidx_blks_read  |      # TOAST index blocks read from the page cache or disk
tidx_blks_hit   |      # TOAST index blocks found in shared buffers
If heap_blks_read and idx_blks_read are high, shared buffers are too small and blocks are frequently being read into shared buffers from disk or the page cache; the toast_* and tidx_* columns tell the same story for TOAST tables and their indexes.
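As an illustration, per-table hit ratios can be derived directly from these counters (the column aliases are just for readability):

-- per-table buffer hit ratios for heap and index blocks
SELECT schemaname,
       relname,
       heap_blks_read,
       heap_blks_hit,
       round(100.0 * heap_blks_hit / nullif(heap_blks_hit + heap_blks_read, 0), 2) AS heap_hit_pct,
       round(100.0 * idx_blks_hit / nullif(idx_blks_hit + idx_blks_read, 0), 2)    AS idx_hit_pct
FROM pg_statio_user_tables
ORDER BY heap_blks_read + coalesce(idx_blks_read, 0) DESC
LIMIT 10;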
5. pg_stat_bgwriter
yzs=# select * from pg_stat_bgwriter;
-[ RECORD 1 ]---------+------------------------------
checkpoints_timed     | 206     # checkpoints triggered because checkpoint_timeout elapsed
checkpoints_req       | 8       # checkpoints triggered manually or because the WAL reached max_wal_size; if this is larger than checkpoints_timed, checkpoint_timeout (or max_wal_size) is set unreasonably
checkpoint_write_time | 306582  # time spent writing dirty buffers from shared buffers to the page cache during checkpoints
checkpoint_sync_time  | 367     # time checkpoints spent in fsync flushing dirty data to disk; if this is long it can cause IO jitter, so increase checkpoint_timeout or checkpoint_completion_target
buffers_checkpoint    | 6671    # dirty blocks written by checkpoints
buffers_clean         | 0       # blocks written by the background writer (bgwriter)
maxwritten_clean      | 0       # number of times the bgwriter stopped because it had written more than bgwriter_lru_maxpages buffers; if this is high, increase bgwriter_lru_maxpages
buffers_backend       | 7953    # blocks written directly by backends
buffers_backend_fsync | 0       # number of times backends had to run fsync themselves
buffers_alloc         | 11613   # buffers allocated
stats_reset           | 2019-02-11 23:42:35.273758-08   # time the statistics were last reset
From this view you can judge whether checkpoint_timeout and max_wal_size are set reasonably.
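A quick way to check this, sketched below, is to compare requested and timed checkpoints and see which path writes the most buffers; note that on PostgreSQL 17 the checkpoint columns moved to pg_stat_checkpointer, so this form applies to earlier releases:

-- share of checkpoints forced by WAL volume or manual requests,
-- and where dirty buffers are being written from
SELECT checkpoints_timed,
       checkpoints_req,
       round(100.0 * checkpoints_req / nullif(checkpoints_timed + checkpoints_req, 0), 2) AS req_pct,
       buffers_checkpoint,
       buffers_clean,
       buffers_backend,
       stats_reset
FROM pg_stat_bgwriter;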