What are the log file parameters and precautions of PostgreSQL

2025-03-30 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article introduces PostgreSQL's log file parameters and the points to note when using them, then walks through loading the csv log into a table so it can be queried with SQL.

Running log parameters

1.1 Main parameters of the running log

The main parameters related to the running log are listed below. Logging is not enabled by default, in which case there is no log directory; the directory is generated automatically when logging is enabled.

Parameter and description:

log_destination = 'csvlog'
    # stderr, csvlog, syslog, and eventlog; csvlog requires logging_collector to be on. csvlog is generally chosen so that the csv log can be imported into the database for viewing.

logging_collector = on
    # Enables capturing of stderr and csvlog output into log files. Required for csv-format logs; changing it requires a restart.

log_directory = 'log'
    # Log output directory.

log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
    # Log file name format.

log_file_mode = 0600
    # Creation mode (permissions) for log files.

log_truncate_on_rotation = on
    # On rotation, truncate (overwrite) an existing log file of the same name instead of appending to it.

log_rotation_age = 1d
    # Start a new log file after this much time.

log_rotation_size = 10MB
    # Start a new log file once the current one reaches this size.

log_min_messages = warning
    # Minimum message severity written to the log: debug5, debug4, debug3, debug2, debug1, info, notice, warning, error, log, fatal, panic.

log_min_duration_statement = 60
    # Slow sql record: statements that run at least this many milliseconds are logged.

log_checkpoints = on
    # Record checkpoint operations.

log_connections = on
    # Record session connections.

log_disconnections = on
    # Record session disconnections.

log_duration = on
    # Record sql execution time.

log_lock_waits = on
    # Log lock waits >= deadlock_timeout; records long blocking.

log_statement = 'ddl'
    # Which statements to log: none, ddl, mod, all. 'ddl' records DDL statements.

1.2 Points to note

If you want logs in csv format, be sure to set logging_collector to on.

In the pg10 version, the running log is generally located in the $PGDATA/log directory.

The log directory is generated automatically when the running log is enabled.

log_rotation_age sets how often a new log file is started.

log_rotation_size sets the size at which a new log file is started.

Both of the above should be used with log_truncate_on_rotation = on.

log_duration can be enabled to record the execution time of sql.

log_statement can be enabled to record database ddl.
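The rotation and statement-logging settings above can also be adjusted at runtime; a minimal sketch, assuming a superuser session (logging_collector itself still requires a restart):

```sql
-- Adjust rotation settings without editing postgresql.conf by hand;
-- ALTER SYSTEM writes them to postgresql.auto.conf.
ALTER SYSTEM SET log_rotation_age = '1d';
ALTER SYSTEM SET log_rotation_size = '10MB';
ALTER SYSTEM SET log_truncate_on_rotation = on;

-- Re-read the configuration (no restart needed for these parameters).
SELECT pg_reload_conf();

-- Verify the value now in effect.
SHOW log_rotation_age;
```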

1.3 Loading csv logs into the database

Oracle has external tables, and pg has fdw. Oracle can load the alert log into the database as an external table and view it with SQL; pg can use the copy command to load the csv log into a table and view it with SQL. This makes it convenient to query the log content you want with sql. The obvious advantage of this approach is that you can easily query and filter logs with SQL, and because pg log files are rotated into small files, you can load just the logs you need, whereas Oracle's alert log is usually one very large file.

The disadvantage is also obvious: if the database is down, you cannot view the logs this way, and pg's csv log is not easy to read directly.
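Since pg has fdw, the file_fdw extension from contrib offers an alternative to repeated copy runs: map the csv log file directly as a foreign table. This is a sketch under the assumption that file_fdw is installed; the file path is illustrative, and the column list must match the csv log format (the same columns as the pg_log table in 1.3.1; foreign tables cannot have a primary key):

```sql
CREATE EXTENSION file_fdw;
CREATE SERVER pglog FOREIGN DATA WRAPPER file_fdw;

-- Same columns as the csv log / the pg_log table, minus the primary key.
CREATE FOREIGN TABLE pg_log_fdw (
  log_time timestamp(3) with time zone,
  user_name text,
  database_name text,
  process_id integer,
  connection_from text,
  session_id text,
  session_line_num bigint,
  command_tag text,
  session_start_time timestamp with time zone,
  virtual_transaction_id text,
  transaction_id bigint,
  error_severity text,
  sql_state_code text,
  message text,
  detail text,
  hint text,
  internal_query text,
  internal_query_pos integer,
  context text,
  query text,
  query_pos integer,
  location text,
  application_name text
) SERVER pglog
OPTIONS (filename '/soft/pg_data/log/postgresql-2019-05-30_133324.csv', format 'csv');
```

The foreign table always reflects the current contents of the file, but it shares the same drawback: it only helps while the server is up.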

1.3.1 Create a log table

Create a database and a new table to load the log:

postgres=# create database test;
CREATE DATABASE
postgres=# \c test
You are now connected to database "test" as user "pg12".
test=# CREATE TABLE pg_log
test-# (
test(#   log_time timestamp(3) with time zone,
test(#   user_name text,
test(#   database_name text,
test(#   process_id integer,
test(#   connection_from text,
test(#   session_id text,
test(#   session_line_num bigint,
test(#   command_tag text,
test(#   session_start_time timestamp with time zone,
test(#   virtual_transaction_id text,
test(#   transaction_id bigint,
test(#   error_severity text,
test(#   sql_state_code text,
test(#   message text,
test(#   detail text,
test(#   hint text,
test(#   internal_query text,
test(#   internal_query_pos integer,
test(#   context text,
test(#   query text,
test(#   query_pos integer,
test(#   location text,
test(#   application_name text,
test(#   PRIMARY KEY (session_id, session_line_num)
test(# );
CREATE TABLE
test=#

1.3.2 View the log file names

[pg12@whf307 ~]$ cd $PGDATA/log
[pg12@whf307 log]$ ls -rtl
total 24
-rw------- 1 pg12 pg12  166 May 30 13:32 postgresql-2019-05-30_133202.log
-rw------- 1 pg12 pg12  496 May 30 13:32 postgresql-2019-05-30_133202.csv
-rw------- 1 pg12 pg12    0 May 30 13:32 postgresql-2019-05-30_133254.log
-rw------- 1 pg12 pg12  170 May 30 13:32 postgresql-2019-05-30_133254.csv
-rw------- 1 pg12 pg12  166 May 30 13:33 postgresql-2019-05-30_133324.log
-rw------- 1 pg12 pg12 6566 May 30 16:16 postgresql-2019-05-30_133324.csv
-rw------- 1 pg12 pg12    0 May 31 00:00 postgresql-2019-05-31_000000.log
-rw------- 1 pg12 pg12    0 May 31 00:00 postgresql-2019-05-31_000000.csv
[pg12@whf307 log]$
[pg12@whf307 log]$ pwd
/soft/pg_data/log
[pg12@whf307 log]$

1.3.3 Load into the database

[pg12@whf307 log]$ psql test
psql (12beta1)
Type "help" for help.
test=# \d
        List of relations
 Schema |  Name  | Type  | Owner
--------+--------+-------+-------
 public | pg_log | table | pg12
(1 row)

test=# copy pg_log from '/soft/pg_data/log/postgresql-2019-05-30_133324.csv' with csv;
COPY 32
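One caveat not mentioned in the original text: because pg_log has a primary key on (session_id, session_line_num), running copy twice on the same file aborts with a duplicate-key error. A workaround sketch is to stage the file first and insert only new rows:

```sql
-- Load into a temporary staging table with the same shape as pg_log.
CREATE TEMP TABLE pg_log_stage (LIKE pg_log);
COPY pg_log_stage FROM '/soft/pg_data/log/postgresql-2019-05-30_133324.csv' WITH csv;

-- Keep only rows whose (session_id, session_line_num) is not already loaded.
INSERT INTO pg_log
SELECT * FROM pg_log_stage
ON CONFLICT (session_id, session_line_num) DO NOTHING;
```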

1.3.4 View the log

Now the log can be viewed with sql. Execute an ordinary query:

test=# select relfilenode from pg_class where relname='pg_log';
 relfilenode
-------------
       16385
(1 row)

Load the latest log. New log files can be loaded again and again, and previously loaded data is not overwritten.

[pg12@whf307 log]$ ls -rtl
total 32
-rw------- 1 pg12 pg12  166 May 30 13:32 postgresql-2019-05-30_133202.log
-rw------- 1 pg12 pg12  496 May 30 13:32 postgresql-2019-05-30_133202.csv
-rw------- 1 pg12 pg12    0 May 30 13:32 postgresql-2019-05-30_133254.log
-rw------- 1 pg12 pg12  170 May 30 13:32 postgresql-2019-05-30_133254.csv
-rw------- 1 pg12 pg12  166 May 30 13:33 postgresql-2019-05-30_133324.log
-rw------- 1 pg12 pg12 6566 May 30 16:16 postgresql-2019-05-30_133324.csv
-rw------- 1 pg12 pg12    0 May 31 00:00 postgresql-2019-05-31_000000.log
-rw------- 1 pg12 pg12 4545 May 31 00:37 postgresql-2019-05-31_000000.csv
[pg12@whf307 log]$ psql test
psql (12beta1)
Type "help" for help.
test=# copy pg_log from '/soft/pg_data/log/postgresql-2019-05-31_000000.csv' with csv;
COPY 28

Check the log again:

test=# SELECT COUNT(*) FROM PG_LOG;
 count
-------
    60
(1 row)

test=# select log_time at time zone 'UTC', database_name, connection_from, query from pg_log where log_time > to_timestamp('2019-05-31 14:35:00', 'yyyy-mm-dd hh24:mi:ss');

         timezone        | database_name | connection_from |                          query
-------------------------+---------------+-----------------+----------------------------------------------------------
 2019-05-31 06:35:42.843 | test          | [local]         |
 2019-05-31 06:35:57.582 | test          | [local]         |
 2019-05-31 06:36:54.369 | test          | [local]         | select relfilenode from pg_class where relname='pg_log';
 2019-05-31 06:36:58.002 | test          | [local]         |
 2019-05-31 06:37:00.192 | test          | [local]         |
 2019-05-31 06:37:11.651 |               | [local]         |
 2019-05-31 06:37:11.651 | test          | [local]         |
(7 rows)

You can see that the record count has become 60 and the earlier records were not overwritten. We can keep using this table, and with sql we can view everything recorded in the logs: the sql statements, the databases, login times, and so on.
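Once the csv log is in a table, ordinary SQL filtering applies. Two illustrative queries (column names taken from the pg_log table created in 1.3.1):

```sql
-- Show only errors and worse, in time order.
SELECT log_time, user_name, message
FROM pg_log
WHERE error_severity IN ('ERROR', 'FATAL', 'PANIC')
ORDER BY log_time;

-- Count log entries per database.
SELECT database_name, count(*) FROM pg_log GROUP BY database_name;
```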

View the log start and end time:

test=# select min(log_time) at time zone 'UTC', max(log_time) at time zone 'UTC' from pg_log;
        timezone         |        timezone
-------------------------+-------------------------
 2019-05-30 19:33:24.892 | 2019-05-31 06:37:11.651
(1 row)

With this flexible way of loading data, many problems become easier to handle with SQL.

At this point, the study of PostgreSQL's log file parameters and points to note is over. I hope it has resolved your doubts; combining theory with practice is the best way to learn, so go and try it!
