
Log files and data loading of PostgreSQL

2025-01-18 Update From: SLTechnology News&Howtos

Shulou (Shulou.com) 06/01 Report

Original: https://www.enmotech.com/web/detail/1/798/1.html

Introduction: this article mainly introduces PostgreSQL's run-log parameters and the points to watch for, and shows how to load the csv log into the database. With this flexible data loading method, many log-analysis problems become simpler and more convenient to handle with SQL.

Run log parameters

1.1 main parameters of the running log

The main parameters related to the run log are listed below. Logging is not enabled by default, so the log directory does not exist initially; it is generated automatically once logging is enabled.
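The parameter table itself did not survive extraction, so as a sketch, a typical postgresql.conf logging setup covering the parameters discussed in this section might look like the following (the values are illustrative, not the author's exact configuration):

```conf
logging_collector = on            # required for csv logging; also creates the log directory
log_destination = 'csvlog'        # write csv-format logs (stderr is the default)
log_directory = 'log'             # relative to $PGDATA
log_filename = 'postgresql-%Y-%m-%d_%H%M%S'
log_rotation_age = 1d             # start a new file daily ...
log_rotation_size = 10MB          # ... or when the current file reaches 10MB
log_truncate_on_rotation = on     # overwrite a recycled file instead of appending
log_duration = on                 # record statement execution time
log_statement = 'ddl'             # record DDL statements
```

Note that changing logging_collector requires a server restart, while most of the other settings take effect on a configuration reload.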

1.2 points for consideration

If you set the log to csv format, be sure to set logging_collector to on.

In the pg10 version, the run log is generally located in the $PGDATA/log directory.

The log directory is generated automatically when the run log is enabled.

You can use log_rotation_age to set how often a new log file is created.

You can use log_rotation_size to switch to a new log file once the current one reaches a given size.

Both of the above are normally combined with log_truncate_on_rotation set to on, so that a recycled file of the same name is overwritten rather than appended to.

You can enable log_duration to record the execution time of each SQL statement.

You can enable log_statement to record database DDL.
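The settings above can be checked from a live session; for example, this generic pg_settings query (not from the original article) shows the current values:

```sql
select name, setting
  from pg_settings
 where name in ('logging_collector', 'log_destination', 'log_directory',
                'log_rotation_age', 'log_rotation_size',
                'log_truncate_on_rotation', 'log_duration', 'log_statement');
```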

1.3 Loading csv logs into the database

Oracle has external tables, and PG has fdw. Oracle can expose the alert log as an external table and view it with SQL; PG can use the copy command to load the csv log into a table and view it with SQL. Either way, it is convenient to query the log content you want with SQL. The obvious advantage of this approach is that you can easily query and filter logs with SQL, and because PG log files are rotated into small files, you can load only the logs you need, whereas Oracle's alert log is usually one very large file.

The disadvantage is also obvious: if the database is down, you cannot view the logs this way. And PG's csv log is not easy to read directly.
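Besides copy, the csv log can also be queried in place through file_fdw, which is the closest analogue to Oracle's external tables. A sketch, assuming the same csvlog column layout as the pg_log table created in section 1.3.1 (the server name pglog and the foreign table name here are illustrative):

```sql
CREATE EXTENSION file_fdw;                          -- contrib extension shipped with PostgreSQL
CREATE SERVER pglog FOREIGN DATA WRAPPER file_fdw;
CREATE FOREIGN TABLE pg_log_fdw (
  log_time timestamp(3) with time zone, user_name text, database_name text,
  process_id integer, connection_from text, session_id text, session_line_num bigint,
  command_tag text, session_start_time timestamp with time zone,
  virtual_transaction_id text, transaction_id bigint, error_severity text,
  sql_state_code text, message text, detail text, hint text, internal_query text,
  internal_query_pos integer, context text, query text, query_pos integer,
  location text, application_name text
) SERVER pglog
  OPTIONS (filename '/soft/pg_data/log/postgresql-2019-05-30_133324.csv', format 'csv');
```

Unlike copy, the foreign table always reflects the current contents of the file, but it is tied to one filename, so rotated logs still have to be attached or loaded separately.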

1.3.1 create a log table

Create a database and a new table to hold the log:

postgres=# create database test;
CREATE DATABASE
postgres=# \c test
You are now connected to database "test" as user "pg12".
test=# CREATE TABLE pg_log
test-# (
test(#   log_time timestamp(3) with time zone,
test(#   user_name text,
test(#   database_name text,
test(#   process_id integer,
test(#   connection_from text,
test(#   session_id text,
test(#   session_line_num bigint,
test(#   command_tag text,
test(#   session_start_time timestamp with time zone,
test(#   virtual_transaction_id text,
test(#   transaction_id bigint,
test(#   error_severity text,
test(#   sql_state_code text,
test(#   message text,
test(#   detail text,
test(#   hint text,
test(#   internal_query text,
test(#   internal_query_pos integer,
test(#   context text,
test(#   query text,
test(#   query_pos integer,
test(#   location text,
test(#   application_name text,
test(#   PRIMARY KEY (session_id, session_line_num)
test(# );
CREATE TABLE
test=#

1.3.2 View log file name

[pg12@whf307 ~]$ cd $PGDATA/log
[pg12@whf307 log]$ ls -rtl
total 24
-rw------- 1 pg12 pg12  166 May 30 13:32 postgresql-2019-05-30_133202.log
-rw------- 1 pg12 pg12  496 May 30 13:32 postgresql-2019-05-30_133202.csv
-rw------- 1 pg12 pg12    0 May 30 13:32 postgresql-2019-05-30_133254.log
-rw------- 1 pg12 pg12  170 May 30 13:32 postgresql-2019-05-30_133254.csv
-rw------- 1 pg12 pg12  166 May 30 13:33 postgresql-2019-05-30_133324.log
-rw------- 1 pg12 pg12 6566 May 30 16:16 postgresql-2019-05-30_133324.csv
-rw------- 1 pg12 pg12    0 May 31 00:00 postgresql-2019-05-31_000000.log
-rw------- 1 pg12 pg12    0 May 31 00:00 postgresql-2019-05-31_000000.csv
[pg12@whf307 log]$ pwd
/soft/pg_data/log
[pg12@whf307 log]$

1.3.3 load into the database

[pg12@whf307 log]$ psql test
psql (12beta1)
Type "help" for help.

test=# \d
        List of relations
 Schema |  Name  | Type  | Owner
--------+--------+-------+-------
 public | pg_log | table | pg12
(1 row)

test=# copy pg_log from '/soft/pg_data/log/postgresql-2019-05-30_133324.csv' with csv;
COPY 32
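One caveat: the server-side copy above reads the file as the server process, which requires superuser (or pg_read_server_files) rights. From an ordinary client, psql's \copy meta-command performs the same load but reads the file on the client side (a general psql feature, not shown in the original):

```sql
\copy pg_log from '/soft/pg_data/log/postgresql-2019-05-30_133324.csv' with csv
```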

1.3.4 View Log

Now you can view it with SQL. Execute an ordinary query:

test=# select relfilenode from pg_class where relname='pg_log';
 relfilenode
-------------
       16385
(1 row)

Load the latest log. New log files can be loaded into the same table, and the previously loaded data is not overwritten.

[pg12@whf307 log]$ ls -rtl
total 32
-rw------- 1 pg12 pg12  166 May 30 13:32 postgresql-2019-05-30_133202.log
-rw------- 1 pg12 pg12  496 May 30 13:32 postgresql-2019-05-30_133202.csv
-rw------- 1 pg12 pg12    0 May 30 13:32 postgresql-2019-05-30_133254.log
-rw------- 1 pg12 pg12  170 May 30 13:32 postgresql-2019-05-30_133254.csv
-rw------- 1 pg12 pg12  166 May 30 13:33 postgresql-2019-05-30_133324.log
-rw------- 1 pg12 pg12 6566 May 30 16:16 postgresql-2019-05-30_133324.csv
-rw------- 1 pg12 pg12    0 May 31 00:00 postgresql-2019-05-31_000000.log
-rw------- 1 pg12 pg12 4545 May 31 00:37 postgresql-2019-05-31_000000.csv
[pg12@whf307 log]$ psql test
psql (12beta1)
Type "help" for help.

test=# copy pg_log from '/soft/pg_data/log/postgresql-2019-05-31_000000.csv' with csv;
COPY 28

Check the log again

test=# SELECT COUNT(*) FROM PG_LOG;
 count
-------
    60
(1 row)

test=# select log_time at time zone 'UTC', database_name, connection_from, query from pg_log where log_time > to_timestamp('2019-05-31 14:35:00','yyyy-mm-dd hh24:mi:ss');
        timezone         | database_name | connection_from |                          query
-------------------------+---------------+-----------------+---------------------------------------------------------
 2019-05-31 06:35:42.843 | test          | [local]         |
 2019-05-31 06:35:57.582 | test          | [local]         |
 2019-05-31 06:36:54.369 | test          | [local]         | select relfilenode from pg_class where relname='pg_log'
 2019-05-31 06:36:58.002 | test          | [local]         |
 2019-05-31 06:37:00.192 | test          | [local]         |
 2019-05-31 06:37:11.651 |               | [local]         |
 2019-05-31 06:37:11.651 | test          | [local]         |
(7 rows)

You can see that the record count has grown to 60 and the earlier records were not overwritten. We can keep using this table, querying all the logged SQL, databases, login times, and so on with SQL.
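Once the log lines are in a table, ordinary SQL filters apply. For example, these illustrative queries (not from the original article) summarize severities and pull out recent errors:

```sql
-- how many log lines of each severity
select error_severity, count(*)
  from pg_log
 group by error_severity
 order by count(*) desc;

-- the ten most recent errors
select log_time, message
  from pg_log
 where error_severity in ('ERROR', 'FATAL', 'PANIC')
 order by log_time desc
 limit 10;
```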

View the start and end time of the log:

test=# select min(log_time) at time zone 'UTC', max(log_time) at time zone 'UTC' from pg_log;
        timezone         |        timezone
-------------------------+-------------------------
 2019-05-30 19:33:24.892 | 2019-05-31 06:37:11.651
(1 row)

With this flexible data loading method, many problems become simpler and more convenient to handle with SQL.
