2025-02-27 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/01 Report
This article explains in detail how to read (prewarm) PostgreSQL data into the buffer cache. I found it very practical, so I am sharing it here for reference; I hope you get something out of it.
PostgreSQL uses shared_buffers to cache blocks in memory. The idea is to reduce disk I/O and speed up the database in the most efficient way possible. During normal operation, the database cache is useful and ensures good response times. But what happens if the database instance is restarted for some reason? The performance of your PostgreSQL database will suffer until the buffer cache fills up again. This takes time and can seriously affect query response times.
In PostgreSQL 11, a new autoprewarm feature was added to the contrib module pg_prewarm. It automatically warms the shared buffers with the same pages they held before the last server restart. To achieve this, Postgres now has a background worker that periodically records the contents of the shared buffers to the file "autoprewarm.blocks", then reloads those pages after the server restarts.
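As a minimal sketch, enabling autoprewarm comes down to a couple of settings in postgresql.conf (the interval value below is an arbitrary example; pg_prewarm.autoprewarm defaults to on once the library is preloaded):

```
# postgresql.conf
shared_preload_libraries = 'pg_prewarm'   # loads the autoprewarm background worker at startup
pg_prewarm.autoprewarm = true             # enable the worker (default: on when the library is loaded)
pg_prewarm.autoprewarm_interval = 300s    # how often autoprewarm.blocks is rewritten
```

A server restart is required for shared_preload_libraries to take effect.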
PostgreSQL can warm the buffer cache through pg_prewarm, and pg_prewarm offers two approaches: manual prewarming and automatic prewarming.
How do you install pg_prewarm and autoprewarm? The following is based on PostgreSQL 11. If you compile and install PG 11, pg_prewarm is built by default. To be exact, the "autoprewarm master" background worker periodically records the page information from the shared buffers to the file "$PGDATA/autoprewarm.blocks". How often "autoprewarm.blocks" is updated is determined by the configuration parameter pg_prewarm.autoprewarm_interval. Once the server is restarted, the master worker reads "autoprewarm.blocks" and sorts the list of pages to warm up. It then starts one worker at a time for each database, and each per-database worker (the "autoprewarm worker") loads the pages that belong to its database.
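Besides the background worker, pg_prewarm in PG 11 also exposes two SQL functions for driving autoprewarm by hand:

```sql
-- Start the autoprewarm worker manually (if it was not launched at server start)
SELECT autoprewarm_start_worker();

-- Dump the current contents of shared buffers to autoprewarm.blocks immediately,
-- without waiting for the next interval; returns the number of blocks recorded
SELECT autoprewarm_dump_now();
```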
Installation takes only two steps
1. pg_prewarm needs to be added to shared_preload_libraries.
Of course, you can also execute the following command:
ALTER SYSTEM SET shared_preload_libraries = 'pg_prewarm';
After restarting the server, you can see the "autoprewarm master" background process.
And you can see the newly generated autoprewarm.blocks file in the data directory.
2. Execute the following in your database (this step is independent of automatic prewarming; it is what enables the manual pg_prewarm function):
CREATE EXTENSION pg_prewarm;
This loads the extension into the current database.
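To verify the background worker is actually running, you can query pg_stat_activity; the backend_type filter below is an assumption based on how PG 11 labels this worker:

```sql
SELECT pid, backend_type
FROM pg_stat_activity
WHERE backend_type LIKE 'autoprewarm%';
```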
After doing this, let's run a test: we analyze a query on a machine with prewarming enabled.
You can see that the second run already hits the cache.
We shut down PostgreSQL, restart it, run the query again, and see in the figure that it still hits the cache.
If we shut down the database, delete autoprewarm.blocks while it is down, and then restart PG, let's see what happens.
Clearly, the query no longer finds the data in the buffer cache.
What exactly is stored in that file?
The first line records the total number of pages, and each subsequent line describes one page. Each page is uniquely identified by the database OID, tablespace OID, relation relfilenode, fork number, and block number.
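For illustration only (the OIDs, relfilenodes, and block numbers below are made up, and the exact header decoration may vary by version), the file looks roughly like this: a record count, then one comma-separated line per page in the order database OID, tablespace OID, relfilenode, fork number, block number:

```
<<128>>
16384,1663,16385,0,0
16384,1663,16385,0,1
16384,1663,16388,0,0
```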
We can take a look at the pg_prewarm function in the system. In PG, this function is written in C; writing PG functions in C generally gives very high efficiency.
CREATE OR REPLACE FUNCTION public.pg_prewarm(
    regclass,
    mode text DEFAULT 'buffer'::text,
    fork text DEFAULT 'main'::text,
    first_block bigint DEFAULT NULL::bigint,
    last_block bigint DEFAULT NULL::bigint)
RETURNS bigint
LANGUAGE 'c'
COST 1
VOLATILE PARALLEL SAFE
AS '$libdir/pg_prewarm', 'pg_prewarm';
The function takes five parameters: (1) the table (regclass) to prewarm; (2) the mode ('prefetch', 'read', or 'buffer'); (3) the fork of the table to read ('main' by default); and the last two are the first and last block numbers to prewarm.
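A few hedged usage sketches (the table name t1 is a placeholder); the return value is the number of blocks processed:

```sql
-- Default 'buffer' mode: load the whole main fork into shared_buffers
SELECT pg_prewarm('t1');

-- 'read' mode: pull the table into the OS page cache instead of shared_buffers
SELECT pg_prewarm('t1', 'read');

-- 'prefetch' mode: asynchronously prefetch only blocks 0 through 999 of the main fork
SELECT pg_prewarm('t1', 'prefetch', 'main', 0, 999);
```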
We will run the following tests to see whether the speed at which data is loaded into the buffer differs between modes, and what the differences between the modes actually are.
Above is a single table of about 1.3 GB in size.
Next we do an overall COUNT on this table and compare using the buffer cache versus not using it.
1. First we run it cold: a COUNT on the 1.3 GB table returns in under 2 seconds (to be fair, my I/O subsystem is an SSD).
2 We pre-read the data
SELECT pg_prewarm('bloom_table', 'read', 'main');
3 We adjust the mode to buffer mode
EXPLAIN ANALYZE SELECT count(*) FROM bloom_table;
Finally, we use prefetch mode for pre-reading asynchronously.
Basically, there is not much difference in timing between the three methods, but under the hood: 'read' uses the OS page cache, 'buffer' uses PostgreSQL's shared buffers, and 'prefetch' reads the data asynchronously. Another thing to note is that reading a large table into the buffer cache with pg_prewarm is different from simply querying the big table. If you think a plain SELECT * FROM table will pull the whole table into shared buffers, PostgreSQL will not let that happen: large sequential scans go through a small ring buffer, so such an operation only caches a small part of the data. Also, if your memory is relatively small, pay attention to this: suddenly reading a large table into the buffer cache may evict the data you are actually using, which is not a good operation.
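To see how much of a table actually sits in shared_buffers (for example after a pg_prewarm in 'buffer' mode), the stock pg_buffercache extension can be used. A sketch, assuming the default 8 kB block size and the bloom_table from the test above (for a stricter check you would also filter on reldatabase):

```sql
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

SELECT count(*) AS buffers,
       pg_size_pretty(count(*) * 8192) AS cached_size
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE c.relname = 'bloom_table';
```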
This is the end of the article on "how to read PostgreSQL data". I hope the content above helps you learn something; if you think the article is good, please share it so more people can see it.
© 2024 shulou.com SLNews company. All rights reserved.