In this article, the editor shares how to enable Huge Pages for PostgreSQL. Most people are not very familiar with this feature, so it is shared here for your reference; I hope you learn something useful from it.
1. Analysis of high memory usage in large PG instances
To make full use of physical memory and avoid wasting memory space, Linux loads into physical memory only the portions of memory a process is currently using; unused portions are not loaded for the time being. When the PostMaster process registers shared memory, the system allocates only a virtual address space, not physical memory directly. When the memory is actually accessed, the CPU maps the virtual address to a physical memory address. The structure that maintains this mapping is the PageTable, which is responsible for translating virtual memory addresses into physical memory addresses.
Linux memory management uses a paged access mechanism: physical memory is divided into fixed-size (4kB) pages. Each page's virtual-to-physical mapping is maintained as an entry in the PageTable. To speed up translation between virtual and physical addresses, the CPU also caches a certain number of PageTable entries in the TLB.
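As a quick check (not part of the original text), the base page size on a given Linux system can be confirmed with getconf:
getconf PAGE_SIZE    # typically prints 4096, i.e. 4kB pages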
For a multi-process program like PostgreSQL, when the server uses a large amount of shared memory and handles many concurrent connections, the operating system must maintain a separate memory mapping for each process, so the number of PageTable entries becomes very large and the memory they occupy grows accordingly.
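As a rough, illustrative calculation (the 8-byte page-table entry size and the 500-connection count are assumptions, not figures from the original):
64GB shared memory / 4kB per page = 16,777,216 pages
16,777,216 entries * 8 bytes per entry = 128MB of PageTable per backend process
128MB * 500 connections ≈ 62.5GB of PageTable across the whole instance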
2. Improvement with Huge Pages
To cope with this scenario, Linux reduces the PageTable memory consumption of multi-process workloads: since kernel version 2.6, a 2MB memory page size, called a Huge Page, has been available. With Huge Pages, the same amount of physical memory is covered by far fewer pages, which reduces the number of PageTable entries and thereby the memory the system spends on them.
As the world's most advanced open source database, PostgreSQL also supports the Huge Page feature of Linux. When registering shared memory, the server decides whether to request huge page memory according to the configuration parameter huge_pages.
postgresql.conf:
huge_pages = on   # huge pages must be used when registering shared memory
huge_pages = try  # huge pages are tried first when registering shared memory; if the system cannot provide enough, ordinary pages are used instead
huge_pages = off  # huge pages are not used when registering shared memory
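As a quick sanity check (a minimal sketch, assuming a running instance reachable via psql), the effective setting can be read back after a restart, since huge_pages only takes effect at server start:
psql -c "SHOW huge_pages;"    # prints on, try, or off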
Real application scenario: a PG user deployed an instance (shared_buffers = 64GB) on an ECS with 256GB of memory. When the business was busy, ECS memory utilization reached 85%, with the PageTable alone occupying around 120GB. After Huge Pages were enabled, memory usage in the same business scenario dropped below 50%, and the PageTable shrank to only about 300MB.
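The PageTable footprint described above can be observed directly from the operating system (a quick check, not taken from the original scenario):
grep PageTables /proc/meminfo    # memory currently used by page tables, in kB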
3. Enabling Huge Pages for a PG instance
(1) Check the Huge Page size of the operating system: grep Hugepage /proc/meminfo
(2) Estimate the number of Huge Pages required by the PostgreSQL instance (shared memory size / Huge Page size, with roughly 20% headroom): 128GB / 2MB * 1.2 = 78643
(3) Add to /etc/sysctl.conf: vm.nr_hugepages = 78643
(4) Reload the system configuration parameters: sysctl -p
(5) Confirm that the configuration took effect; the total number of Huge Pages should now be 78643.
(6) Confirm that huge_pages is enabled in the PG configuration file.
(7) After starting the PostgreSQL server, the number of free Huge Pages in the system drops, showing that some huge pages are now used by shared memory. (A combined shell sketch of these steps follows.)
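Below is a minimal shell sketch of steps (1) to (7); the value 78643 comes from the estimate in step (2), and the sudo usage and 2MB Hugepagesize are assumptions about the environment:
# (1) check the Huge Page size supported by the OS (typically "Hugepagesize: 2048 kB")
grep Hugepage /proc/meminfo
# (2)-(3) reserve the estimated number of huge pages
echo "vm.nr_hugepages = 78643" | sudo tee -a /etc/sysctl.conf
# (4) reload kernel parameters
sudo sysctl -p
# (5) confirm the reservation (HugePages_Total should show 78643)
grep -i hugepages /proc/meminfo
# (6) set huge_pages = try (or on) in postgresql.conf, then restart PostgreSQL
# (7) after startup, HugePages_Free should drop as shared memory is mapped onto huge pages
grep HugePages_Free /proc/meminfo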
4. Recommendations for the use of Huge Page
Although Huge Pages can relieve excessive server memory usage in certain scenarios, enabling them on every PG instance is not encouraged; turning on Huge Pages blindly may even degrade server performance. Let's look at the suitable scenarios in light of the advantages and disadvantages of Huge Pages.
Huge Page advantages:
(1) The CPU's TLB can cover more physical address space, which improves the TLB hit rate and reduces CPU load.
(2) Memory used by Huge Pages is never swapped, so there is no overhead from swapping memory in and out.
(3) The memory cost of maintaining the PageTable is greatly reduced.
Huge Page disadvantages:
(1) Memory used by Huge Pages must be allocated in advance.
(2) Huge Pages occupy a fixed memory area that is not released back to the system.
(3) In write-intensive scenarios, Huge Pages increase the probability of cache write conflicts.
Therefore, enabling Huge Pages is highly recommended for PG instances with heavy shared memory use (>= 8GB), a large number of connections (>= 500), and scattered hot data. Enabling Huge Pages is not recommended for PG instances with write-intensive workloads, small or concentrated hot data sets, or low memory usage.
5. Notes on enabling Huge Pages for PG
(1) When the configuration parameter huge_pages is set to on, if the shared memory PG needs to register at startup is larger than the Huge Pages provided by the operating system, the database will fail to start. It is therefore recommended to set huge_pages to try; in that case the PostMaster falls back to requesting normal memory instead.
(2) When modifying shared-memory-related GUC parameters such as shared_buffers or wal_buffers, recalculate the number of Huge Pages the operating system needs to reserve, otherwise the server may fail to start, or some huge pages may sit unused and cannot be released.
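For example (an illustrative recalculation; the 96GB figure is hypothetical), if total shared memory later grows to about 96GB, redo the reservation before restarting: 96GB / 2MB * 1.2 ≈ 58983, so update vm.nr_hugepages to that value and run sysctl -p. On PostgreSQL 15 and later (a version assumption), the server can also report the exact requirement directly while it is stopped:
postgres -D $PGDATA -C shared_memory_size_in_huge_pages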
That is all the content of this article on how PostgreSQL enables Huge Pages. Thank you for reading! I hope the shared content helps you; if you want to learn more, welcome to follow the industry information channel!