
What are the operating system kernel parameters a DBA must know?


This article explains in detail the operating system kernel parameters a DBA should know. It is shared here as a reference; I hope you will gain some understanding of the relevant knowledge after reading it.

Background: why a DBA must know the operating system kernel parameters

To adapt to a wide range of hardware environments, operating systems ship with many conservative initial settings chosen for high tolerance.

If these values are not adjusted, they may be a poor fit for HPC or even slightly better hardware environments.

The hardware then cannot deliver its full performance, and some application software, especially databases, may even be affected.

OS kernel parameters of concern to the database

Take a host with 512GB of memory as an example.

1.

Parameters.

fs.aio-max-nr

Support system

CentOS 6, 7

Parameter interpretation

aio-nr & aio-max-nr: aio-nr is the running total of the number of events specified on the io_setup system call for all currently active aio contexts. If aio-nr reaches aio-max-nr then io_setup will fail with EAGAIN. Note that raising aio-max-nr does not result in the pre-allocation or re-sizing of any kernel data structures. aio-nr shows the current system-wide number of asynchronous io requests; aio-max-nr allows you to change the maximum value aio-nr can grow to.

Recommended setting

fs.aio-max-nr = 1xxxxxx. Neither PostgreSQL nor Greenplum uses io_setup to create aio contexts, so no setting is required for them. If you want to use asynchronous IO in an Oracle database, you need to set it. There is no harm in setting it, and if you need to adopt asynchronous IO in the future, you will not have to modify this setting again.
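A quick sketch for checking current aio usage against the limit and applying a value (1048576 below is only a hypothetical stand-in for the elided 1xxxxxx):

# cat /proc/sys/fs/aio-nr /proc/sys/fs/aio-max-nr
# sysctl -w fs.aio-max-nr=1048576

The first command prints the running total of aio events and the current ceiling described above.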

2.

Parameters.

fs.file-max

Support system

CentOS 6, 7

Parameter interpretation

file-max & file-nr: The value in file-max denotes the maximum number of file handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit. Historically, the kernel was able to allocate file handles dynamically, but not to free them again. The three values in file-nr denote: the number of allocated file handles, the number of allocated but unused file handles, and the maximum number of file handles. Linux 2.6 always reports 0 as the number of free file handles; this is not an error, it just means that the number of allocated file handles exactly matches the number of used file handles. Attempts to allocate more file descriptors than file-max are reported with printk; look for "VFS: file-max limit reached".

Recommended setting

fs.file-max = 7xxxxxxx. PostgreSQL manages a set of virtual file descriptors itself, with a mapping between the FDs it has open and the files the kernel actually opens and closes (see the max_files_per_process parameter), so in practice it does not need that many file handles. Assuming 1GB of memory supports 100 connections and each connection opens 1000 files, one PG instance needs to open 100,000 files; a machine with 512GB of memory supports about 50,000 connections, which requires about 50 million file handles. The above setting is more than sufficient.
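To see the three file-nr values described above on a running system:

# cat /proc/sys/fs/file-nr

The three columns are, in order: allocated file handles, allocated but unused file handles, and the file-max limit.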

3.

Parameters.

kernel.core_pattern

Support system

CentOS 6, 7

Parameter interpretation

core_pattern is used to specify a core dumpfile pattern name. Max length 128 characters; default value is "core". core_pattern is used as a pattern template for the output filename; certain string patterns (beginning with '%') are substituted with their actual values. Backward compatibility with core_uses_pid: if core_pattern does not include "%p" (the default does not) and core_uses_pid is set, then .pid will be appended to the filename. Corename format specifiers:

%<NUL>  '%' is dropped
%%      output one '%'
%p      pid
%P      global pid (init PID namespace)
%i      tid
%I      global tid (init PID namespace)
%u      uid
%g      gid
%d      dump mode, matches PR_SET_DUMPABLE and /proc/sys/fs/suid_dumpable
%s      signal number
%t      UNIX time of dump
%h      hostname
%e      executable filename (may be shortened)
%E      executable path
%<OTHER> both are dropped

If the first character of the pattern is a '|', the kernel will treat the rest of the pattern as a command to run. The core dump will be written to the standard input of that program instead of to a file.

Recommended setting

kernel.core_pattern = /xxx/core_%e_%u_%t_%s.%p. This directory needs 777 permissions (if it is a symlink, the real directory needs 777 permissions):

mkdir /xxx
chmod 777 /xxx

Leave enough space in this directory.
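A minimal sketch for applying the pattern and verifying it with a forced crash (the directory /xxx follows the placeholder above; the sleep process is just an illustrative victim):

# mkdir /xxx && chmod 777 /xxx
# sysctl -w kernel.core_pattern=/xxx/core_%e_%u_%t_%s.%p
# ulimit -c unlimited
# sleep 600 &
# kill -SEGV %1

A file such as /xxx/core_sleep_<uid>_<time>_11.<pid> should then appear (11 is the signal number of SIGSEGV).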

4.

Parameters.

kernel.sem

Support system

CentOS 6, 7

Parameter interpretation

kernel.sem = 4096 2147483647 2147483646 512000

4096: number of semaphores per set (>= 17; PostgreSQL needs 17 semaphores for every 16 processes).
2147483647: total number of semaphores system-wide (2^31 - 1, and greater than 4096 * 512000).
2147483646: how many operations are supported per semop() call (2^31 - 1).
512000: how many semaphore sets are allowed (assuming 100 connections per GB, 512GB supports 51200 connections; counting other processes as well, anything above 51200 * 17 / 16 is more than enough).

# sysctl -w kernel.sem="4096 2147483647 2147483646 512000"
# ipcs -s -l

------ Semaphore Limits --------
max number of arrays = 512000
max semaphores per array = 4096
max semaphores system wide = 2147483647
max ops per semop call = 2147483646
semaphore max value = 32767

Recommended setting

kernel.sem = 4096 2147483647 2147483646 512000. 4096 semaphores per set suits most scenarios, and it does no harm to make it bigger; the key point is that 512000 sets is enough.
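A rough worked check of the headroom under the assumptions above (100 connections per GB on a 512GB host):

51200 connections / 16 processes per set = 3200 semaphore sets
3200 sets * 17 semaphores per set = 54400 semaphores

Both figures are far below the limits of 512000 sets and 2147483647 semaphores.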

5.

Parameters.

kernel.shmall = 107374182 kernel.shmmax = 274877906944 kernel.shmmni = 819200

Support system

CentOS 6, 7

Parameter interpretation

Assume 512GB of host memory.
shmmax: maximum size of a single shared memory segment, here 256GB (half of the host memory, in bytes).
shmall: maximum total size of all shared memory segments combined (80% of the host memory, in units of PAGE).
shmmni: a total of 819200 shared memory segments are allowed (2 shared memory segments are required for each database startup; dynamic creation of shared memory segments may be allowed in the future, which could increase demand).
# getconf PAGE_SIZE
4096

Recommended setting

kernel.shmall = 107374182 kernel.shmmax = 274877906944 kernel.shmmni = 819200. In 9.2 and earlier versions, the database's demand for shared memory segments at startup was very large. The following points need to be considered:

Connections: (1800 + 270 * max_locks_per_transaction) * max_connections
Autovacuum workers: (1800 + 270 * max_locks_per_transaction) * autovacuum_max_workers
Prepared transactions: (770 + 270 * max_locks_per_transaction) * max_prepared_transactions
Shared disk buffers: (block_size + 208) * shared_buffers
WAL buffers: (wal_block_size + 8) * wal_buffers
Fixed space requirements: 770 kB

The recommended values above are sized according to versions prior to 9.2; they are also applicable to later versions.
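The three recommended values can be derived from the 512GB assumption (page size 4096 bytes):

shmmax = 512GB / 2 = 274877906944 bytes
shmall = 512GB * 0.8 / 4096 = 549755813888 * 0.8 / 4096 ≈ 107374182 pages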

6.

Parameters.

net.core.netdev_max_backlog

Support system

CentOS 6, 7

Parameter interpretation

netdev_max_backlog: maximum number of packets queued on the INPUT side when the interface receives packets faster than the kernel can process them.

Recommended setting

net.core.netdev_max_backlog=1xxxx. The longer the INPUT queue, the more expensive each packet is to process; this needs to be increased if iptables management is used.
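To check whether this backlog overflows under load, the second column of /proc/net/softnet_stat (one row per CPU, values in hexadecimal) counts packets dropped because the queue was full:

# cat /proc/net/softnet_stat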

7.

Parameters.

net.core.rmem_default net.core.rmem_max net.core.wmem_default net.core.wmem_max

Support system

CentOS 6, 7

Parameter interpretation

rmem_default: the default setting of the socket receive buffer in bytes.
rmem_max: the maximum receive socket buffer size in bytes.
wmem_default: the default setting (in bytes) of the socket send buffer.
wmem_max: the maximum send socket buffer size in bytes.

Recommended setting

net.core.rmem_default = 262144 net.core.rmem_max = 4194304 net.core.wmem_default = 262144 net.core.wmem_max = 4194304
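All of the sysctl values in this article can be applied and persisted the same way; a minimal sketch using the buffer sizes above:

# cat >> /etc/sysctl.conf <<EOF
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 4194304
EOF
# sysctl -p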

8.

Parameters.

net.core.somaxconn

Support system

CentOS 6, 7

Parameter interpretation

somaxconn - INTEGER: limit of the socket listen() backlog, known in userspace as SOMAXCONN. Defaults to 128. See also tcp_max_syn_backlog for additional tuning for TCP sockets.

Recommended setting

net.core.somaxconn=4xxx

9.

Parameters.

net.ipv4.tcp_max_syn_backlog

Support system

CentOS 6, 7

Parameter interpretation

tcp_max_syn_backlog - INTEGER: maximal number of remembered connection requests which have not yet received an acknowledgment from the connecting client. The minimal value is 128 for low-memory machines, and it increases in proportion to the machine's memory. If the server suffers from overload, try increasing this number.

Recommended setting

net.ipv4.tcp_max_syn_backlog=4xxx. pgpool-II uses this value to queue connections beyond num_init_children, so this value determines how many connections can wait in the queue.

10.

Parameters.

net.ipv4.tcp_keepalive_intvl=20 net.ipv4.tcp_keepalive_probes=3 net.ipv4.tcp_keepalive_time=60

Support system

CentOS 6, 7

Parameter interpretation

tcp_keepalive_time - INTEGER: how often TCP sends out keepalive messages when keepalive is enabled. Default: 2 hours.
tcp_keepalive_probes - INTEGER: how many keepalive probes TCP sends out until it decides that the connection is broken. Default value: 9.
tcp_keepalive_intvl - INTEGER: how frequently the probes are sent out. Multiplied by tcp_keepalive_probes, it gives the time to kill a non-responding connection after probes start. Default value: 75 sec, i.e. the connection will be aborted after ~11 minutes of retries.

Recommended setting

net.ipv4.tcp_keepalive_intvl=20 net.ipv4.tcp_keepalive_probes=3 net.ipv4.tcp_keepalive_time=60. After a connection has been idle for 60 seconds, a heartbeat packet is sent every 20 seconds; after 3 unanswered heartbeats, the connection is closed. From going idle to closing the connection therefore takes 60 + 3 * 20 = 120 seconds in total.

11.

Parameters.

net.ipv4.tcp_mem=8388608 12582912 16777216

Support system

CentOS 6, 7

Parameter interpretation

tcp_mem - vector of 3 INTEGERs: min, pressure, max (unit: pages).
min: below this number of pages TCP is not bothered about its memory appetite.
pressure: when the amount of memory allocated by TCP exceeds this number of pages, TCP moderates its memory consumption and enters memory pressure mode, which is exited when memory consumption falls under "min".
max: number of pages allowed for queueing by all TCP sockets.
Defaults are calculated at boot time from the amount of available memory. With 64GB of memory, the automatically calculated values look like net.ipv4.tcp_mem = 1539615 2052821 3079230; with 512GB of memory, they look like net.ipv4.tcp_mem = 49621632 66162176 99243264.

Recommended setting

net.ipv4.tcp_mem=8388608 12582912 16777216. It is also perfectly fine to let the operating system calculate this parameter automatically at boot.

12.

Parameters.

net.ipv4.tcp_fin_timeout

Support system

CentOS 6, 7

Parameter interpretation

tcp_fin_timeout - INTEGER: the length of time an orphaned (no longer referenced by any application) connection will remain in the FIN_WAIT_2 state before it is aborted at the local end. While FIN_WAIT_2 is a perfectly valid "receive only" state for an un-orphaned connection, an orphaned connection in this state could otherwise wait forever for the remote to close its end of the connection. Cf. tcp_max_orphans. Default: 60 seconds.

Recommended setting

net.ipv4.tcp_fin_timeout=5. Speeds up the reclamation of orphaned (zombie) connections.

13.

Parameters.

net.ipv4.tcp_synack_retries

Support system

CentOS 6, 7

Parameter interpretation

tcp_synack_retries - INTEGER: number of times SYNACKs for a passive TCP connection attempt will be retransmitted. Should not be higher than 255. Default value is 5, which corresponds to 31 seconds till the last retransmission with the current initial RTO of 1 second. With this, the final timeout for a passive TCP connection happens after 63 seconds.

Recommended setting

net.ipv4.tcp_synack_retries=2. Reduces the SYN-ACK retransmission timeout.

14.

Parameters.

net.ipv4.tcp_syncookies

Support system

CentOS 6, 7

Parameter interpretation

tcp_syncookies - BOOLEAN: only valid when the kernel was compiled with CONFIG_SYN_COOKIES. Send out syncookies when the SYN backlog queue of a socket overflows. This is to prevent the common 'SYN flood attack'. Default: 1. Note that syncookies is a fallback facility. It MUST NOT be used to help highly loaded servers stand against a legal connection rate. If you see SYN flood warnings in your logs, but investigation shows that they occur because of overload with legal connections, you should tune other parameters until this warning disappears. See: tcp_max_syn_backlog, tcp_synack_retries, tcp_abort_on_overflow. Syncookies seriously violate the TCP protocol, do not allow the use of TCP extensions, and can result in serious degradation of some services (e.g. SMTP relaying), visible not to you, but to your clients and relays contacting you. If you see SYN flood warnings in logs while not really being flooded, your server is seriously misconfigured. If you want to test what effects syncookies have on your network connections, you can set this knob to 2 to enable unconditional generation of syncookies.

Recommended setting

net.ipv4.tcp_syncookies=1. Protects against SYN flood attacks.

15.

Parameters.

net.ipv4.tcp_timestamps

Support system

CentOS 6, 7

Parameter interpretation

tcp_timestamps - BOOLEAN: enable timestamps as defined in RFC 1323.

Recommended setting

net.ipv4.tcp_timestamps=1. tcp_timestamps is a TCP protocol extension. Using timestamps to detect stale packets enables PAWS (Protect Against Wrapped Sequence numbers) and can improve TCP performance.

16.

Parameters.

net.ipv4.tcp_tw_recycle net.ipv4.tcp_tw_reuse net.ipv4.tcp_max_tw_buckets

Support system

CentOS 6, 7

Parameter interpretation

tcp_tw_recycle - BOOLEAN: enable fast recycling of TIME-WAIT sockets. Default value is 0. It should not be changed without the advice/request of technical experts.
tcp_tw_reuse - BOOLEAN: allow reuse of TIME-WAIT sockets for new connections when it is safe from the protocol viewpoint. Default value is 0. It should not be changed without the advice/request of technical experts.
tcp_max_tw_buckets - INTEGER: maximal number of timewait sockets held by the system simultaneously. If this number is exceeded, the time-wait socket is immediately destroyed and a warning is printed. This limit exists only to prevent simple DoS attacks; you must not lower the limit artificially, but rather increase it (probably after increasing installed memory) if network conditions require more than the default value.

Recommended setting

net.ipv4.tcp_tw_recycle=0 net.ipv4.tcp_tw_reuse=1 net.ipv4.tcp_max_tw_buckets = 2xxxxx. It is not recommended to turn on net.ipv4.tcp_tw_recycle and net.ipv4.tcp_timestamps at the same time.

17.

Parameters.

net.ipv4.tcp_rmem net.ipv4.tcp_wmem

Support system

CentOS 6, 7

Parameter interpretation

tcp_wmem - vector of 3 INTEGERs: min, default, max.
min: amount of memory reserved for send buffers for TCP sockets. Each TCP socket has the right to use it by the fact of its birth. Default: 1 page.
default: initial size of the send buffer used by TCP sockets. This value overrides net.core.wmem_default used by other protocols. It is usually lower than net.core.wmem_default. Default: 16K.
max: maximal amount of memory allowed for automatically tuned send buffers for TCP sockets. This value does not override net.core.wmem_max. Calling setsockopt() with SO_SNDBUF disables automatic tuning of that socket's send buffer size, in which case this value is ignored. Default: between 64K and 4MB, depending on RAM size.
tcp_rmem - vector of 3 INTEGERs: min, default, max.
min: minimal size of the receive buffer used by TCP sockets. It is guaranteed to each TCP socket, even under moderate memory pressure. Default: 1 page.
default: initial size of the receive buffer used by TCP sockets. This value overrides net.core.rmem_default used by other protocols. Default: 87380 bytes. This value results in a window of 65535 with the default setting of tcp_adv_win_scale and tcp_app_win:0, and a bit less for the default tcp_app_win.
max: maximal size of the receive buffer allowed for automatically selected receive buffers for TCP sockets. This value does not override net.core.rmem_max. Calling setsockopt() with SO_RCVBUF disables automatic tuning of that socket's receive buffer size, in which case this value is ignored. Default: between 87380B and 6MB, depending on RAM size.

Recommended setting

net.ipv4.tcp_rmem=8192 87380 16777216 net.ipv4.tcp_wmem=8192 65536 16777216. These are the settings many databases recommend to improve network performance.

18.

Parameters.

net.nf_conntrack_max net.netfilter.nf_conntrack_max

Support system

CentOS 6

Parameter interpretation

nf_conntrack_max - INTEGER: size of the connection tracking table. Default value is nf_conntrack_buckets * 4.

Recommended setting

net.nf_conntrack_max=1xxxxxx net.netfilter.nf_conntrack_max=1xxxxxx

19.

Parameters.

vm.dirty_background_bytes vm.dirty_expire_centisecs vm.dirty_ratio vm.dirty_writeback_centisecs

Support system

CentOS 6, 7

Parameter interpretation

dirty_background_bytes: contains the amount of dirty memory at which the background kernel flusher threads will start writeback. Note: dirty_background_bytes is the counterpart of dirty_background_ratio. Only one of them may be specified at a time. When one sysctl is written, it is immediately taken into account to evaluate the dirty memory limits, and the other appears as 0 when read.
dirty_background_ratio: contains, as a percentage of total system memory, the number of pages at which the background kernel flusher threads will start writing out dirty data.
dirty_bytes: contains the amount of dirty memory at which a process generating disk writes will itself start writeback. Note: dirty_bytes is the counterpart of dirty_ratio. Only one of them may be specified at a time. When one sysctl is written, it is immediately taken into account to evaluate the dirty memory limits, and the other appears as 0 when read. Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any value lower than this limit will be ignored and the old configuration will be retained.
dirty_expire_centisecs: this tunable defines when dirty data is old enough to be eligible for writeout by the kernel flusher threads, expressed in 100ths of a second. Data which has been dirty in memory for longer than this interval will be written out the next time a flusher thread wakes up.
dirty_ratio: contains, as a percentage of total system memory, the number of pages at which a process which is generating disk writes will itself start writing out dirty data.
dirty_writeback_centisecs: the kernel flusher threads periodically wake up and write 'old' data out to disk. This tunable expresses the interval between those wakeups, in 100ths of a second. Setting this to zero disables periodic writeback altogether.

Recommended setting

vm.dirty_background_bytes = 4096000000 vm.dirty_expire_centisecs = 6000 vm.dirty_ratio = 80 vm.dirty_writeback_centisecs = 50. Reduces how often database processes have to flush dirty pages themselves. dirty_background_bytes should be set according to the actual IOPS capability and memory size.
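To watch how much dirty memory is pending writeback while tuning these values, the Dirty and Writeback fields of /proc/meminfo are the relevant counters:

# grep -E '^(Dirty|Writeback):' /proc/meminfo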

20.

Parameters.

vm.extra_free_kbytes

Support system

CentOS 6

Parameter interpretation

extra_free_kbytes: this parameter tells the VM to keep extra free memory between the threshold where background reclaim (kswapd) kicks in and the threshold where direct reclaim (by allocating processes) kicks in. This is useful for workloads that require low-latency memory allocations and have a bounded burstiness in memory allocations; for example, a realtime application that receives and transmits network traffic (causing in-kernel memory allocations) with a maximum total message burst size of 200MB may need 200MB of extra free memory to avoid direct-reclaim-related latencies. The goal is to have the background process reclaim memory earlier than user processes by this many kbytes, so that user processes can allocate memory quickly.

Recommended setting

vm.extra_free_kbytes=4xxxxxx

21.

Parameters.

vm.min_free_kbytes

Support system

CentOS 6, 7

Parameter interpretation

min_free_kbytes: this is used to force the Linux VM to keep a minimum number of kilobytes free. The VM uses this number to compute a watermark[WMARK_MIN] value for each lowmem zone in the system. Each lowmem zone gets a number of reserved free pages based proportionally on its size. Some minimal amount of memory is needed to satisfy PF_MEMALLOC allocations; if you set this lower than 1024KB, your system will become subtly broken and prone to deadlock under high loads. Setting this too high will OOM your machine instantly.

Recommended setting

vm.min_free_kbytes = 2xxxxxx # it is recommended to allocate 1GB of vm.min_free_kbytes for every 32GB of memory. This prevents the system from becoming unresponsive under high load and reduces the probability of deadlock during memory allocation.

22.

Parameters.

vm.mmap_min_addr

Support system

CentOS 6, 7

Parameter interpretation

mmap_min_addr: this file indicates the amount of address space which a user process will be restricted from mmapping. Since kernel null dereference bugs could accidentally operate based on the information in the first couple of pages of memory, userspace processes should not be allowed to write to them. By default this value is set to 0 and no protections will be enforced by the security module. Setting this value to something like 64k will allow the vast majority of applications to work correctly and provide defense in depth against future potential kernel bugs.

Recommended setting

vm.mmap_min_addr=6xxxx. Guards against problems caused by latent kernel bugs.

23.

Parameters.

vm.overcommit_memory vm.overcommit_ratio

Support system

CentOS 6, 7

Parameter interpretation

overcommit_kbytes: when overcommit_memory is set to 2, the committed address space is not permitted to exceed swap plus this amount of physical RAM. See below. Note: overcommit_kbytes is the counterpart of overcommit_ratio. Only one of them may be specified at a time. Setting one disables the other (which then appears as 0 when read).
overcommit_memory: this value contains a flag that enables memory overcommitment. When this flag is 0, the kernel attempts to estimate the amount of free memory left when userspace requests more memory. When this flag is 1, the kernel pretends there is always enough memory until it actually runs out. When this flag is 2, the kernel uses a "never overcommit" policy that attempts to prevent any overcommit of memory; note that user_reserve_kbytes affects this policy. This feature can be very useful because there are a lot of programs that malloc() huge amounts of memory "just-in-case" and don't use much of it. The default value is 0. See Documentation/vm/overcommit-accounting and security/commoncap.c::cap_vm_enough_memory() for more information.
overcommit_ratio: when overcommit_memory is set to 2, the committed address space is not permitted to exceed swap + this percentage of physical RAM. See above.

Recommended setting

vm.overcommit_memory = 0 vm.overcommit_ratio = 90. vm.overcommit_ratio does not need to be set when vm.overcommit_memory = 0; it only takes effect when vm.overcommit_memory = 2.
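To inspect the current commit accounting (under overcommit_memory = 2, CommitLimit = swap + physical RAM * overcommit_ratio / 100):

# grep -E '^(CommitLimit|Committed_AS):' /proc/meminfo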

24.

Parameters.

vm.swappiness

Support system

CentOS 6, 7

Parameter interpretation

swappiness: this control defines how aggressively the kernel will swap memory pages. Higher values increase aggressiveness; lower values decrease the amount of swap. The default value is 60.

Recommended setting

vm.swappiness = 0

25.

Parameters.

vm.zone_reclaim_mode

Support system

CentOS 6, 7

Parameter interpretation

zone_reclaim_mode: allows setting a more or less aggressive approach to reclaiming memory when a zone runs out of memory. If it is set to zero, no zone reclaim occurs; allocations will be satisfied from other zones/nodes in the system. The value is OR'ed together from:

1 = zone reclaim on
2 = zone reclaim writes dirty pages out
4 = zone reclaim swaps pages

zone_reclaim_mode is disabled by default. For file servers or workloads that benefit from having their data cached, zone_reclaim_mode should be left disabled, as the caching effect is likely to be more important than data locality. zone_reclaim may be enabled if it is known that the workload is partitioned such that each partition fits within a NUMA node and that accessing remote memory would cause a measurable performance reduction. The page allocator will then reclaim easily reusable pages (those page cache pages that are currently not used) before allocating off-node pages. Allowing zone reclaim to write out pages stops processes that are writing large amounts of data from dirtying pages on other nodes. Zone reclaim will write out dirty pages if a zone fills up, and so effectively throttles the process. This may decrease the performance of a single process, since it can no longer use all of system memory to buffer outgoing writes, but it preserves the memory on other nodes so that the performance of other processes running on other nodes will not be affected. Allowing regular swap effectively restricts allocations to the local node unless explicitly overridden by memory policies or cpuset configurations.

Recommended setting

vm.zone_reclaim_mode=0. Disables NUMA zone reclaim.

26.

Parameters.

net.ipv4.ip_local_port_range

Support system

CentOS 6, 7

Parameter interpretation

ip_local_port_range - 2 INTEGERS: defines the local port range that is used by TCP and UDP to choose the local port. The first number is the first local port number, the second the last. The default values are 32768 and 61000 respectively.
ip_local_reserved_ports - list of comma-separated ranges: specifies the ports which are reserved for known third-party applications. These ports will not be used by automatic port assignments (e.g. when calling connect() or bind() with port number 0); explicit port allocation behavior is unchanged. The format used for both input and output is a comma-separated list of ranges (e.g. "1,2-4,10-10"). Writing to the file will clear all previously reserved ports and update the current list with the one given in the input. Note that the ip_local_port_range and ip_local_reserved_ports settings are independent, and both are considered by the kernel when determining which ports are available for automatic port assignments. You can reserve ports which are not in the current ip_local_port_range, e.g.:

$ cat /proc/sys/net/ipv4/ip_local_port_range
32000 61000
$ cat /proc/sys/net/ipv4/ip_local_reserved_ports
8080,9148

Although this is redundant, such a setting is useful if the port range is later changed to a value that will include the reserved ports. Default: empty.

Recommended setting

net.ipv4.ip_local_port_range=40000 65535. Limits the range of local dynamic port allocation to prevent dynamic ports from occupying listening ports.
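A sketch of additionally reserving a database listening port so that automatic assignment can never take it (5432, PostgreSQL's default port, is only an example):

# sysctl -w net.ipv4.ip_local_port_range="40000 65535"
# sysctl -w net.ipv4.ip_local_reserved_ports="5432"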

27.

Parameters.

vm.nr_hugepages

Support system

CentOS 6, 7

Parameter interpretation

nr_hugepages: changes the minimum size of the hugepage pool. See Documentation/vm/hugetlbpage.txt.
nr_overcommit_hugepages: changes the maximum size of the hugepage pool. The maximum is nr_hugepages + nr_overcommit_hugepages. See Documentation/vm/hugetlbpage.txt.
The output of "cat /proc/meminfo" will include lines like:

HugePages_Total: vvv
HugePages_Free: www
HugePages_Rsvd: xxx
HugePages_Surp: yyy
Hugepagesize: zzz kB

where: HugePages_Total is the size of the pool of huge pages. HugePages_Free is the number of huge pages in the pool that are not yet allocated. HugePages_Rsvd is short for "reserved" and is the number of huge pages for which a commitment to allocate from the pool has been made, but for which no allocation has yet been made; reserved huge pages guarantee that an application will be able to allocate a huge page from the pool at fault time. HugePages_Surp is short for "surplus" and is the number of huge pages in the pool above the value in /proc/sys/vm/nr_hugepages; the maximum number of surplus huge pages is controlled by /proc/sys/vm/nr_overcommit_hugepages.
/proc/filesystems should also show a filesystem of type "hugetlbfs" configured in the kernel.
/proc/sys/vm/nr_hugepages indicates the current number of "persistent" huge pages in the kernel's huge page pool. "Persistent" huge pages are returned to the huge page pool when freed by a task. A user with root privileges can dynamically allocate more or free some persistent huge pages by increasing or decreasing the value of nr_hugepages.

Recommended setting

If you want to use PostgreSQL's huge page support, it is recommended to set this parameter so that the huge page pool is larger than the shared memory required by the database.
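One common way to size the pool, assuming a running instance with $PGDATA pointing at its data directory (this follows the sizing approach described in the PostgreSQL documentation):

# pid=$(head -1 $PGDATA/postmaster.pid)
# grep ^VmPeak /proc/$pid/status
# grep ^Hugepagesize /proc/meminfo

Dividing VmPeak by Hugepagesize gives a lower bound for vm.nr_hugepages.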

28.

Parameters.

fs.nr_open

Support system

CentOS 6, 7

Parameter interpretation

nr_open: this denotes the maximum number of file handles a process can allocate. Default value is 1024*1024 (1048576), which should be enough for most machines. The actual limit depends on the RLIMIT_NOFILE resource limit. It also caps the file handle limits in security/limits.conf: the open handle limit of a single process cannot be greater than fs.nr_open, so to raise the per-process file handle limit you must first raise nr_open.

Recommended setting

For PostgreSQL databases with many objects (tables, views, indexes, sequences, materialized views, etc.), it is recommended to set it to 20 million, e.g. fs.nr_open=20480000.

Resource limits of concern to the database

1. Set them via /etc/security/limits.conf, or via ulimit.

2. View the settings of the current process via /proc/$pid/limits.

#        - core - limits the core file size (KB)
#        - memlock - max locked-in-memory address space (KB)
#        - nofile - max number of open files
#        - nproc - max number of processes

The above four items are of the greatest concern. nofile is recommended to be set to 10 million, but fs.nr_open must also be set (via sysctl) to a value greater than it, otherwise the system will not be able to log in. A sketch of the corresponding limits.conf entries follows this list.

#        - data - max data size (KB)
#        - fsize - maximum filesize (KB)
#        - rss - max resident set size (KB)
#        - stack - max stack size (KB)
#        - cpu - max CPU time (MIN)
#        - as - address space limit (KB)
#        - maxlogins - max number of logins for this user
#        - maxsyslogins - max number of logins on the system
#        - priority - the priority to run user process with
#        - locks - max number of file locks the user can hold
#        - sigpending - max number of pending signals
#        - msgqueue - max memory used by POSIX message queues (bytes)
#        - nice - max nice priority allowed to raise to values: [-20, 19]
#        - rtprio - max realtime priority
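A minimal /etc/security/limits.conf sketch following the recommendation above (the user name postgres and the exact value are illustrative; fs.nr_open must already be larger than the nofile value):

postgres soft nofile 10240000
postgres hard nofile 10240000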

IO scheduling rules of concern to the database

1. Currently, the IO scheduling strategies supported by the operating system include cfq, deadline, noop, and so on.

The kernel documentation directory Documentation/block/ (shipped in the kernel-doc package) covers them, including: 00-INDEX, biodoc.txt, capability.txt, cfq-iosched.txt, data-integrity.txt, deadline-iosched.txt, ioprio.txt, null_blk.txt, queue-sysfs.txt, request.txt, stat.txt, switching-sched.txt, writeback_cache_control.txt.

If you want to learn more about the rules of these scheduling policies, you can check the WIKI or the kernel documentation.

You can see a device's current scheduling policy like this (the entry in brackets is the active one):

cat /sys/block/vdb/queue/scheduler
noop [deadline] cfq

Modify

echo deadline > /sys/block/hda/queue/scheduler

Or modify the startup parameters

grub.conf: elevator=deadline

Many test results show that database performance is more stable with the deadline scheduler.
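For per-device persistence without editing the boot line, a udev rule is another commonly used approach (a sketch; the device match pattern varies by environment):

# cat > /etc/udev/rules.d/60-scheduler.rules <<EOF
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="deadline"
EOF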

That concludes this share on the operating system kernel parameters a DBA should know. I hope the above content is of some help and lets you learn more. If you think the article is good, you can share it for more people to see.
