2025-01-18 Update From: SLTechnology News & Howtos
This article analyzes Linux kernel tuning in detail. The explanations are meant to be easy to follow and the operations practical, so the material should have some reference value for anyone interested in learning more about Linux kernel tuning.
I. Kernel file system
After the operating system boots, users still need a way to interact with the running kernel, and kernel parameters provide that interface. Linux exposes a virtual file system, /proc, through which user space communicates with kernel space: reading and writing the virtual files under /proc is a means of talking to entities inside the kernel. Unlike ordinary files, the contents of these virtual files are generated dynamically. Although they return plenty of information when viewed, the file size is displayed as 0 bytes, because these files reside in memory rather than on disk.
For ease of viewing and use, these files are grouped by topic into directories and subdirectories, for example:
/proc/sys/net holds network-related kernel parameters
/proc/sys/kernel holds kernel-related parameters
/proc/sys/vm holds memory-related parameters
/proc/sys/fs holds file-system-related parameters
In addition, driver information for hardware devices lives mainly under /sys; block-device tuning, for instance, is done under /sys/block.
The following sections describe, by category, common kernel parameter optimization strategies and techniques for Linux servers.
II. How to optimize kernel parameters
To inspect the current value of a kernel parameter, view the corresponding file under /proc/sys/ with the cat command. Suppose we want to enable IP forwarding for proxying on Linux: the kernel parameter ip_forward corresponds to the file /proc/sys/net/ipv4/ip_forward, which we can modify directly. View the contents of the ip_forward file with the following command:
[root@liangxu ~]# cat /proc/sys/net/ipv4/ip_forward
0
The default value of this virtual file is 0, meaning IP forwarding is disabled; changing it to 1 enables IP forwarding. The modification command is as follows:
[root@liangxu ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
The modification takes effect immediately: the kernel has now enabled IP forwarding.
As you can see, kernel parameters can be modified easily with echo and cat, which works on almost all operating systems that provide a /proc file system, but there are two obvious problems:
The echo command cannot check the value for consistency.
After the system reboots, all such changes are lost.
To solve these problems, operators should use sysctl to modify kernel parameters. sysctl takes the file path under /proc/sys, with slashes replaced by dots, as the parameter name. For example, the msgmnb kernel parameter is stored in /proc/sys/kernel/msgmnb, which we can read with cat and modify with echo:
[root@liangxu ~]# cat /proc/sys/kernel/msgmnb
16384
[root@liangxu ~]# echo 32768 > /proc/sys/kernel/msgmnb
[root@liangxu ~]# cat /proc/sys/kernel/msgmnb
32768
However, using echo is error-prone, so we recommend the sysctl command, which checks data consistency before making a change, for example:
[root@liangxu ~]# sysctl kernel.msgmnb
kernel.msgmnb = 32768
[root@liangxu ~]# sysctl -w kernel.msgmnb=40960
kernel.msgmnb = 40960
[root@liangxu ~]# sysctl kernel.msgmnb
kernel.msgmnb = 40960
As shown, running sysctl with a parameter name prints the parameter's current value; to modify it, run "sysctl -w" followed by the parameter and its new value. This seems perfect, but one problem remains: these sysctl changes are also lost after a reboot. To make a change permanent, edit the /etc/sysctl.conf file and add a line such as:
kernel.msgmnb = 40960
The system will read /etc/sysctl.conf automatically on the next boot. You can also make the configuration take effect immediately, without rebooting, by running:
[root@liangxu ~]# sysctl -p

III. Optimization of network kernel parameters
Network kernel parameter tuning is essential when Linux serves as a web server. Linux has many network-related kernel parameters, and sensible settings can maximize a web server's network connection performance. Below we cover the most common such parameters for web servers, with a detailed look at their meaning and how to tune them.
1. /proc/sys/net/ipv4/tcp_syn_retries
This parameter controls how many SYN requests the kernel sends for a new outgoing connection before giving up. The value should not exceed 255; the default is 5, and 2 is recommended.
Setting method:
echo 2 > /proc/sys/net/ipv4/tcp_syn_retries
2. /proc/sys/net/ipv4/tcp_keepalive_time
This parameter sets how often TCP sends keepalive messages when keepalive is enabled. The default is 7200 seconds; 300 seconds is recommended.
The recommended setting for a typical web server is:
echo 300 > /proc/sys/net/ipv4/tcp_keepalive_time
3. /proc/sys/net/ipv4/tcp_orphan_retries
This parameter sets the number of retries before an orphaned socket is abandoned. Reducing it is recommended on heavily loaded web servers.
The recommended setting for a typical web server is "echo 1 > /proc/sys/net/ipv4/tcp_orphan_retries".
4. /proc/sys/net/ipv4/tcp_syncookies
This parameter enables SYN cookies: when the SYN waiting queue overflows, cookies are used to handle the overflow, which mitigates small-scale SYN flood attacks. The default is 0 (off). For a typical web server, "echo 1 > /proc/sys/net/ipv4/tcp_syncookies" is recommended.
The SYN flood mentioned here is one of the most popular forms of DoS (denial of service) and DDoS (distributed denial of service) attack. It exploits a weakness in the TCP handshake: the attacker sends a large number of forged first-handshake packets (SYN packets), usually from spoofed IP addresses or address ranges. The attacked server responds with the second-handshake packet (SYN+ACK), but since the source IP is forged, nobody ever receives it or replies with the third-handshake ACK. The server is left holding a large number of half-open connections in the SYN_RECV state, retransmitting the SYN+ACK (5 times by default) until the TCP pending-connection queue fills and resources are exhausted (CPU saturated or memory gone), blocking legitimate requests.
To defend against SYN floods you can enable SYN cookies, raise the maximum SYN queue length, and lower the maximum number of SYN+ACK retries. Setting tcp_syncookies to 1 enables SYN cookies.
SYN cookies relieve pressure on server resources. Without them, the server allocates connection state immediately upon receiving a SYN packet, picks a random initial sequence number, sends the SYN+ACK, and then keeps the connection's state while waiting for the client's confirmation. With SYN cookies enabled, the server allocates nothing up front: it derives the initial sequence number from a time-seeded hash instead of a purely random value, sends the SYN+ACK, and keeps no state at all until the client's final ACK arrives. When that ACK comes back, a cookie verification algorithm checks whether it matches the sequence number of the SYN+ACK that was sent; if it matches, the handshake completes, and if not, the packet is discarded.
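The stateless-cookie idea can be sketched in a few lines of shell. This is purely illustrative and not the kernel's actual algorithm: the real implementation hashes the connection 4-tuple, a slowly incrementing counter, and secret keys, and also encodes the MSS into the sequence number. The `cookie` function and its inputs below are invented for demonstration.

```shell
#!/bin/sh
# Illustrative sketch of a SYN-cookie-style initial sequence number:
# a hash of the connection 4-tuple plus a coarse time counter, so the
# server stores nothing until the final ACK echoes the cookie back.
cookie() {
    # $1=src_ip $2=src_port $3=dst_ip $4=dst_port $5=time_counter
    printf '%s' "$1:$2:$3:$4:$5" | sha256sum | cut -c1-8
}

c1=$(cookie 10.0.0.1 40000 10.0.0.2 80 7)   # SYN arrives, cookie sent in SYN+ACK
c2=$(cookie 10.0.0.1 40000 10.0.0.2 80 7)   # ACK arrives: recompute and compare
c3=$(cookie 10.0.0.1 40001 10.0.0.2 80 7)   # a different tuple gives a different cookie

echo "sent=$c1 recomputed=$c2 other=$c3"
[ "$c1" = "$c2" ] && echo "cookie verified: handshake completes"
```

Because the cookie is recomputable from the packet itself, no per-connection memory is consumed until the handshake is proven genuine.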
The maximum SYN queue length and the maximum number of SYN+ACK retries are covered by the parameters described next.
5. /proc/sys/net/ipv4/tcp_max_syn_backlog
This parameter sets the maximum length of the SYN queue. The default is 1024; increasing it to 8192 accommodates more connections waiting to complete the handshake. The recommended setting for a typical web server is:
echo 8192 > /proc/sys/net/ipv4/tcp_max_syn_backlog
tcp_max_syn_backlog trades server memory for a longer waiting queue, so that attack packets cannot occupy every slot and prevent normal users from completing the handshake. It therefore consumes some additional system memory.
6. /proc/sys/net/ipv4/tcp_synack_retries
net.ipv4.tcp_synack_retries reduces the number of SYN+ACK retransmissions (default 5) so that waiting resources are released sooner.
When a remote SYN connection request arrives, the kernel sends a SYN+ACK packet to acknowledge it; this is the second step of the TCP three-way handshake. This parameter determines how many SYN+ACK packets the kernel sends before abandoning the connection.
These three parameters address the facets of a SYN flood one by one, a targeted remedy. They are double-edged swords, however: overly aggressive settings can consume more server memory and even interfere with legitimate users establishing TCP connections, so evaluate the server's hardware resources and the expected attack intensity before setting them.
7. /proc/sys/net/ipv4/tcp_tw_recycle
This parameter enables fast recycling of TIME-WAIT sockets in TCP connections. The default is 0 (off). The traditional recommendation for a web server is "echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle"; note, however, that tcp_tw_recycle is known to break clients behind NAT and was removed entirely in Linux 4.12, so on modern kernels it should be left alone.
Let us look more closely at TIME_WAIT. During the four-way close, the side that initiates the close enters the TIME_WAIT state after sending the final ACK. What is the harm if too many TCP connections sit in this state? Let's explain:
On a high-concurrency, short-connection TCP server, the server closes each connection as soon as the request is processed, leaving a large number of sockets in the TIME_WAIT state. If client concurrency keeps rising, some clients will find they cannot connect: the server has run out of resources, chiefly ephemeral ports.
The number of usable ports on a server is genuinely small once the system and other services are excluded, and a high-concurrency short-connection workload ties up a large share of them in a short time.
For example, in a one-second HTTP short connection for a web page, once the connection is closed the port stays in TIME_WAIT for several minutes, during which no other HTTP request can use it. If you monitor utilization, the ratio of time a port spends doing useful work to time it sits suspended and unusable can be 1 to several hundred, a serious waste of server resources.
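The arithmetic behind port exhaustion is straightforward. Assuming the common Linux ephemeral range of 32768-60999 (an assumption; your system's range is in /proc/sys/net/ipv4/ip_local_port_range) and a 60-second TIME_WAIT, the sustainable short-connection rate toward a single destination works out to:

```shell
#!/bin/sh
# Port-exhaustion back-of-envelope: available ephemeral ports divided by
# the TIME_WAIT duration gives the sustainable rate of short-lived
# connections to one destination. Range values are assumed defaults.
low=32768; high=60999; tw=60
ports=$((high - low + 1))
rate=$((ports / tw))
echo "$ports ephemeral ports / ${tw}s TIME_WAIT = $rate connections/s"
```

Anything beyond a few hundred short connections per second to one peer will start failing, which is why widening the port range and shortening TIME_WAIT both appear in the tuning below.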
Therefore, when optimizing a web server, pay particular attention to the TCP TIME_WAIT state. To view the distribution of TCP states on the system, any of the following command combinations will do:
netstat -nat | awk '{print $6}' | sort | uniq -c | sort -rn
netstat -n | awk '/^tcp/ {print $NF}' | sort | uniq -c | sort -rn
netstat -n | awk '/^tcp/ {++S[$NF]}; END {for (a in S) print a, S[a]}'
netstat -n | awk '/^tcp/ {++state[$NF]}; END {for (key in state) print key, "\t", state[key]}'
netstat -n | awk '/^tcp/ {++arr[$NF]}; END {for (k in arr) print k, "\t", arr[k]}'
netstat -ant | awk '{print $NF}' | grep -v '[a-z]' | sort | uniq -c
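The awk counting idiom used above can be seen in isolation on a canned snippet of netstat output (the sample lines below are fabricated for illustration, so the script runs without touching the live system):

```shell
#!/bin/sh
# Count TCP connections per state, exactly as the one-liners above do,
# but fed from a fabricated netstat sample instead of the live system.
sample='tcp        0      0 10.0.0.5:80   10.0.0.9:51234   TIME_WAIT
tcp        0      0 10.0.0.5:80   10.0.0.8:51111   TIME_WAIT
tcp        0      0 10.0.0.5:80   10.0.0.7:50000   ESTABLISHED'

counts=$(printf '%s\n' "$sample" | awk '/^tcp/ {++s[$NF]} END {for (k in s) print k, s[k]}')
echo "$counts"
```

The state name is the last field ($NF) of each tcp line; the awk array keyed on it accumulates a per-state tally, which is printed at END.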
By turning on the fast recycling of TIME-WAIT sockets, the burden on the web server can be greatly reduced, so the optimization of TIME-WAIT in the web server must be done.
8. /proc/sys/net/ipv4/tcp_tw_reuse
This parameter enables reuse: it allows TIME-WAIT sockets to be reused for new TCP connections, which is much cheaper than establishing new ones. The default is 0 (off). Note that tcp_tw_reuse only works when TCP timestamps (net.ipv4.tcp_timestamps) are enabled.
The recommended setting for a typical web server is "echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse".
9. /proc/sys/net/ipv4/tcp_fin_timeout
This parameter sets how long an orphaned connection remains in the FIN-WAIT-2 state before being reclaimed (it is often loosely described as a TIME_WAIT timeout); making it smaller speeds up reclamation. The recommended setting for a typical web server is "echo 15 > /proc/sys/net/ipv4/tcp_fin_timeout".
This parameter and the previous two form the most common configuration combination for TIME-WAIT optimization.
10. /proc/sys/net/ipv4/tcp_keepalive_probes
This parameter reduces the number of keepalive probes sent before a connection is declared dead. The recommended setting for a typical web server is "echo 5 > /proc/sys/net/ipv4/tcp_keepalive_probes".
11. /proc/sys/net/core/netdev_max_backlog
This parameter sets the maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them. The default is 1000. Modifying it tunes the network device receive queue; the recommended setting is "echo 3000 > /proc/sys/net/core/netdev_max_backlog".
12. /proc/sys/net/core/rmem_max and wmem_max
These two parameters raise the maximum TCP buffer sizes:
rmem_max: the maximum receive socket buffer size, in bytes.
wmem_max: the maximum send socket buffer size, in bytes.
As basic network tuning, the following settings are recommended:
echo 16777216 > /proc/sys/net/core/rmem_max
echo 16777216 > /proc/sys/net/core/wmem_max
13. /proc/sys/net/ipv4/tcp_rmem and tcp_wmem
These two parameters improve the Linux kernel's ability to auto-tune socket buffers:
tcp_rmem: configures the read buffer size; the three values are the minimum, default, and maximum.
tcp_wmem: configures the write buffer size; the three values are the minimum, default, and maximum.
As basic network tuning, the following settings are recommended:
echo "4096 87380 16777216" > /proc/sys/net/ipv4/tcp_rmem
echo "4096 65536 16777216" > /proc/sys/net/ipv4/tcp_wmem
14. /proc/sys/net/core/somaxconn
This parameter sets the upper limit of the listen() backlog. What is the backlog? It is the socket's listening queue: a request that has not yet been accepted and processed waits in the backlog, and once the server accepts it, it leaves the queue. When the server processes requests so slowly that the listening queue fills up, new requests are rejected. The default is 128. As basic network tuning, the recommended setting is:
echo 4096 > /proc/sys/net/core/somaxconn

IV. Optimization of system kernel parameters
1. /proc/sys/kernel/panic
This parameter sets the number of seconds the kernel waits before rebooting after a kernel panic. The default is 0, which disables automatic reboot after a serious kernel error. Setting it to 1 is recommended, so that the machine reboots automatically one second after a kernel panic.
Setting method: echo 1 > /proc/sys/kernel/panic
2. /proc/sys/kernel/pid_max
This parameter sets the maximum number of process IDs on Linux. The default of 32768 is normally sufficient, but under heavy workloads it can be exhausted, producing allocation failures, so it can be increased as appropriate:
[root@localhost ~]# cat /proc/sys/kernel/pid_max
32768
[root@localhost ~]# echo 196608 > /proc/sys/kernel/pid_max
[root@localhost ~]# cat /proc/sys/kernel/pid_max
196608
3. /proc/sys/kernel/ctrl-alt-del
This file holds a value that controls how the system reacts to the Ctrl+Alt+Delete key combination:
A value of 0 means Ctrl+Alt+Delete is trapped and forwarded to the init program, allowing a safe shutdown and restart, just as if the shutdown command had been typed.
A value greater than 0 means the kernel reboots immediately, without even syncing dirty buffers.
To prevent an accidental Ctrl+Alt+Delete from restarting the system unexpectedly, keep the value at 0 and configure init to ignore the key combination.
4. /proc/sys/kernel/core_pattern
This parameter sets the location or file name of core dump files. If only a file name is given, the core file is saved in the directory in which the application was running. Configuration example: echo "core.%e.%p" > /proc/sys/kernel/core_pattern, where %e is the program name and %p is the process ID.
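As an illustration of how the template expands, the substitution can be mimicked in shell. The program name and PID below are hypothetical; in reality the kernel performs this expansion itself when a process dumps core:

```shell
#!/bin/sh
# Mimic the kernel's core_pattern expansion: %e -> program name, %p -> PID.
pattern='core.%e.%p'
prog=myapp      # hypothetical crashing program
pid=1234        # hypothetical PID

name=$(printf '%s' "$pattern" | sed "s/%e/$prog/; s/%p/$pid/")
echo "$name"    # core.myapp.1234
```

So a crash of a program named myapp with PID 1234 would produce a file named core.myapp.1234 under this pattern.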
V. Optimization of memory kernel parameters
1. /proc/sys/vm/dirty_background_ratio
This parameter specifies the percentage of system memory that file system cache dirty data may reach (e.g., 10%) before background writeback processes such as pdflush/flush/kdmflush start asynchronously flushing dirty pages to disk. For example, on a server with 32 GB of memory, up to 3.2 GB can hold dirty cache data; beyond that, the writeback processes begin cleaning.
Default: 10
2. /proc/sys/vm/dirty_ratio
This parameter specifies the percentage of system memory that dirty file system cache data may reach (e.g., 15%) before the system must begin writing it out (with so much dirty data accumulated, some must be flushed to disk to avoid loss). Once this threshold is hit, new I/O requests are blocked until dirty data has been written to disk. This is a major cause of I/O stalls, but it is also a protection mechanism against excessive dirty data in memory.
Note the difference from dirty_background_ratio: dirty_background_ratio is a soft limit on the dirty-data percentage, while dirty_ratio is a hard limit. When setting them, dirty_ratio must be greater than or equal to dirty_background_ratio.
In scenarios where disk writes are infrequent, increasing this value appropriately can greatly improve the write performance of the file system. However, if it is a continuous, constant writing situation, its value should be reduced.
Default: 20
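The 32 GB example above is simple arithmetic, and both thresholds can be checked the same way (assuming dirty_background_ratio=10 and dirty_ratio=20, the defaults discussed here):

```shell
#!/bin/sh
# Compute the dirty-page thresholds for a given memory size using the
# ratios discussed here: 10% background writeback, 20% hard limit.
mem_gb=32
bg=$(awk -v m="$mem_gb" 'BEGIN {printf "%.1f", m * 10 / 100}')
hard=$(awk -v m="$mem_gb" 'BEGIN {printf "%.1f", m * 20 / 100}')
echo "background writeback starts at ${bg} GB, blocking writeback at ${hard} GB"
```

Between the two thresholds the system writes back asynchronously; only past the hard limit do new writes block.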
3. /proc/sys/vm/dirty_expire_centisecs
This parameter sets how long dirty data may stay in memory before the writeback process considers it "old" and writes it to disk on its next run. The unit is hundredths of a second; the default of 3000 means data 30 seconds old is flushed. For very write-heavy workloads it helps to shrink this value somewhat, but not too far, since an overly small value drives up I/O.
Default: 3000 (hundredths of a second)
4. /proc/sys/vm/dirty_writeback_centisecs
This parameter controls the run interval of the kernel's dirty-data writeback process, in hundredths of a second. The default of 500 means 5 seconds. For continuous write workloads, lowering this value is recommended, so that write peaks are flattened into multiple smaller writebacks; conversely, for short bursty writes of small size (tens of MB at a time) on a machine with plenty of memory, the value should be increased.
Default: 500 (hundredths of a second)
5. /proc/sys/vm/vfs_cache_pressure
This parameter sets the kernel's tendency to reclaim memory used for directory and inode caches. At the default of 100, the kernel keeps directory and inode caches at a reasonable proportion relative to pagecache and swapcache; lowering it below 100 makes the kernel prefer to retain these caches, while raising it above 100 makes the kernel prefer to reclaim them. This parameter generally does not need adjusting and should only be tuned in extreme scenarios.
Default: 100
6. /proc/sys/vm/min_free_kbytes
This parameter sets the minimum amount of free memory, in kilobytes, that the Linux VM is forced to keep in reserve.
Default: 90112 (88 MB, on a CentOS 7 machine)
7. /proc/sys/vm/nr_pdflush_threads
This parameter reports the number of pdflush processes currently running; the kernel automatically starts more of them when the load is high. (On recent kernels the pdflush mechanism has been replaced by per-device flusher threads, and this file is gone.)
8. /proc/sys/vm/overcommit_memory
This parameter sets the kernel's memory allocation (overcommit) policy, and can be 0, 1, or 2:
0 means the kernel uses a heuristic check for whether enough memory is available before granting an allocation; if there is, the request succeeds, otherwise it fails and an error is returned to the process.
1 means the kernel always allows allocations, regardless of the current memory state.
2 means the kernel refuses allocations that would exceed the commit limit (swap space plus a configurable fraction of physical memory), i.e. overcommit is disallowed.
Tune this parameter to the specific application the server runs. For example, for a Redis in-memory database, setting it to 1 is recommended; for a web application, keep the default.
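For mode 2, the commit limit the kernel enforces is swap plus a fraction of RAM set by vm.overcommit_ratio. A quick check with example figures (the 32 GB RAM, 8 GB swap machine below is hypothetical; overcommit_ratio defaults to 50):

```shell
#!/bin/sh
# CommitLimit under overcommit_memory=2:
#   swap + physical RAM * overcommit_ratio / 100
# The machine sizes here are illustrative, not read from the live system.
ram_gb=32; swap_gb=8; ratio=50
limit=$((swap_gb + ram_gb * ratio / 100))
echo "CommitLimit = ${limit} GB"
```

On a live system the actual figure appears as CommitLimit in /proc/meminfo.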
9. /proc/sys/vm/panic_on_oom
This parameter controls whether the kernel panics outright when memory is exhausted. The default of 0 means that on out-of-memory the kernel triggers the OOM killer to kill the most memory-hungry processes. Set to 1, the system panics on OOM instead. A kernel panic, as the name suggests, means the kernel no longer knows how to proceed and prints out as much diagnostic information as it can. Setting this value to 1 is sometimes necessary to prevent the system from silently killing processes.
Here is a brief introduction to the OOM killer mechanism:
When both physical memory and swap space are exhausted and a process requests more memory, the kernel triggers the OOM killer; what happens then depends on the value of /proc/sys/vm/panic_on_oom.
With a value of 0, the system runs the OOM killer during OOM. How does it choose a victim? The standard is to kill the fewest processes while freeing the most memory. To that end, the kernel maintains an oom_score for each process, visible in /proc/<pid>/oom_score (the higher the value, the more likely the process is to be killed). The final score also takes /proc/<pid>/oom_adj into account: oom_adj ranges from -17 to 15, and a value of -17 means the process will never be killed by the OOM killer.
Therefore, two files influence the OOM killer mechanism:
/proc/<pid>/oom_score: the process's current kill score; the higher the score, the more likely it is to be killed. It is derived from oom_adj and is the OOM killer's main reference.
/proc/<pid>/oom_adj: the adjustment weight, in the range -17 to 15. The higher the value, the more likely the process is to be selected; -17 means the process is exempt from the OOM killer.
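Since /proc/<pid>/oom_score is readable without special privileges, the likely OOM victims on a machine can be surveyed directly. A minimal sketch (output depends entirely on what is running):

```shell
#!/bin/sh
# List the ten processes with the highest oom_score, i.e. the most
# likely OOM-killer victims, as "score pid" pairs.
for p in /proc/[0-9]*; do
    [ -r "$p/oom_score" ] || continue
    printf '%s %s\n' "$(cat "$p/oom_score")" "${p##*/}"
done | sort -rn | head
```

Pinning a critical daemon is then a matter of writing -17 to its oom_adj file (or the equivalent oom_score_adj on newer kernels).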
10. /proc/sys/vm/swappiness
This parameter sets the kernel's propensity to use the swap partition. swappiness=0 means physical memory is used as much as possible, with swap only as a last resort; swappiness=100 means swap is used aggressively, with in-memory data moved to swap promptly. The Linux default is 60, meaning swapping may begin once roughly 40% of physical memory is in use. On in-memory database servers, such as Redis or HBase machines, this value should be small, between 0 and 10, to make maximum use of physical memory.
Default: 60
VI. Optimization of file system kernel parameters
1. /proc/sys/fs/file-max
This parameter specifies the maximum number of file handles the kernel can allocate. If users see errors saying no more files can be opened because the maximum has been reached, this value may need to be raised.
Setting method: echo "10485750" > /proc/sys/fs/file-max
2. /proc/sys/fs/inotify/max_user_watches
An important setting for real-time data synchronization with rsync + inotify-tools on Linux is the inotify max_user_watches value; left at the default, errors occur when watching large numbers of files.
You can increase the value like this:
echo "8192000" > /proc/sys/fs/inotify/max_user_watches
That covers the basic kernel optimization parameters for a Linux system; for a web server these are the optimizations that must be done. Further tuning should be driven by the characteristics of the specific application environment.
Finally, to make the configured kernel parameters permanent, modify the /etc/sysctl.conf file: check the file first, and if a parameter to be changed is already present, update its value; if not, add the parameter to the file.
For production environments, it is recommended to put all kernel parameters to be set into the /etc/sysctl.conf file.
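The check-then-edit advice above can be scripted. A minimal sketch: the `set_sysctl` helper is our own invention, and the demonstration writes to a temporary file rather than the real /etc/sysctl.conf:

```shell
#!/bin/sh
# Idempotently set a key in a sysctl.conf-style file: update the line if
# the key is already present, append it otherwise.
set_sysctl() {
    key=$1; value=$2; conf=$3
    if grep -q "^$key" "$conf" 2>/dev/null; then
        sed -i "s|^$key.*|$key = $value|" "$conf"
    else
        echo "$key = $value" >> "$conf"
    fi
}

conf=$(mktemp)                          # stand-in for /etc/sysctl.conf
set_sysctl kernel.msgmnb 32768 "$conf"  # first call appends the line
set_sysctl kernel.msgmnb 40960 "$conf"  # second call updates it in place
cat "$conf"
```

In production the target would be /etc/sysctl.conf itself, followed by `sysctl -p` to apply the file immediately.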
The following is a reference configuration from one of our production web servers, which sustains 100 million requests per day (hardware: 16 cores, 32 GB memory):
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_max_syn_backlog = 20000
net.core.netdev_max_backlog = 32768
net.core.somaxconn = 32768
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_max_tw_buckets = 500000
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_keepalive_probes = 5
net.nf_conntrack_max = 2097152
This kernel parameter optimization example can serve as a tuning baseline for a web system, but it is not guaranteed to suit every environment.
This concludes the introduction to Linux kernel tuning. We hope it helps answer your questions.