Shulou (shulou.com), SLTechnology News & Howtos — Development. Updated 2025-01-18.
This article shows how to tune a Linux server for high concurrency. The content is concise and easy to follow, and I hope you get something out of it.
It is well known that Linux handles high concurrency poorly under its default settings, mainly because of the per-process limit on open files, the default kernel TCP parameters, and the I/O event dispatch mechanism. The following adjustments enable a Linux system to support a high-concurrency environment.
Iptables
If iptables is not needed, disable or uninstall the firewall and prevent the kernel from loading the iptables modules, as these modules hurt concurrency performance.
Limit on the maximum number of open files per process
In a typical distribution, a single process may open at most 1024 files, which falls far short of high-concurrency requirements. The adjustment process is as follows.
At the # prompt, enter:
# ulimit -n 65535
This sets the maximum number of files a single process started by root can open to 65535. If the system echoes something like "Operation not permitted", the modification failed because the specified value exceeds the system's soft or hard limit on the number of files the user may open. It is therefore necessary to modify the system's soft and hard open-file limits.
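The current limits can also be checked and raised programmatically. A minimal Python sketch using the standard library's resource module, which reads the same values `ulimit -n` reports (an unprivileged process may raise its soft limit only up to its hard limit):

```python
import resource

# Query the per-process open-file limits (what `ulimit -Sn` / `ulimit -Hn` report).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# Raise the soft limit to the hard limit; raising the hard limit itself needs root.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```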
The first step is to edit the limits.conf file and add:
# vim /etc/security/limits.conf
* soft nofile 65535
* hard nofile 65535
Here the '*' applies the limit to all users; soft or hard selects the soft or the hard limit; and 65535 is the new limit value, that is, the maximum number of open files (note that the soft limit must be less than or equal to the hard limit). Save the file after modification.
The second step is to edit the /etc/pam.d/login file and add the following line:
# vim /etc/pam.d/login
session required /lib/security/pam_limits.so
This tells Linux that after a user logs in, the pam_limits.so module should be invoked to set the user's resource limits (including the maximum number of open files); pam_limits.so reads these limits from /etc/security/limits.conf. Save the file after modification.
The third step is to check the system-wide maximum number of open files:
# cat /proc/sys/fs/file-max
32568
This means the Linux system allows at most 32568 files to be open at the same time (the total across all users). This is a system-level hard limit that no user-level open-file limit should exceed. The kernel normally computes this value at boot from the available hardware resources, so it should not be changed without good reason, unless you want to grant a user-level limit above it. To raise it, set the following in /etc/sysctl.conf:
fs.file-max = 131072
This forces the system-level open-file hard limit to 131072 after boot. Save the file after modification.
After completing the above steps and rebooting, you can generally set the maximum number of files that a single process of a given user may open to the desired value. If `ulimit -n` still shows a lower limit after the reboot, it may be because a `ulimit -n` call in the login script /etc/profile already capped the value. Since a value set with `ulimit -n` can only be lowered, never raised, by a subsequent call, the limit cannot be increased from the shell in that case. So if you hit this problem, check /etc/profile for a `ulimit -n` line; if one exists, delete it or change it to a suitable value, save the file, and have the user log out and back in.
Through the above steps, the open-file limits are lifted for programs that handle large numbers of concurrent TCP connections.
Kernel TCP parameters
On Linux, after a TCP connection is closed it remains in the TIME_WAIT state for a period of time before its port is released. Under heavy concurrency this produces large numbers of TIME_WAIT connections, and if they are not torn down promptly they tie up many ports and server resources. We can tune the kernel's TCP parameters so that TIME_WAIT ports are cleared in time.
The method described below only helps when a large number of TIME_WAIT connections is actually consuming system resources; otherwise the effect may not be noticeable. You can use netstat to inspect connections in the TIME_WAIT state. Enter the following command combination to view the current TCP connection states and their counts:
# netstat -n | awk '/^tcp/ {++S[$NF]} END {for (a in S) print a, S[a]}'
This command outputs a result similar to the following:
LAST_ACK 16
SYN_RECV 348
ESTABLISHED 70
FIN_WAIT1 229
FIN_WAIT2 30
CLOSING 33
TIME_WAIT 18098
We mainly care about the TIME_WAIT count: here there are more than 18000 TIME_WAIT connections, occupying more than 18000 ports. Since there are only 65535 ports in total, every one consumed leaves one fewer available, and exhaustion will seriously affect new connections. In this case we need to adjust the Linux TCP kernel parameters so that the system releases TIME_WAIT connections sooner.
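The awk one-liner above simply tallies the last column of netstat's TCP lines. The same tally in Python, for illustration (the sample input below is made up):

```python
from collections import Counter

def tcp_state_counts(netstat_output: str) -> Counter:
    """Count TCP connection states from `netstat -n` output.

    Mirrors: netstat -n | awk '/^tcp/ {++S[$NF]} END {for (a in S) print a, S[a]}'
    """
    counts = Counter()
    for line in netstat_output.splitlines():
        if line.startswith("tcp"):
            counts[line.split()[-1]] += 1   # last field is the connection state
    return counts

# Hypothetical sample of `netstat -n` output:
sample = """\
tcp        0      0 10.0.0.1:80    10.0.0.2:51234  TIME_WAIT
tcp        0      0 10.0.0.1:80    10.0.0.3:51235  ESTABLISHED
tcp        0      0 10.0.0.1:80    10.0.0.4:51236  TIME_WAIT
"""
print(tcp_state_counts(sample))  # Counter({'TIME_WAIT': 2, 'ESTABLISHED': 1})
```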
Edit the configuration file /etc/sysctl.conf and add the following lines:
# vim /etc/sysctl.conf
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30
Then enter the following command to make the kernel parameters take effect:
# sysctl -p
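Each `name = value` pair handled by sysctl maps to a file under /proc/sys, with the dots replaced by slashes. A hedged sketch of that mapping in Python (the write, equivalent to `sysctl -w`, requires root, so it is defined but not invoked here):

```python
from pathlib import Path

def sysctl_path(name: str) -> Path:
    """Map a sysctl key like net.ipv4.tcp_fin_timeout to its /proc/sys file."""
    return Path("/proc/sys") / name.replace(".", "/")

def set_sysctl(name: str, value: str) -> None:
    """Write a kernel parameter directly (like `sysctl -w`); requires root."""
    sysctl_path(name).write_text(value + "\n")

print(sysctl_path("net.ipv4.tcp_fin_timeout"))  # /proc/sys/net/ipv4/tcp_fin_timeout
```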
Simply explain the meaning of the above parameters:
net.ipv4.tcp_syncookies = 1
Enables SYN cookies: when the SYN backlog queue overflows, cookies are used to handle new connections, protecting against small-scale SYN flood attacks. The default is 0 (off).
net.ipv4.tcp_tw_reuse = 1
Enables reuse, allowing TIME-WAIT sockets to be reused for new TCP connections. The default is 0 (off).
net.ipv4.tcp_tw_recycle = 1
Enables fast recycling of TIME-WAIT sockets. The default is 0 (off). Note that this option is known to break clients behind NAT and was removed in Linux 4.12.
net.ipv4.tcp_fin_timeout = 30
Shortens the system's default FIN timeout.
After this adjustment, besides further increasing the server's load capacity, the system can also help fend off low-traffic DoS, CC, and SYN flood attacks.
In addition, if connections are numerous, we can further widen TCP's usable port range to raise the server's concurrency. Add the following settings to the same file:
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 5000
It is recommended that these parameters be enabled only on servers with very high traffic, which will have a significant effect. In general, on servers with low traffic, there is no need to set these parameters.
net.ipv4.tcp_keepalive_time = 1200
Sets how long a TCP connection must be idle before keepalive probes are sent. The default is 2 hours; this changes it to 20 minutes.
net.ipv4.ip_local_port_range = 1024 65535
Sets the range of local ports used for outbound connections. The default range is small; this widens it to 1024-65535.
net.ipv4.tcp_max_syn_backlog = 8192
Sets the length of the SYN queue. The default is 1024; raising it to 8192 accommodates more half-open connections waiting to complete.
net.ipv4.tcp_max_tw_buckets = 5000
Sets the maximum number of TIME_WAIT sockets the system keeps at once; beyond this number, TIME_WAIT sockets are cleared immediately and a warning is printed. The default is 180000; this lowers it to 5000.
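tcp_keepalive_time above sets the system-wide default, but an application can also opt in to keepalive per socket and override the idle time itself. A minimal Python sketch (TCP_KEEPIDLE is Linux-specific):

```python
import socket

# Enable keepalive on a single socket and override the idle time (seconds)
# before the first probe; mirrors net.ipv4.tcp_keepalive_time = 1200.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 1200)  # Linux only
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))
s.close()
```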
Description of other kernel TCP parameters
net.ipv4.tcp_max_syn_backlog = 65535
The maximum number of recorded connection requests that have not yet received the client's acknowledgement. The default is 1024 for systems with 128 MB of memory and 128 for low-memory systems.
net.core.netdev_max_backlog = 32768
The maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them.
net.core.somaxconn = 32768
Caps the backlog argument of the listen() call in applications. net.core.somaxconn defaults to 128, while the NGX_LISTEN_BACKLOG defined by nginx defaults to 511, so this value needs to be raised.
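The backlog an application requests in listen() is silently clamped to net.core.somaxconn, so raising only one of the two has no effect. A small Python illustration (reading the Linux /proc interface):

```python
import socket

# Read the kernel cap that listen() backlogs are clamped to (Linux).
with open("/proc/sys/net/core/somaxconn") as f:
    somaxconn = int(f.read())

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # ephemeral port
srv.listen(8192)             # the kernel clamps this to somaxconn
print(f"requested backlog 8192, kernel cap {somaxconn}")
srv.close()
```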
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216 # maximum socket read buffer; reference optimized value: 873200
net.core.wmem_max = 16777216 # maximum socket write buffer; reference optimized value: 873200
net.ipv4.tcp_timestamps = 0
Timestamps guard against sequence-number wraparound: on a 1 Gbps link, previously used sequence numbers are bound to recur, and timestamps let the kernel accept such "abnormal" packets. Here they are turned off.
net.ipv4.tcp_synack_retries = 2
To open a connection to a peer, the kernel sends a SYN together with an ACK responding to the earlier SYN: the second step of the three-way handshake. This setting determines how many SYN+ACK packets the kernel sends before giving up on the connection.
net.ipv4.tcp_syn_retries = 2
The number of SYN packets the kernel sends before giving up on establishing a connection.
# net.ipv4.tcp_tw_len = 1
net.ipv4.tcp_tw_reuse = 1
Enables reuse, allowing TIME-WAIT sockets to be reused for new TCP connections.
net.ipv4.tcp_wmem = 8192 436600 873200
TCP write buffer; reference optimized value: 8192 436600 873200.
net.ipv4.tcp_rmem = 32768 436600 873200
TCP read buffer; reference optimized value: 32768 436600 873200.
net.ipv4.tcp_mem = 94500000 91500000 92700000
This also takes three values, meaning:
net.ipv4.tcp_mem[0]: below this value, TCP is under no memory pressure.
net.ipv4.tcp_mem[1]: at this value, TCP enters the memory-pressure phase.
net.ipv4.tcp_mem[2]: above this value, TCP refuses to allocate sockets.
The units above are pages, not bytes. Reference optimized value: 786432 1048576 1572864.
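Since tcp_mem is expressed in pages, converting the reference values to bytes makes them easier to sanity-check. With the common 4 KiB page size (the real value can be queried with `getconf PAGESIZE`), the arithmetic is:

```python
PAGE = 4096  # bytes; assumed page size, check with `getconf PAGESIZE`

tcp_mem_pages = (786432, 1048576, 1572864)  # min, pressure, max (in pages)
tcp_mem_bytes = tuple(p * PAGE for p in tcp_mem_pages)
for label, b in zip(("no pressure below", "pressure at", "refuse above"), tcp_mem_bytes):
    print(f"{label}: {b / 2**30:.1f} GiB")
# 786432 pages -> 3 GiB, 1048576 pages -> 4 GiB, 1572864 pages -> 6 GiB
```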
net.ipv4.tcp_max_orphans = 3276800
The maximum number of TCP sockets in the system not attached to any user file handle. If this number is exceeded, orphaned connections are reset immediately and a warning is printed. This limit only guards against simple DoS attacks; do not rely on it too heavily or lower it artificially. If anything, increase this value (along with memory).
net.ipv4.tcp_fin_timeout = 30
If the socket was closed by the local end, this parameter determines how long it stays in the FIN-WAIT-2 state. The peer may misbehave and never close its side, or even crash unexpectedly. The default is 60 seconds; 2.2-era kernels used 180 seconds. You can keep that setting, but remember that even on a lightly loaded web server a large number of dead sockets risks exhausting memory. FIN-WAIT-2 is less dangerous than FIN-WAIT-1 because each such socket consumes at most about 1.5 KB of memory, but they can linger longer.
TCP congestion control is also relevant here. You can list the congestion-control modules available on the machine with:
# sysctl net.ipv4.tcp_available_congestion_control
For a comparison of the algorithms, see analyses of the strengths, weaknesses, applicable environments, and performance of TCP congestion-control algorithms; for example, hybla is worth trying on high-latency links, and htcp on medium-latency links.
If you want to set the TCP congestion algorithm to hybla:
net.ipv4.tcp_congestion_control = hybla
In addition, on kernel versions 3.7.1 and later, we can turn on tcp_fastopen:
net.ipv4.tcp_fastopen = 3
IO event allocation mechanism
To enable highly concurrent TCP connections on Linux, verify that the application uses an appropriate network I/O model and I/O event dispatch mechanism. The available I/O models are synchronous blocking I/O, non-blocking synchronous I/O, and asynchronous I/O. Under high TCP concurrency, synchronous blocking I/O severely stalls the program unless a thread is created for each connection's I/O, but too many threads incur huge scheduling overhead. Synchronous blocking I/O is therefore inadvisable at high concurrency; consider non-blocking synchronous I/O or asynchronous I/O instead. Non-blocking synchronous I/O uses mechanisms such as select(), poll(), and epoll; asynchronous I/O uses AIO.
From the standpoint of the I/O event dispatch mechanism, select() is unsuitable because it supports only a limited number of concurrent connections (usually at most 1024). poll() is also unsuitable if performance matters: although it can handle high TCP concurrency, its polling mechanism makes it quite inefficient at high connection counts, and I/O events may be distributed unevenly, "starving" some TCP connections. epoll and AIO do not have these problems. (Early AIO implementations in the Linux kernel created a kernel thread for each I/O request, which itself performed badly under high concurrent TCP connections, but recent kernels have improved the AIO implementation.)
To sum up, when developing Linux applications that must support high concurrent TCP connections, use epoll or AIO to handle I/O on concurrent connections; this provides an effective foundation for high-concurrency support.
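As a concrete sketch of the readiness-based approach, Python's selectors module picks epoll on Linux automatically, letting one thread watch many sockets. A minimal non-blocking echo pattern (a socketpair stands in for accepted client connections):

```python
import selectors
import socket

# DefaultSelector uses epoll on Linux, so one thread can watch many sockets.
sel = selectors.DefaultSelector()
a, b = socket.socketpair()          # stand-in for accepted client connections
a.setblocking(False)
b.setblocking(False)
sel.register(b, selectors.EVENT_READ)

a.sendall(b"ping")                  # makes b readable
for key, events in sel.select(timeout=1):
    data = key.fileobj.recv(4096)   # socket is ready, so recv will not block
    key.fileobj.sendall(data)       # echo it back
print(a.recv(4096))                 # b'ping'

sel.close(); a.close(); b.close()
```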
The above covers how to tune a Linux server for high concurrency. I hope you have picked up some useful knowledge or skills from it.