2025-01-17 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
This article analyzes common pitfalls and performance problems of high concurrency on Linux. The content is straightforward and easy to follow; let's work through the cases step by step.
Preface
Linux is now the preferred operating system for servers, but with its default system parameters it does not handle high concurrency well. Having developed applications on Linux for many years, the author lists below several high-concurrency pitfalls encountered on Linux and how they were solved.
Analysis and solution of the "Too many open files" problem when running Linux applications
This message appears because the number of files and socket connections opened by the program exceeds the system limit.
View the maximum number of files each user is allowed to open:
ulimit -a
The line "open files (-n) 1024" means each user may open at most 1024 files.
View the system-wide maximum number of file handles (this command only displays the value):
cat /proc/sys/fs/file-max
View the open-file limit of a specific process (here pid 10446):
cat /proc/10446/limits
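The same per-process limit can also be read programmatically. A minimal Python sketch (the values printed depend on your environment; the `resource` module is Unix-only):

```python
import resource

# Read this process's open-file limits, the programmatic equivalent of
# `ulimit -n` and the "Max open files" row in /proc/<pid>/limits.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")
```

A process may raise its own soft limit up to the hard limit with `resource.setrlimit`; raising the hard limit requires privileges.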
Set the open files value:
ulimit -n 65535
This setting reverts to the default after a reboot.
Permanent setting method:
vim /etc/security/limits.conf
Append at the end:
* soft nofile 65535
* hard nofile 65535
A reboot is required for this to take effect.
After this change, the problem was resolved.
Analysis and solution of excessive TIME_WAIT under high concurrency on Linux
The symptom: in a high-concurrency scenario, applications running on the server stutter.
Troubleshooting: inspect the server's connection states:
netstat -ant | awk '/^tcp/ {++S[$NF]} END {for (a in S) print a, S[a]}'
This revealed tens of thousands of sockets in the TIME_WAIT state. If client concurrency stays high, some clients will find themselves unable to connect.
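The awk one-liner above can be mirrored in a few lines of Python, which makes the tallying logic explicit (the sample lines below are invented for illustration):

```python
from collections import Counter

def count_tcp_states(netstat_lines):
    # Tally TCP connection states from `netstat -ant`-style output;
    # like the awk one-liner, the state is the last whitespace field.
    states = Counter()
    for line in netstat_lines:
        if line.startswith("tcp"):
            states[line.split()[-1]] += 1
    return states

sample = [
    "tcp 0 0 10.0.0.1:80 10.0.0.2:51234 TIME_WAIT",
    "tcp 0 0 10.0.0.1:80 10.0.0.3:51235 ESTABLISHED",
    "tcp 0 0 10.0.0.1:80 10.0.0.4:51236 TIME_WAIT",
]
print(count_tcp_states(sample))  # TIME_WAIT counted twice, ESTABLISHED once
```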
TCP connection state description:
CLOSED: no connection is active or in progress
LISTEN: the server is waiting for an incoming connection
SYN_RECV: a connection request has arrived; waiting for acknowledgement
SYN_SENT: the application has started to open a connection
ESTABLISHED: normal data-transfer state
FIN_WAIT1: the application has said it is finished
FIN_WAIT2: the other side has agreed to release the connection
CLOSING: both sides tried to close at the same time
TIME_WAIT: waiting for all packets from the old connection to die off
CLOSE_WAIT: the other side has initiated a release
LAST_ACK: waiting for the final acknowledgement of the FIN
The harm of excessive TIME_WAIT
When network conditions are poor, if the active closer skipped the TIME_WAIT wait and immediately established a new TCP connection with the passive side after closing the previous one, a retransmitted or delayed FIN from the passive side would arrive on the new connection and directly disrupt it.
Similarly, with poor network conditions and no TIME_WAIT wait, if no new connection is opened after the close, then when the retransmitted or delayed FIN arrives, the closer replies with a RST packet, which may interfere with other connections on the passive side.
To reduce excessive TIME_WAIT, edit the kernel configuration file /etc/sysctl.conf and add the following:
net.ipv4.tcp_syncookies = 1  # enable SYN cookies: when the SYN queue overflows, use cookies to fend off small-scale SYN flood attacks (default 0, disabled)
net.ipv4.tcp_tw_reuse = 1  # allow TIME-WAIT sockets to be reused for new TCP connections (default 0, disabled)
net.ipv4.tcp_tw_recycle = 1  # enable fast recycling of TIME-WAIT sockets (default 0, disabled)
net.ipv4.tcp_fin_timeout = 3  # shorten the default FIN timeout
Note: tcp_tw_recycle is unsafe for clients behind NAT and was removed from the Linux kernel in 4.12; on modern kernels rely on tcp_tw_reuse instead.
Then run /sbin/sysctl -p to make the parameters take effect.
In short, this turns on the system's TIME_WAIT reuse and fast recycling.
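On the application side, a server can complement these kernel settings by setting SO_REUSEADDR, which lets a restarted process re-bind a port whose previous socket is still in TIME_WAIT. A minimal Python sketch (port 0 asks the OS for any free port):

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Allow re-binding a local address left in TIME_WAIT by a previous run,
# avoiding "Address already in use" on server restart.
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen(128)
print("listening on", srv.getsockname())
srv.close()
```

Most production servers (Nginx, Apache, and so on) set this option on their listening sockets by default.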
More performance optimizations for Linux
If your system has a large number of connections and performance after the tuning above is still unsatisfactory, you can further widen the range of ports available to TCP to raise the server's concurrency capacity. Still in /etc/sysctl.conf, add the following:
vi /etc/sysctl.conf
# How often TCP sends keepalive probes when keepalive is enabled.
# Default is 2 hours; change it to 20 minutes.
net.ipv4.tcp_keepalive_time = 1200
# Port range used for outbound connections. The default is small
# (32768 to 61000); change it to 1024 to 65000.
net.ipv4.ip_local_port_range = 1024 65000
# Length of the SYN queue, default 1024. Raising it to 8192 accommodates
# more connections waiting to complete the handshake.
net.ipv4.tcp_max_syn_backlog = 8192
# Maximum number of TIME_WAIT sockets the system keeps at once; beyond
# this, TIME_WAIT sockets are cleared immediately and a warning is
# printed. Default 180000; change it to 5000. For servers such as Apache
# and Nginx, the parameters above already reduce TIME_WAIT sockets well,
# but they help little for Squid; this parameter caps TIME_WAIT sockets
# so a Squid server is not dragged down by them.
net.ipv4.tcp_max_tw_buckets = 5000
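After running `sysctl -p`, such settings can be verified by reading them back from /proc/sys. A small Python helper (the function name `read_sysctl` is our own; the `root` parameter exists only so the name-to-path mapping can be exercised against a fake tree):

```python
import os

def read_sysctl(name, root="/proc/sys"):
    # Map a dotted sysctl name to its /proc/sys path and read it, e.g.
    # net.ipv4.tcp_fin_timeout -> /proc/sys/net/ipv4/tcp_fin_timeout.
    # Equivalent to `sysctl -n <name>`.
    path = os.path.join(root, *name.split("."))
    with open(path) as f:
        return f.read().strip()
```

On a Linux box configured as above, `read_sysctl("net.ipv4.tcp_fin_timeout")` would return "3".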
Notes on more Linux kernel optimization parameters
vim /etc/sysctl.conf
1. net.ipv4.tcp_max_syn_backlog = 65536
The maximum number of recorded connection requests that have not yet received the client's acknowledgement. The default is 1024 for systems with more than 128 MB of memory and 128 for systems with less.
SYN flood attacks exploit a defect in the TCP three-way handshake: the attacker forges source IP addresses and sends a large number of half-open TCP SYN connections to the target, eventually exhausting the target's socket queue resources so it cannot accept new connections. To cope with this kind of attack, modern Unix systems widely adopt dual-queue processing to buffer (rather than solve) it: one basic queue handles normal, fully established connections (connect() and accept()), while another queue separately stores half-open connections.
Combined with other kernel measures (such as SYN cookies/caches), this dual-queue approach can effectively mitigate small-scale SYN flood attacks.
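The two queues correspond to what an application sees through listen(): the backlog argument bounds the accept queue of fully established connections, while net.ipv4.tcp_max_syn_backlog bounds the half-open queue. A sketch of the application side:

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
# The backlog argument bounds the queue of fully established connections
# waiting for accept(); the kernel additionally caps it at
# net.core.somaxconn. Half-open (SYN_RECV) connections sit in a separate
# queue sized by net.ipv4.tcp_max_syn_backlog.
srv.listen(128)
print("listening with backlog 128 on", srv.getsockname())
srv.close()
```

If the accept queue is full, new completed handshakes are dropped or refused regardless of how large the SYN queue is, so the two limits should be tuned together.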
© 2024 shulou.com SLNews company. All rights reserved.