This article shares an example analysis of RHEL 5.x and CentOS 5.x kernel optimization. Since most readers may not be familiar with these settings, they are collected here for reference; hopefully you will find them useful.
System optimization items:
kernel.sysrq = 0
# the SysRq key combination lets you inspect the current state of the running system; it is set to 0 (disabled) here for security reasons.
kernel.core_uses_pid = 1
# controls whether the PID is appended to the file name of a core dump file
kernel.msgmnb = 65536
# size limit of each message queue, in bytes
kernel.msgmni = 16
# maximum number of message queues in the whole system; increase it as needed.
kernel.msgmax = 65536
# maximum size of a single message, in bytes
kernel.shmmax = 68719476736
# maximum size of a single shared memory segment, in bytes
kernel.shmall = 4294967296
# total amount of shared memory available system-wide, in pages (1 page = 4 KB)
kernel.shmmni = 4096
# maximum number of shared memory segments system-wide; the value here is 4096.
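Because shmmax is given in bytes while shmall is given in pages, it helps to convert between the two before adopting values from elsewhere. A minimal sketch, assuming a 4 KB page size and a root shell on RHEL 5.x; adjust the figures to your own memory size:
getconf PAGE_SIZE                       # page size the kernel actually uses
echo $((68719476736 / 4096))            # the shmmax value above expressed in pages
/sbin/sysctl -w kernel.shmmax=68719476736   # apply the shared memory settings at runtime
/sbin/sysctl -w kernel.shmall=4294967296
/sbin/sysctl -w kernel.shmmni=4096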
kernel.sem = 250 32000 100 128
or kernel.sem = 5010 641280 5010 128
# SEMMSL (maximum semaphores per semaphore set), SEMMNS (maximum semaphores system-wide), SEMOPM (maximum operations per semop call), SEMMNI (maximum number of semaphore sets system-wide)
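A quick sketch for inspecting and changing the four semaphore limits at runtime (assumes a root shell; ipcs and sysctl are standard on RHEL 5.x):
cat /proc/sys/kernel/sem                    # current SEMMSL, SEMMNS, SEMOPM, SEMMNI
ipcs -ls                                    # the same limits reported by ipcs
/sbin/sysctl -w kernel.sem="250 32000 100 128"   # apply the first variant above without rebooting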
fs.aio-max-nr = 65536 (or a larger value such as 1048576 or 3145728)
# system-wide limit on concurrent asynchronous I/O requests; use a larger value when the system performs heavy, sustained I/O.
fs.aio-max-size = 131072
# maximum size of a single asynchronous I/O request
fs.file-max = 65536
# maximum number of file handles in the system
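Before raising the limit it is worth checking how close the system already is to it; a small sketch using the standard /proc interface (root shell assumed):
cat /proc/sys/fs/file-nr          # allocated handles, allocated-but-unused handles, current limit
/sbin/sysctl -w fs.file-max=65536 # raise the limit at runtime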
net.core.wmem_default = 8388608
# default amount of memory reserved for a TCP socket send buffer, in bytes
net.core.wmem_max = 16777216
# maximum amount of memory reserved for a TCP socket send buffer, in bytes
net.core.rmem_default = 8388608
# default amount of memory reserved for a TCP socket receive buffer, in bytes
net.core.rmem_max = 16777216
# maximum amount of memory reserved for a TCP socket receive buffer, in bytes
net.core.somaxconn = 262144
# default backlog for listen(); limits the maximum number of pending connection requests
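Before editing the configuration file, it can help to confirm the current socket buffer and backlog defaults and to try the new values live. A minimal sketch, assuming a root shell (runtime changes still need to go into /etc/sysctl.conf to survive a reboot):
/sbin/sysctl net.core.rmem_default net.core.rmem_max    # current receive buffer defaults and limits
/sbin/sysctl net.core.wmem_default net.core.wmem_max    # current send buffer defaults and limits
/sbin/sysctl net.core.somaxconn                          # current listen() backlog limit
/sbin/sysctl -w net.core.rmem_max=16777216 net.core.wmem_max=16777216   # apply the maxima above at runtime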
Network optimization items:
net.ipv4.ip_forward = 0
# disable IP packet forwarding
net.ipv4.tcp_syncookies = 1
# enable the SYN cookies feature
net.ipv4.conf.default.rp_filter = 1
# enable reverse-path (source address) validation
net.ipv4.conf.default.accept_source_route = 0
# reject all source-routed IP packets
net.ipv4.route.gc_timeout = 100
# route cache garbage-collection interval: how long after a route fails before traffic switches to another route; the default is 300.
net.ipv4.ip_local_port_range = 1024 65000
# range of local ports used for outbound connections; the default range (32768 to 61000) is small, so it is widened to 1024 to 65000.
net.ipv4.tcp_max_tw_buckets = 6000
# maximum number of TIME_WAIT sockets the system keeps at the same time. Beyond this number, TIME_WAIT sockets are cleared immediately and a warning is printed. The default is 180000.
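To judge whether 6000 is a sensible ceiling on a given host, count the TIME_WAIT sockets it actually accumulates; a quick sketch using tools present on RHEL 5.x:
netstat -ant | grep -c TIME_WAIT              # number of sockets currently in TIME_WAIT
cat /proc/sys/net/ipv4/tcp_max_tw_buckets     # the current limit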
net.ipv4.tcp_sack = 1
# Selective acknowledgement (SACK) matters most on high-latency connections, where using all the available bandwidth means many packets are in flight awaiting acknowledgement at any moment. In Linux these packets sit in the retransmission queue until they are acknowledged or no longer needed; they are ordered by sequence number but not indexed, so when a received SACK option has to be processed, the TCP stack must search the retransmission queue for the packets it covers, and the longer the queue, the more expensive that search becomes. On connections with a high bandwidth-delay product SACK has a noticeable performance impact, but it can also be disabled without sacrificing interoperability; set the value to 0 to turn SACK off in the TCP stack.
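Whether SACK stays on (1, as above) or is switched off (0) is best tested at runtime on your own workload before making the change permanent; a minimal sketch, assuming a root shell:
cat /proc/sys/net/ipv4/tcp_sack        # current setting
/sbin/sysctl -w net.ipv4.tcp_sack=0    # temporarily disable SACK to measure the effect
/sbin/sysctl -w net.ipv4.tcp_sack=1    # re-enable it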
net.core.netdev_max_backlog = 262144
# maximum number of packets allowed to queue per network interface when packets arrive faster than the kernel can process them
net.ipv4.tcp_window_scaling = 1
# enables TCP window scaling. Set this to 1 if the TCP window may exceed 65535 bytes (64 KB). The window scale factor is a newer TCP option, so the following conventions keep new and old implementations compatible: 1. only the initial SYN from the actively connecting side may carry the window scale option; 2. when the passive side receives a SYN carrying the option, it sends back its own scale factor if it supports the option, and otherwise ignores it; 3. only if both sides support the option is the scale factor used for subsequent data transfer. If the peer does not support window scaling, throughput is limited by the unscaled 64 KB window; if the peer does support it, large amounts of data can be in flight and throughput rises. Turning window scaling off should be a last resort for interoperability problems, not a performance fix.
net.ipv4.tcp_rmem = 4096 87380 4194304
# TCP read (receive) buffer
net.ipv4.tcp_wmem = 4096 16384 4194304
# TCP write (send) buffer
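Each of these two parameters is a triple of minimum, default, and maximum buffer size in bytes, and the kernel auto-tunes each connection between the first and last value. A small sketch to read and set them at runtime (root shell assumed):
cat /proc/sys/net/ipv4/tcp_rmem     # min, default, max receive buffer, in bytes
cat /proc/sys/net/ipv4/tcp_wmem     # min, default, max send buffer, in bytes
/sbin/sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"   # apply the values used above
/sbin/sysctl -w net.ipv4.tcp_wmem="4096 16384 4194304"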
net.ipv4.tcp_max_orphans = 3276800
# maximum number of TCP sockets in the system that are not attached to any user file handle. Beyond this number, orphaned connections are reset immediately and a warning is printed. The limit exists only to fend off simple DoS attacks; do not rely on it or lower it artificially. Rather, increase it (along with memory) if needed.
net.ipv4.tcp_max_syn_backlog = 262144
# length of the SYN queue. The default is 1024; increasing the queue length lets more connections wait to be established.
net.ipv4.tcp_timestamps = 0
# timestamps protect against sequence number wraparound; on a 1 Gbps link the same sequence numbers will certainly recur, and timestamps let the kernel accept such "abnormal" packets. Here the feature is turned off.
net.ipv4.tcp_synack_retries = 1
# to open the connection from the remote side, the kernel sends a SYN carrying an ACK for the earlier SYN: the second step of the three-way handshake. This setting determines how many SYN+ACK packets the kernel sends before giving up on the connection.
net.ipv4.tcp_syn_retries = 1
# number of SYN requests the kernel sends for a new connection before giving up. Should not be greater than 255; the default is 5.
net.ipv4.tcp_tw_recycle = 1
# enable fast recycling of TIME_WAIT sockets
net.ipv4.tcp_tw_reuse = 1
# enable reuse: allow TIME_WAIT sockets to be reused for new TCP connections.
net.ipv4.tcp_mem = 94500000 915000000 927000000
# three thresholds, in memory pages: below the first, TCP is under no memory pressure; above the second, TCP enters the memory-pressure state; above the third, TCP refuses to allocate new sockets.
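Because tcp_mem is counted in memory pages rather than bytes, it is worth converting the thresholds before adopting values copied from elsewhere; a rough sketch, assuming a 4 KB page size:
getconf PAGE_SIZE                  # page size on this host
echo $((94500000 * 4096))          # the first threshold above, expressed in bytes
cat /proc/sys/net/ipv4/tcp_mem     # the thresholds currently in effect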
net.ipv4.tcp_fin_timeout = 1
# when a socket is closed by the local side, this parameter determines how long it remains in the FIN-WAIT-2 state (set here to 1 second).
net.ipv4.tcp_keepalive_time = 60
# interval at which TCP sends keepalive messages when keepalive is enabled. The default is 2 hours; here it is changed to 1 minute.
net.ipv4.tcp_keepalive_probes = 1
net.ipv4.tcp_keepalive_intvl = 2
# together these mean: once a TCP connection has been idle for the keepalive time set above, the kernel sends a probe; if the probe gets no reply within 2 seconds, the kernel abandons the connection and considers it dead.
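The three keepalive parameters combine to give the total time before a dead peer is noticed: idle time + probes x probe interval, which with the values above is roughly 60 + 1 x 2 = 62 seconds. A minimal sketch to check and apply them at runtime (root shell assumed):
/sbin/sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_probes net.ipv4.tcp_keepalive_intvl   # current values
/sbin/sysctl -w net.ipv4.tcp_keepalive_time=60     # idle time before the first probe
/sbin/sysctl -w net.ipv4.tcp_keepalive_probes=1    # number of probes
/sbin/sysctl -w net.ipv4.tcp_keepalive_intvl=2     # interval between probes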
Finally, to make the configuration take effect immediately, run:
# /sbin/sysctl -p
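A minimal end-to-end sketch of persisting one of the settings and reloading (assumes root; /etc/sysctl.conf is the file sysctl -p reads by default):
echo "net.ipv4.tcp_syncookies = 1" >> /etc/sysctl.conf   # append a setting so it survives reboots
/sbin/sysctl -p                                          # reload every setting in /etc/sysctl.conf
/sbin/sysctl net.ipv4.tcp_syncookies                     # confirm the value the kernel is actually using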
When tuning for performance, first set the goal the optimization needs to reach, then find the bottleneck and adjust parameters until the goal is met. Finding the bottleneck is the hard part: starting from a broad scope, it takes many test cases and measurements to narrow things down and finally pin the bottleneck, and many parameters must be tested while being adjusted, which requires patience and persistence.
That is all of the article "Example Analysis of RHEL 5.x and CentOS 5.x Kernel Optimization". Thank you for reading! Hopefully the content shared here has been helpful; if you want to learn more, follow the industry information channel.