What Is Socket Sharding in the Nginx Server?
This article explains what socket sharding is in the Nginx server and how to enable it. These are situations many people run into in practice, so I hope you read carefully and come away with something useful!
The 1.9.1 release of nginx introduces a new feature: support for the so_reuseport socket option, which is available in newer versions of many operating systems, including DragonFly BSD and Linux (kernel 3.9 and later). This option allows multiple sockets to listen on the same combination of IP address and port, and the kernel load balances incoming connections across those sockets. (For NGINX Plus customers, this feature will appear in Release 7, due out at the end of the year.)
The so_reuseport option has many potential practical applications. Other services can use it to implement rolling upgrades of a running executable more simply (nginx already supports rolling upgrades by other means). For nginx, enabling this option reduces lock contention in certain scenarios and improves performance.
When the so_reuseport option is not enabled, a single listening socket notifies the worker processes of incoming connections, and each worker tries to take the connection.
When the so_reuseport option is enabled, there is instead a separate listening socket for each IP address and port combination in every worker process. The kernel decides which listening socket, and implicitly which worker process, gets each connection. This reduces lock contention between workers competing for new connections and improves performance on multicore systems. It also means, however, that when a worker stalls, the blocking affects not only the connections that worker has already accepted but also the connection requests the kernel has already assigned to it.
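To make the mechanism concrete, here is a minimal C sketch of the per-worker listening socket model described above. It is a hypothetical illustration of the kernel feature, not nginx's worker code; the port, worker count, and reply text are invented for the example, and it needs a kernel that supports so_reuseport (Linux 3.9 or later, or DragonFly BSD).
/*
 * reuseport_workers.c -- illustrative sketch only, not nginx source.
 * Each worker opens its OWN listening socket bound to the same address
 * and port; the kernel load balances connections across these sockets,
 * so the workers never compete for a single accept queue.
 * Build: cc -o reuseport_workers reuseport_workers.c
 */
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define PORT    8080   /* invented for the example */
#define WORKERS 4

static void worker(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;

    /* SO_REUSEPORT must be set before bind(); every socket sharing the
     * port has to set it, otherwise bind() fails with EADDRINUSE. */
    if (fd < 0 ||
        setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)) < 0) {
        perror("socket/setsockopt");   /* e.g. kernel older than 3.9 */
        exit(1);
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(PORT);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, 128) < 0) {
        perror("bind/listen");
        exit(1);
    }

    for (;;) {
        int conn = accept(fd, NULL, NULL);
        if (conn < 0)
            continue;
        char msg[64];
        int n = snprintf(msg, sizeof(msg),
                         "handled by worker %d\n", (int)getpid());
        write(conn, msg, n);
        close(conn);
    }
}

int main(void)
{
    for (int i = 0; i < WORKERS; i++)
        if (fork() == 0)
            worker();              /* children run forever */
    while (wait(NULL) > 0)         /* parent just waits */
        ;
    return 0;
}
If you run it and connect a few times (for example with nc 127.0.0.1 8080), different worker PIDs should answer, which is the kernel spreading connections across the per-worker sockets.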
Setting up shared sockets
To make the so_reuseport socket option work, add the new reuseport parameter to the listen directive of an http or stream (TCP) socket, as in the following example:
http {
    server {
        listen 80 reuseport;
        server_name localhost;
        ...
    }
}

stream {
    server {
        listen 12345 reuseport;
        ...
    }
}
Including the reuseport parameter also disables the accept_mutex directive for that socket, because the mutex is redundant with reuseport. It can still be worth setting accept_mutex for ports on which you do not use reuseport.
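For contrast, here is a sketch of the traditional shared-socket model: one listening socket is created before fork() and inherited by every worker, so all workers compete to accept() from the same queue. That competition is what accept_mutex arbitrates, which is why the mutex has nothing left to do once reuseport gives each worker a private socket. Again, this is an invented example (port and worker count are arbitrary), not nginx source, and error handling is omitted for brevity.
/*
 * shared_listener.c -- the traditional model: ONE listening socket,
 * created before fork() and inherited by every worker. Illustrative
 * only; error handling omitted.
 */
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(8080);

    bind(fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(fd, 128);

    for (int i = 0; i < 4; i++) {
        if (fork() == 0) {
            /* Every child accepts from the SAME socket: whichever worker
             * wins the race gets the connection. This is the contention
             * that accept_mutex serializes in nginx; with reuseport the
             * shared queue (and therefore the mutex) disappears. */
            for (;;) {
                int conn = accept(fd, NULL, NULL);
                if (conn >= 0)
                    close(conn);
            }
        }
    }
    while (wait(NULL) > 0)
        ;
    return 0;
}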
Benchmarking reuseport performance
I ran a benchmark with 4 nginx worker processes on a 36-core AWS instance. To minimize network effects, the client and nginx both ran on localhost, and nginx returned an OK string instead of serving a file. I compared three nginx configurations: the default (equivalent to accept_mutex on), accept_mutex off, and reuseport. With reuseport, requests per second were two to three times higher than with the other two configurations, and both latency and its standard deviation dropped.
I ran another, related test with the client and nginx on separate machines and with nginx returning an HTML file. As the table below shows, the latency reduction with reuseport was similar to the first test, and the reduction in the standard deviation of latency was even more pronounced (almost a factor of ten). Other results (not shown in the table) were equally encouraging. With reuseport, the load was spread evenly across the worker processes. Under the default configuration (equivalent to accept_mutex on), some workers received a larger share of the load, and with accept_mutex off all workers showed high load.
                   latency (ms)   latency stdev (ms)   CPU load
default            15.65          26.59                0.3
accept_mutex off   15.59          26.48                10
reuseport          12.35          3.15                 0.3
In these tests the rate of connection requests was high, but the requests required little processing. Other preliminary tests also indicate that reuseport noticeably improves performance when traffic matches this profile. (The reuseport parameter is not available for the listen directive in the mail context, for example for email, because email traffic does not match this profile.) We encourage you to test reuseport on your own workload first rather than rolling it out wholesale.
"Nginx server Socket segmentation is what" content is introduced here, thank you for reading. If you want to know more about industry-related knowledge, you can pay attention to the website. Xiaobian will output more high-quality practical articles for everyone!