How to configure load balancing for TCP in the Nginx server
Today I will share some knowledge about how to configure load balancing for TCP in the Nginx server. The content is detailed and the logic is clear. Most people are still not very familiar with this topic, so I am sharing this article for your reference. I hope you will get something out of it. Let's take a look.
First, install nginx
1. Download nginx
# wget http://nginx.org/download/nginx-1.2.4.tar.gz
2. Download the tcp module patch
# wget https://github.com/yaoweibin/nginx_tcp_proxy_module/tarball/master
Source code home page: https://github.com/yaoweibin/nginx_tcp_proxy_module
3. Install nginx
# tar xvf nginx-1.2.4.tar.gz
# tar xvf yaoweibin-nginx_tcp_proxy_module-v0.4-45-ga40c99a.tar.gz
# cd nginx-1.2.4
# patch -p1 < ../yaoweibin-nginx_tcp_proxy_module-a40c99a/tcp.patch
# ./configure --prefix=/usr/local/nginx --with-pcre=../pcre-8.30 --add-module=../yaoweibin-nginx_tcp_proxy_module-a40c99a/
# make
# make install
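Note that --with-pcre points at an unpacked pcre-8.30 source tree one directory up; adjust the path to wherever your PCRE sources actually live. Once make install finishes, a quick sanity check (assuming the default install prefix above) is to print the version and the configure arguments the binary was built with:
# /usr/local/nginx/sbin/nginx -V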
Second, modify the configuration file
Modify the nginx.conf configuration file:
# cd /usr/local/nginx/conf
# vim nginx.conf
worker_processes 1;

events {
    worker_connections 1024;
}

tcp {
    upstream mssql {
        server 10.0.1.201:1433;
        server 10.0.1.202:1433;
        check interval=3000 rise=2 fall=5 timeout=1000;
    }

    server {
        listen 1433;
        proxy_pass mssql;
    }
}
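The patch predates nginx 1.9.0; on a modern nginx the built-in stream module expresses the same topology without any patching. A minimal equivalent sketch (stream-module syntax; note that the active check directive above is specific to the patch, while the open-source stream module only does passive failure detection):

stream {
    upstream mssql {
        server 10.0.1.201:1433;
        server 10.0.1.202:1433;
    }

    server {
        listen 1433;
        proxy_pass mssql;
    }
}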
Third, start nginx
# cd /usr/local/nginx/sbin/
# ./nginx
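Later, after changing nginx.conf, you can validate and apply the configuration without a full restart (standard nginx command-line options):
# ./nginx -t          # parse and test the configuration files
# ./nginx -s reload   # tell the running master process to re-read the configuration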
View port 1433:
# lsof -i :1433
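If lsof is not available, ss from iproute2 answers the same question:
# ss -tlnp | grep 1433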
Fourth, testing
# telnet 10.0.1.201 1433
Fifth, test with a SQL Server client tool
Sixth, the implementation principle of tcp load balancing
When nginx accepts a new client connection on the listening port, it immediately runs the routing scheduling algorithm to obtain the IP of the server to connect to, and then creates a new upstream connection to that server.
TCP load balancing supports nginx's original scheduling algorithms, including round robin (the default) and hash (consistent selection). The scheduling data also works together with the health check module to pick an appropriate, healthy upstream server for each connection. If you use hash load balancing, you can hash on $remote_addr (the client IP) to get simple session persistence: connections from the same client IP always land on the same upstream server.
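In stream-module syntax, the client-IP persistence described above looks like the following sketch (hash ... consistent is the stream module's consistent-hashing form; the patched tcp module's own syntax differs):

upstream mssql {
    hash $remote_addr consistent;
    server 10.0.1.201:1433;
    server 10.0.1.202:1433;
}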
Like other upstream modules, the TCP stream module supports custom forwarding weights for load balancing (configured with "weight=2"), as well as the backup and down parameters for marking failed upstream servers. The max_conns parameter limits the number of TCP connections to a server; set it according to the server's capacity, especially in high-concurrency scenarios, to provide overload protection.
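Put together, those per-server parameters look like this sketch in stream-module syntax (the third server is a hypothetical standby added for illustration; max_conns has been available in the open-source stream module since nginx 1.11.5):

upstream mssql {
    server 10.0.1.201:1433 weight=2 max_conns=300;
    server 10.0.1.202:1433 weight=1 max_conns=300;
    server 10.0.1.203:1433 backup;    # hypothetical standby, used only when the others fail
}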
Nginx monitors both the client connection and the upstream connection. Once data is received, nginx immediately reads it and pushes it to the other side, without inspecting the data inside the TCP connection. Nginx maintains an in-memory buffer for client and upstream writes; if the client or server transfers a large amount of data, the buffer grows accordingly.
Nginx closes the connection when it receives a close notification from either side, or when the TCP connection has been idle for longer than the proxy_timeout setting. For long-lived TCP connections, choose an appropriate proxy_timeout, and also pay attention to the listening socket's so_keepalive parameter, to avoid premature disconnection.
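A combined sketch of these knobs in stream-module syntax (all values are illustrative, not recommendations; so_keepalive takes idle:interval:count):

server {
    listen 1433 so_keepalive=30m::10;   # start keepalive probes after 30 minutes idle, up to 10 probes
    proxy_connect_timeout 5s;           # how long to wait when dialing an upstream server
    proxy_timeout 10m;                  # close the session after 10 minutes with no traffic
    proxy_buffer_size 64k;              # the in-memory buffer discussed above
    proxy_pass mssql;
}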
PS: service health monitoring
The TCP load balancing module supports built-in health checks. If an upstream server fails to accept a TCP connection within proxy_connect_timeout, it is considered failed, and nginx immediately tries another server in the upstream group. Connection failures are recorded in nginx's error log.
If a server fails repeatedly (exceeding the max_fails / fail_timeout thresholds), nginx kicks it out of the group. For 60 seconds after a server is kicked out, nginx occasionally tries to reconnect to it to see whether it has recovered. If it has, nginx adds it back into the upstream group and slowly ramps up its share of connection requests.
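The thresholds are set per server in the upstream block. A brief sketch (values are illustrative; with these settings, 3 failed attempts within 30 seconds mark a server as down for the next 30 seconds):

upstream mssql {
    server 10.0.1.201:1433 max_fails=3 fail_timeout=30s;
    server 10.0.1.202:1433 max_fails=3 fail_timeout=30s;
}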
It is "slowly increasing", because usually a service has "hot data", that is, more than 80% or more requests are actually blocked in the "hot data cache". Only a small number of requests are actually processed. When the machine is just started, the "hot data cache" has not actually been established. At this time, a large number of requests are forwarded abruptly, which is likely to cause the machine to "bear" and hang up again. Take mysql as an example, more than 95% of our mysql queries usually fall in memory cache, and not many queries are actually executed.
In fact, whether for a single machine or a cluster, restarts and failovers under high-concurrency traffic both face this problem, and there are two main ways to solve it:
(1) Increase requests gradually, from few to many, accumulating hot data step by step until the service reaches its normal state.
(2) Prepare the "commonly used" data in advance, actively "warming up" the service, and only open the server to traffic after the warm-up completes, as sketched below.
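A minimal warm-up sketch for the mysql example above; everything in it is hypothetical (the host, credentials, and hot_queries.sql, a file of frequent queries one per line, are stand-ins for your own environment):

#!/bin/sh
# Replay common queries against a freshly restarted backend to populate
# its caches before adding it back to the load-balancing pool.
while read -r q; do
    mysql -h 10.0.1.201 -u app -p"$PASS" appdb -e "$q" > /dev/null
done < hot_queries.sql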
TCP load balancing follows the same principle as lvs: it works at a lower layer, so its performance is much higher than that of the original HTTP load balancing. However, it is not better than lvs, which sits in a kernel module, while nginx works in user mode and is relatively heavyweight. It is also a pity that this module turned out to be a paid feature.
These are all the contents of the article "How to configure load balancing for TCP in the Nginx server". Thank you for reading! I believe you will gain a lot from it.