

Implementation of a High-Performance Load-Balancing Cluster Based on Nginx + Tomcat




I. Objective: a Tomcat cluster that achieves high-performance load balancing:

II. Steps

1. First download Nginx; use the stable version:
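On Linux this can be done from the command line (a sketch only; the version number below is just an example of a stable release, the article does not name one, and on Windows you would download the zip package instead):

wget http://nginx.org/download/nginx-1.24.0.tar.gz   # example stable release
tar -zxvf nginx-1.24.0.tar.gz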

2. Then extract two copies of Tomcat, named apache-tomcat-6.0.33-1 and apache-tomcat-6.0.33-2:
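For example, assuming the Tomcat 6.0.33 archive has already been downloaded (the article only fixes the resulting directory names, not the commands):

tar -zxvf apache-tomcat-6.0.33.tar.gz
cp -r apache-tomcat-6.0.33 apache-tomcat-6.0.33-1
cp -r apache-tomcat-6.0.33 apache-tomcat-6.0.33-2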

3. Next, modify the startup ports of the two Tomcat instances to 18080 and 28080 respectively. Taking the first Tomcat as an example, open server.xml in its conf directory:

A total of three ports need to be modified: the server shutdown port, the HTTP connector port, and the AJP connector port.
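A sketch of the relevant lines in conf/server.xml of the first Tomcat (only the 18080 HTTP port is given in the article; 18005 and 18009 are illustrative values that simply must not clash with the second instance):

<!-- conf/server.xml of apache-tomcat-6.0.33-1 (excerpt) -->
<!-- Shutdown port: 18005 is an assumed value, it only needs to be unique per instance -->
<Server port="18005" shutdown="SHUTDOWN">
  ...
  <!-- HTTP connector: the 18080 port used in this article -->
  <Connector port="18080" protocol="HTTP/1.1"
             connectionTimeout="20000" redirectPort="8443" />
  <!-- AJP connector: 18009 is likewise an assumed, unique value -->
  <Connector port="18009" protocol="AJP/1.3" redirectPort="8443" />
  ...
</Server>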

Of course, the same is true for the second Tomcat, as shown below:

4. Then start both Tomcat instances and visit them to check that they work:
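A minimal way to do this, assuming a Linux host (on Windows run the corresponding startup.bat):

apache-tomcat-6.0.33-1/bin/startup.sh
apache-tomcat-6.0.33-2/bin/startup.sh
# then open http://localhost:18080/ and http://localhost:28080/ in a browser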

5. Then modify the default pages of the two Tomcat instances (only so that we can tell which Tomcat serves a given request; any change will do):
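The article shows the changed pages only in screenshots; purely as an illustration, each instance's ROOT welcome page could be replaced with a marker such as:

<!-- webapps/ROOT/index.jsp of apache-tomcat-6.0.33-1 (hypothetical content) -->
<html><body><h1>This is Tomcat1</h1></body></html>

<!-- webapps/ROOT/index.jsp of apache-tomcat-6.0.33-2 (hypothetical content) -->
<html><body><h1>This is Tomcat2</h1></body></html>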

After the modification, visit each instance again, as shown in the following figure:

6. OK, now we can configure Nginx to achieve load balancing. It is actually very simple: you only need to edit Nginx's configuration file:

The configuration is as follows (only a simple configuration has been made here, and the actual production environment can be configured in more detail):

worker_processes 1;                 # number of worker processes, usually equal to the number of CPU cores

events {
    worker_connections 1024;        # maximum connections per worker (max connections = connections x processes)
}

http {
    include       mime.types;                # mapping of file extensions to MIME types
    default_type  application/octet-stream;  # default file type

    # Enable efficient file transfer. The sendfile directive controls whether nginx uses the
    # sendfile() call to output files: set it to on for ordinary applications; for download-style,
    # disk-I/O-heavy applications set it to off to balance disk and network I/O and reduce system
    # load. Note: if images do not display properly, change this to off.
    sendfile on;

    keepalive_timeout 65;           # keep-alive timeout (seconds)
    gzip on;                        # enable gzip compression

    # Cluster
    upstream netitcast.com {        # name of the server cluster
        # weight: the larger the weight, the higher the probability of being chosen
        server 127.0.0.1:18080 weight=1;
        server 127.0.0.1:28080 weight=2;
    }

    # Current Nginx server configuration
    server {
        listen      80;             # listen on port 80; can be changed to another port
        server_name localhost;      # domain name of the current service

        location / {
            proxy_pass     http://netitcast.com;
            proxy_redirect default;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

The core configuration is as follows:
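That is, the upstream group plus the location block that proxies to it (repeated from the full configuration above):

upstream netitcast.com {
    server 127.0.0.1:18080 weight=1;
    server 127.0.0.1:28080 weight=2;
}
server {
    listen      80;
    server_name localhost;
    location / {
        proxy_pass     http://netitcast.com;
        proxy_redirect default;
    }
}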

Now that the configuration is complete, let's demonstrate load balancing.

7. First, we start Nginx:
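For example (a sketch assuming a default source install under /usr/local/nginx; on Windows you would run "start nginx" inside the Nginx directory instead):

cd /usr/local/nginx/sbin
./nginx                 # start Nginx
./nginx -s reload       # reload after editing nginx.conf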

8. Then we enter localhost/index.jsp in the browser to see the effect.

On the first visit, we find that the request is served by Tomcat2:

Refreshing, the request again goes to Tomcat2:

Refreshing again, it now goes to Tomcat1:

Refreshing once more, it is back on Tomcat2:

So far, we have implemented a load-balanced Tomcat cluster with Nginx. If we keep refreshing, we find that Tomcat2 is hit roughly twice as often as Tomcat1; this is the weight we configured for the two Tomcat servers in Nginx taking effect, as shown below:
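The lines responsible are the weights in the upstream block of nginx.conf:

upstream netitcast.com {
    server 127.0.0.1:18080 weight=1;   # Tomcat1
    server 127.0.0.1:28080 weight=2;   # Tomcat2 receives roughly twice as many requests
}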

III. Summary

Who would have thought that building a high-performance load-balancing cluster could be so easy? Nginx is this powerful yet this simple to configure, so why turn it down? It is also far cheaper than hardware load balancers such as F5 BIG-IP or Citrix NetScaler, which cost anywhere from one hundred thousand to several hundred thousand yuan. And don't forget that Nginx is not just a reverse proxy server: it can also host a website directly, acting as a Web server for HTTP requests.

That is the whole content of this article. I hope it is helpful for your study, and I hope you will continue to support us.



