
How to Set Up Nginx and Tomcat Servers for Load Balancing on Debian


This article explains how to set up Nginx and Tomcat servers on Debian to achieve load balancing. The material is simple and clear, and easy to learn and understand.

The basic concept of load balancing

Load balancing is a computer networking technique used to distribute load across multiple computers (a cluster), network links, CPUs, disk drives, or other resources, in order to optimize resource usage, maximize throughput, minimize response time, and avoid overload.

Using multiple server components with load balancing, instead of a single component, also improves reliability through redundancy. Load-balancing services are usually provided by dedicated software or hardware.

One of the most important applications of load balancing is using multiple servers to provide a single service, sometimes referred to as a server farm. Load balancing is mainly used for web sites, large Internet Relay Chat networks, high-traffic file download sites, NNTP (Network News Transfer Protocol) services, and DNS services. Load balancers are now beginning to support database services as well, as database load balancers.

For Internet services, a load balancer is usually a software program listening on an external port through which users reach the service. The load balancer forwards each user request to a backend intranet server; the backend returns its response to the load balancer, which in turn sends it to the user. This hides the intranet structure from Internet users and prevents them from accessing the backend servers directly, which makes the servers more secure and blocks attacks on the core network stack and on services running on other ports.

Some load balancers provide special handling for the case where all backend servers have failed, such as forwarding requests to a standby load balancer or displaying a service-outage message. Load balancers let IT teams significantly improve fault tolerance, and they can automatically provide capacity as application traffic increases or decreases.

Let's look at how to build an Nginx + Tomcat server combination with load-balancing capability.

0. Preparation

This uses a Debian environment: install nginx (default installation) and Tomcat (default installation), and have a web project ready to deploy.
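For reference, both can come from the Debian package archives; a minimal sketch, assuming the tomcat7 package, which matches the /var/lib/tomcat7 paths used later (newer Debian releases ship tomcat8 or tomcat9 instead):

sudo apt-get update
sudo apt-get install nginx     # default nginx installation
sudo apt-get install tomcat7   # provides /var/lib/tomcat7/webapps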

1. The nginx.conf configuration file

# Define the user and group that run nginx. If the server is exposed,
# run as a low-privilege user to limit the damage from an intrusion.
user www www;
# Number of nginx worker processes; setting it equal to the total number of CPU cores is recommended.
worker_processes 8;
# Enable the global error log.
error_log /var/log/nginx/error.log info;
# Process (pid) file.
pid /var/run/nginx.pid;
# Maximum number of file descriptors one nginx process may open; keep it
# consistent with ulimit -n. Under high concurrency, system parameters
# such as ulimit -n also matter, not this value alone.
worker_rlimit_nofile 65535;

events {
    # Use the epoll model to improve performance.
    use epoll;
    # Maximum number of connections per worker process.
    worker_connections 65535;
}

http {
    # Extension-to-MIME-type mapping table.
    include mime.types;
    # Default type.
    default_type application/octet-stream;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    # Logs.
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # gzip-compressed transfer.
    gzip on;
    gzip_min_length 1k;    # only compress responses of at least 1k
    gzip_buffers 16 64k;
    gzip_http_version 1.1;
    gzip_comp_level 6;
    gzip_types text/plain application/x-javascript text/css application/xml application/javascript;
    gzip_vary on;

    # Load-balancing groups.
    # Static server group.
    upstream static.zh-jieli.com {
        server 127.0.0.1:808;
    }
    # Dynamic server group.
    upstream zh-jieli.com {
        server 127.0.0.1:8080;
        # server 192.168.8.203:8080;
    }

    # Proxy parameters.
    proxy_redirect off;
    proxy_set_header host $host;
    proxy_set_header x-real-ip $remote_addr;
    proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    proxy_connect_timeout 65;
    proxy_send_timeout 65;
    proxy_read_timeout 65;
    proxy_buffer_size 4k;
    proxy_buffers 4 32k;
    proxy_busy_buffers_size 64k;

    # Cache configuration.
    proxy_cache_key '$host:$server_port$request_uri';
    proxy_temp_file_write_size 64k;
    proxy_temp_path /dev/shm/jielierp/proxy_temp_path;
    proxy_cache_path /dev/shm/jielierp/proxy_cache_path levels=1:2 keys_zone=cache_one:200m inactive=5d max_size=1g;
    proxy_ignore_headers x-accel-expires expires cache-control set-cookie;

    server {
        listen 80;
        server_name erp.zh-jieli.com;
        location / {
            index index;    # default home page is /index
            # proxy_pass http://jieli;
        }
        location ~ .*\.(js|css|ico|png|jpg|eot|svg|ttf|woff) {
            proxy_cache cache_one;
            proxy_cache_valid 200 304 302 5d;
            proxy_cache_valid any 5d;
            proxy_cache_key '$host:$server_port$request_uri';
            add_header x-cache '$upstream_cache_status from $host';
            proxy_pass http://static.zh-jieli.com;
            # Alternative: read all static files directly from disk.
            # root /var/lib/tomcat7/webapps/jielierp/WEB-INF;
            expires 30d;    # cache for 30 days
        }
        # All other pages are reverse-proxied to the Tomcat container.
        location ~ .*$ {
            index index;
            proxy_pass http://zh-jieli.com;
        }
    }

    server {
        listen 808;
        server_name static;
        location / {
        }
        location ~ .*\.(js|css|ico|png|jpg|eot|svg|ttf|woff) {
            # All static files are read directly from disk.
            root /var/lib/tomcat7/webapps/jielierp/WEB-INF;
            expires 30d;    # cache for 30 days
        }
    }
}

With this file's basic configuration you can already achieve load balancing, but the relationships between the pieces take some untangling.

2. Basic explanation

Suppose there is a machine at 192.168.8.203 with Tomcat deployed, serving a J2EE application on port 8080, and the site can be browsed normally. The problem is that Tomcat is a full-featured web container, and serving static pages is relatively resource-hungry for it, especially since each static page is read from disk and then returned. This consumes Tomcat's resources and can hurt the performance of dynamic-page parsing. Adhering to the Linux philosophy that one piece of software should do one thing, Tomcat should only handle JSP dynamic pages. Here we use nginx, introduced earlier, as a reverse proxy. The first step is to separate static from dynamic pages through the proxy. This is very simple.

worker_processes 8;
pid /var/run/nginx.pid;
worker_rlimit_nofile 65535;

events {
    use epoll;
    worker_connections 65535;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    proxy_set_header host $host;
    proxy_set_header x-real-ip $remote_addr;
    proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    proxy_connect_timeout 65;
    proxy_send_timeout 65;
    proxy_read_timeout 65;
    proxy_buffer_size 4k;
    proxy_buffers 4 32k;
    proxy_busy_buffers_size 64k;

    server {
        listen 80;
        server_name xxx.com;
        location / {
            index index;
        }
        # Static resources are proxied here (for now, to the same Tomcat).
        location ~ .*\.(js|css|ico|png|jpg|eot|svg|ttf|woff) {
            proxy_pass http://192.168.8.203:8080;
            expires 30d;
        }
        # Everything else (the dynamic pages) goes to Tomcat as well.
        location ~ .*$ {
            index index;
            proxy_pass http://192.168.8.203:8080;
        }
    }
}

Modify nginx's configuration file: /etc/nginx/nginx.conf already contains a default configuration, most of which is similar to the above; the key part is the server section. I set up the server section as shown above, and the other sections can simply be copied. Within the server block, listen 80 listens on port 80 of the local machine. The location / block sets the default home page; my default home page is index.jsp, which corresponds to /index in my project. It can be changed here as needed, for example

index index.jsp index.html index.htm index.php;

Refer to other articles for the details of that directive. The key part is the static-file location block: a regular-expression match (there are plenty of introductions online) covering all the static-page suffixes used in my project. Its proxy_pass line points at my web application, and expires 30d caches the files for 30 days; this cache cooperates with the cache-control field seen by the front-end page and the user.

The final regular expression, location ~ .*$, matches pages without a suffix; the JSP pages in my project have no suffix, so modify this as needed. It likewise proxies to 192.168.8.203:8080. At this point you may ask: good heavens, what's the point, if everything still ends up at the same Tomcat? Quite so. To really separate static from dynamic, change the static location's proxy_pass line to

root /var/lib/tomcat7/webapps/jielierp/WEB-INF;

This means no proxying at all: the files are taken directly from the local disk. Checking the Tomcat log confirms that the static pages are no longer accessed through Tomcat. But there is another problem: this approach is inflexible, and it is unfriendly to the memory caching and cluster deployment we'll discuss below. So we use the following approach instead and write a separate server block.

server {
    listen 808;
    server_name static;
    location / {
    }
    location ~ .*\.(js|css|ico|png|jpg|eot|svg|ttf|woff) {
        # All static files are read directly from disk.
        root /var/lib/tomcat7/webapps/jielierp/WEB-INF;
        expires 30d;    # cache for 30 days
    }
}

This block listens on port 808. Now the static location's proxy_pass line from before can be changed to proxy_pass http://192.168.8.203:808, and static and dynamic content are truly separated. If you have more than one server, just modify the corresponding IPs. If you find the connection doesn't work, check firewalls, permissions, and other external problems; the configuration itself is as shown.
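To confirm the static server block is answering, you can hit port 808 directly; a sketch, where /js/app.js is a hypothetical stand-in for any real static file in your project:

curl -I http://192.168.8.203:808/js/app.js
# expect a 200 with an Expires header about 30 days out,
# and no corresponding entry in the Tomcat access log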

At this point we find that sending pages uncompressed takes up too much bandwidth. In the spirit of web optimization, the idea is to gzip-compress each page, transmit it to the user, and decompress it there, which effectively reduces bandwidth. We use nginx's gzip module for this; it is integrated into nginx by default. Just add the following configuration to the http section.

gzip on;
gzip_min_length 1k;    # only compress responses of at least 1k
gzip_buffers 16 64k;
gzip_http_version 1.1;
gzip_comp_level 6;
gzip_types text/plain application/x-javascript text/css application/xml application/javascript;
gzip_vary on;
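To check that compression is actually applied (nginx only compresses when the client advertises gzip support and the response exceeds gzip_min_length), a sketch against the local server:

curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" http://127.0.0.1/ | grep -i content-encoding
# expect: content-encoding: gzip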

(Screenshots omitted: home-page transfer sizes before and after enabling gzip. The request counts differed between the two captures only because of browser plug-ins.)

Caching is essential for a heavily visited site. Initially I wanted to combine nginx and redis through a plugin, with nginx using redis as its cache, but the configuration was troublesome, and downloading a plugin and recompiling nginx is a hassle, so nginx's built-in cache is a good choice. It isn't as efficient as redis, but better than nothing. nginx's default cache is a disk file-system cache, not a memory-level cache like redis. At first I thought that was all nginx could do; after reading more material I realized I had been naive and didn't know Linux well enough. In Linux, everything is a file, and it turns out we can cache files in a file system that lives in memory. This may be hard to follow; look up the /dev/shm directory yourself. Caching files in that directory is effectively quite close to a memory cache, though still managed by the file system, so it falls short of a custom-format in-memory cache like redis.
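You can verify that /dev/shm really is memory-backed: it is a tmpfs mount, so files written there live in RAM:

df -h /dev/shm          # shows a tmpfs filesystem mounted on /dev/shm
mount | grep /dev/shm   # likewise reports type tmpfs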

Basic configuration in the http section:

# Cache configuration.
proxy_cache_key '$host:$server_port$request_uri';
proxy_temp_file_write_size 64k;
proxy_temp_path /dev/shm/jielierp/proxy_temp_path;
proxy_cache_path /dev/shm/jielierp/proxy_cache_path levels=1:2 keys_zone=cache_one:200m inactive=5d max_size=1g;
proxy_ignore_headers x-accel-expires expires cache-control set-cookie;

location ~ .*\.(js|css|ico|png|jpg|eot|svg|ttf|woff) {
    proxy_cache cache_one;
    proxy_cache_valid 200 304 302 5d;
    proxy_cache_valid any 5d;
    proxy_cache_key '$host:$server_port$request_uri';
    add_header x-cache '$upstream_cache_status from $host';
    proxy_pass http://192.168.8.203:808;
    expires 30d;    # cache for 30 days
}

With these two blocks in place, caching basically works, but a few points deserve attention; they puzzled me for a long time. The first is the proxy_ignore_headers line in the first block above: if the pages in the web project specify headers such as expires, cache-control, or set-cookie, nginx will not cache those responses, so if you want them cached anyway you must add the proxy_ignore_headers configuration item. The other point is that the file system under /dev/shm by default grants permissions only to the root user; chmod 777 -R /dev/shm is not very safe, and it is better to grant access to a specific user group. That user group is set in the first line of the configuration:

user www www;
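A more restrained alternative to chmod 777, sketched on the assumption that the nginx workers run as the www user and group from that first line:

mkdir -p /dev/shm/jielierp
chown -R www:www /dev/shm/jielierp   # give the worker user ownership instead of opening the directory to everyone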

The add_header x-cache line in the second block above adds a response-header field, making it easy to see whether a request hit the cache.

When running multiple tests, delete all the files under the cache directory with rm -rf /dev/shm/jielierp/proxy_*. Note that you must then run nginx -s reload to reread the configuration, or restart the service: rm -rf only deletes the cache files, while the cached structure information still lives in the nginx process, so the structure remains and entries become inaccessible if you do not reload.
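Putting that together, the reset sequence between tests is:

rm -rf /dev/shm/jielierp/proxy_*
nginx -s reload   # or restart the service, so nginx rebuilds its in-memory cache metadata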

So remember to reload. Here is the effect.

(Screenshots omitted: the first visit misses the cache; a second visit with a forced refresh, ctrl+shift+r in the browser, hits it; and the cached files can be seen under /dev/shm.)
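Without the screenshots, the x-cache header added above tells the same story from the command line; a sketch, with the host and path standing in for any URL the cache covers:

curl -sI http://erp.zh-jieli.com/js/app.js | grep -i x-cache
# first request:   x-cache: MISS from erp.zh-jieli.com
# repeat request:  x-cache: HIT from erp.zh-jieli.com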

We're almost done. The last piece, and also a key technique, is clustering, which uses upstream. Remember the configuration file at the beginning? Here it is.

# Load-balancing groups.
# Static server group.
upstream static {
    server 127.0.0.1:808 weight=1;
    server 192.168.8.203:808 weight=1;
}
# Dynamic server group.
upstream dynamic {
    server 127.0.0.1:8080 weight=1;
    # server 192.168.8.203:8080 weight=1;
}

The above is the cluster configuration. upstream is the keyword; static and dynamic are the names of the two server groups. Taking the first as an example, server 127.0.0.1:808 is the server address, and the weight=1 after it is the weight; if you have more servers, write more lines. I have tested that when one node of the cluster goes down, the system keeps running. For more polling rules, refer to the material online; there's not much to say here. As for how to use it: change proxy_pass http://192.168.8.203:808 to proxy_pass http://static; and balancing is achieved.
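upstream server entries also accept health-related parameters, which is what lets the group survive a dead node; a sketch of common ones:

upstream static {
    server 127.0.0.1:808 weight=2;                                   # receives twice the share of requests
    server 192.168.8.203:808 weight=1 max_fails=3 fail_timeout=30s;  # taken out for 30s after 3 failed attempts
    # server 192.168.8.204:808 backup;                               # hypothetical spare, used only when the others are down
}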

And that's it. Configure the parts above according to your own needs, and you have load balancing within a single machine room. A drawback of this approach is that if the front-facing nginx crashes, the machine loses the ability to be accessed at all, so the front should itself be load-balanced across multiple nginx instances in multiple machine rooms. That is another topic that hasn't been studied here yet; more on it later.

If the dynamic server group is the kind that must keep user state, there will be a session problem: for example, if I log in on server 1 and the next poll of the dynamic group assigns me to server 2, I am forced to log in again. A workaround that treats the symptom is to configure the polling rule to hash on the requesting user's IP and always assign the corresponding server. The configuration is as follows:

upstream dynamic {
    ip_hash;
    server 127.0.0.1:8080;
    server 192.168.8.203:8080;
}

This way one user corresponds to one server node, and the repeated-login problem disappears. The approach that treats the root cause is to use a cache system for unified storage and management of sessions.

Thank you for reading. That is the whole of building Nginx and Tomcat servers under Debian to achieve load balancing; the specific usage still needs to be verified in practice.
