
How to achieve load balancing through static and dynamic separation in Nginx+Tomcat


Today I will talk about how to achieve load balancing through static and dynamic separation with Nginx+Tomcat. Many people may not know much about it, so I have summarized the following content; I hope you gain something from this article.

0. Preliminary preparation

The environment is Debian, with Nginx (default installation), Tomcat (default installation), and a web project deployed in Tomcat.

1. The nginx.conf configuration file

The basic configuration in this file is enough to achieve the load balancing. The tricky part was understanding the various relationships inside it. This post is not a tutorial, just a record to make it easier for me to read back later.
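
A minimal sketch of such an nginx.conf, assuming the addresses and project name that appear later in this post (192.168.8.203, port 808, JieLiERP); the exact values are placeholders, not the original file:

    user www www;
    worker_processes 2;

    events {
        worker_connections 1024;
    }

    http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        gzip on;

        # the two cluster groups used for load balancing later in this post
        upstream static {
            server 127.0.0.1:808 weight=1;
        }
        upstream dynamic {
            server 192.168.8.203:8080 weight=1;
        }

        server {
            listen 80;
            server_name localhost;

            # default home pages
            index index.jsp index.html index.htm index.php;

            # static suffixes: proxy to the static group, browser cache 30 days
            location ~ .*\.(html|htm|js|css|gif|jpg|jpeg|png)$ {
                proxy_pass http://static;
                expires 30d;
            }

            # everything else (the suffix-less JSP pages): proxy to Tomcat
            location / {
                proxy_pass http://dynamic;
            }
        }
    }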

2. Basic explanation

Now suppose there is a machine at 192.168.8.203 with Tomcat deployed; it serves a J2EE application on port 8080, and pages can be browsed normally. The problem is that Tomcat is a general-purpose web container, so serving static pages through it is relatively expensive, especially since each request has to read the static page from disk before returning it.

That consumes Tomcat resources and can hurt the performance of dynamic page parsing. The Linux philosophy is that a piece of software should do one thing only: Tomcat should handle only JSP dynamic pages. Here we use Nginx, introduced earlier, as a reverse proxy. The first step of the proxy is to separate the dynamic and static pages. It's very simple.

Open /etc/nginx/nginx.conf. Most of it can stay as the default; the key is the setting of the server block. I set the server block as shown in the configuration above, and the other sections can simply be copied.

The server block is explained as follows. The listen directive listens on port 80 of the local machine. The index directive sets the default home page; my default page is index.jsp, which corresponds to an index page in my project. This can be changed as needed:

index index.jsp index.html index.htm index.php;

See other articles for the details of that. The key part is the location block with the regular expression; there are many introductions to it online. It matches all the static suffixes used in my project. proxy_pass gives the proxy address, which here points at my web application. expires 30d caches for 30 days; this cache lives on the front end, in the user's browser, controlled by the Cache-Control and Expires response headers.

The other location matches the pages without suffixes; the JSP pages in my project are suffix-less. This can be modified as needed; it likewise proxies to 192.168.8.203:8080. You might ask what the point of that is, since everything still lands on the same Tomcat. Of course that is not the whole story. For a simple implementation of static-dynamic separation, we can change the static location's proxy line to

root /var/lib/tomcat7/webapps/JieLiERP/WEB-INF;

That means no proxying: static files are fetched directly from the local disk. Checking the Tomcat logs confirms that the static pages no longer hit Tomcat. But there is another problem.

This lacks flexibility, and it is unfriendly to memory caching and cluster deployment (both discussed below), so we go one step further and write another server block.
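
The block looks roughly like this (reusing the Tomcat webapp path from above; port 808 is the one this post uses):

    server {
        listen 808;
        server_name localhost;

        location / {
            # serve static files straight from the Tomcat webapp directory
            root /var/lib/tomcat7/webapps/JieLiERP/WEB-INF;
        }
    }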

This block listens on port 808, and the static location's proxy line above can then be changed to proxy_pass http://192.168.8.203:808;, which completes the static-dynamic separation. If there are multiple servers, just change the corresponding IPs. If the connection cannot be established, check external issues such as the firewall and permissions; the configuration itself is as shown.

With this in place, we find that transmitting pages uncompressed consumes too much bandwidth. For web optimization, the idea is to gzip-compress the page, transmit it to the user, and decompress it there, which effectively reduces the data transferred. This is where Nginx's gzip module comes in; it is compiled into Nginx by default, so just add the following configuration to the http block.
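
A typical set of gzip directives (the exact values here are illustrative, not the author's originals):

    gzip on;
    gzip_min_length 1k;      # don't bother compressing tiny responses
    gzip_buffers 4 16k;
    gzip_http_version 1.1;
    gzip_comp_level 2;       # low CPU cost, decent ratio
    gzip_types text/plain text/css application/javascript application/xml;
    gzip_vary on;            # add "Vary: Accept-Encoding" for proxies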

Load the home page to see the effect. Never mind the difference in the number of requests; those two extra requests came from a Chrome plugin, not from the configuration.

Caching is definitely important for websites that get a lot of visitors.

At first I wanted to combine Nginx and Redis through a plug-in, letting Nginx cache into Redis, but the setup turned out to be troublesome: I would have had to download the plug-in and recompile Nginx. So using Nginx's own cache is also a good choice.

Although it is not as efficient as Redis, it is better than nothing. Nginx's default cache is a disk file-system cache, not a memory-level cache like Redis. At first I thought that was all Nginx could do; only after reading up on it did I realize I had been naive and did not know Linux well enough. In Linux, everything is a file.

It turns out we can cache files into the part of the Linux file system that maps to memory. If that sounds hard to follow, look up the directory /dev/shm. Files cached into that directory are, in effect, cached in memory, yet still managed by the file system, so it is still not as good as a purpose-built memory cache like Redis.

Basic configuration in the http block:
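
A sketch of the two blocks as described below; the zone name, sizes, and times are assumptions:

    # first block: in the http section -- the cache lives under /dev/shm, i.e. in memory
    proxy_cache_path /dev/shm/JieLiERP/proxy levels=1:2 keys_zone=my_cache:10m inactive=1d max_size=100m;
    proxy_temp_path /dev/shm/JieLiERP/proxy_temp;
    proxy_cache_key $host$uri$is_args$args;
    # discussed below: ignore upstream headers that would forbid caching
    proxy_ignore_headers Cache-Control Expires Set-Cookie;

    # second block: the static location of the server section
    location ~ .*\.(html|htm|js|css|gif|jpg|jpeg|png)$ {
        proxy_pass http://static;
        proxy_cache my_cache;
        proxy_cache_valid 200 304 1d;
        # discussed below: response header showing whether the cache was hit
        add_header X-Cache-Status $upstream_cache_status;
        expires 30d;
    }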

With these two blocks in place, caching basically works, but a few points deserve attention; they troubled me for a long time. The first is proxy_ignore_headers: if the pages of the web project send headers such as Cache-Control or Expires saying not to cache, Nginx will respect them and nothing will be cached, so the proxy_ignore_headers item is added to shield those headers. The other point is that /dev/shm is by default writable only by root, so chmod 777 -R /dev/shm works but is not very safe; it is better to grant access to a specific user group, and the user group is set in the first line of the main configuration:

user www www;

The add_header line in the second block adds a response header field so you can check whether the cache has been hit.

To clear the cache, rm -rf /dev/shm/JieLiERP/proxy_* removes all the files underneath. Note that if you are running repeated tests, you must also run nginx -s reload to re-read the configuration, or restart the service: rm -rf only deletes the cache files, while the cache's structural metadata still lives in the nginx process, so without a reload the cached pages will appear inaccessible.

So remember to reload. Below is the running effect:

first visit

Ctrl+Shift+R Force Refresh in Browser

You can see the effect there. Now take a look at /dev/shm:

It's almost over here. Finally comes a rather key technique: clustering, clustering, clustering. This uses upstream. Remember the configuration file at the very beginning? That's where it is.
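
For reference, the relevant part looks like this (the server addresses here are illustrative):

    # the cluster groups from the configuration at the top
    upstream static {
        server 127.0.0.1:808 weight=1;
        server 192.168.8.203:808 weight=1;
    }
    upstream dynamic {
        server 127.0.0.1:8080 weight=1;
        server 192.168.8.203:8080 weight=1;
    }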

The block above defines the cluster groups. upstream is the keyword; static and dynamic are the names of the two server cluster groups. Each line such as server 127.0.0.1:808 gives a server address, followed by its weight, weight=1. Write as many servers as you need.

I have tested it: when one server in the cluster goes down, the system keeps running. For more polling rules, look up the references online; I won't go into them here. How do you use the groups? Change proxy_pass http://192.168.8.203:808; to proxy_pass http://static;, and the load is balanced across the group.

That is basically the whole setup.

By configuring the parts above according to your own needs, you can achieve load balancing. One disadvantage of this approach is that if the front-facing Nginx goes down, the machines behind it all lose the ability to be reached, so in practice you would deploy multiple Nginx instances across machine rooms at the front. That is another topic, which I have not studied yet; we'll talk about it later.

If the dynamic server group above needs to save user state, there is a session problem: for example, after I log in on server1, the next request to the dynamic group may be polled to server2, which forces a re-login.

One solution is to configure the polling rule to hash on the user's request IP and then assign the corresponding server, so each IP always lands on the same node. The specific configuration is as follows:
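
A sketch with ip_hash enabled on the dynamic group (addresses illustrative):

    upstream dynamic {
        ip_hash;                  # hash on client IP: one user, one backend
        server 127.0.0.1:8080;
        server 192.168.8.203:8080;
    }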

In this way, one user corresponds to one server node, and the duplicate-login problem disappears. Another solution is to use a caching system for unified storage and management of sessions. I have not tested that approach; the reference materials have related articles you can look up.

Adding SSL to Nginx: the SSL module is likewise built into the default Nginx, so no extra installation is needed, only a little configuration. First, let's create the necessary certificates. The process is relatively simple.
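
A commonly used self-signing sequence that produces the file names used below (assuming OpenSSL is installed):

    # passphrase-protected private key
    openssl genrsa -des3 -out client.key 1024
    # certificate signing request (answer the prompts)
    openssl req -new -key client.key -out client.csr
    # strip the passphrase so Nginx can start unattended
    openssl rsa -in client.key -out client.key.insecure
    # self-sign the certificate, valid for one year
    openssl x509 -req -days 365 -in client.csr -signkey client.key -out client.pem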

Below is the Nginx side. Put the generated client.pem and client.key.insecure files into a directory under Nginx; the relevant Nginx configuration is as follows:
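
A sketch of that server block (the /etc/nginx/ssl/ paths are assumptions; point them at wherever you put the files):

    server {
        listen 443 ssl;
        server_name localhost;

        ssl_certificate /etc/nginx/ssl/client.pem;
        ssl_certificate_key /etc/nginx/ssl/client.key.insecure;

        location / {
            proxy_pass http://dynamic;
        }
    }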

Restart Nginx, and we can visit the site over https. But then, infuriatingly, the browser flagged the certificate as untrusted.

That is not really a problem; the reason is that a CA certificate has to be recognized by the browser, and the https certificate we generated above is self-signed. If we want the trusted, green-lock version, we have to spend money and buy a certificate from a recognized CA. The rest can be solved with materials online.

(Although a self-signed certificate can be used, it still cannot resist DNS spoofing, so this insecure certificate is about the same as having none. That said, it is said that this does prevent hijacking by carriers.)

One more addition: automatically jumping to the secure https connection when the user enters an http URL. This is quite practical. There are various ways to do it; see the blogs in the references for details. I use the following one, which I find relatively simple, with few code changes: it is a forward on port 80.
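
A minimal sketch of that port-80 forward, assuming the https server block above (this replaces the earlier plain-http server block):

    server {
        listen 80;
        server_name localhost;
        # send every plain-http request to the https version of the same URL
        rewrite ^(.*)$ https://$host$1 permanent;
    }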

After reading the above, do you have a better understanding of how to achieve load balancing through static and dynamic separation in Nginx+Tomcat? I hope you found something useful here; thank you for reading.
