
How to achieve load balancing between nginx and Tomcat CVM clusters, explained in detail



The following is a detailed explanation of how to achieve load balancing between nginx and Tomcat CVM clusters, which we hope will help you in practical work. Load balancing covers a lot of ground and there is plenty of material online, so rather than dwelling on theory, this walkthrough draws on experience accumulated in the industry.

I. Load balancing between nginx and Tomcat

1. Create a file named nginx-tomcat.conf in /usr/local/nginx/conf

Contents of the file:

user nobody;
worker_processes 2;

events {
    worker_connections 1024;
}

http {
    # upstream configures a group of back-end servers.
    # After a request is forwarded to the upstream, nginx sends it to one of these servers according to the policy;
    # in other words, this is the server-pool information configured for load balancing.
    upstream tomcats {
        server 121.42.41.143:8080;
        server 219.133.55.36;
    }

    server {
        listen 80;
        server_name 121.42.41.143;
        access_log logs/tomcat-nginx.access.log combined;

        # Reverse-proxy settings: send all requests under / to Tomcat
        location / {
            # root  html;
            # index index.html index.htm;
            # ===== proxy settings provided by nginx =====
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://tomcats;
        }
    }
}

2. Use this configuration file to start nginx (shut down nginx before starting)

[root@iZ28b4kreuaZ bin]# /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx-tomcat.conf
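If you want a quick sanity check first, nginx's -t flag tests the configuration and -s reload re-reads it without a full restart; a sketch assuming the same install paths as above:

[root@iZ28b4kreuaZ bin]# /usr/local/nginx/sbin/nginx -t -c /usr/local/nginx/conf/nginx-tomcat.conf
[root@iZ28b4kreuaZ bin]# /usr/local/nginx/sbin/nginx -s reload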

II. Detailed explanation of the configuration file

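To recap how the directives above fit together, here is an annotated sketch of the same configuration (the weight parameter is an optional illustrative addition, not part of the file above):

http {
    # (user, worker_processes and events settings omitted here)

    # upstream defines the pool of back-end Tomcat servers; requests are
    # distributed among them round-robin by default, optionally weighted.
    upstream tomcats {
        server 121.42.41.143:8080 weight=1;
        server 219.133.55.36 weight=1;
    }

    server {
        listen 80;                     # port nginx listens on
        server_name 121.42.41.143;     # public address of the proxy

        # every request under / is forwarded to the upstream pool
        location / {
            # pass the original host and client address through to Tomcat
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://tomcats;     # "tomcats" must match the upstream name
        }
    }
}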

III. Installation of fair policy

Fair policy: it automatically picks the back-end server with the best response time based on each server's current performance. It is provided by a third-party module, so the module has to be installed first (a usage sketch follows the installation steps below).

Installation steps

1. Download gnosek-nginx-upstream-fair-a18b409.tar.gz

2. Decompress it: tar zxvf gnosek-nginx-upstream-fair-a18b409.tar.gz

3. Move the extracted directory to /usr/local and rename it to nginx-upstream-fair

4. Add this module to our installed nginx

A. First, go to the nginx-1.8.1 source file directory and execute:

[root@iZ28b4kreuaZ nginx-1.8.1]# ./configure --prefix=/usr/local/nginx --add-module=/usr/local/nginx-upstream-fair/

B. Run make to compile

C. Go to nginx-1.8.1/objs/ and overwrite the original /usr/local/nginx/sbin/nginx binary with the newly built one:

[root@iZ28b4kreuaZ objs]# cp nginx /usr/local/nginx/sbin

D. Start nginx and check that it still works.
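With the module compiled in, enabling the policy is a one-line change to the upstream block in nginx-tomcat.conf; the fair directive is what nginx-upstream-fair provides (a minimal sketch based on the configuration above):

upstream tomcats {
    fair;
    server 121.42.41.143:8080;
    server 219.133.55.36;
}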

IV. Session sharing in a distributed server cluster

Problem: when a user logs in on the tomcat1 server, tomcat1 saves that user's login information; but when the proxy server later sends the same user's request to tomcat2 or tomcat3, those servers cannot see the login information and the user has to log in again. There are three solutions:

1. Pin each user's requests to the same server, so there is no need to share sessions between servers. This scheme is simple but lacks fault tolerance: if the server fails, the user's requests are assigned to another server and the user must log in again.

How to implement it: set the load-balancing policy to ip_hash:

upstream tomcats {
    ip_hash;
    server 121.42.41.143:8080;
    server 219.133.55.36;
}

2. Session replication: when the session on any server changes, that server broadcasts the change to the other servers, and each of them applies the same change, so every server always holds the session. The drawback is that with many Tomcat servers in the cluster the broadcasts add network load and performance drops. Implementation:

A. Configure session broadcasting (clustering) in Tomcat's server.xml.

B. Add a tag to the web.xml of our distributed application.

Its purpose is to declare that the application can run in a cluster environment (a minimal sketch of both changes follows).
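A minimal sketch of both changes, assuming Tomcat's default clustering setup (multicast membership with the DeltaManager); the exact <Cluster> tuning is environment-specific:

<!-- conf/server.xml: inside the <Engine> (or <Host>) element -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>

<!-- the application's web.xml: mark the application as distributable -->
<distributable/>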

3. Manage sessions in additional shared storage. A distributed cache such as Redis or memcached is generally used; here we use memcached.

A. Installation of memcached: http://www.cnblogs.com/jalja/p/6121978.html

B. How memcached-based session sharing works

Sticky mode: each user is pinned to one Tomcat, which keeps the session locally and writes a backup copy to memcached, so the session can be recovered from memcached if that Tomcat fails.

Non-sticky mode: a request may be handled by any Tomcat, so the session is kept in memcached and loaded from there on each request.

C. Setting up the environment for Tomcat to access memcached (we use Tomcat 7)

1. Copy the jar packages into the tomcat/lib directory. The jars fall into three categories:

1) spymemcached.jar: the memcached Java client

2) memcached-session-manager packages: memcached-session-manager-{version}.jar (the core package) and memcached-session-manager-tc{tomcat-version}-{version}.jar (the Tomcat-version-specific package)

3) a serialization toolkit: there are several options. If none is configured, the JDK's built-in serialization is used; alternatives such as kryo, javolution, xstream and flexjson ship as msm-{tool}-serializer-{version}.jar. With a third-party serializer the classes generally do not need to implement the Serializable interface.

D. Configure a Context and register MemcachedBackupSessionManager as the Manager that handles sessions

Context configuration lookup order:

1) conf/context.xml: global configuration that applies to all applications

2) conf/[enginename]/[hostname]/context.xml.default: global configuration that applies to all applications under the specified host

3) conf/[enginename]/[hostname]/[contextpath].xml: applies only to the application specified by contextpath

4) the application's own META-INF/context.xml: applies only to that application

5) a <Context> declared with docBase under conf/server.xml

If you only want session management to apply to a specific application, it is best to use option 3 or 4; if you want it to apply to all applications, use option 1, 2 or 5.

Configuration of conf/context.xml:

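A minimal sketch of such a Context configuration, assuming a single memcached node at 127.0.0.1:11211; the node address, the sticky setting and the kryo serializer are illustrative choices rather than values from this setup:

<Context>
    <!-- MemcachedBackupSessionManager stores and backs up sessions in memcached -->
    <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
             memcachedNodes="n1:127.0.0.1:11211"
             sticky="false"
             requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
             transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"/>
</Context>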

V. Matters needing attention when developing for a cluster environment

1. Entity classes must be serializable (implements Serializable) and should declare a serialVersionUID:

private static final long serialVersionUID = 3349238980725146825L;
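A minimal sketch of such an entity class (the class name and field are illustrative, not from the original article):

import java.io.Serializable;

// Sessions are serialized when they are replicated or stored in memcached,
// so every object placed in the session must be Serializable.
public class User implements Serializable {
    private static final long serialVersionUID = 3349238980725146825L;

    private String username;

    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }
}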

2. Obtaining the client's request address. Add the following configuration to nginx-tomcat.conf:

server {
    location / {
        proxy_set_header X-Real-IP $remote_addr; # real client IP
    }
}

Java Code:

public static String getIp(HttpServletRequest request) {
    String remoteIp = request.getRemoteAddr();      // address of the direct caller (the nginx proxy)
    String headIp = request.getHeader("X-Real-IP"); // real client IP passed through by nginx
    return headIp == null ? remoteIp : headIp;
}

3. Separate static and dynamic content.

Serve static files (CSS, JS, images) directly from the nginx server.
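A minimal sketch of serving static assets from nginx, assuming they have been copied into nginx's html directory (the extension list and expires value are illustrative):

location ~* \.(css|js|png|jpg|gif)$ {
    root    /usr/local/nginx/html;
    expires 7d;
}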

That is the detailed answer on how to achieve load balancing between nginx and Tomcat CVM clusters. If there is anything else you would like to know, you can look it up in the industry information section or ask our technical engineers, who have more than ten years of experience in the industry.
