2025-04-02 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/03 Report --
Server load balancing is a network service that distributes traffic across multiple CVMs (cloud virtual machines forming a computing cluster). By distributing traffic, it quickly increases the external service capacity of an application system; by hiding the actual service ports, it improves the security of the internal system; and by eliminating single points of failure, it improves the reliability of the application. Today I will show you how to achieve load balancing with Nginx and Tomcat.
First, let's go over the key concepts.
Cluster (Cluster)
Simply put, N cloud servers form a loosely coupled multiprocessor system that appears externally as a single server, communicating internally over the network so that the N servers cooperate to carry the request load of a website. As one Zhihu author put it, "the same business, deployed on multiple servers" is a cluster. Task scheduling is the central problem in a cluster.
Load balancing (Load Balance)
Simply put, requests are distributed to the servers in the cluster according to some load strategy, so that the whole server farm handles the site's requests and completes the task together.
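As a toy illustration of the idea (nginx does this internally; the server addresses below are just placeholders), the simplest strategy, round robin, can be sketched as:

```python
from itertools import cycle

# Hypothetical backend addresses, only for illustration
servers = ["192.168.3.101:8080", "192.168.3.102:8080"]

def round_robin(servers):
    """Yield backend servers one by one, in order, repeating forever."""
    return cycle(servers)

backend = round_robin(servers)
# Eight incoming requests are spread evenly across the two backends
assignments = [next(backend) for _ in range(8)]
```

With two backends, eight requests alternate between them, four each.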
Nginx is a high-performance HTTP and reverse proxy server, as well as an IMAP/POP3/SMTP proxy server. Its main advantages:
1. High concurrency:
Officially, Nginx supports up to 50,000 concurrent connections; in real production environments it typically handles 20,000 to 30,000.
2. Low memory consumption:
With 30,000 concurrent connections, 10 started Nginx processes consume only about 150 MB of memory (15 MB × 10 = 150 MB).
3. High stability:
For reverse proxy, the probability of downtime is minimal.
4. The configuration file is very simple:
The configuration style is as straightforward and readable as a program.
5. Built-in health check function:
If one of the Web servers on the Nginx Proxy backend goes down, the front-end access will not be affected.
And so on.
Because of nginx's good performance, many large companies use it, not least because nginx is open source and free. Besides the features described above, this project mainly uses nginx for three things:
Static/dynamic separation
Load balancing
Reverse proxy
Static/dynamic separation:
The principle is simple: static resources such as HTML files and images are served directly by nginx, while backend requests are forwarded to the backend server for processing. Because nginx can also cache, this both speeds up access and reduces the load on the Tomcat servers.
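A minimal sketch of static/dynamic separation in nginx.conf (the file-type list, root path, and upstream name here are illustrative assumptions, not from the original article):

```nginx
# Serve static resources directly from nginx's document root
location ~* \.(html|css|js|png|jpg|gif)$ {
    root    /usr/local/nginx/html;
    expires 7d;                       # let clients cache static files
}

# Forward everything else (dynamic requests) to the Tomcat backend
location / {
    proxy_pass http://tomcat-server;  # upstream defined in the http block
}
```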
Reverse proxy:
A reverse proxy (Reverse Proxy) means the proxy server accepts connection requests from the Internet, forwards them to servers on the internal network, and returns the results obtained from those servers to the requesting client. To the outside, the proxy server appears to be the server itself.
Main features of load balancing:
Reduce the load on the backend servers, automatically remove failed backend servers, and cache backend responses to speed up requests.
Main strategies of nginx load balancing:
Round robin (default)
Each request is assigned to a different backend server in turn, in order of arrival. If a backend server goes down, it is automatically removed.
weight
Specifies the polling probability. The weight is proportional to the access ratio; used when backend servers have uneven performance.
ip_hash
Each request is assigned according to the hash of the client IP, so that each visitor consistently reaches the same backend server, which can solve the session problem.
fair
Requests are assigned according to the response time of the backend servers, giving priority to those with short response times.
url_hash
Requests are assigned according to the hash of the requested URL, so that each URL is directed to the same backend server; this is more effective when the backend servers cache content.
The default is round robin, but it has a problem: sessions become inconsistent across the distributed servers. We can use ip_hash instead, so that requests from the same IP address are always handled by the same server and no session inconsistency occurs. This question is not the focus of this article; interested readers can look it up on their own.
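As a sketch, switching the upstream block (configured later in this article) to ip_hash is a one-line change:

```nginx
upstream tomcat-server {
    ip_hash;                     # pin each client IP to one backend
    server 192.168.3.101:8080;
    server 192.168.3.102:8080;
}
```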
Configuration and installation
Tomcat and nginx are required (the first two posts of this blog cover the installation of Tomcat and nginx in detail).
After successful installation
Tomcat configuration files:
cd /usr/local/tomcat7/conf/
catalina.policy: permission control configuration file
catalina.properties: Tomcat property configuration file
server.xml: main configuration file
Notes on the main configuration file, server.xml:
Port 8005 is Tomcat's shutdown port; by default it is reachable only from the local address, and Tomcat can be shut down locally via telnet 127.0.0.1 8005. Port 8080 is the default HTTP port Tomcat listens on. Tomcat also starts an AJP 1.3 connector (port 8009 by default).
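The corresponding default entries in server.xml look roughly like this (a sketch of the stock Tomcat 7 file, trimmed to the relevant lines):

```xml
<!-- Shutdown port: connect via telnet 127.0.0.1 8005 and send SHUTDOWN -->
<Server port="8005" shutdown="SHUTDOWN">
  <Service name="Catalina">
    <!-- Default HTTP connector -->
    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000" redirectPort="8443" />
    <!-- AJP 1.3 connector -->
    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
  </Service>
</Server>
```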
We can create a new directory to set the default web page.
mkdir -p /web/webapp
cd /web/webapp
Write a default JSP page:
vim index.jsp
JSP test1 page
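A minimal test page along these lines will do (only the "JSP test1 page" title comes from the article; the rest is an illustrative sketch):

```jsp
<%@ page language="java" contentType="text/html; charset=UTF-8" %>
<html>
<head><title>JSP test1 page</title></head>
<body>
  <h1>JSP test1 page</h1>
  <%-- Print the server time so repeated visits are distinguishable --%>
  <p>Server time: <%= new java.util.Date() %></p>
</body>
</html>
```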
# Edit the main configuration file
vim server.xml
Inside the existing Host section, add the following:
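The XML itself was lost in extraction; a typical addition, assuming the /web/webapp directory created above, is a Context element inside the existing Host element (the surrounding Host attributes shown here are the Tomcat 7 defaults):

```xml
<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
  <!-- Serve /web/webapp as the root web application -->
  <Context docBase="/web/webapp" path="" reloadable="true" />
</Host>
```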
Restart tomcat
/usr/local/tomcat7/bin/shutdown.sh    # stop Tomcat
/usr/local/tomcat7/bin/startup.sh     # start Tomcat
Visit the tomcat page
With Tomcat configured, next configure nginx:
vim /usr/local/nginx/conf/nginx.conf
Configure in nginx.conf
In the server block, add the proxy_pass line inside location / (tomcat-server is a custom upstream name):

location / {
    root html;
    index index.html index.htm;
    proxy_pass http://tomcat-server;
}

In the http block (just above the closing brace on the last line), add:

upstream tomcat-server {
    server 192.168.3.101:8080 weight=1;
    server 192.168.3.102:8080 weight=1;
}
Restart nginx:
/usr/local/nginx/sbin/nginx -s reload
Visit again (this time the nginx address, not Tomcat's; refreshing the page shows the two Tomcat pages appearing in turn, as the requests are polled).
At this point, we have completed the test of nginx load balancing.