2025-04-06 Update From: SLTechnology News&Howtos
This article explains how to achieve dynamic/static separation and load balancing with Nginx and Tomcat. The method is simple, fast, and practical, so let's walk through it step by step.
I. Introduction to Nginx
Nginx is a high-performance HTTP and reverse proxy server. It is highly stable, supports hot deployment, and is easy to extend with modules. During a traffic peak, or when someone maliciously opens slow connections, a server can exhaust its physical memory, swap heavily, stop responding, and have to be restarted. Nginx uses staged resource allocation to serve static files and to provide cache-less reverse proxy acceleration, achieving load balancing and fault tolerance, so it holds up well under highly concurrent access.
II. Installation and configuration of nginx
Step 1: download the nginx installation package
Step 2: install nginx on linux
# tar zxvf nginx-1.7.8.tar.gz    (extract)
# cd nginx-1.7.8
# ./configure --with-http_stub_status_module --with-http_ssl_module    (enable the server status page and the https module)
If configure reports a missing pcre library error, as shown in the figure, first perform Step 3 to install pcre, then rerun the ./configure command above.
4. # make && make install    (compile and install)
5. Test whether the installation and configuration are correct; nginx is installed under /usr/local/nginx:
# /usr/local/nginx/sbin/nginx -t    (as shown in the figure)
Step 3: install pcre on linux
# tar zxvf pcre-8.10.tar.gz    (extract)
# cd pcre-8.10
# ./configure
# make && make install    (compile and install)
III. Nginx + Tomcat to realize dynamic and static separation
Dynamic/static separation means that nginx serves the static content requested by clients (HTML pages, images), while tomcat handles the dynamic content (JSP pages), because nginx serves static pages more efficiently than tomcat does.
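The routing decision can be paraphrased in a few lines of Python. This is only an illustrative sketch of the rule (it is not part of the nginx configuration itself), using the same file-extension regexes as the nginx location blocks in this article:

```python
import re

# Extensions served directly by nginx (static) vs. proxied to tomcat (dynamic),
# mirroring the location regexes in the nginx.conf used in this article.
STATIC_RE = re.compile(r".*\.(html|htm|gif|jpg|jpeg|bmp|png|ico|txt|css)$")
DYNAMIC_RE = re.compile(r".*\.(jsp|do)$")

def route(path: str) -> str:
    """Return which server handles a given request path."""
    if STATIC_RE.match(path):
        return "nginx"       # served from the local static root
    if DYNAMIC_RE.match(path):
        return "tomcat"      # proxied to the tomcat backend
    return "nginx"           # everything else falls through to nginx

print(route("/index.html"))  # nginx
print(route("/login.jsp"))   # tomcat
```

The point of the sketch: only requests whose path ends in .jsp or .do ever reach tomcat; everything else is answered by nginx directly.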
Step 1: we need to configure the nginx file
# vi / usr/local/nginx/conf/nginx.conf
#user nobody;
worker_processes 1;
error_log logs/error.log;
pid logs/nginx.pid;

events {
    use epoll;
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log logs/access.log main;

    sendfile on;
    keepalive_timeout 65;

    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;

    server {
        listen 80 default;
        server_name localhost;

        # static pages are served by nginx
        location ~ .*\.(html|htm|gif|jpg|jpeg|bmp|png|ico|txt|css)$ {
            root /usr/tomcat/apache-tomcat-8081/webapps/root;
            expires 30d;    # cached on the client for 30 days
        }

        error_page 404 /404.html;
        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }

        # all .jsp and .do dynamic requests are handed over to tomcat
        location ~ \.(jsp|do)$ {
            proxy_pass http://192.168.74.129:8081;    # requests ending in .jsp or .do go to tomcat
            proxy_redirect off;
            proxy_set_header Host $host;
            # let the backend web server obtain the user's real ip via X-Real-IP / X-Forwarded-For
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            client_max_body_size 10m;          # maximum size of a single file the client may request
            client_body_buffer_size 128k;      # buffer size for the client request body
            proxy_connect_timeout 90;          # connection timeout between nginx and the backend server
            proxy_read_timeout 90;             # how long to wait for the backend response after connecting
            proxy_buffer_size 4k;              # buffer for the first part of the backend response (headers)
            proxy_buffers 6 32k;               # response buffers; sufficient if the average page is under 32k
            proxy_busy_buffers_size 64k;       # buffer size under high load (proxy_buffers * 2)
            proxy_temp_file_write_size 64k;    # responses larger than this are written to temp files
        }
    }
}
Step 2: create a new static page index.html under webapps/root in tomcat, as shown below:
Step 3: start the nginx service
# sbin/nginx    (as shown in the figure:)
Step 4: visiting the page now displays its content normally, as shown in the figure:
Step 5: test how nginx and tomcat each perform when serving static pages under high concurrency.
We use the linux ab (ApacheBench) website stress-testing command to measure performance.
1. Test nginx's performance serving static pages
# ab -c 100 -n 1000 http://<nginx-host>/index.html
This processes 100 requests concurrently and fetches the index.html file 1000 times in total, as shown in the figure:
2. Test tomcat's performance serving static pages
# ab -c 100 -n 1000 http://<tomcat-host>:8081/index.html
This processes 100 requests concurrently and fetches the index.html file 1000 times in total, as shown in the figure:
When serving the same static file, nginx clearly outperforms tomcat: nginx handled about 5,388 requests per second, while tomcat managed only about 2,609.
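Taking the ab figures above at face value, a quick back-of-the-envelope comparison (an illustrative calculation, not part of the original benchmark) shows what the difference means in practice:

```python
# Requests-per-second figures reported by ab in the two tests above.
nginx_rps = 5388
tomcat_rps = 2609

# How many times faster nginx serves this static file.
speedup = nginx_rps / tomcat_rps

# Wall-clock time for the ab run of 1000 requests at each rate (seconds).
n_requests = 1000
nginx_time = n_requests / nginx_rps
tomcat_time = n_requests / tomcat_rps

print(f"speedup: {speedup:.2f}x")  # roughly 2x
print(f"nginx:  {nginx_time:.2f} s for {n_requests} requests")
print(f"tomcat: {tomcat_time:.2f} s for {n_requests} requests")
```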
Summary: in the nginx configuration file, static requests are handled by nginx and dynamic requests are passed to tomcat, which improves performance.
IV. Nginx + tomcat load balancing and fault tolerance
Under high concurrency, to improve server performance and reduce the load on any single server, we deploy a cluster. This also provides fault tolerance: if one server fails, the service remains accessible.
Step 1: we deployed two tomcat servers: 192.168.74.129:8081 and 192.168.74.129:8082.
Step 2: nginx acts as the proxy server. When clients request the service, nginx balances the load so that requests are distributed evenly across the servers, reducing the pressure on each one. Configure the nginx.conf file under nginx.
# vi / usr/local/nginx/conf/nginx.conf
#user nobody;
worker_processes 1;
error_log logs/error.log;
pid logs/nginx.pid;

events {
    use epoll;
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log logs/access.log main;

    sendfile on;
    keepalive_timeout 65;

    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;

    upstream localhost_server {
        ip_hash;
        server 192.168.74.129:8081;
        server 192.168.74.129:8082;
    }

    server {
        listen 80 default;
        server_name localhost;

        # static pages are served by nginx
        location ~ .*\.(html|htm|gif|jpg|jpeg|bmp|png|ico|txt|js|css)$ {
            root /usr/tomcat/apache-tomcat-8081/webapps/root;
            expires 30d;    # cached on the client for 30 days
        }

        error_page 404 /404.html;
        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }

        # all .jsp and .do dynamic requests are handed over to tomcat
        location ~ \.(jsp|do)$ {
            proxy_pass http://localhost_server;    # requests ending in .jsp or .do go to the tomcat upstream
            proxy_redirect off;
            proxy_set_header Host $host;
            # let the backend web server obtain the user's real ip via X-Real-IP / X-Forwarded-For
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            client_max_body_size 10m;          # maximum size of a single file the client may request
            client_body_buffer_size 128k;      # buffer size for the client request body
            proxy_connect_timeout 90;          # connection timeout between nginx and the backend server
            proxy_read_timeout 90;             # how long to wait for the backend response after connecting
            proxy_buffer_size 4k;              # buffer for the first part of the backend response (headers)
            proxy_buffers 6 32k;               # response buffers; sufficient if the average page is under 32k
            proxy_busy_buffers_size 64k;       # buffer size under high load (proxy_buffers * 2)
            proxy_temp_file_write_size 64k;    # responses larger than this are written to temp files
        }
    }
}
Description:
1. Each server directive in upstream specifies a backend server's IP (or domain name) and port, optionally followed by parameters:
1) weight: sets the forwarding weight of the server; the default value is 1.
2) max_fails: used together with fail_timeout. If the number of failed forwards to the server exceeds max_fails within the fail_timeout period, the server is marked unavailable. The default value of max_fails is 1.
3) fail_timeout: the period during which failures are counted, and also how long the server is considered unavailable after being marked failed.
4) down: marks this server as permanently unavailable.
5) backup: requests are forwarded to this server only after all non-backup servers have failed. Note that backup cannot be combined with the ip_hash directive.
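As an illustration of these parameters (the first two server addresses are the ones used in this article; the third address and all parameter values are hypothetical examples, not recommendations), an upstream block might look like this. Note it omits ip_hash, since backup is incompatible with it:

```nginx
upstream localhost_server {
    # twice the traffic share; marked down after 2 failures within 30s
    server 192.168.74.129:8081 weight=2 max_fails=2 fail_timeout=30s;
    server 192.168.74.129:8082 weight=1;
    # hypothetical spare, used only when all the servers above have failed
    server 192.168.74.130:8081 backup;
}
```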
2. ip_hash: in a cluster, if the same client's requests are spread across multiple servers, each server may end up caching the same information, wasting resources. With ip_hash, subsequent requests from the same client are forwarded to the server that handled its first request. However, ip_hash cannot be used together with weight.
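To make ip_hash concrete, here is a simplified Python simulation. This is only an illustrative sketch, not nginx's actual implementation: the real ip_hash hashes the first three octets of the client's IPv4 address to pick a backend, and the hash function below is a stand-in:

```python
def ip_hash_pick(client_ip: str, servers: list) -> str:
    """Pick a backend roughly the way nginx's ip_hash does:
    hash the first three octets of the IPv4 address, so the same
    client (and its whole /24 network) always lands on the same server."""
    prefix = ".".join(client_ip.split(".")[:3])
    # Simple deterministic hash (a stand-in for nginx's internal hash).
    h = 0
    for ch in prefix:
        h = (h * 31 + ord(ch)) % 2**32
    return servers[h % len(servers)]

servers = ["192.168.74.129:8081", "192.168.74.129:8082"]

# The same client is always routed to the same backend...
assert ip_hash_pick("10.0.0.7", servers) == ip_hash_pick("10.0.0.7", servers)
# ...and so is any client in the same /24 network.
assert ip_hash_pick("10.0.0.7", servers) == ip_hash_pick("10.0.0.99", servers)

print(ip_hash_pick("10.0.0.7", servers))
```

This stickiness is what lets each backend keep serving the sessions and caches it already holds, at the cost of a less even distribution than plain round-robin.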
At this point you should have a deeper understanding of how Nginx and Tomcat achieve dynamic/static separation and load balancing. Try it out in practice, and keep learning!