2025-02-23 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This article explains how to achieve load balancing and static/dynamic separation with nginx. It is quite detailed and should have some reference value; interested readers are encouraged to read it through!
The following is the configuration file used by my project:
#user  nobody;
worker_processes  4;  # number of worker processes; generally set to the number of CPU cores

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {
    worker_connections  1024;  # maximum number of connections per worker process
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;
    proxy_connect_timeout 15s;
    proxy_send_timeout    15s;
    proxy_read_timeout    15s;
    fastcgi_buffers 8 128k;

    gzip  on;
    client_max_body_size 30m;
    gzip_min_length 1k;
    gzip_buffers 16 64k;
    gzip_http_version 1.1;
    gzip_comp_level 6;
    gzip_types text/plain application/x-javascript text/css application/xml application/javascript image/jpeg image/gif image/png image/webp;
    gzip_vary on;

    # the first cluster
    upstream xdx.com {
        server 119.10.52.28:8081 weight=100;
        server 119.10.52.28:8082 weight=100;
    }
    # the second cluster, used for uploading pictures;
    # all file-upload requests go to this cluster
    upstream xdxfile.com {
        server 119.10.52.28:8081;
    }
    # the third cluster
    upstream xdx8082.com {
        server 119.10.52.28:8082;
    }
    # the fourth cluster (Aliyun)
    upstream xdxali.com {
        server 139.196.235.228:8082;
    }
    # the fifth cluster (Aliyun websocket)
    upstream xdxaliws.com {
        server 139.196.235.228:8886;
    }

    # the first proxy server: listens on port 80 for www.wonyen.com / wonyen.com
    server {
        listen       80;                         # listening port
        server_name  www.wonyen.com wonyen.com;  # listening domain names

        #charset koi8-r;
        #access_log  logs/host.access.log  main;

        # "location" matches the request path. The commented block below would mean:
        # when visiting the site root (wonyen.com or www.wonyen.com), look for
        # index.html or index.htm under the html root directory; in that index page
        # you can do some redirection work, jumping to a specified page or cluster.
        #location / {
        #    root   html;
        #    index  index.html index.htm;
        #}

        # all static requests are handled by nginx directly; the files live under
        # webapps/ROOT and the cache expiry time is 30 days
        location ~ \.(css|js|gif|jpg|jpeg|png|bmp|eot|svg|ttf|woff|mp3|mp4|wav|wmv|flv|f4v|json)$ {
            root apache-tomcat-8.0.9-windows-x86-yipin-8081/apache-tomcat-8.0.9/webapps/ROOT;
            expires 30d;
        }

        # requests ending in "Att" are handled by the http://xdxfile.com cluster
        location ~ ^/\w+Att {
            proxy_pass http://xdxfile.com;
        }
        # requests ending in "Fill" are handled by the http://xdxfile.com cluster
        location ~ ^/\w+Fill {
            proxy_pass http://xdxfile.com;
        }
        # exact match: the request named /crowdFundSave
        location = /crowdFundSave {
            proxy_pass http://xdxfile.com;
        }
        # exact match, as above
        location = /crowdFundRewardSave {
            proxy_pass http://xdxfile.com;
        }
        location = /garbageCategorySave {
            proxy_pass http://xdxfile.com;
        }
        location = /mailTestAjax {
            proxy_pass http://xdx8082.com;
        }
        location = /mailSendAjax {
            proxy_pass http://xdx8082.com;
        }
        location = /mailOldAjax {
            proxy_pass http://xdx8082.com;
        }
        #location = /wechatAuthority {
        #    proxy_pass http://xdxali.com;
        #}
        location ~ ^/ueditor1_4_3 {
            proxy_pass http://xdxfile.com;
        }

        # all other requests go to the http://xdx.com cluster
        location ~ .* {
            index index;
            proxy_pass http://xdx.com;
        }

        # 404 pages go to /Error404.jsp
        error_page  404          /Error404.jsp;
        # 5xx pages also go to /Error404.jsp
        error_page  502 503 504  /Error404.jsp;
        # requests for /Error404.jsp itself go to the http://xdxfile.com cluster
        location = /Error404.jsp {
            proxy_pass http://xdxfile.com;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #location ~ /\.ht {
        #    deny  all;
        #}
    }

    # another virtual host using mix of IP-, name-, and port-based configuration:
    # a second proxy server that listens on port 8886 for www.wonyen.com / wonyen.com
    server {
        listen       8886;
        server_name  www.wonyen.com wonyen.com;

        # requests to wonyen.com:8886 (the root) go to the http://xdxaliws.com
        # cluster; what is configured here is the websocket server
        location / {
            proxy_pass http://xdxaliws.com;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }

    # HTTPS server
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;
    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;
    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;
    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}
}
The above is one of my configurations. Almost everything that needs attention is annotated in the configuration file itself, but a few important points deserve separate discussion.
1. Cluster (upstream) configuration. I defined several clusters in the configuration above. A cluster is literally a collection of servers, for example:

upstream xdx.com {
    server 119.10.52.28:8081 weight=100;
    server 119.10.52.28:8082 weight=100;
}
With such a configuration, the cluster consists of two branches, and we can deploy the same project on two servers (in the example above the same project is deployed on one server under different ports, because the author's servers are limited). When a request needs to be processed by the cluster, nginx distributes it across the two entries (by default in round-robin fashion), and you can also configure weights to set the proportion of traffic each server receives. This is the principle of load balancing: we deploy the same project on multiple servers and use nginx to forward requests, which relieves the overload a single server would suffer, and when one of the servers dies, nginx routes requests to the other, so the service does not stop.
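Beyond plain weights, an upstream server entry accepts a few other standard load-balancing parameters. A minimal sketch, reusing the addresses from the example above (the extra :8083/:8084 backends are hypothetical, added only to illustrate the parameters):

upstream xdx.com {
    # 3 of every 4 requests go to :8081, 1 of every 4 to :8082
    server 119.10.52.28:8081 weight=3;
    server 119.10.52.28:8082 weight=1;
    # after 3 failed attempts, take this server out of rotation
    # for 30 seconds before trying it again
    #server 119.10.52.28:8083 max_fails=3 fail_timeout=30s;
    # a backup server only receives traffic when the others are down
    #server 119.10.52.28:8084 backup;
}

These parameters are what lets nginx notice a dead branch and quietly shift its traffic to the surviving ones.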
2. Each server block defines a proxy server. In the file above we configured two of them, listening on ports 80 and 8886 of the domain wonyen.com (www.wonyen.com) respectively. All requests to wonyen.com:80 (that is, plain wonyen.com) are forwarded according to the rules of the first server block, while all requests to wonyen.com:8886 are forwarded according to the rules of the second.
3. We can even configure nginx to handle multiple domain names; see the example below. In it I configured rules for two domains, one served by an IIS server and the other by a Tomcat server, mainly to solve the problem that port 80 can only be used by one program: if IIS takes port 80, Tomcat cannot use it, and vice versa. So I assigned ports other than 80 to both IIS and Tomcat, left port 80 to nginx, and let nginx dispatch requests to the different sites.
#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    gzip  on;
    client_max_body_size 30m;
    gzip_min_length 1k;
    gzip_buffers 16 64k;
    gzip_http_version 1.1;
    gzip_comp_level 6;
    gzip_types text/plain application/x-javascript text/css application/xml application/javascript image/jpeg image/gif image/png image/webp;
    gzip_vary on;

    upstream achina.com {
        server 120.76.129.218:81;
    }
    upstream qgrani.com {
        server 120.76.129.218:8080;
    }

    server {
        listen       80;
        server_name  www.achinastone.com achinastone.com;

        #charset koi8-r;
        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
        }

        # other requests
        location ~ .* {
            index index;
            proxy_pass http://achina.com;
        }

        #error_page  404  /404.html;

        # redirect server error pages to the static page /50x.html
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #location ~ /\.ht {
        #    deny  all;
        #}
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    server {
        listen       80;
        server_name  www.qgranite.com qgranite.com;

        location / {
            root   html;
            index  index.html index.htm;
        }

        # all static requests are handled by nginx
        location ~ \.(css|js|gif|jpg|jpeg|png|bmp|swf|eot|svg|ttf|woff|mp4|wav|wmv|flv|f4v)$ {
            root apache-tomcat-8.0.9\webapps\ROOT;
            expires 30d;
        }

        # other requests
        location ~ .* {
            index index;
            proxy_pass http://qgrani.com;
        }
    }

    # HTTPS server
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;
    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;
    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;
    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}
}
4. There is also static/dynamic separation, which, to put it colloquially, means separating requests for data (dynamic) from requests for images and other files (static). Without this separation, Tomcat treats a request for an image as a dynamic request, and handling dynamic requests is more performance-intensive (as to exactly why, I am not quite sure). So we can use the nginx configuration to achieve static/dynamic separation.
My approach is to put one of the Tomcat projects in the nginx root directory, which lets us configure things as follows: whenever static resources such as images, js and css are requested, they are all served from one specified directory. Besides saving performance, another advantage is that we no longer need to keep these static resources synchronized across all the load-balanced servers; we only need to keep them in one place. The configuration is as follows:
# all static requests are handled by nginx; the files live under
# webapps/ROOT and the cache expiry time is 30 days
location ~ \.(css|js|gif|jpg|jpeg|png|bmp|swf|svg|ttf|woff|mp3|mp4|wav|wmv|flv|f4v|json)$ {
    root apache-tomcat-8.0.9-windows-x86-yipin-8081/apache-tomcat-8.0.9/webapps/ROOT;
    expires 30d;
}
5. Since static resources are now read from this one directory, we must consider how they get stored there, especially after load balancing. An upload request may be handled by any branch of the cluster: if the cluster has two servers, A and B, both may handle image uploads, so an image uploaded through A lands on server A and vice versa. The static images on the two servers inevitably fall out of sync, and when we later request one of those images it may not be found (assuming we had not yet done the separation). Since we did the static/dynamic separation in the previous step, the problem becomes how to synchronize the images uploaded via server A or B into the directory used for static files. Manual or programmatic synchronization is very troublesome. My approach is to designate the Tomcat project on one specific server (the server where nginx is installed, i.e. the Tomcat project deployed in the nginx root directory) to be solely responsible for uploads. All images are then uploaded through this one Tomcat project, which guarantees that the static library holds the complete set of images. To do this, I configured a cluster as follows.
# the second cluster, used for uploading images;
# all file-upload requests go to this cluster
upstream xdxfile.com {
    server 119.10.52.28:8081;
}

Then, among the location rules, I configure:

# requests ending in "Att" are handled by the http://xdxfile.com cluster
location ~ ^/\w+Att {
    proxy_pass http://xdxfile.com;
}
# requests ending in "Fill" are handled by the http://xdxfile.com cluster
location ~ ^/\w+Fill {
    proxy_pass http://xdxfile.com;
}
Because I add an Att or Fill suffix to every request that involves attachment uploads, when nginx captures one of these suffixed requests, it hands it over to the http://xdxfile.com cluster, that is, 119.10.52.28:8081.
6. After load balancing there is another problem we have to face: synchronizing in-memory data. Programs sometimes keep data in memory, the typical example being the session. How do we share session data among the branches of the cluster? This is where we need something new: Redis.
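Before reaching for a shared Redis store, note that nginx itself offers a simpler (if cruder) alternative: the standard ip_hash directive pins each client IP to the same upstream server, so a session never has to leave the server that created it. A sketch, reusing the placeholder addresses from above (this is an alternative technique, not what this project uses):

upstream xdx.com {
    # hash the client IP so the same visitor always reaches the
    # same backend, keeping its in-memory session valid
    ip_hash;
    server 119.10.52.28:8081;
    server 119.10.52.28:8082;
}

The trade-off is that when a backend goes down, the sessions pinned to it are lost, which is why a shared store such as Redis is the more robust solution.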
That is the full content of "how to achieve load balancing and static/dynamic separation in nginx". Thank you for reading! I hope the content shared here helps you; for more related knowledge, welcome to follow the industry information channel!