Blog structure
Reverse proxy
Proxy caching
Nginx optimization
I. Reverse proxy (case)
1. A reverse proxy (Reverse Proxy) means that the proxy server accepts connection requests on behalf of clients, forwards those requests to a web server on the internal network (which may be Apache, Nginx, Tomcat, IIS, etc.), and returns the web server's result to the client that requested the connection. To the outside world, the proxy server itself appears to be the server.
As the figure (omitted here) shows, the reverse proxy server receives HTTP requests on behalf of the website's web servers and forwards them. As a reverse proxy, nginx can forward requests to different back-end web servers according to the content of the user's request, for example to separate static from dynamic content. By creating multiple virtual hosts on nginx, entering different domain names (URLs) in the browser successfully reaches different back-end web servers or web clusters.
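A minimal sketch of this virtual-host idea (the domain names www.site1.test and www.site2.test are hypothetical; the back-end addresses are those of the lab environment described below):
# Two virtual hosts on one nginx, each proxying a different back end.
server {
    listen 80;
    server_name www.site1.test;               # requests for this domain name...
    location / {
        proxy_pass http://192.168.222.129:80; # ...go to the first web server
    }
}
server {
    listen 80;
    server_name www.site2.test;               # requests for this domain name...
    location / {
        proxy_pass http://192.168.222.130:80; # ...go to the second web server
    }
}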
2. The roles of a reverse proxy
① Protects the website: any request from the Internet must first pass through the proxy server.
② Accelerates web requests by configuring caching: static resources from the real web servers can be cached on the proxy, reducing the load on the real web servers.
③ Implements load balancing: the proxy acts as a load-balancing server, distributing requests evenly and balancing the load across the servers in the cluster.
Experimental environment
Download the nginx source package and the required modules.
192.168.222.128 nginx server
192.168.222.129 web server 1
192.168.222.130 web server 2
The nginx server operates as follows:
[root@localhost /]# tar zxf ngx_cache_purge-2.3.tar.gz
[root@localhost /]# unzip nginx-sticky-module.zip
[root@localhost /]# tar zxf nginx-1.14.0.tar.gz
[root@localhost /]# yum -y install pcre-devel openssl-devel
[root@localhost /]# cd nginx-1.14.0/
[root@localhost nginx-1.14.0]# ./configure --prefix=/usr/local/nginx \
--user=nginx --group=nginx \
--with-http_stub_status_module --with-http_realip_module --with-http_ssl_module \
--with-http_gzip_static_module \
--http-client-body-temp-path=/var/tmp/nginx/client \
--http-fastcgi-temp-path=/var/tmp/nginx/fcgi \
--with-pcre \
--add-module=../ngx_cache_purge-2.3 \
--add-module=../nginx-sticky-module \
--with-http_flv_module
[root@localhost nginx-1.14.0]# make && make install
[root@localhost nginx-1.14.0]# ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/
[root@localhost nginx-1.14.0]# nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: [emerg] getpwnam("nginx") failed
\\ The error shows that the nginx user does not exist yet, so create it:
[root@localhost nginx-1.14.0]# useradd -s /sbin/nologin -M nginx
[root@localhost nginx-1.14.0]# nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: [emerg] mkdir() "/var/tmp/nginx/client" failed (2: No such file or directory)
nginx: configuration file /usr/local/nginx/conf/nginx.conf test failed
\\ The temporary directory does not exist yet, so create it:
[root@localhost nginx-1.14.0]# mkdir -p /var/tmp/nginx/client
[root@localhost nginx-1.14.0]# nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
[root@localhost nginx-1.14.0]# nginx
[root@localhost ~]# netstat -anpt | grep nginx
tcp   0   0 0.0.0.0:80   0.0.0.0:*   LISTEN   9886/nginx: master
[root@localhost ~]# vim /usr/local/nginx/conf/nginx.conf
\\ Add the following to the http section:
upstream backend {
    sticky;
    server 192.168.222.129:80 weight=1 max_fails=2 fail_timeout=10s;
    server 192.168.222.130:80 weight=1 max_fails=2 fail_timeout=10s;
}
\\ weight: the polling weight, also usable with ip_hash; the default is 1.
\\ max_fails: the number of failed requests allowed; the default is 1. When the maximum is exceeded, the error defined by the proxy_next_upstream directive is returned.
\\ fail_timeout: has two meanings: at most 2 failures are allowed within 10 seconds, and after 2 failures no requests are assigned to this server for 10 seconds.
\\ Then point location / at the upstream (the previous location / block can be commented out):
location / {
    proxy_pass http://backend;
}
[root@localhost /]# nginx -s reload    // reload the nginx service
Module explanation
nginx-sticky-module: this module pins requests from the same client (browser) to the same back-end server via a cookie, which to some extent removes the need to synchronize sessions across multiple back-end servers. With plain round-robin polling, by contrast, the operator has to implement session synchronization.
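A minimal sketch of the sticky directive with its optional cookie parameters (the cookie name "route" and the one-hour expiry are illustrative; exactly which parameters are supported depends on the module version):
upstream backend {
    # issue a cookie named "route" so the same browser keeps reaching the same server
    sticky name=route expires=1h;
    server 192.168.222.129:80 weight=1 max_fails=2 fail_timeout=10s;
    server 192.168.222.130:80 weight=1 max_fails=2 fail_timeout=10s;
}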
Other load-balancing scheduling schemes:
Polling (the default): each request is assigned to a different back-end server in turn, in chronological order. If a back-end server fails, it is automatically removed so that user access is not affected. weight specifies the polling weight: the higher the value, the higher the probability of being selected, which is mainly useful when the back-end servers have uneven performance.
ip_hash: each request is allocated according to the hash of the client IP, so visitors from the same IP consistently reach the same back-end server, which effectively addresses the session-sharing problem of dynamic pages. If that node becomes unavailable, the request is sent to the next node, and without session synchronization the user is logged out.
least_conn: the request is sent to the realserver with the fewest active connections; the weight value is taken into account.
url_hash: requests are allocated according to the hash of the accessed URL, directing each URL to the same back-end server, which further improves the efficiency of back-end cache servers. Nginx itself does not support url_hash; to use this scheduling algorithm, you must install Nginx's hash package, nginx_upstream_hash.
fair: a smarter load-balancing algorithm than the above. It balances load intelligently according to page size and load time, that is, it allocates requests based on the back-end servers' response times, giving priority to servers with short response times. Nginx itself does not support fair; to use this scheduling algorithm, you must download Nginx's upstream_fair module. A configuration sketch of several of these schemes follows this list.
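A minimal sketch of how a few of these schemes are written (the upstream names here are illustrative; ip_hash and least_conn are built into stock nginx, and nginx 1.7.2+ also provides a core hash directive that can play the url_hash role without a third-party module):
# ip_hash: clients are pinned to a back end by the hash of their source IP
upstream pool_iphash {
    ip_hash;
    server 192.168.222.129:80;
    server 192.168.222.130:80;
}
# least_conn: each request goes to the server with the fewest active connections
upstream pool_least {
    least_conn;
    server 192.168.222.129:80 weight=1;
    server 192.168.222.130:80 weight=1;
}
# core hash directive keyed on the request URI (url_hash-style scheduling)
upstream pool_urlhash {
    hash $request_uri consistent;
    server 192.168.222.129:80;
    server 192.168.222.130:80;
}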
On one web server:
[root@localhost ~]# yum -y install httpd
[root@localhost ~]# echo aaaaaaaaaa > /var/www/html/index.html
[root@localhost ~]# systemctl start httpd
On the other web server:
[root@localhost ~]# yum -y install httpd
[root@localhost ~]# echo bbbbbbbbbbb > /var/www/html/index.html
[root@localhost ~]# systemctl start httpd
The test results are as follows:
[root@localhost ~]# curl 127.0.0.1
aaaaaaaaaa
[root@localhost ~]# curl 127.0.0.1
bbbbbbbbbbb
[root@localhost ~]# curl 127.0.0.1
aaaaaaaaaa
[root@localhost ~]# curl 127.0.0.1
bbbbbbbbbbb
\\ You can see that the nginx server sends requests to the two web servers in turn (curl does not keep the sticky cookie, so the requests fall back to round-robin polling).
Edit the nginx startup script:
[root@localhost ~]# vim /etc/init.d/nginx
#!/bin/bash
#chkconfig: 2345 99 20
#description: Nginx Service Control Script
PROG="/usr/local/nginx/sbin/nginx"
PIDF="/usr/local/nginx/logs/nginx.pid"
case "$1" in
start)
    netstat -anplt | grep ":80" &> /dev/null && pgrep "nginx" &> /dev/null
    if [ $? -eq 0 ]; then
        echo "Nginx service already running."
    else
        $PROG -t &> /dev/null
        if [ $? -eq 0 ]; then
            $PROG
            echo "Nginx service start success."
        else
            $PROG -t
        fi
    fi
    ;;
stop)
    netstat -anplt | grep ":80" &> /dev/null && pgrep "nginx" &> /dev/null
    if [ $? -eq 0 ]; then
        kill -s QUIT $(cat $PIDF)
        echo "Nginx service stop success."
    else
        echo "Nginx service already stop"
    fi
    ;;
restart)
    $0 stop
    $0 start
    ;;
status)
    netstat -anplt | grep ":80" &> /dev/null && pgrep "nginx" &> /dev/null
    if [ $? -eq 0 ]; then
        echo "Nginx service is running."
    else
        echo "Nginx is stop."
    fi
    ;;
reload)
    netstat -anplt | grep ":80" &> /dev/null && pgrep "nginx" &> /dev/null
    if [ $? -eq 0 ]; then
        $PROG -t &> /dev/null
        if [ $? -eq 0 ]; then
            kill -s HUP $(cat $PIDF)
            echo "reload Nginx config success."
        else
            $PROG -t
        fi
    else
        echo "Nginx service is not run."
    fi
    ;;
*)
    echo "Usage: $0 {start|stop|restart|reload|status}"
    exit 1
esac
[root@localhost ~]# chmod +x /etc/init.d/nginx
[root@localhost ~]# chkconfig --add nginx
[root@localhost ~]# service nginx start
Nginx service start success.
[root@localhost ~]# service nginx status
Nginx service is running.
II. Use of nginx cache
Caching means storing static files such as JS, CSS and images from the back-end servers in a cache directory specified by nginx. This both reduces the burden on the back-end servers and speeds up access. However, purging the cache promptly becomes a problem, so the ngx_cache_purge module is needed to clear the cache manually before the expiry time.
Nginx's web caching is implemented mainly by the proxy_cache and fastcgi_cache directive sets and their related directives:
proxy_cache: handles reverse-proxy caching of the back-end servers' static content
fastcgi_cache: mainly used for caching FastCGI dynamic content; a brief sketch follows
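This article's lab only exercises proxy_cache, so the following fastcgi_cache sketch is hypothetical: it assumes a PHP-FPM back end listening on 127.0.0.1:9000, and the zone name fcgi-cache and the paths are illustrative only.
# Hypothetical fastcgi_cache sketch (not part of this article's lab setup).
fastcgi_cache_path /usr/local/nginx/fcgi_cache levels=1:2 keys_zone=fcgi-cache:100m inactive=600m max_size=2g;
server {
    listen 80;
    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;            # assumed PHP-FPM back end
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_cache fcgi-cache;               # use the zone defined above
        fastcgi_cache_valid 200 301 302 1h;     # cache successful responses for one hour
        fastcgi_cache_valid any 1m;
        fastcgi_cache_key $host$request_uri;    # one cache entry per host + URI
    }
}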
Add the following to the nginx main configuration file:
[root@localhost ~]# vim /usr/local/nginx/conf/nginx.conf
// ...part omitted...
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for" '
                '"$upstream_cache_status"';
// Records the cache hit status. Note that this is one statement, so there is a semicolon only at the very end; the first lines already exist, just add the last line.
access_log logs/access.log main;
proxy_buffering on;    // when proxying is enabled, buffer the back-end server's responses
proxy_temp_path /usr/local/nginx/proxy_temp;    // define the cache's temporary directory
proxy_cache_path /usr/local/nginx/proxy_cache levels=1:2 keys_zone=my-cache:100m inactive=600m max_size=2g;    // define the cache directory: two-level subdirectories, a 100MB shared-memory zone named my-cache, entries unused for 600 minutes are removed, at most 2GB on disk
// ...part omitted...
location ~ /purge(/.*) {    // define the cache purge policy
    allow 127.0.0.1;
    allow 192.168.222.0/24;
    deny all;
    proxy_cache_purge my-cache $host$1$is_args$args;
}
location / {
    proxy_pass http://backend;    // requests go to the server list defined by the backend upstream
    proxy_redirect off;    // whether to rewrite the Location and Refresh headers of the proxied server's response
    # For example, to set the replacement text for the back end's "Location" and "Refresh" response
    # headers: assuming the back end returns "Location: http://localhost:8000/two/some/uri/", the directive
    #     proxy_redirect http://localhost:8000/two/ http://frontend/one/;
    # rewrites that header to "Location: http://frontend/one/some/uri/".
    proxy_set_header Host $host;    // redefine or add a request header
    # Host is the hostname of the request. When the nginx reverse proxy forwards a request to the real
    # back-end server, the Host field in the request header is rewritten to the server set by proxy_pass.
    # If the real back-end server implements hotlink protection, routing, or other logic based on the
    # Host field of the HTTP request header, the request will fail unless the reverse-proxy layer
    # rewrites that field.
    proxy_set_header X-Real-IP $remote_addr;    // lets the web servers obtain the user's real IP
    # In fact, the real IP can also be obtained through X-Forwarded-For below.
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # The back-end web servers can obtain the user's real IP through the X-Forwarded-For field, which
    # records who originally initiated the HTTP request. If the reverse proxy does not rewrite this
    # header, the back-end servers will think all requests come from the reverse proxy itself, and if
    # the back end has a protection policy, the proxy machine will be blocked. These two headers are
    # therefore generally added to any nginx configured as a reverse proxy.
    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
    # Adds failover: if the back-end server returns an error such as 502 or 504, or times out, the
    # request is automatically forwarded to another server in the upstream load-balancing pool.
    proxy_cache my-cache;    // use the my-cache zone defined above
    add_header Nginx-Cache $upstream_cache_status;    // expose the cache status to clients
    proxy_cache_valid 200 304 301 302 8h;
    proxy_cache_valid 404 1m;
    proxy_cache_valid any 1d;
    proxy_cache_key $host$uri$is_args$args;
    expires 30d;
}
[root@localhost ~]# nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
// the configuration file checks out
[root@localhost ~]# nginx -s reload    // reload the nginx configuration file
The test results are as follows:
Access 192.168.222.128 in a browser; the Nginx-Cache response header added above reports the cache status (screenshots omitted here). Refresh the page, and the display shows a cache hit.
Clear the cache: from an allowed address, request the purge location defined above, for example http://192.168.222.128/purge/index.html to purge the cached copy of /index.html.
When you revisit 192.168.222.128, you can see that the cache has been cleared and the page is fetched from the back end again.