2025-01-28 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/03 Report
This article shares how to implement a reverse proxy with the Nginx service. It also covers the installation and configuration of Nginx and several optimization schemes for it. I hope you find something useful in it.
Environment preparation:
Three CentOS 7.5 hosts: one runs Nginx, and the other two run simple web services used to test the effect of the Nginx reverse proxy.
Download the package bundle I provided; it contains everything needed to build Nginx with caching, compression, and other optimizations.
This deployment achieves the following:
Back-end web load balancing with the proxy and upstream modules
Static file caching with the proxy module
Health checks of the back-end servers using Nginx's built-in ngx_http_proxy_module and ngx_http_upstream_module (alternatively, the third-party nginx_upstream_check_module can be used)
Session persistence with the nginx-sticky-module extension
More powerful cache purging with ngx_cache_purge
Web file compression with the ngx_brotli module
The nginx-sticky-module, ngx_cache_purge, and ngx_brotli modules above are third-party extensions that must be downloaded in advance (I included them in the download link mentioned earlier) and compiled in with --add-module=src_path at build time.
1. Install Nginx

```shell
[root@nginx ~]# yum -y erase httpd      # uninstall the system's default httpd service to prevent port conflicts
[root@nginx ~]# yum -y install openssl-devel pcre-devel    # install dependencies
[root@nginx src]# rz                    # upload the required source packages with the rz command
[root@nginx src]# ls                    # confirm the uploaded source packages
nginx-1.14.0.tar.gz  nginx-sticky-module.zip  ngx_brotli.tar.gz  ngx_cache_purge-2.3.tar.gz
# unpack the uploaded source packages
[root@nginx src]# tar zxf nginx-1.14.0.tar.gz
[root@nginx src]# unzip nginx-sticky-module.zip
[root@nginx src]# tar zxf ngx_brotli.tar.gz
[root@nginx src]# tar zxf ngx_cache_purge-2.3.tar.gz
[root@nginx src]# cd nginx-1.14.0/      # enter the nginx source directory
[root@nginx nginx-1.14.0]# ./configure --prefix=/usr/local/nginx1.14 --user=www --group=www \
--with-http_stub_status_module --with-http_realip_module --with-http_ssl_module \
--with-http_gzip_static_module --http-client-body-temp-path=/var/tmp/nginx/client \
--http-proxy-temp-path=/var/tmp/nginx/proxy --http-fastcgi-temp-path=/var/tmp/nginx/fcgi \
--with-pcre --add-module=/usr/src/ngx_cache_purge-2.3 --with-http_flv_module \
--add-module=/usr/src/nginx-sticky-module && make && make install
# compile and install, loading the required modules via the "--add-module" option
# note that ngx_brotli is deliberately NOT loaded here, to show later how to add a module after Nginx has been installed
```
The compile options above are explained as follows:
--with-http_stub_status_module: monitor the status of Nginx through a web page
--with-http_realip_module: obtain the real IP address of the client
--with-http_ssl_module: enable encrypted transmission in Nginx
--with-http_gzip_static_module: enable compression
--http-client-body-temp-path=/var/tmp/nginx/client: temporary storage path for client request data (path used for cache storage)
--http-proxy-temp-path=/var/tmp/nginx/proxy: same as above, for the proxy module
--http-fastcgi-temp-path=/var/tmp/nginx/fcgi: same as above, for the FastCGI module
--with-pcre: support regular-expression matching
--add-module=/usr/src/ngx_cache_purge-2.3: add a third-party Nginx module; the syntax is --add-module=<third-party module path>
--add-module=/usr/src/nginx-sticky-module: same as above
--with-http_flv_module: support flv video streaming.
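As a quick illustration of what --with-http_stub_status_module provides, here is a minimal sketch of a status-page location. The /nginx_status path is an arbitrary example name, not something from this article's configuration:

```nginx
# Illustrative sketch only; place inside a server {} block.
# /nginx_status is a hypothetical path name.
location /nginx_status {
    stub_status;        # emits active connections and accepts/handled/requests counters
    allow 127.0.0.1;    # restrict the status page to localhost
    deny all;
}
```

Visiting the path then returns a small plain-text report that monitoring tools can scrape.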
2. Start the Nginx service

```shell
[root@nginx nginx-1.14.0]# ln -s /usr/local/nginx1.14/sbin/nginx /usr/local/sbin/
# create a symlink for the nginx command so it can be used directly
[root@nginx nginx-1.14.0]# useradd -M -s /sbin/nologin www
[root@nginx nginx-1.14.0]# mkdir -p /var/tmp/nginx/client
[root@nginx nginx-1.14.0]# nginx -t     # check the nginx configuration file
nginx: the configuration file /usr/local/nginx1.14/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx1.14/conf/nginx.conf test is successful
[root@nginx nginx-1.14.0]# nginx        # start the nginx service
[root@nginx nginx-1.14.0]# netstat -anpt | grep ":80"   # check whether port 80 is listening
tcp        0      0 0.0.0.
```

II. Implementing a reverse proxy with the Nginx service
Before implementing the reverse proxy, let me first explain: what is a reverse proxy, and what is a forward proxy?
1. Forward proxy
A forward proxy is used to proxy connection requests from an internal network to the Internet (as with NAT). The client explicitly specifies the proxy server and sends the HTTP request that would otherwise go directly to the target web server to the proxy server instead; the proxy server then accesses the web server and relays the web server's response back to the client. In this case the proxy server is a forward proxy.
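Although this article uses Nginx only as a reverse proxy, a very basic HTTP-only forward proxy can also be sketched in Nginx to make the distinction concrete. This is a hedged illustration, not part of the article's setup; port 3128 and the resolver address are arbitrary examples, and CONNECT/HTTPS tunneling is not supported by this pattern:

```nginx
# Illustrative forward-proxy sketch (plain HTTP only).
server {
    listen 3128;                          # hypothetical proxy port
    resolver 8.8.8.8;                     # a resolver is required: the target host is only known at request time
    location / {
        proxy_pass http://$host$request_uri;   # forward to whatever host the client asked for
    }
}
```

A client would then point its HTTP proxy setting at this server, which requests the target site on the client's behalf.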
2. Reverse proxy
The opposite of a forward proxy: if a local area network provides resources to the Internet and allows Internet users to access those resources, a proxy server can be set up whose service is a reverse proxy. The reverse proxy server accepts connections from the Internet, forwards the requests to servers on the internal network, and sends the web servers' responses back to the clients on the Internet that requested the connections.
To sum up: a forward proxy acts on behalf of the client, accessing the web server in the client's place; a reverse proxy acts on behalf of the web server, responding to the client in the web server's place.
3. Configuring Nginx as a reverse proxy
Nginx can be configured as a reverse proxy and load balancer. Its caching feature can keep static pages in Nginx to reduce the number of connections to the back-end servers, and the health of the back-end web servers can also be checked.
The environment is as follows:
One Nginx server acts as the reverse proxy; two back-end web servers form a web server pool; the client accesses the Nginx proxy server and, by refreshing the page several times, should receive pages returned by different back-end web servers.

Start configuring the Nginx server:

```shell
[root@nginx ~]# cd /usr/local/nginx1.14/conf/    # switch to the configuration directory
[root@nginx conf]# vim nginx.conf                # edit the main configuration file
```

```nginx
# ...... omit part of the content
http {
    # ...... omit part of the content
    upstream backend {
        sticky;
        server 192.168.20.2:80 weight=1 max_fails=2 fail_timeout=10s;
        server 192.168.20.3:80 weight=1 max_fails=2 fail_timeout=10s;
    }
    # ...... omit part of the content
    server {
        location / {
            # root   html;                   # comment out the original root directory
            # index  index.html index.htm;   # comment out this line
            proxy_pass http://backend;       # "backend" here must match the name of the web pool above
        }
    }
}
```

```shell
# after editing, save and exit
[root@nginx conf]# nginx -t          # check the configuration file and confirm it is OK
[root@nginx conf]# nginx -s reload   # reload the nginx service for the changes to take effect
```
In the web server pool configuration above there is a "sticky" directive, which comes from the nginx-sticky module. This module uses a cookie to send requests from the same client (browser) to the same back-end server. To some extent this solves the session-synchronization problem across multiple back-end servers (a session being, for example, the fact that once you log in to a page you do not need to log in again for a certain period), whereas plain round-robin (RR) polling leaves session synchronization for the operators to implement. The built-in ip_hash can also distribute requests by client IP, but it easily causes load imbalance: if requests reach Nginx from the same local area network, they all carry the same client IP. The cookie set by nginx-sticky-module by default expires when the browser is closed.
This module does not work for browsers that do not support cookies or have cookies disabled manually; in that case sticky falls back to RR. It cannot be used together with ip_hash.
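For comparison, a minimal sketch of the same pool using the built-in ip_hash directive instead of sticky (addresses taken from the example above):

```nginx
upstream backend {
    ip_hash;    # hash on the client IP so the same visitor always reaches the same back-end
    server 192.168.20.2:80;
    server 192.168.20.3:80;
}
```

Unlike sticky, ip_hash needs no cookie support, but as noted above it can skew load when many clients sit behind one NAT address.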
Sticky is only one of the scheduling algorithms supported by Nginx. The other algorithms supported by Nginx's load-balancing module are:
Polling (the default, RR): requests are assigned to the back-end servers one by one in order. If a back-end server goes down, it is automatically removed so that user access is not affected. weight specifies the polling weight: the higher the value, the higher the probability of being selected; it is mainly used when the back-end servers have uneven performance.
ip_hash: each request is assigned according to a hash of the client IP, so visitors from the same IP consistently reach the same back-end server, which effectively addresses session sharing for dynamic pages. If that node becomes unavailable, requests go to the next node, and without session synchronization the user is logged out.
least_conn: the request is sent to the realserver with the fewest active connections; the weight value is taken into account.
url_hash: requests are distributed according to a hash of the accessed URL, directing each URL to the same back-end server, which further improves the efficiency of back-end cache servers. Nginx itself does not support url_hash; to use this algorithm you must install Nginx's hash package, nginx_upstream_hash.
fair: a smarter load-balancing algorithm than the above. It balances load according to page size and load time, i.e. requests are assigned according to the back-end servers' response times, with shorter response times given priority. Nginx itself does not support fair; to use this algorithm you must download Nginx's upstream_fair module.
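To make two of these algorithms concrete, here is a hedged sketch of a weighted round-robin pool and a least-connections pool. The weights are hypothetical values chosen for illustration, reusing the example addresses from this article:

```nginx
# Weighted round-robin: 192.168.20.2 receives roughly twice as many requests as 192.168.20.3
upstream weighted_pool {
    server 192.168.20.2:80 weight=2;
    server 192.168.20.3:80 weight=1;
}

# Least connections: each new request goes to the server with the fewest active connections
upstream least_conn_pool {
    least_conn;
    server 192.168.20.2:80;
    server 192.168.20.3:80;
}
```

A location block would then reference one of these pools with proxy_pass, exactly as with the "backend" pool above.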
The parameters after each web server's IP address in the pool above mean the following:
weight: the polling weight (can also be used with ip_hash); the default is 1.
max_fails: the number of failed requests allowed; the default is 1. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
fail_timeout: has two meanings: at most 2 failures (max_fails) are allowed within 10 seconds, and after 2 failures no requests are assigned to this server for 10 seconds.
The servers in the web server pool are configured as follows (for reference only; here we simply set up the httpd service for testing purposes):

```shell
[root@web01 ~]# yum -y install httpd                             # install the httpd service
[root@web01 ~]# echo "192.168.20.2" > /var/www/html/index.html   # each web server prepares a different web page file
[root@web01 ~]# systemctl start httpd                            # start the web service
```

Configure the second web server the same way, but be sure to prepare a different web page file so that the load-balancing effect can be tested.
The client access test is now ready, but note that the Nginx proxy server must be able to communicate with both web servers.
Test access on the Nginx proxy server itself (you can see it polling the web servers in the pool):
If you test from a Windows client, the "sticky" setting in the configuration file means every refresh is sent to the same web server, so the load-balancing effect cannot be observed. Simply comment out the "sticky" line to test load balancing.
III. Nginx service optimization
Besides controlling the worker processes, optimization mainly involves two important concepts: caching and web page compression. Because many configuration items are involved, I will write out the complete, commented http {} section below; an uncommented http {} section is attached at the end of the post.
Recall that when compiling and installing Nginx I deliberately left one module unloaded, precisely to show how to load a module that was missed.
The configuration is as follows:
```shell
[root@nginx conf]# cd /usr/src/nginx-1.14.0/    # switch to the Nginx source directory
[root@nginx nginx-1.14.0]# nginx -V             # run "nginx -V" to view the loaded modules
nginx version: nginx/1.14.0
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC)
built with OpenSSL 1.0.2k-fips  26 Jan 2017
TLS SNI support enabled
configure arguments: --prefix=/usr/local/nginx1.14 --user=www --group=www --with-http_stub_status_module --with-http_realip_module --with-http_ssl_module --with-http_gzip_static_module --http-client-body-temp-path=/var/tmp/nginx/client --http-proxy-temp-path=/var/tmp/nginx/proxy --http-fastcgi-temp-path=/var/tmp/nginx/fcgi --with-pcre --add-module=/usr/src/ngx_cache_purge-2.3 --with-http_flv_module --add-module=/usr/src/nginx-sticky-module
# copy the configure arguments shown above and re-run configure with the new module appended
# here I append the third-party module "--add-module=/usr/src/ngx_brotli"
[root@nginx nginx-1.14.0]# ./configure --prefix=/usr/local/nginx1.14 --user=www --group=www \
--with-http_stub_status_module --with-http_realip_module --with-http_ssl_module \
--with-http_gzip_static_module --http-client-body-temp-path=/var/tmp/nginx/client \
--http-proxy-temp-path=/var/tmp/nginx/proxy --http-fastcgi-temp-path=/var/tmp/nginx/fcgi \
--with-pcre --add-module=/usr/src/ngx_cache_purge-2.3 --with-http_flv_module \
--add-module=/usr/src/nginx-sticky-module --add-module=/usr/src/ngx_brotli && make
# run make only (not make install), then replace the binary by hand
[root@nginx nginx-1.14.0]# mv /usr/local/nginx1.14/sbin/nginx /usr/local/nginx1.14/sbin/nginx.bak
# rename the original nginx binary as a backup
[root@nginx nginx-1.14.0]# cp objs/nginx /usr/local/nginx1.14/sbin/   # copy the newly built nginx binary into place
[root@nginx nginx-1.14.0]# ln -sf /usr/local/nginx1.14/sbin/nginx /usr/local/sbin/   # refresh the symlink to the new nginx command
[root@nginx ~]# nginx -s reload     # reload the nginx service
```
At this point, the new module has been added.
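The article compiles ngx_brotli in but does not show its directives. As a hedged sketch, enabling it inside the http {} block might look like the following; the values are common illustrative settings, not taken from this article:

```nginx
# Illustrative ngx_brotli settings; place inside the http {} block.
brotli on;                 # compress responses on the fly
brotli_comp_level 6;       # 0-11; higher means smaller output but more CPU
brotli_types text/plain text/css text/xml application/javascript application/json image/svg+xml;
brotli_static on;          # serve pre-compressed .br files when they exist on disk
```

Browsers that advertise "br" in Accept-Encoding then receive Brotli-compressed responses; others fall back to gzip or uncompressed output.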
1. Using Nginx's proxy cache
Caching means storing static files such as js, css, and images from the back-end servers in a cache directory specified by Nginx. This not only reduces the load on the back-end servers but also speeds up access. However, purging the cache in time becomes a problem, hence the ngx_cache_purge module for manually clearing cached items before they expire.
The commonly used directives in the proxy module are proxy_pass and proxy_cache.
Nginx's web caching is mainly implemented by the proxy_cache and fastcgi_cache directive sets and their related directives. proxy_cache handles reverse-proxy caching of the back-end servers' static content; fastcgi_cache is mainly used to cache FastCGI dynamic output (caching dynamic pages is not recommended in production).
The configuration is as follows:
```nginx
http {
    include       mime.types;
    default_type  application/octet-stream;

    upstream backend {
        sticky;
        server 192.168.20.2:80 weight=1 max_fails=2 fail_timeout=10s;
        server 192.168.20.3:80 weight=1 max_fails=2 fail_timeout=10s;
    }

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" '
                      # note: delete the semicolon at the end of the line above
                      '"$upstream_cache_status"';
                      # add this line to record the cache hit status in the log
    access_log  logs/access.log  main;

    # add the following lines to configure the proxy cache
    proxy_buffering on;    # when proxying, enable buffering of the back-end server's responses
    proxy_temp_path  /usr/local/nginx1.14/proxy_temp;
    proxy_cache_path /usr/local/nginx1.14/proxy_cache levels=1:2 keys_zone=my-cache:100m inactive=600m max_size=2g;

    # the server block is configured as follows:
    server {
        listen       80;
        server_name  localhost;
        # charset koi8-r;
        # access_log logs/host.access.log main;

        location ~ /purge(/.*) {
            # this purge location is used to clear the cache manually
            allow 127.0.0.1;
            allow 192.168.20.0/24;
            deny all;
            proxy_cache_purge my-cache $host$1$is_args$args;
        }

        location / {
            proxy_pass http://backend;
            # add the following cache-related settings to this "/" location
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
            proxy_cache my-cache;
            add_header Nginx-Cache $upstream_cache_status;
            proxy_cache_valid 200 304 301 302 8h;
            proxy_cache_valid 404 1m;
            proxy_cache_valid any 1d;
            proxy_cache_key $host$uri$is_args$args;
            expires 30d;
        }
    }
}
```

```shell
# after editing, save and exit
[root@nginx conf]# nginx -t    # check the configuration file
nginx: the configuration file /usr/local/nginx1.14/conf/nginx.conf syntax is ok
nginx: [emerg] mkdir() "/usr/local/nginx1.10/proxy_temp" failed (2: No suc
nginx: configuration file /usr/local/nginx1.14/conf/nginx.conf test failed
# the check reports that the corresponding directory was not found
[root@nginx conf]# mkdir -p /usr/local/nginx1.10/proxy_temp    # so create the corresponding directory
[root@nginx conf]# nginx -t    # check again; OK
nginx: the configuration file /usr/local/nginx1.14/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx1.14/conf/nginx.conf test is successful
[root@nginx conf]# nginx -s reload    # reload the Nginx service
```
Client access test (using the Google Chrome browser; press F12 before accessing):