2025-01-18 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/03 Report--
Blog outline:
1. Installation of Nginx
2. Reverse proxy with the Nginx service
3. Optimization of the Nginx service

1. Installation of Nginx
The basic concepts of Nginx, along with building an Nginx server and its configuration files, were covered in detail in a previous blog post, so this post starts directly with the installation.
Environmental preparation:
Three CentOS 7.5 machines: one runs Nginx, and the other two run simple web services, used mainly to test the effect of the Nginx reverse proxy. Download the package I provided; it is needed when installing Nginx for caching, compression, and other optimization items.
Note (the goals are as follows):

- Combine the proxy and upstream modules to load-balance the back-end web servers;
- Use the proxy module to cache static files;
- Use Nginx's built-in ngx_http_proxy_module and ngx_http_upstream_module, or the third-party nginx_upstream_check_module, to health-check the back-end servers;
- Use the nginx-sticky-module to maintain sessions;
- Use ngx_cache_purge for more powerful cache clearing;
- Use the ngx_brotli module to compress web files.
The modules mentioned above are third-party extensions and need to be downloaded in advance (they are included in the download link mentioned earlier); they are then compiled in with --add-module=src_path at build time.
1. Install Nginx

[root@nginx ~]# yum -y erase httpd      # uninstall the system's default httpd service to prevent port conflicts
[root@nginx ~]# yum -y install openssl-devel pcre-devel      # install dependencies
[root@nginx ~]# cd /usr/src
[root@nginx src]# rz      # upload the required source packages with the rz command
[root@nginx src]# ls      # confirm the uploaded source packages
nginx-1.14.0.tar.gz  nginx-sticky-module.zip  ngx_brotli.tar.gz  ngx_cache_purge-2.3.tar.gz
# unpack the uploaded source packages
[root@nginx src]# tar zxf nginx-1.14.0.tar.gz
[root@nginx src]# unzip nginx-sticky-module.zip
[root@nginx src]# tar zxf ngx_brotli.tar.gz
[root@nginx src]# tar zxf ngx_cache_purge-2.3.tar.gz
[root@nginx src]# cd nginx-1.14.0/      # enter the nginx source directory
[root@nginx nginx-1.14.0]# ./configure --prefix=/usr/local/nginx1.14 --user=www --group=www \
--with-http_stub_status_module --with-http_realip_module --with-http_ssl_module \
--with-http_gzip_static_module --http-client-body-temp-path=/var/tmp/nginx/client \
--http-proxy-temp-path=/var/tmp/nginx/proxy --http-fastcgi-temp-path=/var/tmp/nginx/fcgi \
--with-pcre --add-module=/usr/src/ngx_cache_purge-2.3 --with-http_flv_module \
--add-module=/usr/src/nginx-sticky-module && make && make install
# compile and install, loading the required modules with the "--add-module" option
# note that ngx_brotli is deliberately not loaded here, in order to show later how to add a module after the nginx service has already been installed
The above compilation options are explained as follows:
--with-http_stub_status_module: monitor the status of nginx through a web page;
--with-http_realip_module: obtain the real IP address of the client;
--with-http_ssl_module: enable nginx's encrypted (SSL) transmission;
--with-http_gzip_static_module: enable compression;
--http-client-body-temp-path=/var/tmp/nginx/client: temporary storage path for client request data (the cache storage path);
--http-proxy-temp-path=/var/tmp/nginx/proxy: same as above;
--http-fastcgi-temp-path=/var/tmp/nginx/fcgi: same as above;
--with-pcre: support regular expressions;
--add-module=/usr/src/ngx_cache_purge-2.3: add a third-party nginx module; syntax: --add-module=<third-party module path>;
--add-module=/usr/src/nginx-sticky-module: same as above;
--with-http_flv_module: support flv video streaming.

2. Start the Nginx service

[root@nginx nginx-1.14.0]# ln -s /usr/local/nginx1.14/sbin/nginx /usr/local/sbin/      # create a symlink for the nginx command so it can be run directly
[root@nginx nginx-1.14.0]# useradd -M -s /sbin/nologin www
[root@nginx nginx-1.14.0]# mkdir -p /var/tmp/nginx/client
[root@nginx nginx-1.14.0]# nginx -t      # check the nginx configuration file
nginx: the configuration file /usr/local/nginx1.14/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx1.14/conf/nginx.conf test is successful
[root@nginx nginx-1.14.0]# nginx      # start the nginx service
[root@nginx nginx-1.14.0]# netstat -anpt | grep ":80"      # check whether port 80 is listening
tcp        0      0 0.0.0.0:80        0.0.0.0:*        LISTEN      .../nginx: master

2. Reverse proxy with the Nginx service
Before implementing the reverse proxy, let me first explain: what is a reverse proxy, and what is a forward proxy?
1. Forward proxy
A forward proxy is used to proxy connection requests from an internal network to the Internet (similar to NAT). The client specifies the proxy server and sends HTTP requests, which would otherwise go directly to the target web server, to the proxy server instead; the proxy server then accesses the web server and relays the web server's response back to the client. In this case, the proxy server is a forward proxy.
2. Reverse proxy
In contrast to a forward proxy, a reverse proxy lets a local area network provide resources to the Internet: a proxy server is set up so that other users on the Internet can access resources inside the LAN. The reverse proxy server accepts connections from the Internet, forwards the requests to servers on the internal network, and sends the web server's response back to the client on the Internet that requested the connection.
To sum up: a forward proxy acts on behalf of the client, accessing the web server in the client's place; a reverse proxy acts on behalf of the web server, responding to clients in its place.
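The request flow just summarized can be sketched with nothing but the Python standard library: a toy back end and a toy reverse proxy, where the client only ever talks to the proxy and the proxy opens its own connection to the back end. Everything here (class names, the demo() helper, the response text, the loopback ports) is illustrative and has nothing to do with Nginx internals.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Backend(BaseHTTPRequestHandler):
    """Stand-in for one of the back-end web servers."""
    def do_GET(self):
        body = b"hello from backend"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence request logging
        pass

class Proxy(BaseHTTPRequestHandler):
    """Stand-in for the reverse proxy: it, not the client, talks to the back end."""
    backend_port = None  # filled in by demo()

    def do_GET(self):
        url = f"http://127.0.0.1:{self.backend_port}{self.path}"
        with urllib.request.urlopen(url) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

def demo() -> str:
    backend = HTTPServer(("127.0.0.1", 0), Backend)  # port 0: pick any free port
    Proxy.backend_port = backend.server_address[1]
    proxy = HTTPServer(("127.0.0.1", 0), Proxy)
    for srv in (backend, proxy):
        threading.Thread(target=srv.serve_forever, daemon=True).start()
    # The client only ever contacts the proxy's port.
    with urllib.request.urlopen(f"http://127.0.0.1:{proxy.server_address[1]}/") as resp:
        body = resp.read().decode()
    proxy.shutdown()
    backend.shutdown()
    return body

if __name__ == "__main__":
    print(demo())
```

The client never learns the back end's address; that is exactly the "proxy web server responds to the client" behavior described above.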
3. Configuring Nginx as a reverse proxy
Nginx can be configured as a reverse proxy and load balancer. Its caching feature can cache static pages on the Nginx server to reduce the number of connections to the back-end servers, and the health of the back-end web servers can also be checked.
The environment is as follows:
One Nginx server acts as the reverse proxy; two back-end web servers form a web server pool; the client accesses the Nginx proxy server and, by refreshing the page several times, should receive pages returned by different back-end web servers.
Start configuring the Nginx server:
[root@nginx ~]# cd /usr/local/nginx1.14/conf/      # switch to the configuration directory
[root@nginx conf]# vim nginx.conf      # edit the main configuration file
..............      # omit part of the content
http {
    ..............      # omit part of the content
    upstream backend {
        sticky;
        server 192.168.20.2:80 weight=1 max_fails=2 fail_timeout=10s;
        server 192.168.20.3:80 weight=1 max_fails=2 fail_timeout=10s;
    }
    ..............      # omit part of the content
    server {
        location / {
            # root html;      # comment out the original root directory
            # index index.html index.htm;      # comment out this line
            proxy_pass http://backend;      # "backend" here must match the name of the web pool above
        }
    }
}
# after editing, save and exit
[root@nginx conf]# nginx -t      # check the configuration file and confirm it is OK
[root@nginx conf]# nginx -s reload      # reload the nginx service so the changes take effect
In the web server pool configuration above there is a "sticky" directive, which comes from the nginx-sticky module loaded earlier. This module uses a cookie to send requests from the same client (browser) to the same back-end server, which to some extent solves the session-synchronization problem across multiple back-end servers (session here meaning, for example, that after logging in once you do not need to log in again for a certain period), whereas with plain RR polling the operators must implement session synchronization themselves. The built-in ip_hash can also pin requests by client IP, but it easily causes load imbalance: if requests arrive from the same LAN, nginx sees the same client IP for all of them. The cookie set by nginx-sticky-module expires, by default, when the browser is closed.
This module does not work with browsers that do not support cookies or have cookies disabled; in that case sticky falls back to RR by default. It cannot be used together with ip_hash.
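The cookie-pinning behavior described above can be modeled in a few lines of Python. This is only a toy sketch of the idea (the class name, cookie key, and server names are ours, not the module's): a client that presents the route cookie stays pinned to its server, and a cookie-less client falls back to plain round robin.

```python
import itertools

class StickySketch:
    """Toy model of cookie-based stickiness with round-robin fallback.
    Not the nginx-sticky module's real implementation."""
    COOKIE = "route"  # illustrative cookie name

    def __init__(self, servers):
        self.servers = list(servers)
        self._rr = itertools.cycle(self.servers)

    def pick(self, cookies):
        # A client that kept the cookie stays pinned to "its" server.
        pinned = cookies.get(self.COOKIE)
        if pinned in self.servers:
            return pinned, cookies
        # No cookie (unsupported or disabled): fall back to plain RR,
        # handing the chosen server back as a session cookie.
        chosen = next(self._rr)
        return chosen, {**cookies, self.COOKIE: chosen}

lb = StickySketch(["192.168.20.2", "192.168.20.3"])
server, cookies = lb.pick({})   # first visit: RR picks a server, sets the cookie
again, _ = lb.pick(cookies)     # same browser: same server
```

A browser with cookies disabled keeps calling pick({}) and simply cycles through the pool, which is the RR fallback the text mentions.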
Sticky is only one of the scheduling algorithms Nginx supports. The other scheduling algorithms provided by Nginx's load-balancing module are:
- Polling (RR, the default): each request is assigned to a different back-end server one by one in order. If a back-end server goes down, the failed system is automatically removed so that user access is unaffected. weight specifies the polling weight: the higher the value, the higher the probability of being selected; it is mainly used when the back-end servers have uneven performance.
- ip_hash: each request is assigned according to the hash of the client IP, so visitors from the same IP consistently reach the same back-end server, which effectively solves the session-sharing problem of dynamic pages. If that node becomes unavailable, the request goes to the next node, and without session synchronization the user is logged out.
- least_conn: the request is sent to the realserver with the fewest active connections; the weight value is also taken into account.
- url_hash: requests are distributed according to the hash of the accessed URL, so each URL is directed to the same back-end server, which further improves the efficiency of back-end cache servers. Nginx itself does not support url_hash; to use this algorithm you must install Nginx's hash package, nginx_upstream_hash.
- fair: a smarter load-balancing algorithm than the above. It balances load according to page size and loading time, i.e. requests are assigned based on back-end response time, with shorter response times given priority. Nginx itself does not support fair; to use it you must download Nginx's upstream_fair module.
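As a rough illustration of two of these algorithms, here is a small Python sketch of ip_hash-style pinning and weighted polling. The function names and the use of MD5 are our own choices for the demo; nginx's internal hashing and its smooth weighted round robin are implemented differently.

```python
import hashlib
import itertools

def ip_hash(client_ip: str, servers: list) -> str:
    """Map a client IP to a fixed server, ip_hash style:
    the same IP always lands on the same back end."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

def weighted_rr(servers):
    """Yield servers in proportion to their weights; a simplified
    stand-in for nginx's default weighted polling."""
    while True:
        for server, weight in servers:
            for _ in range(weight):
                yield server

# web01 with weight 2 is picked twice as often as web02 with weight 1.
picks = list(itertools.islice(weighted_rr([("web01", 2), ("web02", 1)]), 6))
```

Note how ip_hash trades balance for affinity: every request from one IP hits one server, which is exactly why a whole LAN behind one public IP can overload a single back end.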
The parameters that follow the web servers' IP addresses in the pool configuration above are explained below:
weight: the polling weight; it can also be used with ip_hash. The default value is 1.
max_fails: the number of request failures allowed; the default is 1. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
fail_timeout: has two meanings: at most 2 failures (max_fails) are allowed within 10 seconds; and after 2 failures, no requests are assigned to this server for 10 seconds.
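The interplay of max_fails and fail_timeout can be modeled in a few lines of Python. The sketch below is our own toy model (the class and method names are made up, not nginx code); it only mirrors the semantics just described: max_fails failures within fail_timeout seconds mark the peer as down for fail_timeout seconds.

```python
import time

class PeerState:
    """Toy model of nginx's passive health check for one upstream server.
    Parameter names mirror the directives in the pool configuration."""
    def __init__(self, max_fails=2, fail_timeout=10.0):
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0          # failures seen in the current window
        self.first_fail = 0.0   # when the current failure window opened
        self.down_until = 0.0   # peer is skipped until this time

    def available(self, now=None):
        now = time.monotonic() if now is None else now
        return now >= self.down_until

    def report_failure(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self.first_fail > self.fail_timeout:
            # The previous window expired: start counting afresh.
            self.fails, self.first_fail = 0, now
        self.fails += 1
        if self.fails >= self.max_fails:
            # Too many failures in the window: bench the peer.
            self.down_until = now + self.fail_timeout
            self.fails = 0

    def report_success(self):
        self.fails = 0
```

With the defaults used in the pool above (max_fails=2, fail_timeout=10s), two failures in quick succession take the server out of rotation for ten seconds, after which it is tried again.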
The back-end servers in the web pool are configured as follows (for reference only; we simply set up the httpd service for testing):
[root@web01 ~]# yum -y install httpd      # install the httpd service
[root@web01 ~]# echo "192.168.20.2" > /var/www/html/index.html      # the two web servers prepare different web page files
[root@web01 ~]# systemctl start httpd      # start the web service
The second web server is configured the same way, but be sure to prepare a different web page file in order to test the load-balancing effect.
Everything is now ready for client-side verification, but note that the nginx proxy server must be able to communicate with both web servers.
Accessing the service from the nginx proxy server itself shows the requests being polled across the web servers in the pool.