
How to configure Nginx http reverse proxy


This article explains how to configure an Nginx HTTP reverse proxy. The approach described here is simple, fast, and practical; follow along to learn how to set one up.

Overview

What is Nginx?

Nginx (engine x) is a lightweight Web server, reverse proxy server and email (IMAP/POP3) proxy server.

What is a reverse proxy?

A reverse proxy (Reverse Proxy) is a proxy server that accepts connection requests from the Internet, forwards them to servers on the internal network, and returns the results obtained from those servers to the clients that made the requests. To the outside, the proxy server itself appears to be the server.

Use

Nginx is relatively simple to use, just a few commands.

The commonly used commands are as follows:

nginx -s stop: quickly shuts down Nginx; relevant information may not be saved, and the web service is terminated abruptly.
nginx -s quit: shuts down Nginx gracefully, saving relevant information and ending the web service in an orderly way.
nginx -s reload: reloads the configuration; use it after changing the Nginx configuration so the changes take effect without a restart.
nginx -s reopen: reopens the log files.
nginx -c filename: starts Nginx with the specified configuration file instead of the default one.
nginx -t: does not run Nginx, but only tests the configuration file: Nginx checks the syntax and tries to open the files referenced in the configuration.
nginx -v: displays the nginx version.
nginx -V: displays the nginx version, the compiler version, and the configure parameters.

If you don't want to type these commands every time, you can create a startup batch file, startup.bat, in the nginx installation directory and double-click it to run. Its contents are as follows:

@echo off
rem If nginx was started before and its pid file was recorded, stop that process
nginx.exe -s stop
rem Test the configuration file syntax
nginx.exe -t -c conf/nginx.conf
rem Display version information
nginx.exe -v
rem Start nginx with the specified configuration
nginx.exe -c conf/nginx.conf

If you are running under Linux, you can write a shell script that does more or less the same thing.
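For reference, here is a minimal sketch of such a script, assuming nginx is on the PATH and the default configuration file is used; adjust paths to match your own installation:

#!/bin/sh
# Stop any running nginx instance (ignore the error if none is running)
nginx -s stop 2>/dev/null

# Test the configuration file syntax before starting
nginx -t || exit 1

# Show the version, then start nginx with the default configuration
nginx -v
nginx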

Nginx configuration in practice

I have always found that the configuration of development tools is easier to understand when combined with practical examples.

HTTP reverse proxy configuration

Let's start with a small goal: set aside the more complex options for now and just complete a working HTTP reverse proxy.

The nginx.conf configuration file is as follows. Note: conf/nginx.conf is the default configuration file for nginx; you can also use nginx -c to specify your own configuration file.

# Run as user
# user somebody;

# Number of worker processes, usually set equal to the number of CPUs
worker_processes  1;

# Global error log
error_log  D:/Tools/nginx-1.10.1/logs/error.log;

# PID file, records the process ID of the currently started nginx
pid        D:/Tools/nginx-1.10.1/nginx.pid;

# Working mode and upper limit on the number of connections
events {
    # Maximum number of concurrent connections for a single worker process
    worker_connections 1024;
}

# Set up the http server and use its reverse proxy feature to provide load balancing support
http {
    # Set mime types (mail support types); the types are defined by the mime.types file
    include       D:/Tools/nginx-1.10.1/conf/mime.types;
    default_type  application/octet-stream;

    # Set the log format
    log_format  main  '[$remote_addr] - [$remote_user] [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  D:/Tools/nginx-1.10.1/logs/access.log  main;
    rewrite_log on;

    # The sendfile directive specifies whether nginx calls the sendfile function (zero copy) to output files.
    # For ordinary applications it must be on; for disk-IO-heavy applications such as downloading it can be
    # set to off to balance disk and network I/O and reduce system load.
    sendfile        on;
    # tcp_nopush    on;

    # Connection timeout
    keepalive_timeout  120;
    tcp_nodelay        on;

    # gzip compression switch
    # gzip  on;

    # Set the list of actual servers
    upstream zp_server1 {
        server 127.0.0.1:8089;
    }

    # HTTP server
    server {
        # Listen on port 80, the well-known port number for the HTTP protocol
        listen       80;

        # Define access via www.xx.com
        server_name  www.helloworld.com;

        # Home page
        index index.html;

        # Points to the webapp directory
        root D:\01_Workspace\Project\github\zp\SpringNotes\spring-security\spring-shiro\src\main\webapp;

        # Encoding format
        charset utf-8;

        # Proxy configuration parameters
        proxy_connect_timeout 180;
        proxy_send_timeout 180;
        proxy_read_timeout 180;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarder-For $remote_addr;

        # Reverse proxy path (bound to the upstream); set the mapped path after location
        location / {
            proxy_pass http://zp_server1;
        }

        # Static files, handled by nginx itself
        location ~ ^/(images|javascript|css|flash|media|static)/ {
            root D:\01_Workspace\Project\github\zp\SpringNotes\spring-security\src\main\webapp\views;
            # Expire after 30 days. Static files are not updated often; this can be set larger,
            # or smaller if they are updated frequently.
            expires 30d;
        }

        # Address for viewing the Nginx status
        location /NginxStatus {
            stub_status           on;
            access_log            on;
            auth_basic            "NginxStatus";
            auth_basic_user_file  conf/htpasswd;
        }

        # Deny access to .htxxx files
        location ~ /\.ht {
            deny all;
        }

        # Error handling pages (optional)
        # error_page   404              /404.html;
        # error_page   500 502 503 504  /50x.html;
        # location = /50x.html {
        #     root   html;
        # }
    }
}

All right, let's try it:

Start the webapp, and note that the port it binds to must match the port set in the upstream block in nginx (8089 in the example above).

Change the hosts file: add a DNS record to the hosts file in the C:\Windows\System32\drivers\etc directory:

127.0.0.1 www.helloworld.com

Run the startup.bat script described earlier.

Visit www.helloworld.com in a browser; if all went well, the site is already accessible through the proxy.
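If you prefer the command line, a quick check with curl works as well. This is just a sketch; it assumes curl is available and the hosts entry above is in place (alternatively, pass the Host header by hand and skip the hosts change):

# Relies on the hosts entry 127.0.0.1 www.helloworld.com
curl -i http://www.helloworld.com/

# Or bypass the hosts file and set the Host header explicitly
curl -i -H "Host: www.helloworld.com" http://127.0.0.1/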

Load balancing configuration

In the previous example, the proxy points to only one server.

In practice, however, a site usually runs the same application on several servers, so load balancing is needed to distribute the traffic.

Nginx can also implement simple load balancing functions.

Suppose the following scenario: the application is deployed on three Linux servers, 192.168.1.11, 192.168.1.12, and 192.168.1.13. The site's domain name is www.helloworld.com and its public IP is 192.168.1.11. Nginx is deployed on the server that holds the public IP and load balances all requests across the three servers.

The nginx.conf configuration is as follows:

http {
    # Set the mime types, defined by the mime.types file
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    # Set the log
    access_log    /var/log/nginx/access.log;

    # Set the server list for load balancing
    upstream load_balance_server {
        # The weight parameter indicates the weight: the higher the weight,
        # the greater the probability of being assigned a request.
        server 192.168.1.11:80   weight=5;
        server 192.168.1.12:80   weight=6;
        server 192.168.1.13:80;
    }

    # HTTP server
    server {
        # Listen on port 80
        listen       80;

        # Define access via www.xx.com
        server_name  www.helloworld.com;

        # Load balance all requests
        location / {
            root        /root;                       # Default site root directory of the server
            index       index.html index.htm;        # Name of the index file
            proxy_pass  http://load_balance_server;  # Requests are directed to the server list defined by load_balance_server

            # Some reverse proxy settings (optional)
            # proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            # The backend web server can obtain the user's real IP through X-Forwarded-For
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_connect_timeout 90;          # Timeout for nginx to connect to the backend server (proxy connect timeout)
            proxy_send_timeout 90;             # Time for the backend server to return data (proxy send timeout)
            proxy_read_timeout 90;             # Response time of the backend server after a successful connection (proxy read timeout)
            proxy_buffer_size 4k;              # Buffer size used by the proxy server (nginx) to store user header information
            proxy_buffers 4 32k;               # proxy_buffers buffers; if the average page size is below 32k, set it like this
            proxy_busy_buffers_size 64k;       # Buffer size under high load (proxy_buffers * 2)
            proxy_temp_file_write_size 64k;    # Size of the temp file cache; above this value, data is passed on from the upstream server
            client_max_body_size 10m;          # Maximum number of bytes of a single file that a client is allowed to request
            client_body_buffer_size 128k;      # Maximum number of bytes of a client request body buffered by the proxy
        }
    }
}
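To see the weighting in action, one rough check is to request the site repeatedly and note which backend answers each time. The loop below is only a sketch: it assumes it is run from a machine where www.helloworld.com resolves to the nginx server, and that each backend's index page contains something that identifies the host (its IP address here); adapt the grep pattern to whatever your pages actually return:

# Send ten requests and print which backend served each one
for i in $(seq 1 10); do
  curl -s http://www.helloworld.com/ | grep -o '192\.168\.1\.1[123]'
done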

Configuration for a website with multiple webapps

As a website gains more and more features, it is often necessary to split relatively independent modules out and maintain them separately. In that case there will usually be multiple webapps.

For example, suppose the www.helloworld.com site hosts several webapps: finance, product, and admin (user center). Access to these applications is distinguished by context path:

www.helloworld.com/finance/

www.helloworld.com/product/

www.helloworld.com/admin/

We know that the default port for HTTP is 80. If all three webapps are started on the same server, they cannot all use port 80, so each of them has to bind to a different port.

The problem is that when users actually visit www.helloworld.com and move between the different webapps, they will not type the corresponding port numbers. So, once again, a reverse proxy is needed to handle the routing.

Configuration is not difficult, so let's see how to do it:

http {
    # Some basic configuration is omitted here

    upstream product_server {
        server www.helloworld.com:8081;
    }

    upstream admin_server {
        server www.helloworld.com:8082;
    }

    upstream finance_server {
        server www.helloworld.com:8083;
    }

    server {
        # Some basic configuration is omitted here

        # By default, requests go to product's server
        location / {
            proxy_pass http://product_server;
        }

        location /product/ {
            proxy_pass http://product_server;
        }

        location /admin/ {
            proxy_pass http://admin_server;
        }

        location /finance/ {
            proxy_pass http://finance_server;
        }
    }
}

HTTPS reverse proxy configuration

Some sites with high security requirements may use HTTPS, a secure HTTP protocol that uses the ssl communication standard.

This article does not cover the HTTP protocol or the SSL standard in depth. However, there are a few things you need to know to configure HTTPS with nginx:

The fixed port number of HTTPS is 443, which is different from port 80 of HTTP

The SSL standard requires the introduction of a security certificate, so in nginx.conf you need to specify the certificate and its corresponding key

Everything else is basically the same as the http reverse proxy, except that the configuration in the Server section is a little different.
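If you don't have a certificate yet, a self-signed one is enough for local testing (browsers will warn about it). Here is a minimal sketch using openssl, producing the cert.pem and cert.key files referenced in the configuration below:

# Generate a self-signed certificate valid for one year, with an unencrypted key
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout cert.key -out cert.pem -days 365 \
  -subj "/CN=www.helloworld.com"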

# HTTPS server
server {
    # Listen on port 443. 443 is the well-known port number, used mainly for the HTTPS protocol
    listen       443 ssl;

    # Define access via www.xx.com
    server_name  www.helloworld.com;

    # Location of the ssl certificate file (common certificate formats: crt/pem)
    ssl_certificate      cert.pem;
    # Location of the ssl certificate key
    ssl_certificate_key  cert.key;

    # ssl configuration parameters (optional)
    ssl_session_cache    shared:SSL:1m;
    ssl_session_timeout  5m;
    # Cipher suites
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers  on;

    location / {
        root   /root;
        index  index.html index.htm;
    }
}

Static site configuration

Sometimes we need to configure static sites (that is, html files and a bunch of static resources).

For example, if all static resources are placed in the /app/dist directory, we only need to specify the home page and the host of the site in nginx.conf.

The configuration is as follows:

worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    gzip on;
    gzip_types text/plain application/x-javascript text/css application/xml text/javascript application/javascript image/jpeg image/gif image/png;
    gzip_vary on;

    server {
        listen       80;
        server_name  static.zp.cn;

        location / {
            root   /app/dist;
            index  index.html;
            # Forward any request to index.html
        }
    }
}

Then, add a hosts entry:

127.0.0.1 static.zp.cn

At this point, you can access the static site by visiting http://static.zp.cn in your local browser.
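Since gzip is enabled in the configuration above, a quick way to confirm both the site and the compression are working is to request the page with an Accept-Encoding header and look for Content-Encoding: gzip in the response headers (a sketch, assuming curl and the hosts entry above):

# Do a GET but print only the response headers
curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" http://static.zp.cn/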

Set up a file server

Sometimes a team needs to archive data or documents, and then a file server becomes essential. Using Nginx, you can quickly and easily set up a simple file service.

Configuration points in Nginx:

Enable autoindex to display the directory, which is not enabled by default.

Turn on autoindex_exact_size to display the file size.

Turn on autoindex_localtime to display the modification time of the file.

root sets the root path that the file service exposes.

Setting charset to charset utf-8,gbk; can avoid garbled Chinese file names (on a Windows server the listing was still garbled after this setting; I have not found a solution yet).

The simplest configuration is as follows:

autoindex on;               # Display the directory listing
autoindex_exact_size on;    # Display the file size
autoindex_localtime on;     # Display the file modification time

server {
    charset      utf-8,gbk;   # On a windows server this was still garbled; no solution found yet
    listen       9050 default_server;
    listen       [::]:9050 default_server;
    server_name  _;
    root         /share/fs;
}
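With this in place, opening port 9050 on the server in a browser shows a listing of /share/fs. From the command line, a quick check might look like this (a sketch, run on the server itself):

# autoindex returns an HTML directory listing
curl http://localhost:9050/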

Cross-domain solution

Front-end/back-end separation is commonly used in web development. In this mode, the front end and the back end are independent web applications; for example, the back end is a Java program and the front end is a React or Vue application.

When independent web apps access each other, cross-domain problems are bound to occur. There are generally two ways to solve them:

CORS

Set HTTP response headers on the back-end server, adding the domain names that need access to Access-Control-Allow-Origin.

JSONP

The back end constructs JSON data according to the request and returns it, and the front end uses JSONP to make the cross-domain call.

These two approaches are not discussed further in this article.

It is worth noting that nginx also offers a cross-domain solution based on the first approach.

For example: the www.helloworld.com website consists of a front-end app and a back-end app. The front end port number is 9000 and the back end port number is 8080.

If the front end and back end interact directly over HTTP, requests will be rejected because of cross-domain restrictions. Let's see how nginx solves this:

First, configure CORS in the enable-cors.conf file:

# allow origin list
set $ACAO '*';

# set single origin
if ($http_origin ~* (www.helloworld.com)$) {
    set $ACAO $http_origin;
}

if ($cors = "trueget") {
    add_header 'Access-Control-Allow-Origin' "$http_origin";
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
}

if ($request_method = 'OPTIONS') {
    set $cors "${cors}options";
}

if ($request_method = 'GET') {
    set $cors "${cors}get";
}

if ($request_method = 'POST') {
    set $cors "${cors}post";
}

Next, include enable-cors.conf in your server to introduce a cross-domain configuration:

# ----------------------------------------------------
# This file is a fragment of the project's nginx configuration
# You can include it directly (recommended), or copy it into your existing nginx configuration
# Point the www.helloworld.com domain name at this machine via dns / hosts
# The api here enables cors, which needs to work together with another configuration file in this directory
# ----------------------------------------------------
upstream front_server {
    server www.helloworld.com:9000;
}
upstream api_server {
    server www.helloworld.com:8080;
}

server {
    listen       80;
    server_name  www.helloworld.com;

    location ~ ^/api/ {
        include enable-cors.conf;
        proxy_pass http://api_server;
        rewrite "^/api/(.*)$" /$1 break;
    }

    location ~ ^/ {
        proxy_pass http://front_server;
    }
}
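To check the result, you can send a request with an Origin header from the command line and look for the Access-Control-Allow-* headers in the response. This is only a sketch: /api/users is a hypothetical endpoint, and whether the headers appear depends on how the $cors variable is initialised in your full configuration:

# A cross-origin style GET; inspect the response headers for Access-Control-Allow-Origin
curl -i -H "Origin: http://www.helloworld.com" http://www.helloworld.com/api/users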

At this point, it's done.

At this point, I believe you have a deeper understanding of how to configure an Nginx HTTP reverse proxy. Why not try it out in practice yourself?
