
What is the Nginx knowledge necessary for front-end developers?

2025-02-24 Update From: SLTechnology News&Howtos


This article explains the Nginx knowledge that front-end developers should have. The explanations are simple, clear, and easy to follow; work through them in order to learn what Nginx knowledge a front-end developer really needs.

The role of nginx in applications

Solve cross-domain

Request filtering

Configure gzip

Load balancing

Static resource server

Nginx is a high-performance HTTP and reverse proxy server, as well as a general-purpose TCP/UDP proxy server, originally written by the Russian developer Igor Sysoev.

Nginx is now a near-essential technology for most large websites. In many cases we do not need to configure it ourselves, but it is still worth understanding what role it plays in an application and how it solves the problems it solves.

Below I will explain the role of nginx in applications, drawing on how nginx is actually used in real companies.

To make things easier to understand, let's start with the basics. Nginx is a high-performance reverse proxy server, so what is a reverse proxy?

Forward proxy and reverse proxy

A proxy is an intermediate server layer between the client and the server: the proxy receives the client's request and forwards it to the server, then forwards the server's response back to the client.

Both forward and reverse proxies implement this basic behavior.

Forward proxy

A forward proxy is a server that sits between the client and the origin server. To get content from the origin server, the client sends a request to the proxy and names the target (the origin server); the proxy then passes the request on to the origin server and returns the content it obtains to the client.

A forward proxy acts on behalf of the client: through it, the client can reach server resources it could not otherwise access.

A forward proxy is transparent to us but opaque to the server; that is, the server does not know whether it is receiving a request from a proxy or from the real client.

Reverse proxy

A reverse proxy (Reverse Proxy) is a proxy server that accepts connection requests from the internet, forwards them to servers on an internal network, and returns the results obtained from those servers to the clients that requested the connections. To the outside world, the proxy server presents itself as the server.

A reverse proxy acts on behalf of the server: it can receive requests from clients for the server, forward requests, perform load balancing, and so on.

A reverse proxy is transparent to the server but opaque to us; that is, we do not know we are accessing a proxy server, while the server knows the reverse proxy is serving it.

Basic configuration

Configuration structure

The following is the basic structure of an nginx configuration file:

events {
    ...
}

http {
    server {
        location path {
            ...
        }
        location path {
            ...
        }
    }

    server {
        ...
    }
}

main: nginx's global configuration; its directives take effect globally.

events: configuration affecting network connections between the nginx server and its users.

http: can nest multiple server blocks; configures proxies, caching, log definitions, most other features, and third-party modules.

server: configures the parameters of a virtual host; one http block can contain multiple server blocks.

location: configures request routing and the handling of various pages.

upstream: configures the concrete addresses of backend servers; an indispensable part of load-balancing configuration.

Built-in variables

Here are some of the built-in global variables commonly used in nginx configuration; you can use them anywhere in the configuration.

| Variable name | Function |
| --- | --- |
| $host | The Host in the request; if the request has no Host line, it equals the configured server name |
| $request_method | The client request method, e.g. GET, POST |
| $remote_addr | The client's IP address |
| $remote_port | The client's port |
| $args | The parameters in the request |
| $content_length | The Content-Length field in the request header |
| $http_user_agent | The client's agent (User-Agent) information |
| $http_cookie | The client's cookie information |
| $server_protocol | The request protocol, e.g. HTTP/1.0, HTTP/1.1 |
| $server_addr | The server's address |
| $server_name | The server's name |
| $server_port | The server's port number |

Solve cross-domain

First, let's go back to basics: what exactly is a cross-origin request?

Cross-domain definition

The same-origin policy restricts how documents or scripts loaded from one origin interact with resources from another origin. It is an important security mechanism for isolating potentially malicious files. Read operations between different origins are usually not allowed.

The definition of same origin

If the protocol, port (if specified), and domain name of both pages are the same, then both pages have the same source.
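The rule above can be sketched in a few lines of Python (an illustration only; browsers implement this check internally, and the URLs here are hypothetical examples):

```python
from urllib.parse import urlsplit

def origin(url):
    """Return the (scheme, host, port) triple that defines an origin."""
    parts = urlsplit(url)
    # If no port is given, fall back to the scheme's default port.
    port = parts.port or (443 if parts.scheme == "https" else 80)
    return (parts.scheme, parts.hostname, port)

def same_origin(url_a, url_b):
    return origin(url_a) == origin(url_b)

print(same_origin("http://fe.server.com/page", "http://fe.server.com/api"))   # True: same scheme, host, port
print(same_origin("http://fe.server.com/page", "http://dev.server.com/api"))  # False: different host
print(same_origin("http://fe.server.com/page", "https://fe.server.com/api"))  # False: different protocol
```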

How nginx solves cross-domain requests

For example:

The domain name of the front-end server is: fe.server.com

The domain name of the backend service is: dev.server.com

Now, when I make a request to dev.server.com from fe.server.com, a cross-origin request inevitably occurs.

We only need to start an nginx server, set server_name to fe.server.com, set an appropriate location to intercept the requests the front end would otherwise make cross-origin, and proxy those requests back to dev.server.com, as in the following configuration:

server {
    listen 80;
    server_name fe.server.com;
    location / {
        proxy_pass http://dev.server.com;
    }
}

This neatly sidesteps the browser's same-origin policy: fe.server.com accessing the nginx at fe.server.com is same-origin access, and the request nginx forwards to the real server does not trigger the browser's same-origin policy.

Request filtering

Filter according to status code

error_page 500 501 502 503 504 506 /50x.html;
location = /50x.html {
    # adapt the following path to where the html file is stored
    root /root/static/html;
}

Filter by URL name: match the URL exactly, and redirect all URLs that do not match to the home page.

location / {
    rewrite ^.*$ /index.html redirect;
}

Filter by request type.

if ($request_method !~ ^(GET|POST|HEAD)$) {
    return 403;
}

Configure gzip

GZIP is one of the three standard HTTP compression formats. At present, the vast majority of websites use GZIP to transfer HTML, CSS, JavaScript, and other resource files.

For text files, the effect of GZip is very noticeable: with it enabled, the traffic required for transmission drops to roughly 1/4 to 1/3 of the original.

Not every browser supports gzip. How do we know whether the client supports it? The Accept-Encoding field in the request header identifies the compression formats the client supports.
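As a rough sketch of what that check involves, server-side logic for reading Accept-Encoding might look like this (a simplified illustration that treats any ";q=..." parameter as acceptance, which real servers handle more carefully):

```python
def client_accepts_gzip(accept_encoding: str) -> bool:
    """Return True if the Accept-Encoding header value lists gzip."""
    # Split the comma-separated list and drop any ";q=..." parameters.
    encodings = [token.split(";")[0].strip().lower()
                 for token in accept_encoding.split(",")]
    return "gzip" in encodings or "*" in encodings

print(client_accepts_gzip("gzip, deflate, br"))  # True
print(client_accepts_gzip("identity"))           # False
```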

Enabling gzip requires support from both the client and the server: if the client can parse gzip, then gzip can be enabled as long as the server can return gzip-compressed files. We can make the server support gzip through nginx configuration. A Content-Encoding: gzip header in the response indicates that the server has gzip compression enabled.

gzip on;
gzip_http_version 1.1;
gzip_comp_level 5;
gzip_min_length 1000;
gzip_types text/csv text/xml text/css text/plain text/javascript application/javascript application/x-javascript application/json application/xml;

gzip

Turn the gzip module on or off

The default value is off

Can be configured as on / off

gzip_http_version

The minimum HTTP version required to enable GZip

The default value is HTTP/1.1

Why isn't the default version 1.0 here?

HTTP runs on top of TCP connections and so is naturally subject to TCP's three-way handshake, slow start, and other characteristics.

When persistent connections are enabled, the server responds and leaves the TCP connection open. Subsequent requests and responses between the same client / server pair can be sent over this connection.

In order to improve HTTP performance as much as possible, it is particularly important to use persistent connections.

HTTP/1.1 supports persistent TCP connections by default, and HTTP/1.0 can also enable them by explicitly sending Connection: keep-alive. For HTTP messages on a persistent TCP connection, the client needs a mechanism to determine precisely where a message ends; in HTTP/1.0 the only such mechanism is Content-Length. The chunked transfer mechanism (Transfer-Encoding: chunked) added in HTTP/1.1 solves this problem.

Nginx also has a chunked_transfer_encoding directive for configuring chunked transfer, which is enabled by default.
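To make the chunked mechanism concrete, here is a minimal Python decoder for a chunked body (a sketch only: it ignores chunk extensions and trailer headers):

```python
def decode_chunked(body: bytes) -> bytes:
    """Decode a Transfer-Encoding: chunked message body."""
    out = b""
    while body:
        # Each chunk starts with its size in hex, followed by CRLF.
        size_line, body = body.split(b"\r\n", 1)
        size = int(size_line, 16)
        if size == 0:           # the zero-size chunk marks the end
            break
        out += body[:size]
        body = body[size + 2:]  # skip the chunk data and its trailing CRLF
    return out

# Two chunks ("Hello, " and "world!") followed by the terminating chunk.
wire = b"7\r\nHello, \r\n6\r\nworld!\r\n0\r\n\r\n"
print(decode_chunked(wire))  # b'Hello, world!'
```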

When GZip is enabled, Nginx does not wait for the file to be fully compressed before returning a response; instead, it compresses while responding, which can significantly improve TTFB (Time To First Byte, an important metric in web performance optimization). The only problem is that when Nginx starts returning the response, it does not yet know how big the transferred file will end up being, so it cannot set the Content-Length response header.

Therefore, if you enable GZip with Nginx under HTTP/1.0, you cannot send Content-Length, which means that under HTTP/1.0 you must choose between persistent connections and GZip. That is why gzip_http_version defaults to 1.1.

gzip_comp_level

Compression level: the higher the level, the higher the compression ratio and the longer compression takes (faster transmission but more CPU consumption).

Default value is 1

The compression level is 1-9.
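The trade-off can be observed directly with Python's gzip module, which exposes the same zlib compression levels (the sample text here is arbitrary):

```python
import gzip

# Compress the same payload at several levels to compare output sizes.
data = b"Nginx is a high-performance HTTP and reverse proxy server. " * 200

for level in (1, 5, 9):
    compressed = gzip.compress(data, compresslevel=level)
    print(f"level {level}: {len(data)} bytes -> {len(compressed)} bytes")
```

Higher levels never produce larger output than lower ones for the same input, but the marginal gain shrinks while CPU time grows, which is why a middle value like 5 is a common choice.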

gzip_min_length

Set the minimum number of bytes of pages allowed to be compressed. Requests with Content-Length less than this value will not be compressed.

Default value: 0

When this value is set too small, the compressed output may be larger than the original file. It is recommended to set it above 1000.
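The overhead on tiny payloads is easy to demonstrate with Python's gzip module: the gzip header and checksum alone outweigh a short body, while a repetitive larger body shrinks dramatically:

```python
import gzip

small = b"ok"
large = b"<p>hello world</p>" * 500

print(len(small), "->", len(gzip.compress(small)))  # output is larger than the input
print(len(large), "->", len(gzip.compress(large)))  # output is far smaller than the input
```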

gzip_types

File types to be compressed with gzip (MIME type)

Default: text/html (js/css is not compressed by default)

Load balancing

What is load balancing?

As shown in the figure above, there are many service windows at the front and many users below who need service. We need a tool or policy to distribute these users across the windows, so that resources are fully utilized and queuing time is minimized.

Think of the service windows at the front as our back-end servers, and the people queueing as countless clients initiating requests. Load balancing helps us distribute large numbers of client requests reasonably across the servers, making full use of server resources and reducing request time.

How to realize load balancing in nginx

upstream specifies the list of backend server addresses:

upstream balanceServer {
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
    server 10.1.22.35:12345;
}

In the server block, intercept the requests to be balanced and forward them to the server list configured in upstream:

server {
    server_name fe.server.com;
    listen 80;
    location /api {
        proxy_pass http://balanceServer;
    }
}

The above configuration only specifies the list of servers nginx should forward to; it does not specify an allocation policy.

Load-balancing strategies in nginx

Round-robin (polling) strategy

By default, client requests are distributed to the servers in turn (round-robin). This strategy generally works fine, but if one server comes under too much pressure and responses are delayed, all users assigned to that server are affected.

upstream balanceServer {
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
    server 10.1.22.35:12345;
}
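The idea behind round-robin can be sketched in Python with itertools.cycle (an illustration of the policy only; nginx's real scheduler also accounts for per-server weights and health, and the backend addresses here mirror the hypothetical ones above):

```python
import itertools

# The same hypothetical backends as in the upstream block above.
servers = ["10.1.22.33:12345", "10.1.22.34:12345", "10.1.22.35:12345"]
rotation = itertools.cycle(servers)

# Six incoming requests are handed out in strict rotation.
assignments = [next(rotation) for _ in range(6)]
print(assignments)
```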

Minimum number of connections policy

Requests are sent preferentially to the less-stressed servers, which balances the length of each queue and avoids adding more requests to an already stressed server.

upstream balanceServer {
    least_conn;
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
    server 10.1.22.35:12345;
}

Fastest response time strategy

This strategy is not built into open-source nginx: the fair directive comes from a third-party module, and NGINX Plus offers a comparable least_time policy. Requests are sent preferentially to the server with the shortest response time.

upstream balanceServer {
    fair;
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
    server 10.1.22.35:12345;
}

Client-side ip binding

Requests from the same IP are always routed to the same server, which effectively solves the session-sharing problem of dynamic web pages.

upstream balanceServer {
    ip_hash;
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
    server 10.1.22.35:12345;
}
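A sketch of the ip_hash idea in Python (illustrative only: nginx actually hashes just the first octets of an IPv4 address, and this example uses md5 purely for simplicity):

```python
import hashlib

servers = ["10.1.22.33:12345", "10.1.22.34:12345", "10.1.22.35:12345"]

def pick_server(client_ip: str) -> str:
    """Deterministically map a client IP to one backend server."""
    digest = hashlib.md5(client_ip.encode()).digest()
    return servers[digest[0] % len(servers)]

# The same IP always lands on the same backend, so its session stays put.
print(pick_server("203.0.113.7") == pick_server("203.0.113.7"))  # True
```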

Static resource server

location ~* \.(png|gif|jpg|jpeg)$ {
    root /root/static/;
    autoindex on;
    access_log off;
    expires 10h;    # set the expiration time to 10 hours
}

This matches requests ending in png|gif|jpg|jpeg and serves them from the local nginx path specified in root. You can also make some cache settings here, such as the expires directive above.
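The extension match in the location above corresponds to a case-insensitive regular expression; translated into Python for illustration:

```python
import re

# ~* in nginx means a case-insensitive regex match on the request path.
pattern = re.compile(r"\.(png|gif|jpg|jpeg)$", re.IGNORECASE)

print(bool(pattern.search("/images/logo.PNG")))  # True: image extension, any case
print(bool(pattern.search("/api/users")))        # False: not a static image path
```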

Thank you for reading. The above covers the Nginx knowledge that front-end developers should have. After studying this article, you should have a deeper understanding of the topic; the specifics still need to be verified in practice.
