2025-01-17 Update From: SLTechnology News&Howtos
This article explains the concepts and usage of Nginx. The explanations are simple and clear and easy to learn from; follow along to study the concepts and usage of Nginx.
Basic part
I. Environment
Server version: CentOS 7.2
To make sure nothing strange happens during the learning stage, please confirm the following four points (experts may skip this):
1. Confirm the system has network access
2. Confirm that yum is available
3. Confirm that iptables is stopped
4. Confirm that selinux is disabled
```
# view firewall status
systemctl status firewalld.service
# stop the firewall (temporary)
systemctl stop firewalld.service
# view SELinux status
getenforce
# temporarily disable SELinux
setenforce 0
```
Install some basic system tools; normally the system ships with them (install any that are missing):
```
yum -y install gcc gcc-c++ autoconf pcre pcre-devel make automake
yum -y install wget httpd-tools vim
```

II. What is Nginx?
Nginx is an open source, high-performance and reliable HTTP middleware and proxy service.
Other HTTP services:
1. HTTPD - Apache Software Foundation
2. IIS - Microsoft
3. GWS - Google (not available to the public)
In recent years Nginx's market share has kept climbing, at times soaring. Why? We're about to find out!
III. Why choose Nginx?
1. IO multiplexing (epoll)
How to understand it? Here's an analogy.
Teachers A, B and C all face the same problem: helping a class of students finish their classwork.
Teacher A answers questions by going around one by one, starting from the first row. A wastes a lot of time, some students still haven't finished their work when the teacher comes around again and again, and the whole process is very inefficient.
Teacher B is a ninja. Seeing that A's method doesn't work, he uses a shadow-clone technique so that several copies of himself help several students at the same time. The work still isn't finished in the end, and B collapses, exhausted.
Teacher C is shrewder. He tells the students to raise their hands when they finish their work, and he goes to guide whoever raises a hand. The students take the initiative to speak up, and the "concurrency" is untangled.
Teacher C is Nginx.
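In nginx this event model is chosen in the events block. A minimal sketch (use epoll applies on Linux 2.6+ kernels; the connection count is an arbitrary example):

```nginx
events {
    use epoll;                 # the IO-multiplexing model from the analogy above
    worker_connections 10240;  # maximum connections per worker process
}
```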
2. Lightweight
Few functional modules - Nginx keeps only the modules HTTP needs; everything else is added in the form of plug-ins.
Modular code - well suited to secondary development, e.g. Alibaba's Tengine.
3. CPU affinity
Bind CPU cores to the Nginx worker processes, fixing each worker process on one CPU. This reduces the cache misses caused by switching CPUs and thus improves performance.
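A sketch of such a binding, assuming a 4-core machine (the masks are illustrative and must match your core count):

```nginx
worker_processes 4;
# one bitmask per worker: worker N is pinned to core N
worker_cpu_affinity 0001 0010 0100 1000;
```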
III. Installation and directory
I used the well-known lnmp all-in-one package from https://lnmp.org - simple and convenient, recommended!
```
# run this line and follow the prompts to install nginx, php and mysql
# (see the lnmp official site for a more detailed walkthrough)
# default installation directory: /usr/local
wget -c http://soft.vpser.net/lnmp/lnmp1.4.tar.gz && tar zxf lnmp1.4.tar.gz && cd lnmp1.4 && ./install.sh lnmp
```

IV. Basic configuration

```
# open the main configuration file (path when installed via the lnmp environment)
vim /usr/local/nginx/conf/nginx.conf

user               # system user the nginx service runs as
worker_processes   # number of worker processes, usually equal to the number of CPU cores
error_log          # nginx error log
pid                # file holding the nginx pid at startup
events {
    worker_connections   # maximum number of connections per process
    use                  # kernel event model used by nginx
}
```
To use nginx's HTTP service, we configure any number of server blocks inside the http block of nginx.conf; each server corresponds to one virtual host or domain name.
```
http {
    ...                                   # http-level configuration
    server {
        listen 80;                        # listening port, described in more detail later
        server_name localhost;            # address / domain name
        location / {                      # home page path
            root /xxx/xxx/index.html;     # default directory
            index index.html index.htm;   # default files
        }
        error_page 500 502 503 504 /50x.html;  # redirect these status codes to 50x.html
        location = /50x.html {            # when 50x.html is accessed
            root /xxx/xxx/html;           # location of the 50x.html page
        }
    }
    server {
        ...
    }
}
```
A server block can contain multiple location blocks; we configure different access paths for different situations.
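For instance, a sketch with two locations in one server (the paths are hypothetical):

```nginx
server {
    listen 80;
    server_name localhost;
    location / {              # the site root
        root /opt/app/code;
        index index.html;
    }
    location /download/ {     # a second path served from another directory
        root /opt/app/files;
    }
}
```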
Let's take a look at the configuration details of http.
```
http {
    sendfile on;            # efficient file transfer mode, should be enabled
    keepalive_timeout 65;   # client connection keep-alive timeout
    log_format main 'XXX';  # define a log format with the code name main
    access_log /usr/local/access.log main;  # log path, using format code main
}
```

V. Modules
Check the modules nginx has been built with. There are too many modules to cover at length here; look up the ones you need yourself.
```
# uppercase -V shows the version plus all built-in modules; lowercase -v shows only the version
nginx -V
# check whether a configuration file has syntax errors
nginx -t -c /usr/local/nginx/conf/nginx.conf
```

Scenario implementation part

I. Static resource web service
1. Static resource type
Files that the server returns without running anything dynamically; in other words, the requested file can be found directly on the server.
Browser-side rendering: HTML,CSS,JS
Picture: JPEG,GIF,PNG
Video: FLV,MPEG
File: TXT and any other file offered for download
2. Static resource service scenario: CDN
What is a CDN? For example, suppose a Beijing user requests a file stored in a resource center in Xinjiang. Fetching it directly from Xinjiang means a long distance and a long delay. Using nginx static resources with back-to-origin distribution, the file is replicated to a resource center in Beijing, and user requests are dynamically routed there, minimizing transfer delay.
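A minimal sketch of such an edge node: serve the local copy, and fall back to the origin on a miss (the paths and origin host are hypothetical):

```nginx
location /static/ {
    root /opt/app/cache;       # local replica on the edge node
    try_files $uri @origin;    # fall back to origin when the file is missing
}
location @origin {
    proxy_pass http://origin.example.com;  # assumed origin resource center
}
```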
3. Nginx static resource configuration
Configuration contexts: http, server, location

```
# sendfile: efficient reading of files
http {
    sendfile on;
}

# with sendfile enabled, tcp_nopush improves the efficiency of network packet
# transmission: if you have ten packages, the courier normally delivers one at
# a time, making ten round trips; with tcp_nopush on, the courier waits until
# all ten packages are ready and delivers them together
http {
    sendfile on;
    tcp_nopush on;
}

# tcp_nodelay is the opposite of tcp_nopush and pursues real-time delivery,
# but it only takes effect on persistent (keep-alive) connections
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
}

# compress files before transfer (smaller resources, faster transmission)
location ~ .*\.(gif|jpg)$ {  # when accessing resources ending in gif or jpg
    gzip on;                 # enable gzip
    gzip_http_version 1.1;   # HTTP version used for transmission
    gzip_comp_level 2;       # compression level; higher compresses more but may cost server performance
    gzip_types text/plain application/javascript application/x-javascript text/javascript text/css application/xml application/xml+rss image/jpeg image/gif image/png;  # file types to compress
    root /opt/app/code;      # directory the files are looked up in
}

# serve pre-compressed files directly
# when the access path begins with /download, e.g. www.baidu.com/download/test.img,
# look for a test.img.gz file in /opt/app/code and return it to the front end
# as a browsable img file
location ~ ^/download {
    gzip_static on;
    tcp_nopush on;
    root /opt/app/code;
}
```

II. Browser cache
The caching mechanism defined by the HTTP protocol (e.g. Expires, Cache-Control, etc.)
Reduce server consumption and latency
1. Browser has no cache
Browser request -> no cache -> request the web server -> response -> render
During the rendering phase, the browser builds its cache according to the caching settings in the response.
2. The browser has a cache
Browser request -> cache present -> check whether the local cache has expired -> not expired -> render
If it has expired, request the web server again.
3. Syntax configuration
```
location ~ .*\.(html|htm)$ {
    expires 12h;   # cache for 12 hours
}
```
When the server responds with a static file, the response headers carry ETag and Last-Modified values. On its next request the browser sends these two tags back, and the server checks whether the file has changed. If it hasn't, the server returns only headers, again containing the ETag and Last-Modified values, with status code 304. The browser then knows the content is unchanged and uses its local cache directly. This process still hits the server, but very little content is transferred.
III. Cross-site access
Nginx cross-origin access settings
```
location ~ .*\.(html|htm)$ {
    add_header Access-Control-Allow-Origin *;
    add_header Access-Control-Allow-Methods GET,POST,PUT,DELETE,OPTIONS;
    # add_header Access-Control-Allow-Credentials true;  # allow cookies to cross domains
}
```
When the response sets Access-Control-Allow-Credentials to true, Access-Control-Allow-Origin cannot be *; it must name a specific domain.
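A sketch of a credentialed setup under that constraint (the domain is hypothetical):

```nginx
location ~ .*\.(html|htm)$ {
    add_header Access-Control-Allow-Origin https://app.example.com;  # a specific origin, not *
    add_header Access-Control-Allow-Credentials true;                # cookies may cross domains
}
```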
For more on cross-origin requests, see Laravel's cross-domain middleware, which implements this in code; the principle is the same as the nginx configuration.
IV. Hotlink protection
Prevent static resources on the server from being embedded by other websites.
The hotlink-protection method introduced here is a basic one; more in-depth methods will be introduced in a later article.
First, you need to understand one nginx variable:
$http_referer holds the address of the page the current request came from. In other words, when you visit the www.baidu.com home page for the first time, $http_referer is empty; but that page also needs a home-page image, and when the browser requests that image, $http_referer is www.baidu.com.
Then configure
```
location ~ .*\.(jpg|gif)$ {
    # valid_referers lists which $http_referer values we allow
    # none    - no $http_referer at all, e.g. the empty referer of a first visit
    # blocked - a $http_referer that is not a standard address, an abnormal domain, etc.
    # besides those, only this ip is allowed
    valid_referers none blocked 127.xxx.xxx.xx;
    if ($invalid_referer) {  # this variable is 1 when the referer is not allowed
        return 403;
    }
}
```

V. HTTP proxy service
Nginx can implement a variety of proxy methods
HTTP
ICMP, POP, IMAP
HTTPS
RTMP
1. Types of proxy
The difference lies in what is being proxied.
The object of the forward proxy is the client
The object of the reverse proxy is the server
2. Reverse proxy
Syntax: proxy_pass URL; Default: --; Context: location

```
# scenario: the server's port 80 is open, port 8080 is closed to the outside,
# and the client needs to reach the service on 8080
# when configuring proxy_pass forwarding: a trailing / after the URL means an
# absolute root path; without the /, it is a relative path and the matched
# part of the location is passed on to the upstream server
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8080/;
        proxy_redirect default;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;  # obtain the client's real IP
        proxy_connect_timeout 30;                 # connection timeout
        proxy_send_timeout 60;
        proxy_read_timeout 60;
        proxy_buffer_size 32k;
        proxy_buffering on;                       # enable buffering to reduce disk io
        proxy_buffers 4 128k;
        proxy_busy_buffers_size 256k;
        proxy_max_temp_file_size 256k;            # spill to a file when memory is exceeded
    }
}
```

Load balancing and caching services

I. Load balancing
Load balancing is implemented with the reverse proxy introduced in the previous chapter: client requests are distributed across a group of different servers through nginx (the reverse proxy).
This group of servers is called the service pool (upstream servers), and each server in the pool is called a unit; requests are rotated round-robin across the units of the pool to achieve load balancing.
Configuration syntax: upstream name { ... } Default: --; Context: http

```
upstream xxx {                # custom group name
    server x1.baidu.com;      # entries can be domain names
    server x2.baidu.com;
    server x3.baidu.com;
    # down         - do not participate in load balancing
    # weight=5     - weight; the higher it is, the more requests are assigned
    # backup       - reserved backup server
    # max_fails    - number of failures allowed
    # fail_timeout - how long the server is paused after max_fails is exceeded
    # max_conns    - limit on the maximum number of accepted connections
    # configure appropriate parameters depending on server performance
    # server 106.xx.xx.xxx;      - entries can be IPs
    # server 106.xx.xx.xx:8080;  - entries can carry a port number
    # server unix:/tmp/xxx;      - unix sockets are also supported
}
```
Suppose we have three servers with these assumed IP addresses: front-end load-balancing server A (127.0.0.1), backend server B (127.0.0.2), and backend server C (127.0.0.3).
Create a new file, proxy.conf, with the reverse-proxy configuration described in the previous chapter:
```
# proxy.conf
proxy_redirect default;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_connect_timeout 30;
proxy_send_timeout 60;
proxy_read_timeout 60;
proxy_buffer_size 32k;
proxy_buffering on;
proxy_buffers 4 128k;
proxy_busy_buffers_size 256k;

# configuration of load-balancing server A
http {
    ...
    upstream xxx {
        server 127.0.0.2;
        server 127.0.0.3;
    }
    server {
        listen 80;
        server_name localhost;
        location / {
            proxy_pass http://xxx;  # the custom upstream name
            include proxy.conf;
        }
    }
}

# configuration of server B and server C
server {
    listen 80;
    server_name localhost;
    location / {
        index index.html;
    }
}
```
Scheduling algorithm
Round-robin: requests are assigned to the backend servers one by one in order.
Weighted round-robin: the higher the weight value, the higher the probability of assignment.
ip_hash: each request is assigned according to a hash of the client IP, so requests from the same IP regularly reach the same backend server.
least_conn: requests are assigned to whichever server has the fewest connections.
url_hash: requests are assigned according to a hash of the accessed URL, so each URL is directed to the same backend server.
hash key: hash on a custom key.
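A small sketch combining two of these directives (the weight is an arbitrary example):

```nginx
upstream xxx {
    least_conn;                  # pick the server with the fewest active connections
    server 127.0.0.2 weight=3;   # receives roughly three times the share
    server 127.0.0.3;
}
```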
Ip_hash configuration
```
upstream xxx {
    ip_hash;
    server 127.0.0.2;
    server 127.0.0.3;
}
```
ip_hash has a flaw: when there is one more proxy layer in front of the server, nginx does not see the user's correct IP but the previous front-end server's IP. For this reason, nginx 1.7.2 introduced url_hash.
Url_hash configuration
```
upstream xxx {
    hash $request_uri;
    server 127.0.0.2;
    server 127.0.0.3;
}
```

II. Cache service
1. Cache types
Server cache: the cache is stored on a backend server, such as redis or memcache.
Proxy cache: the cache is stored on a proxy server or middleware, and its content is obtained from the back-end server, but saved locally
Client-side cache: cached in the browser
2. Nginx proxy cache
The client requests nginx, and nginx checks whether it holds the data in its local cache. If so, it returns the data to the client directly; if not, it requests the backend server first.
```
http {
    proxy_cache_path /var/www/cache   # cache location
        levels=1:2                    # directory levels
        keys_zone=test_cache:10m      # key zone name:size (1m can hold about 8000 keys)
        max_size=10g                  # maximum size of the cache directory
        inactive=60m                  # entries not accessed within 60 minutes are cleaned up
        use_temp_path=off;            # disable the temporary file directory; store at the cache path directly
    server {
        ...
        location / {
            proxy_cache test_cache;                  # enable the cache zone named in keys_zone
            proxy_cache_valid 200 304 12h;           # cache responses with status 200 or 304 for 12 hours
            proxy_cache_valid any 10m;               # cache all other statuses for 10 minutes
            proxy_cache_key $host$uri$is_args$args;  # cache key
            add_header Nginx-Cache "$upstream_cache_status";
        }
    }
}
```
When there are specific requests we do not want cached, add the following to the configuration above:
```
server {
    ...
    if ($request_uri ~ ^/(login|register)) {  # when the request path starts with login or register
        set $nocache 1;                       # set a custom variable to true
    }
    location / {
        proxy_no_cache $nocache $arg_nocache $arg_comment;
        proxy_no_cache $http_pragma $http_authorization;
    }
}
```
3. Sliced (sharded) requests
Earlier versions of nginx could not cache sharded requests for large files; the slice module has provided this capability since version 1.9.
The front end sends a request, and nginx checks the size of the requested file. If it exceeds the slice size we define, nginx splits it into multiple small sub-requests to the backend; to the front end it remains a single request, while each slice becomes a separate cache file.
Advantages: the data received by each sub-request forms a separate file, so if one request is interrupted, the others are unaffected. Without slicing, an interrupted request has to start again from scratch; with slicing enabled, only the small files not yet fetched are requested.
Disadvantages: when the file is very large or the slice is very small, it may cause file descriptors to be exhausted.
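A hedged sketch of the slice pattern, following the nginx slice-module documentation (the backend address, zone name and sizes are examples):

```nginx
location / {
    slice 1m;                                       # split large files into 1m sub-requests
    proxy_cache test_cache;
    proxy_cache_key $uri$is_args$args$slice_range;  # each slice gets its own cache entry
    proxy_set_header Range $slice_range;            # forward the slice range to the backend
    proxy_cache_valid 200 206 1h;                   # 206 is the partial-content status
    proxy_pass http://127.0.0.1:8080;
}
```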
Syntax: slice size; (set size to the size of each small file when a large file is requested) Default: slice 0; Context: http/server/location

FAQ

1. server_name priority when multiple virtual hosts share a name

```
# when virtual host domain names are identical, restarting nginx prints a
# warning ⚠️ but does not stop nginx from working
server {
    listen 80;
    server_name www.baidu.com;
    ...
}
server {
    listen 80;
    server_name www.baidu.com;
    ...
}
```

The configuration read first takes priority; when multiple files are pulled in through include, files sorted earlier are read first.

2. location matching priority

```
=     # exact match on ordinary characters
^~    # prefix match on ordinary characters
~ ~*  # regular-expression match
# once an exact match succeeds, all other matching stops
# when a regular-expression match succeeds, matching continues to look for a more precise match
```

3. Using try_files
Check the existence of files sequentially
```
location / {
    try_files $uri $uri/ /index.php;
}
# first check whether a file exists at $uri; if so, return it to the user directly
# if not, check whether the path $uri/ has a file
# if neither exists, hand the request to index.php

# for example:
location / {
    root /test/index.html;
    try_files $uri @test;
}
location @test {
    proxy_pass http://127.0.0.1:9090;  # assumed local service on port 9090
}
# when / is accessed, check whether the /test/index.html file exists
# if not, let the program on port 9090 handle the request
```

4. The difference between alias and root

```
location /request_path/image/ {
    root /local_path/image/;
}
# with root, the full request URI is appended to the path:
# a request for /request_path/image/a.png
# accesses the file /local_path/image/request_path/image/a.png

location /request_path/image/ {
    alias /local_path/image/;
}
# with alias, the location prefix is replaced:
# a request for /request_path/image/a.png
# accesses the file /local_path/image/a.png
```

5. Getting the user's real IP
When a request passes through multiple proxy servers, the user's IP will be overwritten by the proxy server IP
```
# on the first proxy server, set a variable holding the client address
set $x_real_ip $remote_addr;
# the last server can then read the user's real IP from $x_real_ip
```

6. Common nginx error codes

```
413 Request Entity Too Large  # the uploaded file is too large; adjust client_max_body_size
502 Bad Gateway               # the backend service did not respond
504 Gateway Time-out          # the backend service timed out while executing
```