

An Analysis of Entry-Level Nginx Knowledge Points

2025-01-29 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

Many inexperienced readers are unsure how to approach entry-level Nginx knowledge, so this article summarizes the key points and how to use them. I hope it helps you get started.

Nginx entry-level knowledge points

This article covers the basic concepts and knowledge of Nginx that developers should know, and lists some Nginx tutorials in the hope that they will be helpful to you.

I. Environment

Server version: CentOS 7.2

To make sure nothing strange happens during the learning stage, please confirm the following four points:

Confirm the system network

Confirm that yum is available

Confirm that iptables/firewalld is stopped

Confirm that SELinux is disabled

# View firewall status
systemctl status firewalld.service
# Stop the firewall (temporary)
systemctl stop firewalld.service
# View SELinux status
getenforce
# Temporarily disable SELinux
setenforce 0

Install some basic system tools; normally the system already includes them:

yum -y install gcc gcc-c++ autoconf pcre pcre-devel make automake
yum -y install wget httpd-tools vim

II. Basic concepts

2.1 What is Nginx?

Nginx is a high-performance HTTP and reverse proxy server, characterized by a small memory footprint and strong concurrency. Nginx was developed with performance as its most important consideration; it withstands high load well and has been reported to support up to 50,000 concurrent connections.

2.2 Forward proxy and reverse proxy

To make this easier to understand, let's start with the basics. Nginx is a high-performance reverse proxy server, so what is a reverse proxy?

A proxy is an intermediate layer of server between the client and the server: the proxy receives the client's request and forwards it to the server, then forwards the server's response back to the client.

Both forward proxies and reverse proxies implement the behavior described above.

If you are not familiar with the OSI seven-layer model and the TCP/IP four-layer model, you may want to review them first.

Forward proxy

A forward proxy is a server located between the client and the origin server. To get content from the origin server, the client sends a request to the proxy and specifies the target (the origin server); the proxy then relays the request to the origin server and returns the obtained content to the client.

The forward proxy serves the client: with a forward proxy, the client can access server resources it could not otherwise reach.

The forward proxy is transparent to the client but opaque to the server; that is, the server does not know whether it is receiving requests from a proxy or from the real client.
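As a rough sketch, nginx can act as a simple forward proxy for plain HTTP. Note that nginx is not primarily designed as a forward proxy (HTTPS tunneling needs a third-party module such as ngx_http_proxy_connect_module); the listening port and resolver address below are illustrative assumptions:

```nginx
server {
    listen 8080;
    # a resolver is required because the target host comes from the client request
    resolver 8.8.8.8;
    location / {
        # forward to whatever host the client asked for
        proxy_pass http://$http_host$request_uri;
    }
}
```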

Reverse proxy

A reverse proxy (Reverse Proxy) accepts connection requests from the internet, forwards them to servers on the internal network, and returns the results obtained from those servers to the client that requested the connection. In this case the proxy server acts as a reverse proxy server.

A reverse proxy serves the server: it can receive requests from clients on the server's behalf, forward those requests, perform load balancing, and so on.

The reverse proxy is transparent to the server but opaque to the client; that is, the client does not know it is accessing a proxy server, while the server knows the reverse proxy is serving it.

2.3 Load balancing

A typical load-balancing configuration looks like this:

upstream balanceServer {
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
    server 10.1.22.35:12345;
}
server {
    server_name fe.server.com;
    listen 80;
    location /api {
        proxy_pass http://balanceServer;
    }
}

The configuration above only specifies the list of servers nginx forwards to; it does not specify an allocation policy.

By default the round-robin (polling) policy is used, which distributes client requests to the servers in turn. This strategy works fine normally, but if one of the servers is under too much pressure and responds slowly, it affects all users assigned to that server.

The load balancing scheduling algorithms supported by Nginx are as follows:

Weighted round-robin (default, commonly used): incoming requests are assigned to different backend servers according to weight. Even if a backend server goes down during use, Nginx automatically removes it from the queue, and request handling is unaffected. You can set a weight value for each backend server to adjust the rate at which requests are allocated to it: the larger the weight, the higher the probability of receiving a request. The weight is mainly tuned to match the hardware configuration of each backend server in the actual working environment.

ip_hash (commonly used): each request is matched according to the hash of the initiating client's IP. Under this algorithm, a client with a fixed IP address always reaches the same backend server, which to some extent solves the session-sharing problem in clustered deployments.
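As a hedged sketch of these two policies (the addresses reuse the earlier example upstream; the weight values are illustrative):

```nginx
# weighted round-robin: the first server receives roughly 5x the requests of the second
upstream balanceServer {
    server 10.1.22.33:12345 weight=5;
    server 10.1.22.34:12345 weight=1;
}

# ip_hash: requests from the same client IP always reach the same backend
upstream balanceServer2 {
    ip_hash;
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
}
```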

fair: an intelligent scheduling algorithm that dynamically allocates requests based on each backend server's time from request to response: servers with short response times and high efficiency have a higher probability of being assigned requests, while servers with long response times and low efficiency receive fewer. It combines the advantages of the two algorithms above. Note, however, that Nginx does not support the fair algorithm by default; to use it, install the upstream_fair module.

url_hash: requests are assigned according to the hash of the accessed URL, so each URL is always directed to the same backend server, which improves cache efficiency when Nginx is used as a static server. Note again that Nginx does not support this scheduling algorithm by default; you need to install Nginx's hash package to use it.
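Sketches of these last two policies, assuming the required modules are available (fair needs the third-party upstream_fair module; on nginx 1.7.2 and later the built-in hash directive can play the url_hash role):

```nginx
# fair: requires the third-party upstream_fair module
upstream fairServer {
    fair;
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
}

# url_hash: on nginx >= 1.7.2 the built-in hash directive can be used instead
upstream urlHashServer {
    hash $request_uri;
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
}
```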

2.4 Static and dynamic separation

To speed up server response, dynamic pages and static pages can be served by different servers, which speeds up parsing and reduces the pressure on a single server.
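A common sketch of this separation, assuming static assets live under /data/static and dynamic requests go to an application server on port 8080 (both the path and the port are hypothetical):

```nginx
server {
    listen 80;
    # static resources are served directly by nginx
    location ~ \.(html|css|js|png|jpg|gif)$ {
        root /data/static;
    }
    # everything else is forwarded to the dynamic application server
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```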

2.5 Common Nginx commands

# Quickly shut down Nginx; may not save related information; terminates the web service abruptly
nginx -s stop
# Gracefully shut down Nginx; saves related information; ends the web service in an orderly way
nginx -s quit
# Reload the configuration after changing Nginx-related configuration
nginx -s reload
# Reopen the log files
nginx -s reopen
# Specify a configuration file for Nginx to replace the default one
nginx -c filename
# Do not run; only test the configuration file. Nginx checks the syntax of the
# configuration file and tries to open the files referenced in it
nginx -t
# Show the nginx version
nginx -v
# Show the nginx version, compiler version, and configure parameters
nginx -V
# Display the nginx configure parameters one per line
nginx -V 2>&1 | xargs -n1
nginx -V 2>&1 | xargs -n1 | grep lua

III. Why choose Nginx?

Nginx is a free, open-source, high-performance HTTP server and reverse proxy server; it is also an IMAP/POP3/SMTP proxy server. Nginx can be used as an HTTP server to publish websites, and as a reverse proxy to implement load balancing. According to the Nginx website, its features include:

HTTP and HTTPS (TLS / SSL / SNI)

Ultra-fast Web server for static content

FastCGI, uWSGI, and SCGI for dynamic content

Accelerated Web proxy with load balancing and caching

Continuous real-time binary upgrade and configuration

Compression and content filter

Virtual host

Media streaming for FLV and MP4

Bandwidth and connection policy

Comprehensive access control

Custom log

Embedded script

Mail agent for SMTP / IMAP / POP3 with TLS

Logical, flexible, extensible configuration

Nginx runs on Linux, FreeBSD, macOS, Solaris, and Windows. Its advantages include:

1. IO multiplexing (epoll)

How to understand this? An example: three teachers A, B, and C each face the same problem: helping a class of students finish their classroom homework. Teacher A answers questions by going through the rows in turn; A wastes a lot of time, some students still haven't finished their homework when the teacher comes around again, and progress is very slow. Teacher B is a ninja: seeing that A's method doesn't work, he uses a shadow-clone technique so several copies of himself help several students at once; in the end he still doesn't finish, and exhausts himself. Teacher C is shrewder: he tells the students to raise their hands when they finish their homework, and he goes over to whoever raises a hand, letting the students speak up on their own initiative, thereby handling "concurrency" one ready student at a time. Teacher C is Nginx.
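The hand-raising model above is exactly IO multiplexing: one thread watches many connections and serves only the ones that are ready. A minimal sketch in Python's standard selectors module (which picks epoll on Linux; the loopback sockets and the "done" message are illustrative, not part of Nginx):

```python
import selectors
import socket

# One "teacher" (selector) watches many "students" (sockets).
sel = selectors.DefaultSelector()

# A listening socket plus three client connections.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)

clients = [socket.create_connection(server.getsockname()) for _ in range(3)]
for c in clients:
    c.sendall(b"done")  # a student raising a hand

served = 0
while served < 3:
    # select() only reports sockets that are actually ready
    for key, _ in sel.select(timeout=1):
        if key.fileobj is server:
            conn, _ = server.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            data = key.fileobj.recv(16)
            if data:
                served += 1  # help only the student who spoke up
            sel.unregister(key.fileobj)
            key.fileobj.close()

for c in clients:
    c.close()
server.close()
sel.close()
print(served)  # all three students were served by a single thread
```

The point of the design is that the single loop never blocks on an unready connection; it is woken only for sockets with pending work, which is how one Nginx worker can handle thousands of connections.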

2. Lightweight

Fewer functional modules: Nginx keeps only the modules required for HTTP and leaves the rest to plugins. Modular code: well suited to secondary development, e.g. Alibaba's Tengine.

3. CPU affinity

Binding CPU cores to Nginx worker processes: fixing each worker process on one CPU reduces the cache misses caused by CPU switching, thereby improving performance.
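A minimal sketch of this using the worker_cpu_affinity directive (available on Linux and FreeBSD; the four-core layout below is an assumption, with each bitmask pinning one worker to one core):

```nginx
worker_processes 4;
worker_cpu_affinity 0001 0010 0100 1000;  # worker N runs only on core N
```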

IV. Installation of Nginx

1. Local installation

Windows: go to the official download page https://nginx.org/en/download... and download the corresponding version. Mac:

$ brew install nginx

2.Linux installation:

Taking CentOS as an example, there are two installation methods (method 1 is recommended). 1) Install via the rpm mirror source

$ rpm -ivh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
$ yum install -y nginx

2) Full installation from source with dependencies

Install the nginx dependency libraries pcre and zlib:

$ yum install pcre pcre-devel
$ yum install zlib zlib-devel

If necessary, you can install the C++ compilation environment and openssl

$ yum install gcc-c++
$ yum install openssl openssl-devel

Download and unpack nginx:

$ wget -c https://nginx.org/download/nginx-1.16.0.tar.gz
$ tar -zxvf nginx-1.16.0.tar.gz

Compile and install (installs to /usr/local/nginx by default):

$ cd nginx-1.16.0
$ ./configure
$ make && make install

Create a symlink

$ ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/nginx
$ nginx -v

V. Nginx configuration

# Open the main configuration file (if you installed via the lnmp environment)
vim /usr/local/nginx/conf/nginx.conf

user              # the system user the nginx service runs as
worker_processes  # number of worker processes, generally equal to the number of CPU cores
error_log         # nginx error log
pid               # pid file written when nginx starts
events {
    worker_connections  # maximum number of connections allowed per process
    use                 # kernel event model used by nginx
}

When using nginx's http service, we configure multiple server blocks in the http section of the configuration file nginx.conf; each server corresponds to one virtual host or domain name.

http {
    ...   # the http configuration items are introduced in more detail later

    server {
        listen 80;              # listening port
        server_name localhost;  # address
        location / {            # home page path
            root /xxx/xxx/html;            # default directory
            index index.html index.htm;    # default files
        }
        error_page 500 502 503 504 /50x.html;  # redirect these status codes to 50x.html
        location = /50x.html {
            root /xxx/xxx/html;  # location of the 50x.html page
        }
    }

    server {
        ...
    }
}

Multiple location blocks can appear in one server; they let us configure different access paths for different situations. Now let's look at the configuration details of http.

http {
    sendfile on;             # efficient file transfer mode; must be turned on
    keepalive_timeout 65;    # client/server request timeout
    log_format main 'XXX';   # define a log format whose code name is main
    access_log /usr/local/access.log main;  # log save path, using format code main
}

Nginx configuration also provides a number of built-in global variables that can be used anywhere in the configuration.
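A few commonly used examples (these are standard nginx variables; the /vars location is just a hypothetical way to echo them):

```nginx
location /vars {
    # $host:        host from the request line or Host header
    # $remote_addr: client IP address
    # $request_uri: full original request URI, including arguments
    # $args:        the query-string arguments
    # $scheme:      http or https
    return 200 "$host $remote_addr $request_uri $args $scheme\n";
}
```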

VI. Nginx in practice

Combining configuration with hands-on practice in real tools makes things easier to understand.

HTTP reverse proxy

Let's first achieve a small goal: ignore the complex configuration and just complete an HTTP reverse proxy.

The nginx.conf configuration file is as follows:

Note: conf/nginx.conf is the default configuration file for nginx. You can also use nginx -c to specify your own configuration file.

# Run user
#user somebody;

# Number of worker processes, usually set equal to the number of CPUs
worker_processes 1;

# Global error logs
error_log D:/Tools/nginx-1.10.1/logs/error.log;
error_log D:/Tools/nginx-1.10.1/logs/notice.log notice;
error_log D:/Tools/nginx-1.10.1/logs/info.log info;

# PID file recording the process ID of the currently running nginx
pid D:/Tools/nginx-1.10.1/logs/nginx.pid;

# Working mode and maximum number of connections
events {
    worker_connections 1024;  # maximum number of concurrent connections per worker process
}

# Configure the http server and use its reverse proxy function to provide load balancing support
http {
    # Set mime types (mail support types); the types are defined in the mime.types file
    include       D:/Tools/nginx-1.10.1/conf/mime.types;
    default_type  application/octet-stream;

    # Set the log format
    log_format main '[$remote_addr] - [$remote_user] [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log D:/Tools/nginx-1.10.1/logs/access.log main;
    rewrite_log on;

    # The sendfile directive specifies whether nginx calls the sendfile function
    # (zero-copy mode) to output files. For ordinary applications it must be on;
    # for heavy disk-IO applications such as downloads it can be set to off to
    # balance disk and network processing speed and reduce system load.
    sendfile on;
    #tcp_nopush on;

    # Connection timeout
    keepalive_timeout 120;

    # gzip compression switch
    #gzip on;

    # Set the actual server list
    upstream zp_server1 {
        server 127.0.0.1:8089;
    }

    # HTTP server
    server {
        # Listen on port 80, the well-known port for the HTTP protocol
        listen 80;
        # Access using www.helloworld.com
        server_name www.helloworld.com;
        # Home page
        index index.html;
        # Points to the webapp directory
        root D:\01_Workspace\Project\github\zp\SpringNotes\spring-security\spring-shiro\src\main\webapp;
        # Encoding format
        charset utf-8;
        # Proxy configuration parameters
        proxy_connect_timeout 180;
        proxy_send_timeout 180;
        proxy_read_timeout 180;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarder-For $remote_addr;
        # Reverse proxy path (bound to the upstream above); set the mapping path
        location / {
            proxy_pass http://zp_server1;
        }
        # Static files are handled by nginx
        location ~ ^/(images|javascript|js|flash|media|static)/ {
            root D:\01_Workspace\Project\github\zp\SpringNotes\spring-security\spring-shiro\src\main\webapp\views;
            # Expire after 30 days. Static files are not updated often, so the
            # expiration can be set larger, or smaller if they change frequently.
            expires 30d;
        }
        # Set the address for viewing Nginx status
        location /NginxStatus {
            stub_status on;
            access_log on;
            auth_basic "NginxStatus";
            auth_basic_user_file conf/htpasswd;
        }
        # Forbid access to .htxxx files
        location ~ /\.ht {
            deny all;
        }
        # Error handling pages (optional)
        #error_page 404 /404.html;
        #error_page 500 502 503 504 /50x.html;
        #location = /50x.html {
        #    root html;
        #}
    }
}

All right, let's try it:

Start the webapp; note that the port it binds must match the port set in nginx's upstream.

Modify hosts: add a DNS record to the hosts file in the C:\Windows\System32\drivers\etc directory

127.0.0.1 www.helloworld.com

Run the startup.bat command from the previous article, then visit http://www.helloworld.com in a browser; if nothing goes wrong, you can already access it.

After reading the above, have you mastered the entry-level knowledge points of Nginx? If you want to learn more skills or go deeper, you are welcome to follow the industry information channel. Thank you for reading!
