Deploying static pages with Nginx

2025-02-24 Update From: SLTechnology News&Howtos


Nginx introduction

Nginx (pronounced "engine X") is a very lightweight, high-performance HTTP and reverse proxy server, as well as an IMAP/POP3/SMTP proxy server. It was written by the Russian developer Igor Sysoev for Rambler.ru, Russia's second most visited site, where it ran in production for more than two and a half years. The project is released under a BSD-like license.

English homepage: http://nginx.net.

As an HTTP server, Nginx has the following basic features:

- Serves static files, index files, and automatic directory indexing; supports file descriptor caching.
- Accelerated reverse proxying without caching, with simple load balancing and fault tolerance.
- FastCGI support, with simple load balancing and fault tolerance.
- Modular architecture, including filters such as gzipping, byte ranges, chunked responses, and SSI. If FastCGI or another proxy server handles multiple SSI includes on a single page, they can be processed in parallel without waiting for one another.
- SSL and TLS SNI support.

In short, Nginx's advantages are that it is lightweight, high-performance, and highly concurrent, and deploying static pages with it is quite convenient.

This performance comes from Nginx's process model. After Nginx starts, there is one master process and several worker processes. The master process mainly manages the workers: it receives signals from the outside world, forwards signals to each worker, monitors their running state, and automatically starts a new worker when one exits abnormally. Basic network events are handled in the worker processes. The workers are peers: they compete equally for client requests and are independent of one another. A request is processed entirely within a single worker; one worker cannot handle a request belonging to another. The number of workers is configurable and is usually set to the number of CPU cores on the machine, which is tied to Nginx's process model and event-handling model.
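As a minimal sketch, the worker count is set at the top level of nginx.conf; in newer Nginx versions the special value auto matches the number of CPU cores:

```nginx
worker_processes auto;       # one worker per CPU core (or set an explicit number, e.g. 4)

events {
    worker_connections 768;  # connections each worker can handle concurrently
}
```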

Why choose Nginx

When it comes to Nginx, the first things that come to mind are probably reverse proxying and load balancing. So what is a reverse proxy, and what is load balancing?

Reverse proxy

First, look at what a forward proxy is. A proxy, also called a network proxy, is a special network service. Generally speaking, it acts as a middleman between the client and the target server: it receives requests from the client, makes the corresponding requests to the target server on the client's behalf, and returns the resources it obtains to the client. A proxy server can also cache the target server's resources locally; if a resource the client wants is already in the proxy's cache, the proxy returns the cached copy directly instead of contacting the target server again.

Proxy servers are in fact very common. For example, some users behind the GFW use foreign servers as proxies so that domain names resolve correctly and blocked sites become reachable. Proxies can also hide a real IP address: the well-known Tor (The Onion Router), for instance, chains multiple proxies together with encryption techniques to communicate anonymously.

A reverse proxy, by contrast, acts as a proxy on the server side rather than the client side. That is, a forward proxy relays internal users' connection requests out to servers on the Internet, while a reverse proxy accepts connection requests from the Internet, forwards them to servers on the internal network, and returns the results obtained from those servers to the clients requesting the connection. To the outside world, the proxy server appears to be the server itself.
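As an illustration, a minimal reverse-proxy server block might look like this; the backend address 127.0.0.1:8080 is a hypothetical internal application server:

```nginx
server {
    listen 80;
    server_name www.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;         # forward the request to the internal backend
        proxy_set_header Host $host;              # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;  # tell the backend the real client IP
    }
}
```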

Load balancing

Reverse-proxy load balancing dynamically forwards connection requests from the Internet to multiple servers on the internal network, in reverse-proxy fashion, so that the load is spread across them.
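For example, a sketch of reverse-proxy load balancing in Nginx uses an upstream block; the internal addresses below are hypothetical:

```nginx
upstream backend {
    server 192.168.0.10:8080 weight=2;  # receives roughly twice as many requests
    server 192.168.0.11:8080;
    server 192.168.0.12:8080 backup;    # used only when the other servers are down
}

server {
    listen 80;
    server_name www.example.com;

    location / {
        proxy_pass http://backend;      # requests are distributed across the upstream servers
    }
}
```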

What a coincidence. Nginx did it all.

As an excellent proxy server, Nginx naturally provides both reverse proxying and load balancing. For a more detailed look at these features and how to use them, see the Nginx Getting Started Guide referenced at the end of this article.

Nginx installation

I use a Tencent Cloud server running Ubuntu Server 14.04.1 LTS 32-bit.

$ sudo apt-get install nginx

The Mac OS system refers to this article: Installing Nginx in Mac OS X

Nginx configuration

Simply edit Nginx's configuration file so that these settings take effect when Nginx starts. This is also the focus of this article.

The Nginx configuration system consists of a main configuration file plus auxiliary configuration files, all of which are plain text. In general, you only need to edit the main file; on my server it is at /etc/nginx/nginx.conf.

Instruction context

The configuration in nginx.conf is grouped by logical meaning into multiple scopes, i.e. configuration instruction contexts. Different contexts contain one or more configuration items.

Each configuration item consists of a directive and its parameters, forming a key-value pair, optionally followed by a comment; this is easy to read.

The structure and general configuration of the general configuration file are as follows:

    user www-data;        # user (and group) that runs the nginx workers
    worker_processes 1;   # number of worker processes; usually one per CPU core
    pid /run/nginx.pid;   # pid file path

    events {
        worker_connections 768;  # each worker can handle 768 connections at the same time
        # multi_accept on;
    }

    # Parameters for serving http; the defaults are generally fine,
    # and the main configuration lives in the server context inside http
    http {
        ##
        # Basic Settings
        ##
        # ... generally the default configuration

        ##
        # Logging Settings
        ##
        # ... generally the default configuration

        ##
        # Gzip Settings
        ##
        # ... generally the default configuration

        ##
        # nginx-naxsi config
        ##
        # ... generally the default configuration

        ##
        # nginx-passenger config
        ##
        # ... generally the default configuration

        ##
        # Virtual Host Configs
        ##
        # Default configuration omitted here. Add server contexts here to
        # configure a domain name; one server block generally corresponds
        # to one domain
        server {
            listen 80;             # listen on port 80, on all local IPs
            server_name _;         # domain name, e.g. www.example.com; "_" matches everything
            root /home/filename/;  # site root directory
            location / {           # multiple location blocks can be used to configure routes
                try_files index.html =404;
            }
        }
    }

    # The mail context is not needed, so it is commented out
    #mail {
    #    # See sample authentication script at:
    #    # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
    #
    #    # auth_http localhost/auth.php;
    #
    #    # pop3_capabilities "TOP" "USER";
    #    # imap_capabilities "IMAP4rev1" "UIDPLUS";
    #
    #    server {
    #        listen localhost:110;
    #        protocol pop3;
    #        proxy on;
    #    }
    #
    #    server {
    #        listen localhost:143;
    #        protocol imap;
    #        proxy on;
    #    }
    #}

What needs to be noticed here is the server context in the http context.

    server {
        listen 80;             # listen on port 80
        server_name _;         # domain name, e.g. www.example.com; "_" matches everything
        root /home/filename/;  # site root directory
        location / {           # multiple location blocks can be used to configure routes
            try_files index.html =404;
        }
    }

The root directive here is best written outside the location block, so that css and js files can still be found: loading css and js does not go through the matched route automatically, and placing root inside a location would require extra configuration to serve those resources. Keeping root at the server level is the most convenient setup for static page deployment.

To explain root further: suppose the server has a /home/zhihu/ directory containing index.html, css/ and img/. Then root /home/zhihu/; tells the server to look under /home/zhihu/ when loading resources.

Second, several kinds of matching can follow location, each with its own priority. Here is an example of an exact match:

    server {
        listen 80;
        server_name _;
        root /home/zhihu/;
        location = /zhihu {
            rewrite ^/.* / break;
            try_files index.html =404;
        }
    }

At this point, visiting www.example.com/zhihu loads the index.html under /home/zhihu/. Because the location is an exact match, only the route www.example.com/zhihu gets the correct response, and the rewrite regex replaces /zhihu with the original / so the file can be resolved. For more on using the location field, see the resources at the end of the article.
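For reference, here is a sketch of the standard location matching priorities, from highest to lowest; the paths are placeholders:

```nginx
server {
    listen 80;
    server_name _;

    location = /zhihu { }        # 1. exact match
    location ^~ /static/ { }     # 2. prefix match that suppresses regex checks
    location ~ \.(gif|jpg)$ { }  # 3. case-sensitive regular expression
    location ~* \.png$ { }       # 4. case-insensitive regular expression
    location / { }               # 5. ordinary prefix match (longest prefix wins)
}
```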

The simplest way to configure static pages with nginx

There are many configuration directives above, but here is the configuration I find most convenient. (Thanks to guyskk for his advice on this.)

First, create a directory, for example /home/ubuntu/website, and place the static page files you need to deploy under this website folder. For example, I have google, zhihu and fenghuang folders under website; the server field is then configured as follows:

    server {
        listen 80;
        server_name _;
        root /home/ubuntu/website;
        index index.html;
    }
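To make the layout concrete, here is a sketch that builds such a directory tree; it uses /tmp/website instead of /home/ubuntu/website purely for illustration:

```shell
# Build the hypothetical site layout: one index.html per page folder,
# plus a home page at the root of the tree.
mkdir -p /tmp/website/google /tmp/website/zhihu /tmp/website/fenghuang
echo '<h1>home</h1>'      > /tmp/website/index.html
echo '<h1>google</h1>'    > /tmp/website/google/index.html
echo '<h1>zhihu</h1>'     > /tmp/website/zhihu/index.html
echo '<h1>fenghuang</h1>' > /tmp/website/fenghuang/index.html
find /tmp/website -name index.html | sort  # lists the four index.html files
```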

The static page in each folder here is named index.html. I used to have a bad habit of naming pages after their folder, e.g. zhihu.html for the zhihu page, but from a front-end point of view that does not follow convention.

In this way, when you visit www.showzeng.cn/google/, nginx looks for index.html in the google folder under the website directory and returns the Google page; likewise, visiting www.showzeng.cn/zhihu/ returns the index.html under the zhihu folder.

If you add your site's home page index.html in the same directory as the zhihu, google and fenghuang folders, it is returned when you visit www.example.com.

The only drawback is that the URL automatically gains a trailing /: www.showzeng.cn/zhihu becomes www.showzeng.cn/zhihu/. If you debug with F12 in the browser, you will see that www.showzeng.cn/zhihu returns a 301 status code: because index.html lives under the zhihu/ folder, the lookup redirects to www.showzeng.cn/zhihu/. At first I couldn't accept it, since that trailing / looked too uncomfortable, but as soon as I remembered the alternative was a location block for every page, I accepted it immediately. I don't know what you think, but I accepted it.

Starting Nginx

$ sudo nginx -s reload

The reload method reloads the configuration file without restarting the service, so clients notice no interruption and the switch is smooth. Of course, you can also simply restart the nginx service:

$ sudo service nginx restart

Stopping Nginx

$ sudo nginx -s stop

References

Nginx Getting Started Guide

Nginx for Developers: An Introduction

Nginx configuration location Summary and rewrite Rule Writing

That is the whole content of this article; I hope it is helpful to your study.
