2025-01-18 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
This article explains how to install and configure Nginx. The content is concise and easy to follow; I hope you get something useful out of the detailed introduction below.
Overview of nginx
Nginx is a free, open-source, high-performance HTTP server and reverse proxy server; it is also an IMAP/POP3/SMTP proxy server. Nginx can be used as an HTTP server to publish websites, and as a reverse proxy to implement load balancing.
Here is a brief introduction to nginx from three aspects.
Reverse proxy
Load balancing
Characteristics of nginx
1. Reverse proxy
About proxies
When talking about proxies, we first need to clarify a concept: a proxy is an intermediary, a channel.
Two roles are involved: the proxy role and the target role. The process in which a client reaches the target role through the proxy to complete some task is the proxying process. It is like a shop in everyday life: a customer buys a pair of shoes at an adidas store. The store is the proxy, the target role is the adidas manufacturer, and the client is the customer.
Forward proxy
Before we talk about reverse proxies, let's look at forward proxies. The forward proxy is the proxy model people encounter most often. We will explain what a forward proxy is from two angles: software and everyday life.
In today's network environment, if we need to visit certain foreign websites for technical reasons, you will find there is no way to reach them directly through a browser. In that case, people typically work around the restriction with a proxy: find a proxy server that can access those foreign sites, send our request to the proxy server, let the proxy server visit the foreign website, and have it pass the data back to us.
The defining feature of the forward proxy is that the client knows exactly which server it wants to reach; the server only knows which proxy server the request came from, not which specific client sent it. The forward proxy mode thus shields or hides the real client's information.
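To make this concrete, a forward proxy can even be sketched in nginx itself, although nginx is not primarily designed as a forward proxy and this minimal sketch handles plain HTTP only (no HTTPS CONNECT tunneling). The listen port and DNS resolver address below are assumed example values:

```nginx
# Hypothetical minimal HTTP forward proxy (plain HTTP only).
server {
    listen 8080;          # port that clients point their proxy setting at
    resolver 8.8.8.8;     # DNS server used to resolve the target hosts

    location / {
        # Forward the request to whatever host the client asked for.
        proxy_pass http://$http_host$request_uri;
    }
}
```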
Reverse proxy
Now that we understand forward proxies, let's look at how a reverse proxy is handled. Suppose the number of visitors connecting to a website at the same time explodes; a single server is far from able to meet the demand. This is where a familiar term appears: distributed deployment, that is, deploying multiple servers to remove the limit on the number of concurrent visitors. Most of the functionality of a large e-commerce site such as Taobao is realized directly with nginx reverse proxying, and after packaging nginx with other components, it even got a high-end name: Tengine. Interested readers can visit Tengine's official website for details: http://tengine.taobao.org/.
So how does the reverse proxy implement distributed cluster operation? Let's look at a schematic diagram:
Through the illustration above, you can see clearly that requests sent by multiple clients are received by the nginx server and distributed, according to certain rules, to back-end business servers for processing. Here the source of the request (the client) is known, but which server actually handles the request is not. Nginx plays the role of a reverse proxy.
A reverse proxy is mainly used for the distributed deployment of server clusters; it hides the servers' information.
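A minimal reverse-proxy sketch makes this role concrete; the backend address 127.0.0.1:8000 and the domain name are assumed example values:

```nginx
# nginx receives client requests on port 80 and forwards them to a
# backend that the client never sees directly.
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8000;        # the hidden backend server
        proxy_set_header Host $host;             # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr; # pass the real client IP along
    }
}
```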
Project scene
In practice, the forward proxy and the reverse proxy are likely to coexist in one application scenario: the forward proxy forwards the client's requests to the target server, and that target server is itself a reverse proxy server fronting multiple real business servers. The specific topology diagram is as follows:
2. Load balancing
We have clarified the concept of a proxy server. So, when nginx plays the role of a reverse proxy server, by what rules does it distribute requests? And for different application scenarios, can the distribution rules be controlled?
The requests sent by clients and received by the nginx reverse proxy server are what we call the load.
The rule by which those requests are distributed to different servers is the balancing rule.
So the process of distributing requests received by the server according to some rule is called load balancing.
In real projects, load balancing comes in two flavors: hardware load balancing and software load balancing. Hardware load balancing, also known as "hard load" (for example F5), is relatively expensive, but stability and data security are very well guaranteed; companies such as China Mobile and China Unicom choose hard load balancing. For cost reasons, more companies choose software load balancing, which implements a message-distribution mechanism using existing technology on ordinary host hardware.
The load balancing scheduling algorithms supported by nginx are as follows:
Weighted round robin (default): received requests are assigned to backend servers one by one in order. If a backend server goes down during use, nginx automatically removes it from the queue and request handling is unaffected. You can set a weight value for each backend server to adjust the share of requests it receives: the larger the weight, the higher the probability of being assigned a request. Weights are mainly tuned to match the different hardware configurations of backend servers in the production environment.
ip_hash: each request is assigned according to the hash of the originating client's IP. Under this algorithm, a client with a fixed IP always reaches the same backend server, which to some extent solves the session-sharing problem in a clustered deployment.
fair: an intelligent scheduling algorithm that dynamically allocates requests according to each backend server's time from request to response. Servers that respond quickly and efficiently receive more requests; slower, less efficient servers receive fewer, combining the advantages of the previous two algorithms. Note, however, that nginx does not support the fair algorithm by default; to use it, install the upstream_fair module.
url_hash: requests are assigned according to the hash of the requested URL, so each URL is always directed to the same backend server, which improves cache efficiency when nginx fronts static or cache servers. Note again that older nginx versions do not support this algorithm out of the box; to use it there, install nginx's hash module.
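The scheduling algorithms above map onto upstream blocks roughly as follows. All server addresses are example values, and note that on modern nginx the built-in hash directive covers the url_hash case without a third-party module:

```nginx
# Weighted round robin (the default strategy):
upstream backend_weighted {
    server 192.168.1.101:8000 weight=3;  # receives roughly 3x the requests
    server 192.168.1.102:8000 weight=1;
}

# ip_hash: a fixed client IP always reaches the same backend:
upstream backend_iphash {
    ip_hash;
    server 192.168.1.101:8000;
    server 192.168.1.102:8000;
}

# url_hash equivalent: the same URL always reaches the same backend:
upstream backend_urlhash {
    hash $request_uri consistent;
    server 192.168.1.101:8000;
    server 192.168.1.102:8000;
}
```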
Nginx installation
1. Windows installation
Download address of the official website: https://nginx.org/en/download.html
As shown in the figure below, download the corresponding version of the nginx package and unzip it to the folder where the software is stored on your computer.
After the decompression is completed, the file directory structure is as follows:
Start nginx
1) Double-click nginx.exe in this directory to start the nginx server directly.
2) Or open a command prompt, change into the folder, and run the nginx command to start the server:
D:/resp_application/nginx-1.13.5> nginx
Visit nginx
Open the browser, enter the address: http://localhost, and visit the page. The following page indicates that the visit is successful.
Stop nginx
On the command line, go to the nginx root directory and execute the following command to stop the server:
# forcibly stop the nginx server; any unprocessed data is discarded
D:/resp_application/nginx-1.13.5> nginx -s stop

# stop the nginx server gracefully; unprocessed data is handled before stopping
D:/resp_application/nginx-1.13.5> nginx -s quit
2. Ubuntu installation
Install it like any normal software, directly with the following command:
$ sudo apt-get install nginx
After installation, the nginx command lives in the /usr/sbin/ directory, and /etc/nginx/ holds all of nginx's configuration files, which are used to configure the nginx server, load balancing, and other information.
Check to see if the nginx process starts
$ ps -ef | grep nginx
Nginx automatically creates a corresponding number of processes based on the number of CPU cores on the host (the ubuntu host here has 2 cores and 4 threads).
> Note: four processes actually start here, because nginx startup brings along a daemon to protect the business processes from abnormal termination; if the daemon finds that a business process has died, it restarts it automatically. The daemon is generally called the master process, and the business processes are called worker processes.
Start the nginx server command
Executing nginx directly will start the server according to the default configuration file.
$ nginx
Stop nginx service command
Like the execution process of the windows system, there are two ways to stop
$ nginx -s stop
or
$ nginx -s quit
Restart loading
You can also use the commands nginx -s reopen and nginx -s reload to reopen the log files or reload the configuration files.
3. Mac os installation
It is possible to install nginx directly through brew, or download a tar.gz package.
Install directly through brew
$ brew install nginx
After installation, the subsequent operations (starting the server, viewing processes, stopping, restarting, and reloading configuration files) use the same commands as above.
Nginx configuration
Nginx is not only a very powerful web server and reverse proxy server; it can also act as a mail proxy server, among other things.
The three core functions most frequently used in the project are reverse proxy, load balancing and static server.
The use of these three different functions is closely related to the configuration of nginx. The configuration information of nginx server is mainly concentrated in the configuration file nginx.conf, and all the configurable options are roughly divided into the following sections.
main                          # global configuration
events {                      # nginx working-mode configuration
    ...
}
http {                        # http settings
    ...
    server {                  # server host configuration
        ...
        location path {       # routing configuration
            ...
        }
        location otherpath {
            ...
        }
    }
    server {
        ...
        location {
            ...
        }
    }
    upstream name {           # load-balancing configuration
        ...
    }
}
As shown in the above configuration file, it mainly consists of six parts:
Main: used to configure nginx global information
Events: configuration for nginx operating mode
Http: some configurations for http protocol information
Server: configuration for server access information
Location: configuration for access routing
Upstream: configuration for load balancing
Main module
Observe the following configuration code
#user nobody nobody;
worker_processes 2;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
worker_rlimit_nofile 1024;
The above configurations are all configuration items stored in the main global configuration module.
user specifies the user and group that the nginx worker processes run as; by default the nobody account is used.
worker_processes specifies the number of worker processes nginx spawns. Monitor each process's memory consumption at runtime (usually several to dozens of MB) and adjust accordingly; usually the number is an integer multiple of the number of CPU cores.
error_log defines the location and output level of the error log file [debug | info | notice | warn | error | crit].
pid specifies where the file holding the master process id is stored.
worker_rlimit_nofile specifies the maximum number of file descriptors a process can open.
Events module
Let's look at a practical configuration:
events {
    worker_connections 1024;
    multi_accept on;
    use epoll;
}
The above configuration is some operational configuration for the working mode of the nginx server.
worker_connections specifies the maximum number of connections each worker can hold at the same time. Note that the real maximum is determined jointly with worker_processes.
multi_accept tells nginx to accept as many connections as possible after receiving a new-connection notification.
use epoll specifies the event-polling method: on Linux 2.6+, use epoll; on BSD systems such as macOS, use kqueue.
Http module
As a web server, the http module is the core module of nginx, and it has many configuration items. In real projects these are set according to the business scenario and the hardware; in general, the default configuration is fine.
http {
    ### basic configuration ###
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ### SSL certificate configuration ###
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ### log configuration ###
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ### gzip compression configuration ###
    gzip on;
    gzip_disable "msie6";
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript
    #            text/xml application/xml application/xml+rss text/javascript;

    ### virtual host configuration ###
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Basic configuration
sendfile on: enables sendfile, leaving file write-back to the kernel's data buffers rather than the application, which is good for performance.
tcp_nopush on: makes nginx send all header files in one packet rather than one by one.
tcp_nodelay on: makes nginx not buffer data but send it segment by segment. Configure it when data transfer has real-time requirements, so the client gets the return value immediately after a small piece of data is sent; don't abuse it.
keepalive_timeout 10: the connection timeout assigned to the client, after which the server closes the connection. It is generally set short so that nginx can sustain more work.
client_header_timeout 10: the timeout for the request header.
client_body_timeout 10: the timeout for the request body.
send_timeout 10: the response timeout for the client; if the interval between two client operations exceeds this time, the server closes the connection.
limit_conn_zone $binary_remote_addr zone=addr:5m: sets up a shared-memory zone that holds state for each key (here the client address).
limit_conn addr 100: the maximum number of connections for a given key.
server_tokens off: will not make nginx execute faster, but turns off the nginx version number on error pages, which is good for site security.
include /etc/nginx/mime.types: includes another file into the current file.
default_type application/octet-stream: the default (binary) type used when the file type is not matched.
types_hash_max_size 2048: affects the hash table's collision rate. The higher the value, the more memory is consumed, but the hash-key collision rate drops and lookup is faster; the smaller the value, the less memory is used, but the collision rate rises and lookup is slower.
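As a sketch of how the limit_conn pair described above fits into a real configuration (the zone name, zone size, path, and limit are illustrative values):

```nginx
http {
    # 5 MB of shared memory, keyed by the binary client address
    limit_conn_zone $binary_remote_addr zone=addr:5m;

    server {
        listen 80;
        location /downloads/ {
            limit_conn addr 10;  # at most 10 concurrent connections per client IP
        }
    }
}
```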
Log configuration
access_log logs/access.log: sets the log file for access records.
error_log logs/error.log: sets the log file for errors.
SSL certificate encryption
ssl_protocols: enables specific encryption protocols. Since nginx 1.1.13 and 1.0.12 the default is ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; TLSv1.1 and TLSv1.2 require OpenSSL >= 1.0.1. SSLv3 is still used in many places, but it has well-known exploited vulnerabilities (such as POODLE), so consider dropping it.
ssl_prefer_server_ciphers: when negotiating the encryption algorithm, prefer our server's cipher suite over the client browser's.
Compressed configuration
gzip tells nginx to send data compressed with gzip, which reduces the amount of data we send.
gzip_disable disables the gzip feature for specified clients; we set it to IE6 and below to keep our solution broadly compatible.
gzip_static tells nginx to look for pre-gzipped resources before compressing on the fly. This requires pre-compressing your files (commented out in this example), lets you use the highest compression ratio, and means nginx no longer has to compress those files on each request.
gzip_proxied allows or disables compression of responses to proxied requests, based on the request and response; set to any, all proxied requests are compressed.
gzip_min_length sets the minimum response size for compression. If a response is smaller than 1000 bytes, it is better not to compress it, since compressing such small data would slow down all processing of the request.
gzip_comp_level sets the compression level, any number from 1 to 9, where 9 is the slowest but has the highest compression ratio; we set it to 4, a reasonable compromise.
gzip_types sets the data formats to compress; some examples are shown above, and you can add more formats.
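Pulling the directives above into one block gives a compact gzip configuration. The values follow the description in the text (level 4, 1000-byte minimum, IE6 excluded) and should be tuned to your content mix:

```nginx
gzip on;
gzip_disable "msie6";   # skip compression for IE6 and below
gzip_min_length 1000;   # do not compress tiny responses
gzip_comp_level 4;      # 1 = fastest, 9 = smallest; 4 is a compromise
gzip_proxied any;       # also compress responses to proxied requests
gzip_types text/plain text/css application/json application/javascript
           text/xml application/xml application/xml+rss text/javascript;
```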
File cache configuration
open_file_cache enables the cache and also specifies the maximum number of cached entries and how long to cache them. We can set a relatively long maximum time and clear entries after they have been inactive for more than 20 seconds.
open_file_cache_valid specifies the interval at which open_file_cache entries are revalidated.
open_file_cache_min_uses defines the minimum number of uses within the inactivity period for an entry to stay cached.
open_file_cache_errors specifies whether errors encountered when looking up a file are also cached.
We also include the server modules, which are defined in separate files. If your server modules are not in these locations, you will have to modify this line to point at the correct location.
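A sketch of the file-descriptor cache directives described above, using the 20-second inactivity window from the text (the other numbers are assumed example values):

```nginx
open_file_cache max=2000 inactive=20s;  # cache up to 2000 open file handles
open_file_cache_valid 60s;              # revalidate cached entries every 60 seconds
open_file_cache_min_uses 2;             # keep only entries used at least twice
open_file_cache_errors on;              # also cache file-lookup errors
```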
Server module
The server module is a sub-module of the http module, used to define a virtual host, that is, the configuration of one virtual server.
server {
    listen 80;
    server_name localhost 192.168.1.100;
    root /nginx/www;
    index index.php index.html index.htm;
    charset utf-8;
    access_log logs/access.log;
    error_log logs/error.log;
    ...
}
The core configuration information is as follows:
server: the configuration of one virtual host; multiple server blocks can be configured in one http block.
server_name: specifies the IP address or domain name; multiple values are separated by spaces.
root: the root directory of the whole virtual host, the root of all web projects on this host.
index: the global home page when a user visits the site.
charset: the default encoding of web pages served from the www/ path.
access_log: the storage path of the virtual host's access log.
error_log: the storage path of the virtual host's error log.
Location module
The location module is the most common configuration in nginx and is mainly used to configure routing.
Reverse proxying, load balancing, and other features all hook into the routing configuration, so the location module is a very important configuration module.
Basic configuration
location / {
    root /nginx/www;
    index index.php index.html index.htm;
}
Location /: indicates matching access to the root directory
Root: used to specify the web directory of the virtual host when accessing the root directory
index: the default files served when no specific resource is requested.
Reverse proxy configuration mode
Access backend servers through the reverse proxy, using proxy_set_header directives to pass the real client information through transparently.
location / {
    proxy_pass http://localhost:8888;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $http_host;
}
Uwsgi configuration
Server access configuration in uwsgi mode:
location / {
    include uwsgi_params;
    uwsgi_pass localhost:8888;
}
Upstream module
The upstream module is mainly responsible for the load-balancing configuration, distributing requests to back-end servers, by default with round-robin scheduling.
Simple configuration method
upstream name {
    ip_hash;
    server 192.168.1.100:8000;
    server 192.168.1.100:8001 down;
    server 192.168.1.100:8002 max_fails=3;
    server 192.168.1.100:8003 fail_timeout=20s;
    server 192.168.1.100:8004 max_fails=3 fail_timeout=20s;
}
Core configuration information
ip_hash: specifies the request-scheduling algorithm; the default is weighted round robin, and ip_hash can be specified instead.
server host:port: the list of servers to distribute requests to.
-- down: indicates that this host's service is suspended.
-- max_fails: the maximum number of failures; beyond it, the service is suspended.
-- fail_timeout: after a request fails, pause for the specified time before trying the server again.
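Tying the pieces together, a hedged end-to-end sketch (all addresses are example values): an upstream pool combined with a server block that proxies to it.

```nginx
upstream app_cluster {
    server 192.168.1.101:8000 weight=2;
    server 192.168.1.102:8000 max_fails=3 fail_timeout=20s;
    server 192.168.1.103:8000 down;      # temporarily out of rotation
}

server {
    listen 80;
    location / {
        proxy_pass http://app_cluster;   # requests are balanced across the pool
    }
}
```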
The above covers the installation and configuration of Nginx. Have you picked up some new knowledge or skills? If you want to learn more or enrich your knowledge, you are welcome to follow the industry information channel.