Nginx Quick start case analysis
Today the editor will share the key points of this Nginx quick-start case analysis. The content is detailed and the logic is clear. Since most people do not yet know much about this topic, the article is shared for your reference; I hope you take something away from it. Let's learn about it together.
Why use nginx?
At present, the main competitor of nginx is apache. Here the editor makes a simple comparison between the two to help you better understand the advantages of nginx.
1. As a web server:
Compared with apache, nginx uses fewer resources, supports more concurrent connections, and delivers higher efficiency, which makes it especially popular with web hosting providers. Under high connection concurrency, nginx is a good substitute for the apache server: it is one of the platforms often chosen by US virtual-hosting businesses and can support up to 50,000 concurrent connections, thanks to its choice of epoll and kqueue as its event models.
As a load balancing server, nginx can serve rails and php applications directly, and it can also serve externally as an http proxy server. Nginx is written in c, so it is much better than perlbal in terms of system resource overhead and cpu usage efficiency.
2. Nginx configuration is simple, while apache's is complex:
Nginx is easy to start and can run virtually 24/7, even for months without a restart; the software version can also be upgraded without interrupting service.
Nginx's static-file performance is more than three times that of apache. Apache's php support is relatively simple, whereas nginx must be paired with a separate backend; apache has more components than nginx.
3. The core difference is:
Apache uses a synchronous multi-process model, with one connection per process; nginx is asynchronous, and one process can handle many connections (on the order of tens of thousands).
4. Their areas of expertise are as follows:
Nginx's strength is handling static requests with low cpu and memory usage, while apache is better suited to dynamic requests; so the usual setup today places nginx at the front as a reverse proxy to absorb the load, with apache at the back end handling the dynamic requests.
Basic usage of nginx
System platform: centos release 6.6 (final) 64-bit.
1. Install compilation tools and library files
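The specific commands are not reproduced above; the following is a hedged sketch of a typical toolchain and dependency install on centos 6 (the package list is an assumption, not the article's exact set):

# build tools and libraries commonly needed to compile nginx (package list assumed)
yum -y install make gcc gcc-c++ libtool zlib zlib-devel openssl openssl-devel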
2. Install pcre
1. The function of pcre is to let nginx support the rewrite feature. Download the pcre source package; a command sketch for steps 1-5 follows this list.
2. Extract the installation package:
3. Enter the installation package directory
4. Compile and install
5. View the pcre version
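A hedged sketch of steps 1-5 (the pcre version 8.35 and the download mirror are illustrative assumptions, not taken from the original):

# 1. download the pcre source package (version and mirror assumed)
wget http://downloads.sourceforge.net/project/pcre/pcre/8.35/pcre-8.35.tar.gz
# 2. extract the archive
tar zxvf pcre-8.35.tar.gz
# 3. enter the source directory
cd pcre-8.35
# 4. configure, compile and install
./configure
make && make install
# 5. check the installed pcre version
pcre-config --version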
3. Install nginx
1. Download the nginx source package; a command sketch for steps 1-5 follows this list.
2. Extract the installation package
3. Enter the installation package directory
4. Compile and install
5. View the nginx version
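A hedged sketch of steps 1-5 (the nginx version 1.8.0 and the /usr/local/webserver/nginx prefix are assumptions chosen to match the paths used later in this article):

# 1. download the nginx source package (version assumed)
wget http://nginx.org/download/nginx-1.8.0.tar.gz
# 2. extract the archive
tar zxvf nginx-1.8.0.tar.gz
# 3. enter the source directory
cd nginx-1.8.0
# 4. configure, compile and install (prefix matches the paths used below)
./configure --prefix=/usr/local/webserver/nginx --with-http_stub_status_module --with-pcre
make && make install
# 5. check the installed nginx version
/usr/local/webserver/nginx/sbin/nginx -v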
At this point, the nginx installation is complete.
4. Nginx configuration
Create the user www that nginx will run as:
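For example (a hedged sketch; the nologin shell and -M flag are common practice rather than taken from the original):

groupadd www
useradd -g www -s /sbin/nologin -M www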
Configure nginx.conf by replacing the contents of /usr/local/webserver/nginx/conf/nginx.conf with your own configuration; a minimal sketch follows.
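The original article's full configuration is not reproduced here; the following is only a minimal hedged sketch built from the directives explained later in this article (the listen port, log path and document root are assumptions):

user www www;
worker_processes 2;
error_log /usr/local/webserver/nginx/logs/nginx_error.log crit;
pid /usr/local/webserver/nginx/logs/nginx.pid;
worker_rlimit_nofile 10240;

events {
    use epoll;
    worker_connections 2048;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    gzip on;

    server {
        listen 80;
        server_name localhost;
        root /var/www/html;
        index index.html index.htm;
    }
}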
Check the correctness of the nginx.conf configuration file with the following command:
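Assuming the install prefix used above:

/usr/local/webserver/nginx/sbin/nginx -t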
5. Start nginx
The nginx startup command is as follows:
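Again assuming the install prefix used above; the -s reload form is a common follow-up for applying later configuration changes without stopping the service:

/usr/local/webserver/nginx/sbin/nginx
# later, to re-apply configuration changes without stopping the service:
# /usr/local/webserver/nginx/sbin/nginx -s reload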
6. Visit the site
Access our configured site ip from the browser:
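From the server itself, a quick check can also be done with curl (127.0.0.1 assumes you are testing locally):

curl -I http://127.0.0.1/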
Description of common nginx directives
1. Main global configuration
Parameters that are unrelated to specific business functions (such as the http service or the mail proxy) and apply to nginx as a whole at run time, for example the number of worker processes and the user identity it runs as.
worker_processes 2
This belongs in the top-level main section of the configuration file. It sets the number of worker processes; the master process receives requests and hands them to the workers for processing. The value can simply be set to the number of cpu cores (grep ^processor /proc/cpuinfo | wc -l), which is also what the value auto does. If ssl and gzip are enabled, it should be set equal to, or even twice, the number of logical cpus, which can reduce i/o operations. If the nginx server also runs other services, consider lowering it appropriately.
worker_cpu_affinity
Also written in the main section. Under high concurrency, setting cpu affinity (pinning workers to cpus) reduces the performance loss of rebuilding state such as registers when processes are switched between cpu cores. For example: worker_cpu_affinity 0001 0010 0100 1000; (four cores).
worker_connections 2048
Written in the events section. This is the maximum number of connections each worker process can handle (or initiate) concurrently, including all connections to clients and to back-end proxied servers. When nginx acts as a reverse proxy, the maximum number of client connections = worker_processes * worker_connections / 4, so with the values here it is 1024. It can be raised to 8192 or so depending on the situation, but it must not exceed worker_rlimit_nofile below. When nginx acts as a plain http server, divide by 2 instead.
worker_rlimit_nofile 10240
Written in the main section. There is no default value; it is bounded by the operating system's maximum limit, typically 65535.
use epoll
Written in the events section. On linux, nginx uses the epoll event model by default, which is a large part of why nginx is so efficient on linux. On freebsd and openbsd, nginx uses kqueue, an efficient event model similar to epoll. select is used only when the operating system does not support these efficient models.
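Putting this section together, a hedged sketch of the main and events sections (binding the two workers to two of four cores is just an illustrative mask):

# main (top-level) section
worker_processes 2;
worker_cpu_affinity 0001 0010;
worker_rlimit_nofile 10240;

events {
    use epoll;
    worker_connections 2048;
}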
2. Http server
Configuration parameters related to providing the http service, for example whether to use keepalive, whether to compress with gzip, and so on.
sendfile on
Turns on efficient file transfer mode; the sendfile directive specifies whether nginx should call the sendfile() system call to output files, which reduces context switches between user space and kernel space. For ordinary applications set it to on. For disk-i/o-heavy applications such as downloads, it can be set to off to balance disk and network i/o processing speed and reduce system load.
keepalive_timeout 65: persistent (keep-alive) connection timeout in seconds. This parameter is quite sensitive: it interacts with the browser type, the back-end server's timeout settings and the operating system's settings. When a persistent connection carries many requests for small files, the cost of re-establishing connections is reduced, but if a large file upload does not finish within 65 seconds it will fail. If the timeout is set too long and there are many users, keeping connections open for a long time consumes a lot of resources.
send_timeout: specifies the timeout for responding to the client. It applies to the interval between two activities on the connection; if the client does nothing within this period, nginx closes the connection.
client_max_body_size 10m
The maximum request body size, in bytes, allowed for a single client request. If large files are uploaded, raise this limit accordingly.
client_body_buffer_size 128k
The maximum number of bytes of the client request body that the proxy will buffer.
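A hedged sketch of these directives as they would appear inside the http section (the send_timeout value of 60 is an assumption; the others are the values discussed above):

http {
    sendfile on;
    keepalive_timeout 65;
    send_timeout 60;
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    # ... server blocks follow here ...
}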
Module http_proxy:
This module implements nginx's function as a reverse proxy server, including caching (see the related article).
proxy_connect_timeout 60
Timeout for nginx to establish a connection with the back-end server (proxy connect timeout).
proxy_read_timeout 60
After a connection succeeds, the timeout between two read operations from the back-end server (proxy receive timeout).
proxy_buffer_size 4k
Sets the size of the buffer the proxy server (nginx) uses to read and hold the first part of the response (the header information) from the back-end realserver. By default it is the same size as proxy_buffers; in practice this directive can be set a little smaller.
proxy_buffers 4 32k
The proxy_buffers buffers in which nginx holds the back-end realserver's response for a single connection; if the average page size is under 32k, set it like this.
proxy_busy_buffers_size 64k
Buffer size under high load (proxy_buffers * 2).
proxy_max_temp_file_size
When proxy_buffers cannot hold the whole back-end response, part of it is saved to a temporary file on disk; this value sets the maximum size of that temporary file. The default is 1024m, and it is unrelated to proxy_cache. Content beyond this value is received from the upstream server synchronously rather than buffered on disk. Set it to 0 to disable temporary files.
proxy_temp_file_write_size 64k
Limits the amount of data written to the temporary file at a time when the proxied response is cached to a temporary file. proxy_temp_path (settable at compile time) specifies which directory is written to.
proxy_pass and proxy_redirect: see the location section.
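Taken together, a hedged sketch of these proxy settings (they can live in the http section or in a location that uses proxy_pass):

proxy_connect_timeout 60;
proxy_read_timeout 60;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_max_temp_file_size 1024m;
proxy_temp_file_write_size 64k;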
Module http_gzip:
gzip on: enables gzip compression of output to reduce the amount of data transmitted over the network.
gzip_min_length 1k: sets the minimum page size, in bytes, eligible for compression; the page size is taken from the content-length response header. The default is 20. It is recommended to set it to more than 1k; compressing responses smaller than 1k may actually increase the load.
gzip_buffers 4 16k: sets how many buffers, and of what size, the system allocates to hold the gzip-compressed result stream. 4 16k means memory is requested in 16k units, up to 4 times the original data size measured in 16k units.
gzip_http_version 1.0: identifies the http protocol version. Early browsers do not support gzip compression and users would see garbled output, so this option was added for older versions. If you use nginx as a reverse proxy and still expect gzip compression to be enabled, set it to 1.0, because the back-end communication uses http/1.0.
gzip_comp_level 6: the gzip compression level. 1 gives the least compression but the fastest processing; 9 gives the most compression but the slowest processing (faster transmission at the cost of cpu).
gzip_types: the mime types to compress; the "text/html" type is always compressed whether it is specified or not.
gzip_proxied any: when nginx is used as a reverse proxy, this determines whether responses returned by the back-end server are compressed. A precondition for matching is that the back-end response carries a "via" header.
gzip_vary on: relates to the http headers; it adds vary: accept-encoding to the response, which lets a front-end cache server cache gzip-compressed pages, for example letting squid cache nginx-compressed data.
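A hedged sketch of a gzip block for the http section (the gzip_types list is an illustrative assumption):

gzip on;
gzip_min_length 1k;
gzip_buffers 4 16k;
gzip_http_version 1.0;
gzip_comp_level 6;
gzip_types text/plain text/css application/xml application/javascript;
gzip_proxied any;
gzip_vary on;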
3. Server virtual host
The http service supports several virtual hosts. Each virtual host corresponds to one server configuration block, which contains that host's configuration. When providing proxies for mail services, several server blocks can likewise be defined, each distinguished by the address or port it listens on.
listen
The listening port, 80 by default; ports below 1024 require nginx to be started as root. It can take forms such as listen *:80, listen 127.0.0.1, and so on.
server_name
The server name, such as localhost or www.example.com; it can also be matched with regular expressions.
Module http_upstream
This module implements load balancing from client ips to back-end servers with a simple scheduling algorithm. The upstream keyword is followed by the name of the load-balancer pool, and the back-end realservers are listed inside { } as host:port options;. If only one back end is proxied, it can also be written directly in proxy_pass.
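A hedged sketch of an upstream pool (the pool name backend and the server addresses are illustrative assumptions; the pool is referenced from a location via proxy_pass http://backend as shown in the next section):

upstream backend {
    server 192.168.1.10:8080 weight=1;
    server 192.168.1.11:8080 weight=1;
}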
4. Location
Within the http service, a set of configuration items that apply to particular urls.
root /var/www/html
Defines the server's default document root location. If the location's url matches a subdirectory or file, root does not take effect there; it is usually placed inside the server directive or under location /.
index index.jsp index.html index.htm
Defines the file names served by default under a path; it usually follows root.
proxy_pass http://backend
Requests are forwarded to the list of servers defined by backend, i.e. reverse proxying, which corresponds to the upstream load balancer above. You can also write proxy_pass http://ip:port directly.
proxy_redirect off
proxy_set_header Host $host
proxy_set_header X-Real-IP $remote_addr
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for
These four can be set like this for now; each of them involves quite complex behavior that deserves to be explained in a separate article.
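A hedged sketch of location blocks combining the directives above (the /app/ path prefix and the upstream name backend are assumptions):

location / {
    root /var/www/html;
    index index.jsp index.html index.htm;
}

location /app/ {
    proxy_pass http://backend;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}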
Writing location matching rules is particularly fundamental and important; refer to the article on summarizing nginx location configuration and writing rewrite rules.
5. Other
5.1 Access control: allow/deny
Nginx's access control module is installed by default and is very simple to write. There can be multiple allow and deny rules, permitting or denying access for a particular ip or ip range; the rules are evaluated in order and matching stops at the first rule that is satisfied. For example:
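The original example is not reproduced above; the following is a hedged sketch of a typical nginx-status location with ip restrictions (the status path, the addresses and the password file path are assumptions):

location /nginx-status {
    stub_status on;
    access_log off;
    #auth_basic "nginx-status";
    #auth_basic_user_file /usr/local/webserver/nginx/conf/htpasswd;
    allow 192.168.10.100;
    allow 172.29.73.0/24;
    deny all;
}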
We can also use the htpasswd utility that ships with apache's httpd tools to set a login password for the access path:
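A hedged sketch (the password file path and the user name admin are assumptions; htpasswd will prompt for the password):

htpasswd -c /usr/local/webserver/nginx/conf/htpasswd admin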
This generates a password file that is encrypted with crypt by default. Then uncomment the two commented auth_basic lines in the nginx-status location above and restart nginx for the change to take effect.
5.2 Directory listing: autoindex
By default nginx does not allow the contents of a whole directory to be listed. If you need this feature, open nginx.conf and add autoindex on; to the location, server, or http section, together with the two other parameters below:
autoindex_exact_size off: defaults to on, which shows the exact file size in bytes; when changed to off, the approximate file size is shown in kb, mb, or gb.
autoindex_localtime on: defaults to off, in which case the file time shown is gmt; when changed to on, the file time shown is the file's server (local) time.
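A hedged sketch enabling a directory listing for one location (the /download/ path is an assumption):

location /download/ {
    autoindex on;
    autoindex_exact_size off;
    autoindex_localtime on;
}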
That is all the content of the article "Nginx Quick start case analysis". Thank you for reading! I believe you will have gained a lot from it. The editor updates different knowledge for you every day; if you would like to learn more, please follow the industry information channel.