Root directory and index file
The root directive specifies the root directory that will be used to search for a file. To obtain the path of a requested file, NGINX appends the request URI to the path specified by the root directive. The directive can be placed on any level within the http {}, server {}, or location {} contexts. In the example below, the root directive is defined for a virtual server. It applies to all location {} blocks that do not include a root directive to explicitly redefine the root:
server {
    root /www/data;

    location / {
    }

    location /images/ {
    }

    location ~ \.(mp3|mp4)$ {
        root /www/media;
    }
}
Here, for a URI that starts with /images/, NGINX searches for files in the /www/data/images/ directory on the file system. If a URI ends with the .mp3 or .mp4 extension, NGINX instead searches for the file in the /www/media/ directory, because it is defined in the matching location block.
If a request ends with /, NGINX treats it as a request for a directory and tries to find an index file in that directory. The index directive defines the name of the index file (the default is index.html). To continue the example, if the request URI is /images/some/path/, NGINX returns the file /www/data/images/some/path/index.html if it exists. If it does not, NGINX returns an HTTP 404 (Not Found) error by default. To configure NGINX to return an automatically generated directory listing instead, include the on parameter to the autoindex directive:
location /images/ {
    autoindex on;
}
You can list multiple file names in the index directive. NGINX searches for files in the specified order and returns the first file it finds.
location / {
    index index.$geo.html index.htm index.html;
}
The $geo variable used here is a custom variable set through the geo directive. The value of the variable depends on the client's IP address.
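As an illustration, a geo block of this kind might look as follows. This is a minimal sketch; the network ranges and the resulting values are hypothetical, not taken from the original article. The geo directive is defined in the http {} context:
http {
    # Map client networks to a value of $geo; "default" is used
    # when no network matches the client address (hypothetical ranges).
    geo $geo {
        default        default;
        192.168.1.0/24 uk;
        10.1.0.0/16    us;
    }

    server {
        location / {
            # Tries index.uk.html, index.us.html, or index.default.html
            # first, then falls back to index.htm and index.html.
            index index.$geo.html index.htm index.html;
        }
    }
}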
To return the index file, NGINX checks for its existence and then makes an internal redirect to the new URI obtained by appending the name of the index file to the base URI. The internal redirect results in a new search of a location and can end up in another location, as shown in the following example:
location / {
    root /data;
    index index.html index.php;
}

location ~ \.php {
    fastcgi_pass localhost:8000;
    #...
}
Here, if the URI in a request is /path/, and /data/path/index.html does not exist but /data/path/index.php does, the internal redirect to /path/index.php is mapped to the second location. As a result, the request is proxied.
Try several options
The try_files directive can be used to check whether the specified file or directory exists; if it does, NGINX makes an internal redirect, and if it does not, NGINX returns a specified status code. For example, to check whether the file corresponding to the request URI exists, use the try_files directive with the $uri variable, as follows:
server {
    root /www/data;

    location /images/ {
        try_files $uri /images/default.gif;
    }
}
The file is specified in the form of a URI, which is processed with the root or alias directives set in the context of the current location or virtual server. In this case, if the file corresponding to the original URI does not exist, NGINX makes an internal redirect to the URI specified by the last parameter and returns /www/data/images/default.gif.
The last parameter can also be a status code (prefixed with an equals sign) or the name of a location. In the following example, a 404 error is returned if none of the parameters of the try_files directive resolves to an existing file or directory:
location / {
    try_files $uri $uri/ $uri.html =404;
}
In the next example, if neither the original URI nor the URI with an appended trailing slash resolves to an existing file or directory, the request is redirected to the named location, which passes it to a proxied server:
location / {
    try_files $uri $uri/ @backend;
}

location @backend {
    proxy_pass http://backend.example.com;
}
For more information, watch the content caching webinar to learn how to significantly improve website performance and learn more about NGINX's caching capabilities.
Optimize performance for serving content
Loading speed is a crucial factor in serving any content. Minor optimizations to the NGINX configuration can boost throughput and help achieve optimal performance.
Enable sendfile
By default, NGINX handles the file transmission itself and copies the file into a buffer before sending it. Enabling the sendfile directive eliminates the step of copying data into a buffer and allows data to be copied directly from one file descriptor to another. Alternatively, to prevent one fast connection from fully occupying a worker process, you can use the sendfile_max_chunk directive to limit the amount of data transferred in a single sendfile() call (in this example, to 1 MB):
location /mp3 {
    sendfile           on;
    sendfile_max_chunk 1m;
    #...
}
Enable tcp_nopush
Use the tcp_nopush directive together with sendfile on;. This enables NGINX to send HTTP response headers in one packet right after the chunk of data has been obtained by sendfile():
location /mp3 {
    sendfile   on;
    tcp_nopush on;
    #...
}
Enable tcp_nodelay
The tcp_nodelay directive allows overriding Nagle's algorithm, which was originally designed to solve the problem of small packets in slow networks. The algorithm consolidates many small packets into a larger one and sends the packet with a 200 ms delay. Nowadays, when serving large static files, the data can be sent immediately regardless of the packet size. The delay also affects online applications (ssh, online games, online trading, and so on). By default, the tcp_nodelay directive is set to on, which means that Nagle's algorithm is disabled. Use this directive only for keepalive connections:
location /mp3 {
    tcp_nodelay       on;
    keepalive_timeout 65;
    #...
}
Optimize the backlog queue
One of the important factors is how fast NGINX can handle incoming connections. The general rule is that when a connection is established, it is put into the "listen" queue of the listen socket. Under normal load, the queue is small or there is no queue at all. But under high load, the queue can grow dramatically, resulting in uneven performance, dropped connections, and increased latency.
Display the backlog queue
Use the command netstat -Lan to display the current listen queue. The output may look like the example below, which shows 10 unaccepted connections in the listen queue on port 80, against a configured maximum of 128 queued connections. This is a normal situation.
Current listen queue sizes (qlen/incqlen/maxqlen)
Listen           Local Address
0/0/128          *.12345
10/0/128         *.80
0/0/128          *.8080
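The netstat -Lan form shown above is typical of FreeBSD. On Linux, a roughly equivalent view of listening sockets can be obtained with ss; this is a sketch not taken from the original article, assuming the iproute2 tools are installed:
ss -ltn
# For listening TCP sockets, the Recv-Q column shows the current number
# of queued (unaccepted) connections and Send-Q shows the backlog limit.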
In contrast, in the following output the number of unaccepted connections (192) exceeds the limit of 128. This is common when a website experiences heavy traffic. To achieve optimal performance, you need to increase the maximum number of connections that can be queued for acceptance by NGINX in both the operating system and the NGINX configuration.
Current listen queue sizes (qlen/incqlen/maxqlen)
Listen           Local Address
0/0/128          *.12345
192/0/128        *.80
0/0/128          *.8080
Adjust the operating system
Increase the value of the net.core.somaxconn kernel parameter from its default value (128) to a value high enough to handle a large volume of traffic. In this example, it is increased to 4096.
For FreeBSD, run the command:
sudo sysctl kern.ipc.somaxconn=4096
For Linux:
1. Run the command:
sudo sysctl -w net.core.somaxconn=4096
2. Add net.core.somaxconn = 4096 to the /etc/sysctl.conf file.
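To confirm that the new value is in effect on Linux, the parameter can be read back; this quick check is not part of the original article:
sysctl net.core.somaxconn
# Expected output: net.core.somaxconn = 4096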
Adjust NGINX
If you set the somaxconn kernel parameter to a value greater than 512, add the backlog parameter to the NGINX listen directive to match the modification:
server {
    listen 80 backlog=4096;
    #...
}
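After changing the backlog parameter, the configuration can be tested and reloaded, and the listen queue inspected again. This is a sketch using standard nginx commands; the verification step assumes a FreeBSD-style netstat as shown above:
sudo nginx -t          # check the configuration syntax
sudo nginx -s reload   # apply the new backlog value
netstat -Lan           # maxqlen for port 80 should now show 4096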
This article is translated from the NGINX documentation article "Serving Static Content", with some semantic adjustments.