Example Analysis of Forward Proxy and Reverse Proxy in nginx


The editor shares this example analysis of the nginx forward proxy and reverse proxy. Most people are not very familiar with the topic, so this article is offered for reference; I hope you learn a lot from reading it.

Forward proxy

Suppose there is an intranet with two machines, a and b. Only a can reach the Internet; b cannot, but a and b are connected over the internal network. If b wants to access the public network, it can do so through a acting as its proxy. A forward proxy stands in for the target server inside the intranet: it accepts requests from the other machines on the intranet and forwards them to the real target server on the public network.

The reverse proxy is the opposite. There is again an internal network with several machines, only one of which is connected to the external network. However, the reverse proxy does not accept requests from the intranet machines; it accepts requests from the public network and forwards them to other machines on the intranet. The user on the public network who makes the request does not know which machine the reverse proxy server forwarded it to.

To set up the forward proxy function on a machine, edit an nginx configuration file as shown in the figure; the figure shows the contents of the configuration file.

If you configure a server as a forward proxy, its virtual host configuration must be the default virtual host, because every network request that reaches this machine should hit this virtual host first. That is why default_server is set here. Then rename the original default virtual host configuration file: as shown in the figure, renaming default.conf disables the original default virtual host, because default.conf is the default virtual host configuration file by default.

The line resolver 119.29.29.29 in the configuration file configures a DNS address. Because this is a forward proxy, after it accepts the domain name requested from the intranet it has to send the request on to the server that is really being asked for. The request from the intranet carries only a domain name, not an IP address, so the domain name has to be sent to a DNS server to be resolved into an IP address before the request can be forwarded to the target server. That is why a DNS address is configured here: after the proxy accepts an intranet request, the domain name is sent to this DNS server for resolution.

The location block below can be set as shown in the figure. After the proxy server accepts a request from an intranet machine, the domain name is sent to the configured DNS for resolution, the real server is accessed, and the content returned by the real server is sent back to the intranet machine that made the request.
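The figures are not reproduced here; the following is a minimal sketch of what such a forward proxy server block might look like. The resolver address 119.29.29.29 comes from the text, everything else is an assumption:

    server {
        listen 80 default_server;      # must be the default virtual host
        resolver 119.29.29.29;         # DNS used to resolve the requested domain names

        location / {
            # resolve the requested host via the resolver above,
            # then forward the request to the real target server
            proxy_pass http://$host$request_uri;
        }
    }

Because proxy_pass uses variables here, nginx resolves the host at request time through the configured resolver rather than at startup.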

Nginx reverse proxy

Here is an example of a reverse proxy. Set up a test virtual host configuration file as shown in the figure: it listens on port 8080, the domain name is www.test.com, the root directory is /data/wwwroot/test.com, and the index page served by the virtual host is index.html.

As shown in the figure, create the virtual host's root directory /data/wwwroot/test.com, then run echo "test.com_8080" > !$/index.html to create an index file whose content is test.com_8080 (!$ expands to the last argument of the previous command, i.e. the directory just created). This file lives in the /data/wwwroot/test.com directory.
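A minimal sketch of the test virtual host described above; the port, domain name, and root directory come from the text, the exact layout is an assumption:

    server {
        listen 8080;
        server_name www.test.com;
        index index.html;
        root /data/wwwroot/test.com;   # contains index.html with the string test.com_8080
    }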

As shown in the figure, create a new virtual host configuration file for the reverse proxy. It listens on port 80 and the domain name is www.test.com. The location / block below it is the reverse proxy configuration: when this virtual host is accessed, the request is sent on to 127.0.0.1:8080.
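A minimal sketch of that reverse proxy virtual host; the port and proxied address come from the text, and the Host header line is discussed further below:

    server {
        listen 80;
        server_name www.test.com;

        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;   # without this, only the default vhost on 8080 is reached
        }
    }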

As shown in the figure, use curl to access the test virtual host through 127.0.0.1:8080. test.com_8080 is returned, which means the virtual host can be accessed.

As shown in the figure, create another virtual host configuration file, similar to the previous test virtual host, but with no domain name set for it; its location simply returns the string 8080 default. Save, exit, and reload nginx. Also remove the default_server setting from the test virtual host.

So 127.0.0.1:8080 now corresponds to two virtual hosts: the test virtual host and the 8080 default virtual host. Their IP and port are identical; the difference between them is that the test virtual host has a domain name and the 8080 default virtual host does not. Since 8080 default has now been set as the default virtual host, a request that only targets 127.0.0.1:8080 must land on the 8080 default virtual host. To reach the test virtual host, the request has to carry the test virtual host's domain name.

As shown in the figure, curl 127.0.0.1:8080/ returns 8080 default, while curl -x127.0.0.1:8080 www.test.com, with the domain name added, returns test.com_8080. So to access the test virtual host, the domain name has to be bound to the IP and port.

As shown in the figure, curl -x127.0.0.1:80 www.test.com returns test.com_8080, which means the reverse proxy works: we accessed port 80 but actually got back the content of the virtual host on port 8080.

As shown in the figure, comment out everything below the proxy_pass line in the reverse proxy virtual host, save, exit, and reload nginx. As shown in the figure, curl -x127.0.0.1:80 www.test.com now actually returns 8080 default, while what we want to reach is the test virtual host.

As shown in the figure, the line proxy_set_header Host $host specifies the domain name to access. 127.0.0.1:8080 is set above it, so the reverse proxy points at that IP and port. If Host is not set, only the default virtual host on 127.0.0.1:8080 is reached; if Host is set, the request goes to the virtual host on 127.0.0.1:8080 that is bound to the specified Host. The $host here is an nginx variable whose value in this setup is the server_name of the current virtual host, that is, www.test.com: whatever server_name is, that is the value of Host. With Host set, the effect is equivalent to curl -x127.0.0.1:80 www.test.com; without it, only 127.0.0.1:8080 itself is accessed. In this way the domain name is bound to the IP and port.

As shown in the figure, proxy_pass can also be given a domain name directly instead of an IP and port; here it says www.123.com:8080/. Written this way, however, nginx does not know where the domain name points, so the corresponding IP also has to be bound on the system, for example by writing the domain name and its IP in the /etc/hosts file. The proxy_pass domain name in nginx will then resolve to an IP address, and that IP and port will be accessed.

The proxy_set_header Host line below it sets a domain name that is bound to the IP and port above for the access. If the line above contains a domain name rather than an IP, it does not conflict with the domain name specified below, because the domain name above is only used to resolve an IP, while the domain name below is bound to the IP and port resolved above. This example uses $host, the nginx variable whose value corresponds to the server_name of the current virtual host. Generally speaking, though, it is more convenient to write the IP and port directly: the line above specifies the IP and port, and the line below specifies the Host domain name bound to them.
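A sketch of this variant; www.123.com and port 8080 come from the text, while the IP in /etc/hosts is a made-up example:

    # /etc/hosts on the proxy machine (hypothetical address):
    # 192.168.133.141 www.123.com

    location / {
        proxy_pass http://www.123.com:8080/;   # the domain name here is only used to resolve an IP
        proxy_set_header Host $host;           # the Host actually sent to that IP and port
    }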

Nginx reverse proxy 02

As shown in the figure, the proxy_pass directive is followed by a URL, which can take three formats: transport protocol + domain name + URI (access path), transport protocol + IP and port + URI, or transport protocol + socket. Here unix, http, and https are all transport protocols, while domain name + URI, IP and port + URI, and the socket are all access paths. A socket is generally a dedicated access port for a particular program; accessing the socket means accessing that specific program, so no path is needed.
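A sketch of the three forms; the addresses and the socket path are illustrative assumptions:

    proxy_pass http://www.test.com/uri;               # transport protocol + domain name + URI
    proxy_pass http://192.168.1.10:8080/uri;          # transport protocol + IP and port + URI
    proxy_pass http://unix:/tmp/backend.socket:/;     # transport protocol + unix socket (path enclosed in colons)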

As shown in the picture, different ways of writing proxy_pass give different results. Take location /aming/ as an example: whenever the access path contains /aming/, this location matches and its proxy_pass is executed. However, how the proxy_pass URL is written inside the location changes the path that is actually requested. Although proxy_pass fires because the access path contains the /aming/ directory, the path actually requested does not necessarily contain /aming/. In this example we want to access the file /aming/a.html on the virtual host, and depending on how proxy_pass is written, different paths are actually accessed.

If nothing follows the IP and port (no directory symbol at all), the request goes to /aming/a.html, which is what we want. If the IP and port are followed by the root symbol /, the request goes straight to a.html in the root directory, which is obviously wrong. If they are followed by /linux/, then a.html inside /linux/ is accessed. If they are followed by /linux without a trailing /, the request goes to /linuxa.html. So there are two ways to write it that access /aming/a.html correctly: either put no directory symbol at all after the IP and port, or write it out in full as IP:port/aming/.

From the examples above you can see that whatever directory follows the IP and port, the name of the file finally requested, a.html, is appended directly to it. If no directory symbol is written after the IP and port, nginx supplies the original path /aming/a.html itself; once any directory symbol is present, a.html is placed directly after it. In the second case, IP and port + /linux, the actual result is a request for /linuxa.html. This is presumably because /linux does not end with a directory symbol /, so nginx treats linux as an unfinished file name and concatenates the file name a.html onto it, which turns the file to be accessed into /linuxa.html. So whatever path you write, make sure it ends with the directory symbol /.
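A sketch of the variants discussed above, assuming the backend is 192.168.1.10:8080 (a made-up address) and the client requests /aming/a.html:

    location /aming/ {
        proxy_pass http://192.168.1.10:8080;            # backend receives /aming/a.html  (correct)
        # proxy_pass http://192.168.1.10:8080/;         # backend receives /a.html
        # proxy_pass http://192.168.1.10:8080/linux/;   # backend receives /linux/a.html
        # proxy_pass http://192.168.1.10:8080/linux;    # backend receives /linuxa.html
        # proxy_pass http://192.168.1.10:8080/aming/;   # backend receives /aming/a.html  (correct)
    }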

Reverse proxy 03

As shown in the figure, proxy_set_header sets the header information that the proxied server receives. Suppose there are three machines, a, b, and c: a is the computer we use to make the request, b is the reverse proxy server that receives the request, and c is the proxied server, the one we really want to reach. B forwards our request to c. If proxy_set_header is not set, b does not carry the corresponding header fields when it forwards the request to c; if it is set, the request is forwarded with them. Here $remote_addr and $proxy_add_x_forwarded_for are built-in nginx variables: as seen by c, $remote_addr holds the IP address of reverse proxy server b itself, while $proxy_add_x_forwarded_for carries the IP address of client machine a. If this is not set, server c has no way of knowing the real source address of the request; once it is set, c can tell which IP the request originally came from.
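A sketch of these header lines as they would sit in proxy server b's configuration; the proxied address comes from the earlier example and the surrounding block is an assumption:

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;                       # the address b sees the request coming from
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;   # existing X-Forwarded-For plus that address
    }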

Edit the configuration file of the www.test.com virtual host as shown in the figure, and suppose this virtual host is the server c we want to reach. Two echo directives are set in the location to display the apparent source address of the request and its real source address: $remote_addr records the address of the reverse proxy server, and $proxy_add_x_forwarded_for records the real source of the request, that is, the client's address. This way, when the virtual host is accessed, the values stored in these two variables are displayed.
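A sketch of what that test location on server c might look like. Note that the echo directive is not part of core nginx; it comes from the third-party echo-nginx-module, which is assumed to be compiled in:

    server {
        listen 8080;
        server_name www.test.com;

        location / {
            echo "remote_addr: $remote_addr";
            echo "proxy_add_x_forwarded_for: $proxy_add_x_forwarded_for";
        }
    }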

Save, exit, and then reload the configuration file.

As shown in the figure, edit the configuration file of the reverse proxy server's virtual host. As shown in the picture, inside the location the lines proxy_set_header X-Real-IP and proxy_set_header X-Forwarded-For are commented out. Run a test in this state first: save, exit, and reload the configuration file.

As shown in the figure, use curl to send a request from 192.168.133.140:80. 192.168.133.140 is actually the client IP, because the request is sent from that address. However, after the test you can see that two loopback addresses of 127.0.0.1 are displayed, and 192.168.133.140 does not appear at all. In this test both the reverse proxy server and the real server are on the same machine, so the source of the request received by real server c is the machine's loopback address: reverse proxy b sends the request to real server c over the internal loopback address 127.0.0.1, because when both servers are on the same machine, communication between local programs basically goes over 127.0.0.1. So c's value of $remote_addr is 127.0.0.1. And because reverse proxy server b does not set the X-Forwarded-For header, the value of $proxy_add_x_forwarded_for received by real server c is just the IP the request was sent from, which is also 127.0.0.1.

The $proxy_add_x_forwarded_for variable records, starting from the client, which IP addresses the request has passed through, with multiple IPs separated by commas. If the incoming request does not carry an X-Forwarded-For value built from $proxy_add_x_forwarded_for, then the receiver's value of this variable contains only the last IP the request was sent from, the same as $remote_addr. Take a request going from a to b to c: if b sets X-Forwarded-For using $proxy_add_x_forwarded_for, then the value takes the form a_ip, b_ip, that is, both a's IP and b's IP are recorded. If more servers sit in the middle, their IPs are recorded too, separated by commas. Of course, every proxy server along the way has to set X-Forwarded-For with $proxy_add_x_forwarded_for; otherwise the next proxy server's $proxy_add_x_forwarded_for will not contain the earlier IPs and can only record the IP of the previous hop. So in this test, because b did not set it, the value of $proxy_add_x_forwarded_for seen by c is equal to the value of $remote_addr.

As shown in the figure, for the second test, edit the configuration file of reverse proxy server b and remove the comments from the X-Real-IP and X-Forwarded-For lines in the location. Save, exit, and reload the configuration file.

As shown in the picture, test again. In the returned result, the value on the first line, $remote_addr, is 127.0.0.1: this is the IP of proxy server b. The value on the second line, $proxy_add_x_forwarded_for, is two IPs: in the curl command the request was issued from 192.168.133.140, in other words the IP of client a is 192.168.133.140, and the IP of b is 127.0.0.1. $proxy_add_x_forwarded_for records which IPs the request passed through on its way to c; the request went from a to b and then from b to c, so the variable records a's IP and b's IP, because those are the two addresses the request passed through before reaching c. So when you set up a reverse proxy in the future, all of these header lines should be set; only then can the real server obtain the real source IP of the request.

Reverse proxy 04

As shown in the figure, proxy_redirect has few application scenarios, and there are three main ways to write it. Its function is to modify the Location and Refresh header fields returned by the proxied server. In the first form, redirect is the returned header value and replacement is what it should be changed to: redirect is rewritten to replacement. The second form, default, means the default setting, and the third, off, turns the redirect-rewriting function off.
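The three forms, sketched:

    proxy_redirect redirect replacement;   # rewrite "redirect" in the returned Location/Refresh headers to "replacement"
    proxy_redirect default;                # default behaviour
    proxy_redirect off;                    # disable rewriting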

As shown in the figure, run a test: edit the configuration file of the proxy server. Several conditions have to be met for the test to work. First, the location can only be followed by the root / and nothing else. Second, the URL after proxy_pass must not end with a / (normally it would end with /, but here it must not). Then the directory being accessed must really exist; if it does not, create it. You can also create an index.html file in the directory and put some string content in it. Save, exit, and reload the configuration file.
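A sketch of the proxy-side configuration those conditions describe; the proxied address is assumed from the earlier example:

    server {
        listen 80;
        server_name www.test.com;

        location / {
            proxy_pass http://127.0.0.1:8080;   # deliberately no trailing slash
            proxy_set_header Host $host;
        }
    }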

As shown in the figure, edit the configuration file of the proxied server and write it in the simple format shown in the picture. Save, exit, and reload the configuration file.

As shown in the figure, when testing with curl, if aming is followed by a trailing /, the index.html file inside it is accessed. But what we want to access is the directory itself, not a file inside it, so when running curl the address must not end with a /; that way the aming directory itself is accessed. As you can see, the returned code is 301, a permanent redirect, and the Location field that follows is an access path that carries port 8080.

As shown in the figure, edit the configuration file of the proxied server and add access_log /tmp/456.log; this turns on the server's access log, and checking the access log makes the access process much easier to follow. Save, exit, and reload.

As shown in the figure, run the curl test again, this time with aming ending in a /. cat the /tmp/456.log access log: the log entries contain no host or port information. In that case, you can modify the log format configuration in the nginx.conf configuration file.

As shown in the figure, the log_format main lines in the configuration file were originally commented out; remove the comments so that these lines take effect. They define the format of the information written to the log. As shown in the figure, add two nginx variables, $host and $server_port, at the end, then save, exit, and reload, so that the values of these two variables are added to the information shown in the access log.

As shown in the figure, edit the proxy server configuration file and add an access_log line there as well; the log path is /tmp/proxy.log. Add main after it, because the format configured in nginx.conf is named main: appending main means the log information is written using the format named main. The access_log in the proxy server therefore also needs main appended so that its log information uses the main format.
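A sketch of the logging changes, based on the stock log_format main in nginx.conf with the two variables appended; the exact layout is an assumption:

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" $host $server_port';

    # in the proxied server virtual host:
    access_log /tmp/456.log main;

    # in the proxy server virtual host:
    access_log /tmp/proxy.log main;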

Save, exit, and reload.

As shown in the picture, run the curl test with the address ending in a /. Looking at the 456.log backend server log, you can see that port 8080 was accessed; looking at the proxy.log proxy server log, you can see that port 80 was accessed. The response code is 200, which is normal.

As shown in the picture, this time access aming without a trailing /: you can see that 301 is returned, and proxy.log also shows 301. As shown in the picture, retest and view the two logs again; the log entries go from 301 to 200. All in all, this confirms that we accessed port 80 and were redirected to port 8080; but the client cannot access port 8080.

As shown in the figure, proxy_redirect can be used to solve this problem. Here it is written as proxy_redirect http://$host:8080/ /; written this way, the port 8080 information that was originally returned is removed.
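In the proxy server's location this looks roughly as follows (a sketch; the proxied address is assumed from the earlier example):

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_redirect http://$host:8080/ /;   # strip :8080 from the returned Location header
    }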

Save, exit, and reload. As shown in the picture, retest: 301 is still returned, but the address in the Location field no longer contains port 8080.

Reverse proxy 05

proxy_buffering means buffering. Buffering sets aside an area in memory and writes data into it; when a certain amount has been written, the data in the buffer is written to the hard disk. This greatly reduces how often the disk is read and written. Without buffering, the disk would be read or written every time data is produced, which is a heavy burden on it.

Suppose there are three parties: client a, proxy server b, and proxied server c. A sends a request, b receives it and forwards it to c, c returns the data to b, and b sends the data on to a. That is normal operation, but if a makes a great many requests, or many clients are making requests, then proxy server b and proxied server c have to run through this whole process for every single request, and the load becomes heavy. proxy_buffering sets up one or more buffers in proxy server b's memory; when a buffer fills up, its data is forwarded to the corresponding client. This greatly reduces the number of times proxy server b has to forward data and lowers its load.

When proxy_buffering is on, proxy_busy_buffers_size decides when data starts being sent to a. During this process, if the buffers fill up and data overflows, the extra data is written to temp_file, a temporary file stored on the hard disk. If proxy_buffering is off, the data returned by c is forwarded directly from b to a, and nothing else happens.

As shown in the figure, whether proxy_buffering is on or off, the proxy_buffer_size option is always in effect. This parameter sets a buffer that stores the header information returned by the server; if it is not set large enough to hold the headers, a 502 error code appears, so it is recommended to set it to 4k.

As shown in the figure, proxy_buffers defines the number of buffers per request and the size of each one. 8 4k is defined here, meaning 8 buffers of 4k each, so the total buffer size is 8 × 4k = 32k. If there are 10,000 requests, then there are 8 × 10,000 buffers, because this setting applies per request; it is not 8 buffers in total.

proxy_busy_buffers_size defines how much data has to accumulate before it is passed on to the client. 16k is defined here, so when the buffers belonging to this request on b have received 16k of data, the data is forwarded to a.

There are eight buffers with a total size of 32k. Generally speaking, a buffer is in one of two states, receiving data or sending data; it cannot receive and send at the same time. proxy_busy_buffers_size defines the size of the buffers that are busy sending data, so proxy_busy_buffers_size should be smaller than the total buffer size. When the received data reaches the amount set by proxy_busy_buffers_size, those buffers enter the sending state and the remaining buffers stay in the receiving state. If the total amount of data returned for a request is less than the proxy_busy_buffers_size value, b forwards it to a as soon as it has been received; if it is greater, then each time the buffered data reaches the proxy_busy_buffers_size value, that portion of the data is sent on to a first.

As shown in the figure, proxy_temp_path defines the directory where temporary files are stored. For example, a makes a request and the total buffer space that proxy server b assigns to it is 32k, but the data that server c returns for this request is 100 MB, far more than the buffers can hold. In that case, as b receives c's data, a large amount of it overflows the buffers, and the overflow is first saved to temporary files on b's hard disk. proxy_temp_path defines the path where those temporary files are stored, as well as the subdirectory levels. The path defined here is /usr/local/nginx/proxy_temp, which is a directory name; temporary files are stored in that directory. The numbers 1 2 that follow indicate the subdirectory levels: the directory path itself is defined by us, and the subdirectories are created automatically by the system.

The numbers that follow determine the subdirectory levels. For example, writing only a 1 means there is a single level of subdirectories, named 0 through 9. By definition proxy_temp_path supports up to three subdirectory levels, that is, you can write up to three numbers. Writing 1 gives 10 subdirectories named 0-9; writing 2 gives 100 subdirectories; writing 3 gives 1000, and the subdirectory names are numbered accordingly. Writing 1 3 means the subdirectories are split into two levels, with the 10 subdirectories 0-9 on the first level and 1000 subdirectories on the second. It can also be written the other way round, so that the first level has 1000 subdirectories and each of them has 10 subdirectories on the second level.

proxy_max_temp_file_size defines the total size of the temporary files. For example, setting it to 100m here means each temporary file can be at most 100m; once the data in a temporary file has been transferred, the file is deleted automatically.

proxy_temp_file_write_size defines how much data is written to temporary files at a time; a value such as 8k or 16k is defined here. If the amount written at one time is lower than this value, it is increased to it; if it is higher, it is reduced to it. Writing too much at once puts too heavy an I/O load on the hard disk, while writing too little fails to make full use of the disk's performance, so the value is set to be neither too fast nor too slow, making full use of the disk without overburdening it.

The figure shows an example of using proxy_buffering. First the state is set to on, that is, the buffering function is turned on. The buffer that stores the header information is 4k. Then there are 2 buffers for the other data, each 4k in size. The busy_buffers amount is 4k, so when the buffered data reaches 4k it starts being sent. Then comes the definition of the temporary file storage path, with two levels of subdirectories, 1 2: 10 subdirectories on the first level, and below each of them 100 subdirectories named 00-99 on the second level. Each temporary file is at most 20m, and the amount of data written to a temporary file at one time is defined as 8k.
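A sketch of the example just described; the values come from the text, and proxy_busy_buffers_size is the standard spelling of the directive:

    proxy_buffering on;
    proxy_buffer_size 4k;                                # buffer for the response headers
    proxy_buffers 2 4k;                                  # 2 buffers of 4k per request
    proxy_busy_buffers_size 4k;                          # start sending to the client at 4k
    proxy_temp_path /usr/local/nginx/proxy_temp 1 2;     # overflow goes to temporary files here
    proxy_max_temp_file_size 20m;
    proxy_temp_file_write_size 8k;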

Reverse proxy 06

As shown in the figure, to use proxy_cache you must first turn on the proxy_buffering function. proxy_cache is the caching function. Client a makes a request; if the data a asked for has already been saved in proxy server b's cache, b sends the relevant data to a directly instead of requesting it from server c. If caching is not enabled, proxy server b requests data from server c for every one of a's requests, so if a requests the same data twice, it is fetched from server c twice. With caching enabled, the data requested the first time is saved in the cache, and if the same data is requested a second time, b takes it straight from the cache instead of fetching it from c, which reduces the load on server c. In summary, buffering reduces the load on proxy server b, and caching reduces the load on proxied server c.

As shown in the picture, the proxy_cache function is turned on and off as follows: proxy_cache off means caching is disabled, while proxy_cache zone enables the cache, where zone is the name of the cache. The cache name can be anything, zone, 123, or any other name; writing a cache name here means enabling the cache with that name.

Starting with nginx version 0.7.66, once proxy_cache is enabled it also checks the Cache-Control and Expires headers in the HTTP response from the proxied server; if the Cache-Control value contains no-cache, the requested data is not cached.

As shown in the picture, requesting data from a website with curl -I shows that no-cache is present in the value after Cache-Control in the returned headers, indicating that the data returned by this request will not be cached.

As shown in the figure, the proxy_cache_bypass parameter sets the conditions under which the requested data is not taken from the cache but fetched directly from the backend server. The strings after this parameter are usually nginx variables, for example proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment. This setting means that if the value of any of these three variables is not empty and not zero, the response is not taken from the cache but obtained directly from the backend server. It is rarely used for now; just know that it exists.

As shown in the figure, proxy_no_cache is used much like the parameter above; it mainly sets the conditions under which the fetched data is not cached. For example, proxy_no_cache $cookie_nocache $arg_nocache $arg_comment means that when the value of any of these three variables is not empty and not zero, the fetched data is not cached.

As shown in the figure, the format of this parameter is similar to those above; generally it does not need to be set, and the default can be kept.

As shown in the figure, proxy_cache_path is the parameter that sets the concrete configuration of the cache. Besides space in memory, the cache can also set aside space on the hard disk. path specifies a directory to use as the cache path; the cache will be stored there. levels=1:2 sets the directory levels: the first number configures the first level and the second number the second level. 1 means each first-level directory name is a single character drawn from the 16 characters 0-9 and a-f, giving 16 directories. 2 means each second-level directory name is made up of two of those characters, giving 256 combinations such as 00, 01, 04, 2f, and so on. In short, this parameter sets the subdirectory levels, with the first number for the first level and the second for the second.

keys_zone sets the name and size of the shared memory zone: keys_zone=my_zone:10m means the zone is named my_zone and its size is 10MB. inactive sets how long unused data is kept before the cache entry is deleted: for example, setting it to 300s as in the figure means that data not accessed within 300 seconds is removed from the cache. max_size sets the maximum amount of cached data that can be stored on the hard disk. For example, it is set to 5g here, so the directory /data/nginx_cache/ configured above can hold at most 5g of data on disk; if that amount is exceeded, the least recently used data is deleted first to make room for the new data.

proxy_cache_path cannot be written inside the server braces in the configuration file; it has to be written inside the http braces. For example, first edit the nginx.conf configuration file and, as shown in the figure, add the proxy_cache_path line outside the server block, as in the picture. Since the specified cache directory /data/nginx_cache/ does not exist yet, create it.

As shown in the figure, edit the configuration file of a virtual host and add proxy_cache my_zone to the location. This way, when the virtual host receives a request it uses my_zone as its cache space; the concrete definition of the my_zone cache space has already been made in the nginx.conf configuration file. The configuration in nginx.conf applies to all virtual hosts, so once my_zone is defined in nginx.conf, proxy_cache my_zone can be used in every virtual host configuration file, and all of those virtual hosts can use the my_zone cache space. Then save, exit, and reload the configuration file for it to take effect. For normal use, these two lines of code are all it takes to configure the cache.
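A sketch of those two pieces, using the values mentioned in the text (the placement inside http { } and the proxied location are assumptions):

    # in nginx.conf, inside http { } but outside any server { }:
    proxy_cache_path /data/nginx_cache levels=1:2 keys_zone=my_zone:10m inactive=300s max_size=5g;

    # in the virtual host that should use the cache:
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_cache my_zone;
    }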

As shown in the figure, there is one more issue: the nginx service itself runs as nobody, while the directory was created as root, so the owner and group of the cache directory need to be changed to nobody. That way the nginx service will not run into permission problems when operating on this directory. As shown in the figure, looking at the contents of the /data/nginx_cache/ directory you can see the first-level directories 0-9 and a-f, and going into the 0 directory you can see the second-level directories made up of two characters.

In summary, the main part of configuring cache space is defining proxy_cache_path. It can be defined in nginx.conf so that any virtual host can use it. After proxy_cache_path is defined, configure proxy_cache zone_name in the virtual host server that needs caching, where zone_name is the cache space name defined in proxy_cache_path; the corresponding virtual host can then use that cache space.

That is all of the article "Example Analysis of Forward Proxy and Reverse Proxy in nginx". Thank you for reading! I hope the content shared is helpful; if you want to learn more, welcome to follow the industry information channel.
