Shulou (Shulou.com) — SLTechnology News & Howtos, Servers — updated 2025-03-16
Today I will talk about the reasons for using a reverse proxy with Node.js. Many people may not know much about the topic, so I have summarized the following for you, and I hope you get something out of this article.
What is a reverse proxy?
A reverse proxy is basically a special type of web server that accepts requests, forwards them to another HTTP server somewhere else, receives the response, and forwards it back to the original requester.
Reverse proxies usually do not forward the original request exactly as received; typically they modify it in some way. For example, if the reverse proxy listens at www.example.org:80 and forwards requests to ex.example.org:8080, it will probably rewrite the original Host header to match the destination. Reverse proxies may also modify requests in other ways, such as cleaning up a malformed request or translating between protocols.
Once the reverse proxy receives a response, it may transform that too. Again, a common change is to modify the Host header to match the original request.
The response body can also be modified. A common modification is to gzip-compress the response. Another common change is to provide HTTPS support when the underlying service speaks only HTTP.
Reverse proxies can also distribute incoming requests across multiple backend instances. If a service is exposed at api.example.org, a reverse proxy can forward requests to api1.internal.example.org, api2.internal.example.org, and so on.
There are many different reverse proxies. Two of the most popular are Nginx and HAProxy. Both can perform gzip compression and add HTTPS support, and both excel in other areas as well. Nginx is the more popular of the two and has some other useful capabilities, such as serving static files from the file system, so we will use it as the example in this article.
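As a sketch of the load-balancing setup described above, an Nginx configuration might look like the following. The internal hostnames are the article's examples; everything else is a hypothetical minimal configuration, not a production-ready one.

```nginx
# Hypothetical minimal reverse-proxy sketch.
# Requests to api.example.org are distributed across two internal backends.
upstream api_backend {
    server api1.internal.example.org:8080;
    server api2.internal.example.org:8080;
}

server {
    listen 80;
    server_name api.example.org;

    location / {
        # Forward each request to one of the upstream servers.
        # By default Nginx also rewrites the Host header to match
        # the destination, as described in the text.
        proxy_pass http://api_backend;
    }
}
```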
Now that we know what a reverse proxy is, let's see why we use it for Node.js.
Why should I use a reverse proxy?
SSL termination
SSL termination is one of the main reasons for using a reverse proxy. Changing an application's protocol from http to https takes more than appending an "s": Node.js itself can perform the necessary encryption and decryption for https, and it can be configured to read the necessary certificate files.
However, configuring the protocol used to communicate with our application and managing constantly expiring SSL certificates are not really things our application needs to concern itself with. Checking certificates into a codebase is not only a hassle but also a security risk, and fetching certificates from a particular location at application startup carries its own risks.
In view of this, it is best to perform SSL termination outside the application, usually in a reverse proxy. Thanks to tools like certbot, maintaining certificates with Nginx is as simple as setting up a scheduled task that automatically installs new certificates and dynamically reconfigures the Nginx process. This is far less disruptive than restarting every Node.js application instance.
At the same time, letting a reverse proxy perform SSL termination means that only code written by the reverse proxy's authors can access your private SSL certificate. If your Node.js application handles SSL instead, every third-party module used by your application, including potentially malicious ones, has access to your private SSL certificate.
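A minimal sketch of SSL termination in Nginx might look like this. The certificate paths are the ones certbot typically writes for a Let's Encrypt certificate, and the backend port is an assumption; adjust both for your environment.

```nginx
# Hypothetical SSL-termination sketch: HTTPS stops at Nginx,
# and traffic to the Node.js application stays plain HTTP on localhost.
server {
    listen 443 ssl;
    server_name example.org;

    # Paths as typically written by certbot (assumed here).
    ssl_certificate     /etc/letsencrypt/live/example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
```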
Gzip compression
Gzip compression is another feature you should offload from the application to the reverse proxy. Gzip policies are best set at the organization level, rather than specified and configured separately for each application.
It is best to apply some logic when deciding what to gzip. For example, a very small file, perhaps anything under 1 KB, may not be worth compressing, because its gzipped version may actually be larger, or because the CPU cost of decompressing it on the client may not pay off. Likewise, binary data may not benefit from compression, depending on its format. And gzip cannot simply be switched on blindly: the incoming Accept-Encoding header must be checked to see which compression algorithms the client actually supports.
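The policy just described maps quite directly onto Nginx's gzip directives. A hypothetical sketch (the size threshold and MIME-type list are illustrative choices, not recommendations):

```nginx
# Hypothetical gzip policy, set once at the proxy instead of per application.
gzip on;
gzip_min_length 1024;   # skip tiny responses that may grow when compressed
gzip_types text/plain text/css application/json application/javascript;
# Nginx checks the client's Accept-Encoding header automatically and
# only compresses when the client advertises gzip support.
```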
Clustering
JavaScript is a single-threaded language, and Node.js is accordingly a single-threaded server platform (although the experimental worker-thread support that arrived in Node.js v10 aims to change this). This means that to get as much throughput as possible from a Node.js application, you need to run roughly as many instances as there are CPU cores.
Node.js can do this itself with its built-in cluster module: incoming HTTP requests are received by a master process and then handed off to cluster workers.
However, dynamically scaling cluster workers can be a struggle. There is also usually some throughput overhead, since an additional Node.js process has to run as the dispatching master. And scaling across machines is something cluster cannot do at all.
For these reasons it is sometimes better to use a reverse proxy to distribute requests across running Node.js processes. A reverse proxy can be dynamically reconfigured to point at newly started application processes. Frankly, an application should focus on its own work; it should not be managing multiple copies of itself and distributing requests among them.
Enterprise routing
When working on large web applications, especially those built by organizations with multiple teams, it is useful to have a reverse proxy decide where to forward requests. For example, requests to example.org/search/* can be routed to an internal search application, while requests to example.org/profile/* can be routed to an internal profile application.
This kind of routing enables other powerful features, such as sticky sessions, blue/green deployments, A/B testing, and so on. I have personally worked in a codebase where the application itself performed this logic, and that approach made the application very difficult to maintain.
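The path-based routing just described can be sketched in Nginx as follows. The internal hostnames are hypothetical placeholders for the two teams' services.

```nginx
# Hypothetical path-based routing at the edge: each team's application
# is addressed by URL prefix, with hostnames assumed for the example.
server {
    listen 80;
    server_name example.org;

    location /search/ {
        proxy_pass http://search.internal.example.org;
    }

    location /profile/ {
        proxy_pass http://profile.internal.example.org;
    }
}
```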
Performance benefit
Node.js is highly capable. It can serve static resources from the file system, perform gzip compression on HTTP responses, and has built-in support for HTTPS, among many other features. It can even run multiple instances of an application and distribute requests itself, via the cluster module.
Fundamentally, however, it is in our interest to have a reverse proxy handle these operations rather than the Node.js application. Beyond the reasons listed above, another reason to do this outside Node.js is efficiency.
SSL encryption and gzip compression are both highly CPU-intensive operations. Dedicated reverse-proxy tools such as Nginx and HAProxy specialize in them and perform them faster than Node.js does. A web server like Nginx is also faster than Node.js at reading static content from disk. Even the clustering done by an extra Node.js process can sometimes be handled more efficiently by an Nginx reverse proxy, with a smaller memory and CPU footprint.
But don't just take my word for it. Let's run some benchmarks!
The following load tests were performed with siege (an HTTP/HTTPS regression-testing and benchmarking tool). We ran the command with a concurrency of 10 (10 simulated users making requests), and each simulated user made 20,000 requests, for 200,000 requests in total.
To measure memory usage, we ran pmap <pid> | grep total several times during each benchmark and averaged the results; the Linux pmap command reports how much memory a process uses. When Nginx runs with a single worker there are two processes, one master and one worker, so we add the two values. When a Node.js cluster runs with two workers there are three processes: one master and two workers. The "approx memory" column in the table below is the total across every Nginx and Node.js process in a given test.
Results of the benchmark
We used two workers in the node-cluster benchmark, which means three Node.js processes were running: one master and two workers. In the nginx-cluster-node benchmark we had two Node.js processes running. Each Nginx test has one Nginx master process and one Nginx worker process. The benchmarks involve reading a file from disk, and both Nginx and Node.js were configured to cache the file in memory.
Using Nginx instead of Node.js for SSL termination increases throughput by about 16% (746 rps to 865 rps). Using Nginx for gzip compression increases throughput by about 50% (5,047 rps to 7,590 rps). Using Nginx to manage the process cluster causes a performance loss of about 1% (8,006 rps to 7,908 rps), probably due to the overhead of passing each request over the loopback interface.
A basic single-process Node.js server uses about 600MB of memory, while an Nginx process uses about 50MB. These figures fluctuate slightly depending on which features are in use; for example, Node.js uses roughly an additional 13MB when performing SSL termination, and Nginx uses roughly an additional 4MB when acting as a reverse proxy for Node.js and serving static content from the file system. One interesting observation is that Nginx's memory usage stays stable over its lifetime, whereas Node.js's memory usage fluctuates constantly because of JavaScript's garbage collection.
The following software versions were used for this benchmark:
Nginx: 1.14.2
Node.js: 10.15.3
Siege: 3.0.8
The tests were performed on a machine with 16GB of memory, an Intel i7-7500U CPU (4x2.70GHz), and Linux kernel 4.19.10. The project is available at github.com/IntrinsicLa … .
Simplify the application code
The benchmarks are thorough and convincing, but in my view the biggest benefit of moving work out of a Node.js application and into a reverse proxy is code simplicity. We get to remove lines of potentially buggy imperative application code and replace them with declarative configuration. And a common sentiment among developers is that they trust code written by an external team, such as the Nginx developers, more than code written by themselves.
Instead of installing a gzip-compression middleware and keeping it up to date across multiple Node.js projects, we can configure it once in one place. Instead of shipping or loading SSL certificates and restarting application processes, we can use existing certificate-management tools. Instead of adding conditionals to the application to check whether a process is the master or a worker, we can leave that to another tool.
Reverse proxies allow our applications to focus on business logic and ignore protocol and process management.
Although Node.js is perfectly capable of running in production, combining a reverse proxy with an HTTP Node.js application in production brings many benefits. Operations like SSL and gzip become faster. Managing SSL certificates becomes simpler. And the amount of application code required shrinks.
After reading the above, do you have a better understanding of the reasons for using a reverse proxy with Node.js? If you want to learn more, please follow the industry information channel. Thank you for your support.