An Analysis of the NGINX Microservices Architecture

2025-02-24 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 05/31 Report

This article analyzes the NGINX microservices architecture. The approach introduced here is simple and practical, so let's take a look at how NGINX supports microservices.

Introduction

NGINX has been involved in the microservices movement from the very beginning. Its light weight, high performance, and flexibility make it ideal for microservices.

The NGINX Docker image is the number-one application image on Docker Hub, and most of the microservices platforms you find on the web today include a demo that deploys NGINX in some form and connects it to a welcome page.

Because we believe that the shift to microservices is critical to our customers' success, we at NGINX have launched a dedicated program to develop features and practices that support this seismic shift in web application development and delivery. We also recognize that there are many different ways to implement microservices, many of them novel and specific to the needs of individual development teams. We believe that models are needed to make it easier for companies to develop and deliver their own microservices-based applications.

With all this in mind, NGINX Professional Services is developing the NGINX Microservices Reference Architecture (MRA), a set of models that customers can use to create their own microservices applications.

The MRA consists of two parts: a detailed description of each of the three models, and downloadable code that implements our sample photo-sharing program, Ingenious. The only difference between the three models is the configuration code used to set up NGINX Plus for each of them. This series of blog posts will provide an overview of each model; detailed descriptions, the configuration code, and the code for the Ingenious sample program will be released later this year.

We have three goals for building this reference architecture:

Provide ready-to-use blueprints that customers and the industry can use to build microservices-based systems, accelerating and improving development

Create a platform for testing new features in NGINX and NGINX Plus, whether developed internally or externally, and whether distributed in the product core or as dynamic modules

Help us understand partner systems and components, so that we can understand the microservices ecosystem as a whole

The Microservices Reference Architecture is also an important part of NGINX's professional services offering for customers. In the MRA, we use features common to open source NGINX and NGINX Plus wherever possible, and NGINX Plus-specific features where needed. The dependency on NGINX Plus is stronger in the more complex models, as described below. We expect that many users of the MRA will benefit from access to NGINX Professional Services and to the technical support that comes with an NGINX Plus subscription.

Overview of the Microservices Reference Architecture

We are building the Reference Architecture to comply with the principles of the Twelve-Factor App. The services are designed to be lightweight, ephemeral, and stateless.

The MRA uses industry-standard components such as Docker containers, a variety of languages (Java, PHP, Python, NodeJS/JavaScript, and Ruby), and NGINX-based networking.

When migrating to microservices, one of the biggest changes in application design and architecture is the use of the network for communication between the functional components of the application. In a monolithic application, components communicate in memory. In a microservices application, that communication takes place over the network, so network design and implementation become critically important.

To reflect this, MRA has been implemented using three different network models, all of which use NGINX or NGINX Plus. They range from relatively simple to feature-rich and more complex:

The Proxy Model – a simple network model, suitable for implementing NGINX Plus as a controller or API gateway for a microservices application. This model is based on Docker Cloud.

The Router Mesh Model – a more robust network approach, with a load balancer on each host that manages the connections between systems. This model is similar to the architecture of Deis 1.0.

The Fabric Model – the crown jewel of the MRA, the Fabric Model has NGINX Plus in each container, handling all ingress and egress traffic. It works well for high-load systems and supports SSL/TLS at all levels, with NGINX Plus providing reduced latency, persistent SSL/TLS connections, service discovery, and the circuit breaker pattern across all microservices.

Our goal is for you to use these models as a starting point for your own microservices implementations, and we welcome your feedback on how to improve the MRA. (You can start by adding comments below.)

The following is a brief description of each model; we suggest you read all the descriptions to get an initial sense of how best to use one or more of the models. Future blog posts will describe each model in detail, one post per model.

The Proxy Model in Brief

The Proxy Model is a relatively simple network model. It is an excellent starting point for an initial microservices application, or as a target model when converting a moderately complex monolithic legacy application.

In the Proxy Model, NGINX or NGINX Plus acts as an ingress controller, routing requests to the microservices. NGINX Plus can use dynamic DNS for service discovery as new services are created. The Proxy Model is also suitable for use as a template when using NGINX as an API gateway.
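As a rough illustration, an ingress configuration for the Proxy Model might look like the fragment below. This is a minimal, hypothetical sketch, not code from the MRA: the service hostname uploader.service.local, the resolver address, and the /uploader/ route are all assumptions, and the resolve parameter on the server directive is an NGINX Plus feature.

```nginx
# Hypothetical Proxy Model ingress fragment (inside an http{} block).
resolver 10.0.0.2 valid=10s;               # DNS server fronting the service registry (assumed address)

upstream uploader {
    zone uploader 64k;                     # shared memory zone, required for "resolve"
    server uploader.service.local resolve; # NGINX Plus re-resolves as instances change
}

server {
    listen 80;

    location /uploader/ {
        proxy_pass http://uploader;        # route this URI space to the uploader service
    }
}
```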

If interservice communication is required – and it is in most applications of any level of complexity – the service registry provides the mechanism within the cluster. (For a detailed list of interservice communication mechanisms, see this blog post.) Docker Cloud uses this approach by default: to connect to another service, a service queries DNS, obtains an IP address, and sends its request there.

In general, the Proxy Model is suitable for simple to moderately complex applications. It is not the most efficient approach to load balancing, especially at scale; if you have demanding load-balancing requirements, use one of the models described below. ("Scale" can refer to a large number of microservices as well as to high traffic volumes.)

Editor – For an in-depth exploration of this model, see MRA, Part 2 – The Proxy Model.

The Router Mesh Model

The Router Mesh Model is moderately complex and is a good fit for robust new application designs, as well as for converting more complex monolithic legacy applications that do not need the capabilities of the Fabric Model.

The Router Mesh Model takes a more robust approach to networking than the Proxy Model, running a load balancer on each host and actively managing the connections between microservices. The main advantage of the Router Mesh Model is more efficient and robust load balancing between services. If you use NGINX Plus, you can implement active health checks to monitor individual service instances and throttle traffic gracefully when they shut down.

Deis Workflow uses a method similar to the Router Mesh Model to route traffic between services, with NGINX instances running in a container on each host. When a new application instance is started, a process extracts the service information from the etcd service registry and loads it into NGINX. NGINX Plus can also work in this mode, using various locations and their associated upstreams.
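To make the active-health-check idea concrete, a per-host load-balancer fragment might look like the following. This is a hedged sketch rather than the MRA's actual configuration: the upstream name, the instance addresses, and the /health endpoint are assumptions, and the health_check directive is an NGINX Plus feature.

```nginx
# Hypothetical per-host Router Mesh fragment (inside an http{} block).
upstream pages {
    zone pages 64k;              # shared memory zone so workers share upstream state
    server 10.0.1.10:8080;       # service instances reachable from this host (assumed)
    server 10.0.1.11:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://pages;
        # NGINX Plus active health check: probe /health every 5s, mark an
        # instance down after 3 failed probes, up again after 2 successes.
        health_check uri=/health interval=5s fails=3 passes=2;
    }
}
```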

And Finally – the Fabric Model, with Optional SSL/TLS

We at NGINX are most excited about the Fabric Model. It brings some of the most exciting promises of microservices – including high performance, flexible load balancing, and ubiquitous SSL/TLS – down to the level of individual microservices. The Fabric Model is suitable for secure applications and scales to very large applications.

In the Fabric Model, NGINX Plus is deployed in every container and becomes the proxy for all HTTP traffic going into and out of the container. The application talks to a localhost location for all service connections and relies on NGINX Plus for service discovery, load balancing, and health checks.

In our configuration, NGINX Plus queries ZooKeeper for all instances of the services the application needs to connect to. With the DNS frequency setting, valid, set to 1 second, for example, NGINX Plus re-scans ZooKeeper for instance changes every second and routes traffic appropriately.

Because of the powerful HTTP processing in NGINX Plus, we can use keepalives to maintain persistent connections to the microservices, reducing latency and improving performance. This is an especially valuable feature when using SSL/TLS to secure traffic between the microservices.
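Under the assumptions above, the per-container NGINX Plus proxy could be sketched as follows. Everything specific here is hypothetical – the address of the registry's DNS interface, the album-mgr service name, and the local listen port – while the 1-second valid setting, the keepalive connections, and SSL/TLS between services follow the description in the text.

```nginx
# Hypothetical Fabric Model fragment for one container (inside an http{} block).
resolver 127.0.0.1:8600 valid=1s;            # DNS interface to the registry (assumed address);
                                             # re-resolve service instances every second

upstream album-mgr {
    zone album-mgr 64k;
    server album-mgr.service.local resolve;  # NGINX Plus dynamic DNS service discovery
    keepalive 32;                            # persistent connections avoid repeated TLS handshakes
}

server {
    listen 127.0.0.1:10000;                  # the app talks to localhost; NGINX Plus proxies outward

    location / {
        proxy_pass https://album-mgr;        # SSL/TLS between microservices
        proxy_http_version 1.1;              # required for upstream keepalive
        proxy_set_header Connection "";
    }
}
```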

Finally, we use NGINX Plus's active health checks to route traffic to healthy instances, essentially getting the circuit breaker pattern for free.

Ingenious, the Demo Application for the MRA

The MRA includes a sample application as a demo: the Ingenious photo-sharing application. Ingenious is implemented in each of the three models – Proxy, Router Mesh, and Fabric. The Ingenious demo application will be released to the public later this year.

Ingenious is a simplified version of a photo storage and sharing application, à la Flickr or Shutterfly. We chose a photo-sharing application for the following reasons:

It is easy for users and developers to master its functions.

Multiple data dimensions need to be managed.

It is easy to incorporate beautiful design into the application.

It presents asymmetric computing requirements – a mix of high-intensity and low-intensity processing – which enables realistic testing of failover, scaling, and monitoring across different kinds of functions.

At this point, you should have a deeper understanding of the NGINX microservices architecture. We encourage you to try these models out in practice.
