
How did the Service Mesh pattern come about?


This article explains how the Service Mesh pattern came about.

Distributed systems help us solve many use cases we could not even have imagined before, but they also bring new problems.

When systems were smaller and their architectures simpler, developers reduced the extra complexity by minimizing the number of remote interactions. The safest way to handle distribution was to avoid it as much as possible, even at the cost of duplicating logic and data across systems.

But the reality is that, from a handful of large central computers at the beginning to hundreds of small services today, the industry's needs have forced us to make breakthroughs. We had to leave that comfort zone and tackle emerging challenges and outstanding issues, first with ad hoc, case-by-case solutions and then with more sophisticated approaches. As we kept solving problems and designing better solutions, patterns, libraries, and platforms that address the most common requirements emerged.

The networking of computers

At first, people wanted two or more computers to interact: one service talking to another to accomplish some goal for an end user. This is obviously an oversimplified view, because the many layers of translation between the bytes our code operates on and the electrical signals sent and received over wires are missing, but the abstraction is sufficient for our discussion. The important refinement is that each computer needs a network stack, a component in its own right sitting between the service and the wire.

These models have been used repeatedly since the 1950s. Initially, computers were rare and expensive, so every connection between two nodes was carefully designed and maintained. However, as computers became cheaper and more widespread, the number of connections and the volume of data increased dramatically. As people came to depend on networked systems more and more, developers had to ensure that the software they built met users' quality-of-service requirements.

To get there, a lot of problems had to be solved, such as getting machines to find each other, handling multiple concurrent connections over one line, allowing machines to communicate without direct connections, routing packets between networks, encrypting communications, and so on.

Take flow control as an example. Flow control is a mechanism that prevents a server from sending more packets than a downstream server can handle. In a networked system we have at least two independent computers that "know" very little about each other, so flow control is necessary. Computer A sends bytes to computer B at a given rate, with no guarantee that B will process them quickly or consistently enough: B may be busy running other tasks in parallel, or packets may arrive out of order while B is blocked waiting for the one that should have arrived first. In other words, not only does computer A not know computer B's capacity, it may well make things worse, overloading B, which now has to queue every incoming packet for processing.

There was a time when people building networked services and applications were expected to handle these challenges in their own code. In our flow-control example, this meant that the application itself had to contain the logic needed to ensure we did not overload a service with packets, with heavy network logic sitting right alongside the business logic.
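To make this concrete, here is a minimal sketch (mine, not from the original article) of what sender-side flow control looked like when it lived inside the application; the one-byte-per-packet ack convention and all names are invented for illustration:

```python
import socket

MAX_IN_FLIGHT = 100  # unacknowledged packets tolerated before pausing

def send_with_flow_control(sock: socket.socket, packets: list[bytes]) -> None:
    """Send packets, throttling whenever the receiver falls behind.

    Assumption for this sketch: the receiver sends back one ack byte
    per processed packet.
    """
    in_flight = 0
    for payload in packets:
        while in_flight >= MAX_IN_FLIGHT:
            acks = sock.recv(4096)   # block until the receiver catches up
            in_flight -= len(acks)
        sock.sendall(payload)        # the actual business of sending
        in_flight += 1
```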

Fortunately, technology developed rapidly, and standards such as TCP/IP emerged that folded solutions to flow control and other problems into the network stack. The code still exists, but it has moved from the application into the underlying network layer provided by the operating system.
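For contrast, a sketch of the same send loop once the operating system's TCP stack owns flow control; the application shrinks to the business of sending:

```python
import socket

def send_over_tcp(host: str, port: int, packets: list[bytes]) -> None:
    """The kernel's sliding window blocks sendall() for us whenever
    the receiver falls behind; no hand-rolled throttling needed."""
    with socket.create_connection((host, port)) as sock:
        for payload in packets:
            sock.sendall(payload)   # flow control happens below this line
```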

Extreme performance and reliability requirements aside, few organizations can say they drive their business with anything other than the TCP/IP stack that comes with a commodity operating system.

The emergence of microservices architecture

By now, computers are everywhere, and these networking stacks have proven themselves the de facto toolset for reliably connecting systems. With more nodes and stable connections, the industry began using ever more distributed kinds of networked systems, from fine-grained distributed agents and objects to service-oriented architectures made up of larger but still heavily distributed components.

This extreme distribution enables many interesting advanced use cases and benefits, but it also surfaces challenges. Some are brand new, while others are just scaled-up versions of the ones we discussed for raw networking.

In the 1990s, Peter Deutsch and his colleagues at Sun Microsystems compiled "The Eight Fallacies of Distributed Computing," a list of assumptions people tend to make when working with distributed systems. Peter's point was that these may have been true in more primitive network architectures or theoretical models, but they do not hold in the modern world:

The Network is Reliable

Latency is Zero

Bandwidth is Infinite

The Network is Secure

Topology Doesn't Change

There Is One Administrator

Transport Cost is Zero

The Network is Homogeneous

Calling these assumptions fallacies means that developers cannot ignore these problems and must address them explicitly.

To complicate matters, the move to more distributed systems--what we often call microservices architectures--introduces new requirements in terms of operability. Here are some of the issues we have to deal with:

Rapid provisioning of compute resources

Basic monitoring

Rapid deployment

Easy to provision storage

Easy access to the edge

Authentication/Authorisation

Standardised RPC

So while the TCP/IP stack and the general-purpose networking model developed decades ago remain powerful tools for making computers talk to each other, the more sophisticated architectures introduced another layer of requirements that developers working in those architectures had to meet all over again.

For example, consider service discovery and circuit breakers, two techniques used to tackle some of the resiliency and distribution challenges listed above.

History tends to repeat itself: the first organizations to build systems based on microservices followed strategies very similar to those of previous generations of networked computers, meaning that the burden of handling these requirements fell on the developers writing the services.

Service discovery is the process of automatically finding which service instances satisfy a given query: for example, a service named Teams needs to find instances of a service named Players with the attribute environment set to production. You invoke some discovery procedure that returns a list of suitable servers. For monolithic architectures this is a simple task, typically handled with DNS, load balancers, and some port-number convention (e.g. all services bind their HTTP servers to port 8080). In a distributed environment the task becomes far more complex: services that could previously blindly trust a DNS lookup now have to deal with client-side load balancing, multiple different environments, geographically dispersed servers, and so on. Where one line of code used to resolve a hostname, our services now need many lines of boilerplate to handle the various cases introduced by distribution.
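As a rough illustration of that boilerplate, here is a hypothetical discovery client for the Teams/Players example; the in-memory registry stands in for ZooKeeper, Eureka, or DNS SRV records, and all names and addresses are invented:

```python
import random

# Hypothetical registry: (service, environment) -> live instances.
REGISTRY = {
    ("players", "production"): [
        "10.0.1.12:8080", "10.0.1.13:8080", "10.0.2.7:8080",
    ],
}

def discover(service: str, environment: str) -> list[str]:
    """Return every instance matching the query."""
    instances = REGISTRY.get((service, environment), [])
    if not instances:
        raise LookupError(f"no instances of {service} in {environment}")
    return instances

def pick_instance(service: str, environment: str) -> str:
    # Client-side load balancing: the naive version is a random choice;
    # real code would also weigh health checks, latency, and zone locality.
    return random.choice(discover(service, environment))

# The Teams service resolving a downstream Players instance:
# addr = pick_instance("players", "production")
```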

The circuit breaker is a pattern described by Michael Nygard and summarized by Martin Fowler as:

The logic behind a circuit breaker is simple: wrap a protected function call in a circuit breaker object that monitors for failures. Once the failures reach a certain threshold, the circuit breaker trips, and all further calls to it return an error, without the protected call being made at all. Usually you will also want some kind of monitoring alert if the breaker trips.
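A minimal sketch of the pattern Fowler describes might look like this; the threshold and reset timings are arbitrary illustration values:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker sketch: count failures, then fail fast."""

    def __init__(self, threshold: int = 5, reset_after: float = 30.0):
        self.threshold = threshold      # failures before the breaker trips
        self.reset_after = reset_after  # seconds before we allow a retry
        self.failures = 0
        self.opened_at = None           # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # Fail fast; a monitoring alert would hook in here.
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None       # half-open: let one call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()   # trip the breaker
            raise
        self.failures = 0               # success closes the circuit
        return result
```

Wrapping a remote call is then just `breaker.call(fetch_players)`, with `fetch_players` standing in for whatever protected call the service makes.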

Such simple devices can add a lot of reliability to interactions between services. Yet, again, as the level of distribution increases they tend to become much more complex, and the likelihood of system problems grows, even for something as simple as "the breaker trips and some monitoring alert fires." What used to take just a few lines of code now requires a lot of boilerplate.

Frankly, the two examples above are hard to implement correctly, which is why large, sophisticated libraries such as Twitter's Finagle and Facebook's Proxygen became popular as a way to avoid rewriting the same logic in every service.

The model described above was adopted by most of the pioneers of microservices architecture, companies like Netflix, Twitter, and SoundCloud, and as the number of services in their systems grew, they stumbled upon various flaws in this approach.

Even with a library like Finagle, companies still need to invest engineering time in building the "glue" that connects the library to the rest of their ecosystem. In SoundCloud's and DigitalOcean's experience, in an organization of 100 to 250 engineers, following this strategy requires dedicating as much as a tenth of the staff to building tooling. That cost is obvious when developers are assigned to teams dedicated to building tools, but more often it is hidden and paid in time.

The second problem is that this approach limits the tools, runtimes, and languages that microservices can use. Microservice libraries are usually written for a specific platform, whether a programming language or a runtime such as the JVM. If an enterprise uses platforms other than the ones the library supports, it usually has to port the code to the new platform itself. Developers end up building tools and infrastructure again instead of focusing on the core business and products. That is why mid-sized organizations like SoundCloud and DigitalOcean decided to support only one platform for internal services: Scala for the former, Go for the latter.

The last issue worth discussing is governance. The library model abstracts away the features needed to satisfy the requirements of a microservices architecture, but the library is itself a component that must be maintained. Ensuring that thousands of service instances use the same, or at least compatible, library versions is no easy task, and every update means integrating, testing, and redeploying all services, even if nothing in the services themselves has changed.

Next steps

Just as with the network stack, it is highly desirable to extract the features required by massively distributed services into an underlying platform.

People write very sophisticated applications and services using higher-level protocols such as HTTP without even thinking about how TCP controls the packets on the network. That is exactly what microservices need: developers working on a service can focus on their business logic instead of wasting time writing their own service-infrastructure code or managing libraries and frameworks for the entire team.

Combining this idea with our earlier picture, we end up with every service backed by a common layer that provides these communication features.

Unfortunately, changing the network stack to add this layer is not feasible. The solution many practitioners landed on is to implement it as a set of proxies. The idea is that a service does not connect directly to its downstream dependencies; instead, all traffic goes through a small piece of software that transparently adds the desired features.
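To make the idea tangible, here is a toy sidecar-style TCP forwarder (a sketch of the concept, not any of the real proxies discussed below). The service keeps dialing localhost, and this process relays the bytes to the real dependency, at exactly the point where metrics, retries, or TLS could be added transparently. The addresses are placeholders:

```python
import socket
import threading

LISTEN_ADDR = ("127.0.0.1", 9000)            # what the local service sees
UPSTREAM_ADDR = ("players.internal", 8080)   # the actual downstream

def pump(src: socket.socket, dst: socket.socket) -> None:
    # Relay bytes until the source closes its side of the connection.
    while chunk := src.recv(4096):
        dst.sendall(chunk)   # metrics, retries, or TLS could hook in here

def handle(client: socket.socket) -> None:
    upstream = socket.create_connection(UPSTREAM_ADDR)
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()
    pump(client, upstream)   # returns when the client hangs up

if __name__ == "__main__":
    server = socket.create_server(LISTEN_ADDR)
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
```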

The first documented developments of this idea used the concept of a "sidecar." A sidecar is a helper process that runs alongside an application and provides it with additional features. In 2013, Airbnb wrote about Synapse and Nerve, an open-source sidecar implementation. A year later, Netflix introduced Prana, a sidecar dedicated to letting non-JVM applications benefit from the NetflixOSS ecosystem. SoundCloud also built sidecars that let its legacy Ruby services use the infrastructure built for its JVM microservices.

While there are several such open-source proxy implementations, they tend to be designed to work with specific infrastructure components. For example, for service discovery, Airbnb's Nerve and Synapse assume that services are registered in ZooKeeper, while Prana assumes the use of Netflix's own Eureka service registry.

As microservices architectures grew more popular, a new breed of proxy emerged, flexible enough to accommodate different infrastructure components and preferences. The first widely known system of this kind was Linkerd, created by Buoyant based on its engineers' earlier work on Twitter's microservices platform. Soon after, Lyft's engineering team announced Envoy, a project following similar principles.

Service Mesh

In this model, each service has a companion proxy sidecar. Since services communicate with each other only through the sidecar proxies, we end up with a deployment in which every service instance is paired with its own proxy.

Buoyant CEO William Morgan observed that the interconnections between the proxies form a mesh network, so in early 2017 he wrote a definition for this platform and called it a Service Mesh:

A Service Mesh is the infrastructure layer that handles service-to-service communication and is responsible for the reliable delivery of requests. In practice, a Service Mesh is typically implemented as an array of lightweight network proxies, deployed alongside the application but transparent to it.

What is most powerful about this definition is that it no longer treats the proxies as separate components, and it recognizes that the network they form is valuable in itself.

As enterprises move their microservices deployments to more sophisticated runtimes such as Kubernetes and Mesos, they have begun using the tools those platforms provide to implement the mesh idea properly. They are moving from a set of independent proxies to a proper, somewhat centralized, control plane.

In the bird's-eye view, the actual service traffic still flows directly from proxy to proxy, but the control plane knows about every proxy instance. The control plane enables the proxies to implement features such as access control and metrics collection.
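A hypothetical sketch of that split, with invented rule names: the proxies (the data plane) carry the traffic, while the control plane only registers them, hands out policy, and gathers their metrics:

```python
class ControlPlane:
    """Knows about every proxy; never sits on the request path itself."""

    def __init__(self):
        self.proxies = {}   # proxy_id -> last metrics report

    def register(self, proxy_id: str) -> dict:
        self.proxies[proxy_id] = {}
        # Policy is decided centrally but enforced inside the proxies;
        # the access-control rules below are invented examples.
        return {"allow": ["teams -> players"], "deny": ["* -> billing"]}

    def report(self, proxy_id: str, metrics: dict) -> None:
        self.proxies[proxy_id] = metrics   # central metrics collection
```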

It is too early to fully understand the impact of the Service Mesh in large systems, but two benefits of this approach already seem clear to me. First, not having to write custom software to handle what is ultimately commodity code in a microservices architecture will let many smaller organizations enjoy features that were previously available only to large enterprises, creating all sorts of interesting use cases. Second, this architecture may let us finally realize the dream of using the best tool or language for each job without worrying about the availability of libraries and patterns for every single platform.
