2025-01-19 Update From: SLTechnology News&Howtos
"Service mesh" is still a new concept to most people, so talking about its "history" may seem premature. But in fact, as early as 2010 the idea of a service mesh was beginning to take shape in some large, web-scale companies. So the service mesh does have a history worth exploring and understanding.
Before we dive into the history, let's talk about the present. What is a service mesh? And why has it suddenly become a hot topic in the cloud-native world?
A service mesh is a software infrastructure layer for controlling and monitoring internal, service-to-service traffic in microservice applications. It usually takes the form of a "data plane" of network proxies deployed alongside application code, and a "control plane" for interacting with those proxies. In this model, developers ("service owners") can be blissfully unaware of the service mesh, while operators ("platform engineers") get a new set of tools for ensuring reliability, security, and visibility.
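To make the data-plane/control-plane split concrete, here is a minimal, illustrative sketch in Python. Everything in it (the `SidecarProxy` class, its retry policy, its metrics dictionary) is invented for illustration; real meshes such as Linkerd or Istio's Envoy proxies run as separate processes and intercept traffic transparently rather than being called in-process like this.

```python
class SidecarProxy:
    """Toy data-plane proxy: intercepts a service's outbound calls,
    records telemetry, and applies a retry policy. The application
    code ("service owner") stays unaware of this reliability logic."""

    def __init__(self, max_retries=2):
        self.max_retries = max_retries
        self.metrics = {"requests": 0, "failures": 0}

    def call(self, upstream, request):
        # upstream: any callable standing in for a network hop
        self.metrics["requests"] += 1
        for attempt in range(self.max_retries + 1):
            try:
                return upstream(request)
            except ConnectionError:
                self.metrics["failures"] += 1
        raise ConnectionError("upstream unavailable after retries")


# Simulate an upstream service that fails once, then recovers.
responses = iter([ConnectionError("refused"), "ok"])

def upstream(req):
    item = next(responses)
    if isinstance(item, Exception):
        raise item
    return item

proxy = SidecarProxy()
result = proxy.call(upstream, "GET /users")
print(result)         # "ok" (the failure was retried transparently)
print(proxy.metrics)  # {'requests': 1, 'failures': 1}
```

The point of the sketch is the division of labor: the retry loop and the metrics live in the proxy layer, not in the application, which is exactly what lets platform engineers operate them uniformly.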
Why is the service mesh suddenly popular? In short, because for many companies, tools like Docker and Kubernetes have "solved the deployment problem", or at least mostly solved it. But Docker and Kubernetes did not solve the runtime problem. That's where the service mesh comes in.
What does it mean to "solve the deployment problem"? Technologies like Docker and Kubernetes dramatically reduce the operational burden of deploying large numbers of applications or services. With them, deploying 100 applications or services is no longer 100 times the work of deploying one. That is a historic step forward, and for many companies it dramatically lowers the cost of adopting microservices. This is not just because Docker and Kubernetes provide powerful abstractions at the right levels, but because they standardize packaging and deployment patterns across the organization.
But what happens once the application is running? After all, deployment is not the last step to production; the application still has to run. So the question becomes: can we standardize the runtime operations of our applications the same way Docker and Kubernetes standardized their deployment?
To answer that question, the service mesh was born. In essence, a service mesh provides a uniform, global way to control and measure all request traffic between applications or services ("east-west" traffic, in data center parlance). For companies adopting microservices, this request traffic plays a critical role in runtime behavior: because services work by responding to incoming requests and issuing outgoing requests, request flow becomes a key determinant of how an application behaves at runtime. Standardizing the management of that traffic therefore becomes a tool for standardizing application runtime itself.
By providing APIs to analyze and manipulate this traffic, the service mesh provides a standardized mechanism for runtime operations across the organization, including ways to ensure reliability, security, and visibility. And like any good infrastructure layer, a service mesh (ideally) works independently of how the services themselves were built.
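The "API to manipulate traffic" idea can be sketched as a toy control plane that pushes a single policy to every data-plane proxy at once. The `ControlPlane` and `Proxy` classes and their policy keys are hypothetical, loosely modeled on how real meshes distribute configuration; they are not any real mesh's API.

```python
class Proxy:
    """Toy data-plane proxy that only holds the policy pushed to it."""
    def __init__(self):
        self.policy = {}


class ControlPlane:
    """Toy control plane: one place where operators set policy,
    which is then pushed to every registered data-plane proxy."""

    def __init__(self):
        self.proxies = []
        self.policy = {"timeout_s": 1.0, "retries": 0}

    def register(self, proxy):
        self.proxies.append(proxy)
        proxy.policy = dict(self.policy)

    def set_policy(self, **changes):
        self.policy.update(changes)
        for p in self.proxies:  # one API call reconfigures the whole mesh
            p.policy = dict(self.policy)


cp = ControlPlane()
a, b = Proxy(), Proxy()
cp.register(a)
cp.register(b)
cp.set_policy(retries=3)
print(a.policy["retries"], b.policy["retries"])  # 3 3
```

This is the operational win the article describes: the operator talks to one API, and the change takes effect uniformly across every service's traffic, regardless of what language or framework each service uses.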
How Did the Service Mesh Take Shape?
So where did the service mesh come from? After a bit of software archaeology, we find that the core features a service mesh provides (request-level load balancing, circuit breaking, retries, instrumentation, and so on) are fundamentally not new. The service mesh is ultimately a repackaging of existing functionality: what has really changed is where it lives, not what it does.
The story begins around 2010 with the rise of the three-tier application architecture, a simple model that once powered the vast majority of applications on the web. In this model, request traffic plays an important role (there are two hops!). Application traffic is handled first by the web tier, which talks to the app tier, which in turn talks to the database tier. Web servers in the web tier are designed to handle large volumes of incoming requests very quickly and hand them off carefully to relatively slow application servers. (Apache, NGINX, and other popular web servers all have very sophisticated logic for handling this situation.) Similarly, the app tier uses database libraries to communicate with the backing store, and these libraries typically handle caching, load balancing, routing, and flow control in ways optimized for this use case.
However, this three-tier model starts to break down under heavy load, especially at the app tier, which can become severely overloaded over time. Big companies like Google, Facebook, Netflix, and Twitter learned to break the monolith into many independently running pieces, giving rise to microservices. And the moment microservices arrived, east-west traffic arrived with them. In this world, communication is no longer confined to a couple of well-defined hops; it flows between every pair of services, and when it goes wrong, the whole site goes down.
These companies all responded in a similar way: they wrote "fat client" libraries to handle request traffic. These libraries (Google's Stubby, Netflix's Hystrix, and Twitter's Finagle, for example) provided a uniform runtime for all services. Developers or service owners used them to make requests to other services, and under the covers the libraries handled load balancing, routing, circuit breaking, and telemetry. By providing uniform behavior, visibility, and points of control across every service in the application, these libraries were, in effect, the first service meshes, just without the fancy name.
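The shape of such a "fat client" can be sketched as follows. This `FatClient` class is a simplified invention in the spirit of Stubby, Hystrix, or Finagle, not any of their real APIs: it combines round-robin load balancing across service instances with a crude circuit breaker that fails fast after repeated errors.

```python
import itertools


class FatClient:
    """Toy 'fat client' library: load balancing plus a crude circuit
    breaker, linked directly into every service (hence 'fat')."""

    def __init__(self, instances, failure_threshold=3):
        self._rr = itertools.cycle(instances)  # round-robin balancer
        self.failures = 0
        self.failure_threshold = failure_threshold

    def request(self, payload):
        if self.failures >= self.failure_threshold:
            raise RuntimeError("circuit open: failing fast")
        instance = next(self._rr)
        try:
            result = instance(payload)
            self.failures = 0  # a success closes the circuit again
            return result
        except ConnectionError:
            self.failures += 1
            raise


def healthy_instance(payload):
    return f"handled:{payload}"

client = FatClient([healthy_instance, healthy_instance])
resp = client.request("req-1")
print(resp)  # handled:req-1
```

The key drawback the article goes on to describe is visible right here: this logic is compiled into every service, so changing the retry or breaker behavior means redeploying every service that links the library.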
The Rise of the Proxy
Fast-forward to the modern, cloud-native world. These libraries still exist, of course. But given the operational ease of out-of-process proxies, libraries have become noticeably less attractive, especially now that containers and orchestration frameworks have dramatically reduced deployment complexity.
Proxies sidestep many of the drawbacks of libraries. When a library changes, for example, those changes must be deployed into every service, a process that often requires complex organization-wide coordination. Proxies, by contrast, can be upgraded without recompiling and redeploying every application. Likewise, proxies naturally support polyglot systems, in which an application is composed of services written in different languages, an approach that is prohibitively expensive with libraries.
Perhaps most importantly for large organizations, implementing the service mesh in proxies rather than libraries shifts responsibility for providing this functionality from the service owners to the platform engineers who are its ultimate consumers. This alignment of providers and consumers lets these teams own their own destiny and decouples complex dependencies between development and operations.
All of these factors contributed to the rise of the service mesh. By deploying a distributed "mesh" of proxies that can be maintained as part of the underlying infrastructure rather than the applications themselves, and by providing centralized APIs to analyze and manipulate this traffic, the service mesh provides a standardized mechanism for runtime operations across the entire organization, including ways to ensure reliability, security, and visibility.
Enterprise Service Mesh Applications
For large and midsize enterprises, a service mesh greatly simplifies the user experience and takes Kubernetes to the next level; it is widely recognized in the industry as the best technical design for a new generation of microservice architectures. Recently, enterprises and technology communities at home and abroad have been applying and exploring service mesh technology in earnest. For most enterprises using containers, the service mesh looks like the last missing piece of the container deployment puzzle.
KubeCon + CloudNativeCon, hosted by CNCF and the world's top conference on Kubernetes and container technology, will be held in Shanghai on November 14-15 this year, KubeCon's first time in China. On November 13, Rancher Labs and Huawei will join CNCF and many enterprise customers in China to host the 2018 Cloud Native Service Mesh (Istio) Enterprise Summit.
There, technology leaders and microservice architects from Huawei, SAIC Group, Rancher Labs, Yunhong, Industrial Bank, Feiloan Finance, Goldwind Technology, Shuwei Information, and other well-known enterprises will share their experience building new-generation microservice architectures and applying service meshes.
Date: Tuesday, November 13
Time: 9:00 a.m. to 6:00 p.m.
Venue: Changfeng Grand Ballroom, JW Marriott Hotel Shanghai New Development Asia Pacific
Registration: t.cn/RFG85AW
We invite you to join us at the summit to discuss the adoption of new technologies such as containers, microservices, and Kubernetes.
Original English article: https://thenewstack.io/history-service-mesh/