Preface
With the development of computer software technology, software architecture has kept evolving toward letting developers build large, complex applications more easily and quickly. Container technology originally emerged to solve the problem of inconsistent running environments; as it has continued to develop, more and more new directions have grown up around it.
In recent years, many new software architecture patterns have emerged in the field of cloud computing, including popular concepts such as cloud native, function compute, Serverless, ServiceMesh and so on. This article tries to lift the veil on ServiceMesh; what follows is based on my own understanding and is described in plain language as far as possible.
Background and definition of microservices and service governance
Before microservices, all modules of a piece of software were usually packed into one application and compiled, packaged, deployed, and operated together. This approach has many problems: because a single application contains so much, a problem in or an update to any one module means the entire application has to be redeployed, which causes a lot of trouble for both development and operations. As applications gradually grow more complex, a single application ends up involving more and more, so over time the shortcomings become obvious and services start to be split up. A large application is divided along different dimensions into many small applications, call relationships form between them, and each small application is developed, deployed, and operated separately. This is how microservices appeared.
Because the applications in a microservice architecture are deployed on different machines, the services need to communicate and coordinate with each other, which is more troublesome than in a single application. Calls between different methods inside the same application are addressable and fast, because after the code is compiled and packaged they are linked within the same process memory. Calls between different services under microservices involve communication across processes or machines, which generally needs to be mediated and coordinated by third-party middleware. For these reasons, many middleware frameworks oriented toward microservices have appeared, including frameworks for service governance. Through a service governance tool, all the applications connected to it can be managed, which makes communication and coordination between services simpler and more efficient.
Containers and container orchestration
Initially, container technology was meant to solve the problem of inconsistent application running environments, so that an application that works in the local or test environment does not break in production. The program and its dependencies are packaged into an image through the container; on any machine with the container software installed, several containers can be started from that image, each one an instance of the running application. These instances generally have exactly the same running environment and parameters, so the application behaves the same on any machine. This brings great convenience to development, testing, and operations, since there is no longer a need to rebuild the same running environment on different machines. The image can also be pushed to an image repository, which makes the application easy to migrate and deploy. Docker is the most widely used container technology. More and more applications are now built as microservices and deployed through containers, which has given software development great vitality.
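As a concrete illustration (not from the original article), here is a minimal docker-compose sketch with hypothetical image names: the same YAML file starts the same pair of containers on any machine with Docker installed, so every environment sees an identical runtime.

```yaml
# Minimal docker-compose.yml sketch; image names and port are hypothetical.
version: "3.8"
services:
  web:
    image: example/my-web-app:1.0   # the application and its dependencies, packaged as an image
    ports:
      - "8080:8080"                 # expose the application on the host
    environment:
      - REDIS_HOST=cache            # the app reaches its dependency by service name
    depends_on:
      - cache
  cache:
    image: redis:6-alpine           # the dependency is shipped as an image as well
```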
Similar to the relationship between microservices and service governance, as more and more applications are deployed through containers, the number of containers on a cluster grows sharply, and managing and operating these containers by hand becomes very difficult and inefficient. To handle the many containers and the relationships between them, a number of orchestration tools have appeared; container orchestration tools can manage the entire lifecycle of containers. For example, docker-compose and docker swarm, provided by Docker itself, can start and orchestrate a batch of containers, but their functionality is relatively simple and not enough to support large container clusters. Drawing on its extensive internal container management experience, Google open-sourced the Kubernetes project. Kubernetes (K8S) is designed for the enormous number of containers on Google's clusters and has strong container orchestration capabilities and rich functionality. K8S defines many kinds of resources, which are created declaratively: a resource can be described in a JSON or YAML file. K8S supports multiple container runtimes, but the mainstream one is Docker. K8S provides a standard for container integration; as long as a container runtime implements this standard, it can be orchestrated by K8S. Since K8S has so many features, they will not be described one by one here; if you are interested, you can refer to the official documentation or search for related articles on ATA.
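To make the declarative style concrete, here is a hedged sketch of a K8S Deployment (the names and image are hypothetical): the desired state lives in a YAML file, and K8S keeps the cluster converged to it, for example after `kubectl apply -f deployment.yaml`.

```yaml
# Minimal Deployment sketch; K8S reconciles the cluster toward this declared state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app
spec:
  replicas: 3                        # keep three container instances running
  selector:
    matchLabels:
      app: my-web-app
  template:
    metadata:
      labels:
        app: my-web-app
    spec:
      containers:
        - name: my-web-app
          image: example/my-web-app:1.0   # hypothetical image from the registry
          ports:
            - containerPort: 8080
```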
Once a company's applications have been fully turned into microservices, they can be deployed as containers, K8S can be deployed on the cluster, and the application containers can be managed through the capabilities K8S provides; operations work can likewise be oriented toward K8S. Because K8S is currently the most widely used container orchestration tool, it has become a de facto standard for container orchestration. The group also has its own container and container orchestration tools.
A number of new technologies have been derived around container management, with K8S as the representative.
Cloud native
Cloud native has been very popular in the past two years, and you can see plenty of grand statements about it everywhere. Here is the CNCF definition of cloud native:
Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.
These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.
In my view, developing applications as microservices, deploying them as containers, and using a container orchestration tool such as K8S to manage the container cluster, so that development and operations are oriented toward K8S, is what cloud native means. Cloud native makes it easy to build applications, to fully monitor them, and to rapidly scale them out and in for different levels of traffic. As shown below:
Cloud native mainly consists of four parts:
Microservices
Containers
Continuous integration and delivery (CI/CD)
DevOps
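The rapid scaling for different traffic mentioned above is usually declared in the same YAML style. As a hedged sketch (hypothetical names, assuming a K8S version with the autoscaling/v2 API), a HorizontalPodAutoscaler can scale the Deployment shown earlier between 3 and 10 replicas based on CPU utilization:

```yaml
# HorizontalPodAutoscaler sketch; K8S adds or removes replicas as load changes.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-web-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-web-app              # hypothetical Deployment to scale
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # target average CPU utilization across the Pods
```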
PS: I have always felt that the term "cloud native" is so abstract that it is hard to guess from the name what it actually refers to.
Definition of ServiceMesh
Having talked about microservices, containers, container orchestration, cloud native and so on, in order to figure out what ServiceMesh actually is, I would sum it up as: ServiceMesh is a microservice governance solution under cloud native.
When we develop applications as microservices and deploy them on K8S in the form of containers, there is in fact a new solution for inter-service invocation and governance that no longer depends on a traditional microservice governance framework. ServiceMesh implements a transparent proxy for each application by launching a Sidecar in the application's Pod. All traffic in and out of the application first passes through its Sidecar, calls between services become calls between Sidecars, and governance of the services becomes governance of the Sidecars. In ServiceMesh, the Sidecar is transparent and invisible to development. I used to find this strange: if an application wants others to find it and call it, it has to pull in some libraries and actively register itself as a service, otherwise how would anyone know the application exists? Why can this step be omitted in ServiceMesh and made completely transparent to developers? The answer lies in the container orchestration tool. When the application is continuously integrated and delivered through K8S, by the time the application Pod starts, its services and the relationships between services have already been registered and declared with K8S in the form of YAML files. ServiceMesh obtains all the service information on the cluster by communicating with K8S, so that, with K8S as the intermediary, everything stays transparent to developers. The figure below shows a basic structure of ServiceMesh, consisting of a data plane and a control plane.
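To illustrate why nothing has to change in the application code, here is a hedged sketch with hypothetical names, assuming Istio's automatic Sidecar injection: a namespace label tells the mesh to inject the Sidecar into every Pod created there, and an ordinary K8S Service carries the registration, so the platform rather than the application holds this information.

```yaml
# With Istio installed, this label makes the mesh inject an Envoy Sidecar
# into every Pod created in the namespace (hypothetical names throughout).
apiVersion: v1
kind: Namespace
metadata:
  name: shop
  labels:
    istio-injection: enabled
---
# An ordinary K8S Service: the registration lives in the platform's YAML,
# which is why the mesh can stay transparent to developers.
apiVersion: v1
kind: Service
metadata:
  name: my-web-app
  namespace: shop
spec:
  selector:
    app: my-web-app
  ports:
    - name: http
      port: 80
      targetPort: 8080
```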
This approach has many advantages. In this model the language an application is written in has no impact on the service governance process: ServiceMesh only cares about the Pod, or the container instances inside the Pod, and does not need to care about the implementation language of the application in the container; the Sidecar sits in the same Pod as the container it is responsible for. ServiceMesh can therefore be cross-language, which is exactly where many traditional service governance frameworks fall short. Using a traditional governance framework also requires introducing a large number of dependencies into the application, which leads to all kinds of dependency conflicts; the group isolates an application's dependencies through Pandora. In addition, traditional service governance has a high barrier to entry, requiring developers to understand the whole architecture to some degree, and making problems harder to track down; it also blurs the boundary between development and operations. In ServiceMesh, development only needs to deliver code, and operations can maintain the entire container cluster based on K8S.
Current state of development
According to the material I have consulted, the term ServiceMesh first appeared in 2016 and has become very popular in the last two years. Ant Financial Services Group already has its own complete ServiceMesh framework, SofaMesh, and many teams in the group are following up accordingly.
Looking at the historical development, the way programs are developed has changed a great deal, but the overall direction is toward ever greater simplicity. Developing inside the group today, we can easily build a service that supports a fairly large QPS, thanks to the completeness and richness of the group's technology system and its strong accumulation of technology.
Let's take a look at the stages application development has gone through.
Development history
The direct host-to-host connection stage
As shown in the figure above, this is the earliest stage, in which two machines are connected directly by a network cable, and one application contains every function you can think of, including managing the connection between the two machines. At this time there is no concept of a network layer; after all, a machine can only communicate with the machine it is directly wired to.
The emergence of the network layer
As shown in the figure above, as technology developed, the network layer appeared, and a machine could communicate with any other machine reachable over the network, no longer limited to the machines it was directly wired to.
Flow control integrated into the application
As shown in the figure above, because each application's environment and machine configuration are different, the amount of traffic each can accept also differs. When the traffic sent by application A is greater than what application B can handle, the packets that cannot be received simply have to be discarded, so traffic needs to be controlled. At the beginning, flow control had to be implemented by the application itself, and the network layer was only responsible for receiving and transmitting the packets the application sent.
Flow control is transferred to the network layer
As shown in the figure above, over time the traffic control logic inside the application could be moved down into the network layer, and flow control at the network layer appeared. I think this mainly refers to TCP flow control, which also ensures reliable network communication.
Integration of service discovery and circuit breakers in applications
As shown in the figure above, developers implement service discovery and circuit breakers in their own code modules.
Software packages / libraries dedicated to service discovery and circuit breakers
As shown in the figure above, developers implement service discovery and circuit breakers by referencing third-party dependencies.
The emergence of open source software dedicated to service discovery and circuit breakers
As shown in the figure above, service discovery and circuit breakers are implemented based on various middleware.
ServiceMesh appears
Finally, ServiceMesh was born, which further liberates productivity and improves efficiency across the whole software life cycle.
Although ServiceMesh did not begin to be understood by the wider world on a large scale until the end of 2017, this new force in the microservices world actually began to sprout as early as 2016:
In January 2016, William Morgan and Oliver Gould, infrastructure engineers who had left Twitter, released version 0.0.7 of Linkerd on GitHub, and the industry's first ServiceMesh project was born. Linkerd is based on Twitter's open source Finagle project and reuses a large number of Finagle class libraries, reworked for general use. Envoy was the second ServiceMesh project; the two were developed at roughly the same time, and both became CNCF projects one after another in 2017.

When Istio released version 0.1 on May 24, 2017, Google and IBM promoted it loudly, the community responded enthusiastically, and many companies lined up behind Istio. Linkerd's limelight was stolen in an instant; overnight it went from a high-spirited youngster to yesterday's Internet celebrity. Of course, in terms of product maturity, Linkerd, as one of only two production-grade ServiceMesh implementations in the industry, could continue to hold the market before Istio matured. But with Istio's steady progress toward maturity, plus the natural advantages of a second-generation ServiceMesh, it was only a matter of time before Istio replaced the first-generation Linkerd.

Having decided to side with Istio in 2016, Envoy set out on a quiet and steady development path, completely different from Linkerd's ups and downs. Functionally, since it sits in the data plane, Envoy does not need to take on too much; a great deal of the work is done in Istio's control plane. Envoy has simply focused on doing the data plane well and polishing the details. In the market, Envoy and Linkerd differ in nature, so Envoy faced neither strategic choices about survival nor the pressure of a life-and-death confrontation.
Since Google and IBM jointly decided to launch Istio, Istio has been destined to stand in the spotlight. Regardless of success or failure, Istio's appearance drew a great deal of praise, and for enthusiasts of ServiceMesh technology it was a real boost: Istio, arriving in the name of a new generation of ServiceMesh, has obvious advantages over Linkerd, and its product roadmap lists a whole string of dazzling features. Over time, if Istio can complete this development and become stable and reliable, that will be something genuinely worth looking forward to.
Istio introduction
Istio is currently the hottest ServiceMesh open source project. Istio is mainly divided into two parts: the data plane and the control plane. Istio implements microservice governance under cloud native, such as service discovery, traffic control, monitoring, security and so on. Istio implements the transparent proxy by launching a Sidecar alongside the application in its Pod. Istio is a highly extensible framework; it supports not only K8S but also other resource schedulers such as Mesos. The figure below shows the overall architecture of Istio:
Logically it is divided into the data plane (the upper half of the figure) and the control plane (the lower half of the figure). The data plane consists of a set of intelligent proxies (Envoy) deployed as Sidecars; these proxies, together with Mixer, mediate and control all network communication between the microservices. The control plane is responsible for managing and configuring the proxies to route traffic; in addition, it configures Mixer to enforce policies and collect telemetry data. Pilot is the abstraction layer of the whole architecture: it abstracts the work of interfacing with resource schedulers such as K8S into adapters and interacts with users and with the Sidecars. Galley is responsible for validating configuration resources. Citadel is used to generate identities and to manage keys and certificates. Istio's core capabilities include traffic control, security control, observability, multi-platform support, and customizability.
Mixer
Mixer is also an extensible module. It is responsible for collecting telemetry data and for integrating some backend services (BaaS). The Sidecars continuously report their own traffic to Mixer, and Mixer aggregates the traffic so it can be displayed visually. In addition, a Sidecar can call some of the backend service capabilities that Mixer provides, such as authentication, login, and logging; Mixer connects to the various backend services through adapters.
In-process Adapter form
Earlier versions of Istio adopted this form, integrating the backend service adapters inside the Mixer process. One problem with this approach is that a problem in one backend service adapter affects the whole Mixer module; on the other hand, because the adapters and Mixer are integrated together, calls between them are in-process method calls and therefore very fast.
Out-of-process Adapter form
The current version of Istio moves the adapters outside of Mixer, which decouples Mixer from the adapters: a problem in an adapter no longer affects Mixer. But this approach has its own problem: the adapters now run as separate processes outside Mixer, and calls between them go over RPC, which is much slower than method calls within the same process, so performance suffers.
Pilot
Envoy API is responsible for communicating with Envoy, mainly pushing service discovery information and flow control rules down to Envoy.
Abstract Model is an abstract model of services defined by Pilot to decouple it from the details of specific platforms and provide a foundation for cross-platform support.
Platform Adapter is the concrete implementation of this abstract model, used to interface with different external platforms such as K8S and Mesos.
Rules API provides the interfaces through which external callers manage Pilot, including the command line tool Istioctl and possible third-party management interfaces in the future.
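The flow control rules that Pilot pushes to Envoy are themselves declared as Istio resources. A hedged sketch with a hypothetical service and versions (using the networking.istio.io/v1alpha3 API of that era): a VirtualService that splits traffic 90/10 between two subsets defined in a DestinationRule.

```yaml
# VirtualService/DestinationRule sketch: Pilot translates rules like these
# and distributes them to the Envoy Sidecars.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-web-app
spec:
  hosts:
    - my-web-app
  http:
    - route:
        - destination:
            host: my-web-app
            subset: v1
          weight: 90               # 90% of requests go to v1
        - destination:
            host: my-web-app
            subset: v2
          weight: 10               # 10% canary traffic goes to v2
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-web-app
spec:
  host: my-web-app
  subsets:
    - name: v1
      labels:
        version: v1                # Pods labeled version=v1
    - name: v2
      labels:
        version: v2                # Pods labeled version=v2
```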
Galley
Galley was originally only responsible for configuration validation, but since 1.1 it has been upgraded to the configuration management center of the whole control plane. Besides continuing to provide configuration validation, Galley is now also responsible for configuration management and distribution. Galley uses the Mesh Configuration Protocol (MCP) to interact with the other components.
Istio has created more than 50 CRDs, which shows its complexity; some people joke that programming against K8S amounts to YAML-oriented programming. The early Galley was only responsible for validating configuration at run time, and each component of the Istio control plane would list/watch the configuration it cared about by itself. The increasingly complex configuration brought a lot of inconvenience to Istio users, mainly the following:
Configuration lacks unified management: components subscribe separately, there is no unified rollback mechanism, and configuration problems are hard to locate.
Configuration reusability is low: for example, before 1.1 each Mixer adapter required defining a new CRD.
Configuration isolation, ACL control, consistency, abstraction, serialization and so on all leave something to be desired.
As Istio's functionality evolves, the number of Istio CRDs will foreseeably keep increasing. The community plans to strengthen Galley into Istio's configuration control layer. Galley will not only continue to provide configuration validation, but will also provide a configuration management pipeline, including input, transformation, distribution, and a configuration distribution protocol (MCP) suited to the Istio control plane.
Citadel
Splitting a service into microservices brings many benefits, but it also brings more security requirements than in the monolithic era. After all, calls between different functional modules have changed from method calls inside a single monolith into remote calls between microservices:
Encryption: to avoid leaking information and to resist man-in-the-middle attacks, communication between services needs to be encrypted.
Access control: not every service may be accessed arbitrarily, so flexible access control is needed, such as two-way TLS and fine-grained access policies.
Audit: if you need to audit who did what in the system, an audit function must be provided.
Citadel is the component responsible for security in Istio, but Citadel needs to work with several other components to do its job:
Citadel: manages keys and certificates and distributes them to Envoy and the other components responsible for forwarding communication.
Envoy: uses the keys and certificates issued by Citadel to secure communication between services. Note that communication between an application and its Envoy goes over localhost and is usually not encrypted.
Pilot: distributes authorization policies and secure naming information to Envoy.
Mixer: is responsible for managing authorization, completing audits, and so on.
The security features supported by Istio are:
Traffic encryption: encrypts the traffic between services.
Authentication: provides strong service-to-service and end-user authentication through built-in identity and credential management, including transport authentication and origin authentication, with support for two-way TLS.
Authorization: provides role-based access control (RBAC) with namespace-level, service-level, and method-level access control.
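As a hedged sketch of how these features are declared (hypothetical namespace and service names, assuming a more recent Istio release where security is expressed with the PeerAuthentication and AuthorizationPolicy CRDs rather than the Mixer-era resources described above): enforce two-way TLS in a namespace and allow only one caller to invoke a service's GET endpoints.

```yaml
# Require mutual TLS for all workloads in the namespace; only mTLS traffic
# between Sidecars is accepted.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: shop
spec:
  mtls:
    mode: STRICT
---
# Role-style access control: only the frontend service account may call
# GET endpoints of my-web-app (all names hypothetical).
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: my-web-app-viewer
  namespace: shop
spec:
  selector:
    matchLabels:
      app: my-web-app
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/shop/sa/frontend"]
      to:
        - operation:
            methods: ["GET"]
```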