This article looks at how the Service Mesh pattern implements the next generation of microservice standards and introduces several common Service Mesh implementations, in the hope of giving readers facing this problem a simple and practical way forward.
The microservices 1.0 era
Dubbo is essentially a service governance framework rather than a microservice framework. Although the future Dubbo 3.0 is expected to add support for Spring Cloud and Service Mesh, Dubbo alone cannot build a complete microservice architecture.
Spring Cloud assembles a relatively complete microservice technology stack by integrating a large number of components, but it is highly intrusive to application code and supports only Java, so it cannot cover systems developed in other languages. The Spring Cloud suite also includes a great deal of content, so the learning cost is high. For legacy systems, the cost of upgrading or replacing the framework is substantial, and some development teams are unwilling to bear the technical and schedule risks, which creates many difficulties when trying to land a microservice solution.
To sum up, the problems of the microservices 1.0 era fall into three main areas:
High technical threshold: besides basic service discovery, configuration management, and authentication, the development team faces many service governance challenges, including load balancing, circuit breaking and degradation, grayscale (canary) release, failover, and distributed tracing, which places very high technical demands on the team (a minimal circuit-breaker sketch follows this list).
Lack of multi-language support: for Internet companies, especially fast-growing startups, multi-language technology stacks and cross-language service calls are the norm. However, the open source community still lacks a unified, cross-language microservice stack, even though cross-language invocation was one of the important goals when the microservices concept was born.
Code intrusiveness: mainstream microservice frameworks such as Spring Cloud and Dubbo are intrusive to business code to some degree, and the cost of upgrading or replacing the framework is high, so development teams are reluctant to adopt them and microservices are hard to land.
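To make the "high technical threshold" point concrete, below is a minimal, hedged sketch of one piece of governance plumbing a team had to hand-roll in this era: a naive circuit breaker that rejects calls after a number of consecutive failures and recovers after a cooldown. All names and thresholds here are illustrative assumptions, not any framework's API.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// ErrOpen is returned when the breaker rejects a call outright.
var ErrOpen = errors.New("circuit open: call rejected")

// CircuitBreaker trips after maxFailures consecutive failures and stays
// open for the cooldown period before allowing calls again.
type CircuitBreaker struct {
	mu          sync.Mutex
	failures    int
	maxFailures int
	openedAt    time.Time
	cooldown    time.Duration
}

// Call runs fn unless the breaker is open; it counts consecutive failures
// and resets the counter on success.
func (cb *CircuitBreaker) Call(fn func() error) error {
	cb.mu.Lock()
	if cb.failures >= cb.maxFailures && time.Since(cb.openedAt) < cb.cooldown {
		cb.mu.Unlock()
		return ErrOpen
	}
	cb.mu.Unlock()

	err := fn()

	cb.mu.Lock()
	defer cb.mu.Unlock()
	if err != nil {
		cb.failures++
		if cb.failures >= cb.maxFailures {
			cb.openedAt = time.Now()
		}
		return err
	}
	cb.failures = 0
	return nil
}

func main() {
	cb := &CircuitBreaker{maxFailures: 3, cooldown: 2 * time.Second}
	flaky := func() error { return errors.New("backend timeout") }
	for i := 0; i < 5; i++ {
		fmt.Println(cb.Call(flaky)) // after 3 failures the breaker rejects calls
	}
}
```

Multiply this by load balancing, tracing, failover, and release control, and the burden on each development team becomes clear.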
The microservices 2.0 era
To solve the many problems of the microservices 1.0 era, the concept of Service Mesh came into developers' view.
Before introducing the concept of Service Mesh, let's first look at Sidecar. The name comes from the motorcycle sidecar combinations that were active on World War I battlefields (and are still a familiar prop in war dramas). Sidecar is an important building block of Service Mesh: it refers to the sidecar pattern in software architecture, and the essence of the pattern is to decouple the data plane (the business logic) from the control plane.
In a Service Mesh architecture, a Sidecar proxy is deployed alongside each microservice instance. The Sidecar proxy takes over the inbound and outbound traffic of its service, moving service subscription, service discovery, circuit breaking, rate limiting, degradation, distributed tracing, and other governance functions out of the service and into the proxy.
The Sidecar runs as an independent process; the applications on a host can share one Sidecar process, or each application can have its own. All service governance functions are taken over by the Sidecar, and external access to the application only goes through the Sidecar. When Sidecars are deployed alongside a large number of microservices, these Sidecar nodes naturally form a service mesh.
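As a hedged illustration of the pattern just described (not any particular product), the sketch below shows a tiny Go "sidecar": a separate process that listens on the externally exposed port, applies a cross-cutting concern (here just request timing and logging), and forwards traffic to the business process on localhost. The ports and the chosen concern are illustrative assumptions.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

func main() {
	// The business process listens only on localhost; external callers hit
	// the sidecar, which forwards every request to the application.
	app, err := url.Parse("http://127.0.0.1:9090")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(app)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		proxy.ServeHTTP(w, r)
		// A real sidecar would export this to metrics/tracing backends and
		// also handle discovery, retries, circuit breaking, and so on.
		log.Printf("inbound %s %s took %v", r.Method, r.URL.Path, time.Since(start))
	})

	// The port exposed to the rest of the mesh.
	log.Fatal(http.ListenAndServe(":8080", handler))
}
```

Because the application only ever talks to its local sidecar, governance logic can be upgraded or replaced without touching business code, which is exactly the decoupling the pattern is after.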
The concept of microservices was first proposed by Martin Fowler in March 2014, while the concept of Service Mesh appeared around 2016 and has already gone through two generations of development.
The representatives of the first generation of Service Mesh are Linkerd and Envoy. Linkerd, based on Twitter's Finagle and written in Scala, is the industry's first open source Service Mesh and has been validated over a long period in real production environments. Envoy is implemented in C++, and its performance is better than that of the Scala-based Linkerd; the Envoy community is also quite mature, and a commercially stable version has been available for some time. Both open source implementations are built around the Sidecar and focus mostly on being a good proxy plus some general control plane functions. But when a large number of Sidecars are deployed in containers, managing and controlling those Sidecars is itself no small challenge.
The main improvements of the second generation of Service Mesh focus on a more powerful control plane (the Sidecar proxies being referred to as the data plane); the typical representatives are Istio and Conduit. Istio is an open source project from Google, IBM, and Lyft; it is the most mainstream Service Mesh solution at present and the standard of the second generation. Istio uses Envoy directly as the Sidecar, and apart from the Sidecar, Istio's control plane components are all written in Go.
Introduction to Istio
According to the official documentation of Istio, Istio provides the following key functions in the service network:
Traffic management: controls the flow of traffic and API calls between services, making calls more reliable and the network more robust under adverse conditions.
Observability: provides insight into the dependencies between services and into the nature and flow of traffic between them, giving the ability to identify problems quickly.
Policy enforcement: applies organizational policy to interactions between services, ensuring that access policies are enforced and resources are fairly distributed among consumers. Policy changes are made by configuring the mesh rather than by modifying application code.
Service identity and security: provides services in the mesh with a verifiable identity and with the ability to protect service traffic as it flows over networks of varying trust levels.
Platform support: Istio is designed to run in a variety of environments, including across clouds, on Kubernetes, on Mesos, and so on; the initial focus is Kubernetes, with other environments to follow.
Integration and customization: the policy enforcement component can be extended and customized to integrate with existing ACL, logging, monitoring, quota, auditing, and other solutions.
Istio is designed for extensibility and can meet a variety of deployment needs. These capabilities greatly reduce the coupling between application code, the underlying platform, and policy, making microservices easier to implement.
The following figure shows the architectural design of Istio, which mainly includes Envoy, Pilot, Mixer, Istio-Auth and so on.
Envoy: acts as the Sidecar, mediating all inbound and outbound traffic for services in the mesh; it provides service discovery, load balancing, rate limiting, and circuit breaking, and collects traffic-related performance metrics.
Pilot: responsible for the lifecycle management of the Envoy instances deployed in the Service Mesh. In essence it handles traffic management and control, decoupling traffic behavior from infrastructure scaling, and it is the core of Istio. Think of Pilot as the Sidecar that manages the Sidecars, although this particular "Sidecar" carries no business traffic. Through Pilot, operators specify the rules they want traffic to follow rather than which specific pods/VMs should receive it. With Pilot it is easy to implement A/B testing and canary releases (a minimal traffic-splitting sketch follows this list).
Mixer: provides a general-purpose mediation layer between application code and infrastructure backends. Its design moves policy decisions out of the application layer and into configuration controlled by operators. Instead of integrating application code with specific backends, the application integrates quite simply with Mixer, and Mixer takes responsibility for connecting to the backend systems. Mixer can be thought of as a Sidecar proxy for other backend infrastructure, such as databases, monitoring, logging, and quotas.
Istio-Auth: provides strong service-to-service and end-user authentication using mutual TLS, with built-in identity and certificate management. It can upgrade unencrypted traffic in the mesh and gives operators the ability to enforce policy based on service identity rather than on network controls. Future versions of Istio will add fine-grained access control and auditing, using various access control mechanisms, including attribute- and role-based access control and authorization hooks, to control and monitor who accesses a service, API, or resource.
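As a hedged sketch of what happens on the data plane once Pilot has pushed a weighted routing rule, the Go snippet below splits traffic between two service versions by weight, which is the mechanism behind the canary release mentioned above. The service names and the 90/10 split are illustrative assumptions, and this is not Istio's actual configuration format.

```go
package main

import (
	"fmt"
	"math/rand"
)

// route pairs an upstream service version with a relative weight,
// mirroring the kind of rule a control plane would push to a proxy.
type route struct {
	upstream string
	weight   int
}

// pick chooses an upstream proportionally to its weight.
func pick(routes []route) string {
	total := 0
	for _, r := range routes {
		total += r.weight
	}
	n := rand.Intn(total)
	for _, r := range routes {
		if n < r.weight {
			return r.upstream
		}
		n -= r.weight
	}
	return routes[len(routes)-1].upstream
}

func main() {
	// A canary rule: 90% of traffic to v1, 10% to the new v2.
	routes := []route{
		{upstream: "reviews-v1", weight: 90},
		{upstream: "reviews-v2", weight: 10},
	}
	counts := map[string]int{}
	for i := 0; i < 10000; i++ {
		counts[pick(routes)]++
	}
	fmt.Println(counts) // roughly 9000 v1 / 1000 v2
}
```

The point is that the split lives entirely in the proxy layer: changing the weights is a configuration push from the control plane, with no change to the application itself.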
Istio's advanced design and powerful feature set, combined with the influence of Google and IBM, have helped it spread rapidly and made many developers recognize the project's importance in the Service Mesh field. However, the current version of Istio still has shortcomings:
Most of Istio's current capabilities are strongly tied to Kubernetes. When building microservices, we often want the service layer to be decoupled from the container layer, so the service layer should be designed to work with a variety of container platforms.
Istio has not yet published a stable release; as of this writing the latest version is 0.8, and a stable 1.0 release is expected in 2018.
Introduction to Conduit
Let's take a look at the implementation of Conduit. The following figure shows the architecture of Conduit, which consists mainly of the Conduit Data Plane and the Conduit Control Plane:
The design philosophy of Conduit is very similar to that of Istio. Its authors rewrote the Sidecar, called the Conduit Data Plane, in Rust, while the control plane is handled by the Conduit Control Plane written in Go. Judging from Conduit's architecture, the authors claim to have absorbed many lessons from Linkerd: Conduit is faster, lighter, and simpler than Linkerd and has a more powerful control plane. Compared with Istio, Conduit's architecture is simpler and more focused on the problems it sets out to solve.
Introduction to Serverless
Serverless, usually translated as "serverless architecture", is a concept that dates back to 2012, earlier than microservices and Service Mesh, but it only returned to the spotlight after the microservices concept became popular.
Serverless does not mean that no servers run the code; it means the developer does not manage servers and only needs to focus on the code, while the provider handles the rest. Serverless architecture can also refer to applications in which part of the server-side logic is still written by the application developers, but unlike traditional architectures, this logic runs in stateless compute containers that are event-triggered, short-lived, and fully managed by a third party.
For developers, a Serverless architecture decomposes the server-side application into functions that each perform a different task, so the whole application is split into independent, loosely coupled components that can run at any scale.
The following figure shows a common Serverless architecture in which all services are provided as Functions as a Service (FaaS). In terms of language and runtime, FaaS functions are ordinary applications written in languages such as JavaScript, Python, or Java.
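To make the FaaS idea above concrete, here is a hedged, minimal sketch of a single stateless function in Go. Locally it runs as an ordinary HTTP handler; on a FaaS platform only the handler logic would ship, and the provider would own scaling and billing per invocation. The event shape, route, and port are illustrative assumptions, not any provider's API.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// event is the illustrative payload the "function" receives.
type event struct {
	Name string `json:"name"`
}

// handle is the stateless function a FaaS platform would invoke per request;
// it keeps no state between invocations.
func handle(w http.ResponseWriter, r *http.Request) {
	var e event
	if err := json.NewDecoder(r.Body).Decode(&e); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	resp := map[string]string{"greeting": fmt.Sprintf("Hello, %s", e.Name)}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(resp)
}

func main() {
	// Locally this is an ordinary HTTP server; a FaaS platform would wire
	// the handler to its own event source and scale it per invocation.
	http.HandleFunc("/greet", handle)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Because the function holds no state between invocations, the platform is free to run as many or as few copies as the incoming event rate requires.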
Advantages of Serverless architecture
Reduced delivery time: a Serverless architecture allows developers to deliver new applications in a very short period (hours or days) rather than the weeks or months needed before. Many new applications rely on third-party APIs for services such as OAuth, social features, maps, and artificial intelligence.
Enhanced scalability: everyone wants their application to attract large numbers of new users quickly, but when the number of active users grows rapidly, the pressure on the servers surges as well. Systems built on a Serverless architecture no longer have this concern and can scale flexibly and promptly to absorb the load from a fast-growing user base.
Reduced costs: the Serverless model cuts both compute and staffing costs. With no servers to run, there is no need to spend time reinventing the wheel or on risk monitoring, image processing, and infrastructure management, and operating costs drop sharply.
Improved user experience: users generally care less about infrastructure and more about features and experience, and a Serverless architecture lets teams put their resources into the user experience.
Lower latency and better geographic coverage: an application's ability to scale depends on three things: the number of users, their location, and network latency. When an application serves national or even global users, latency is usually high, which hurts the user experience. Under a Serverless architecture, the provider has nodes close to every user, which greatly reduces access latency and improves the experience for all users.
For scenarios with large-scale microservice deployments and highly heterogeneous internal services, a Service Mesh solution is a good choice. Service Mesh decouples business logic from control logic, but it also introduces extra overhead: the additional network hop adds performance loss and access latency, and because every service needs a Sidecar, an already complex distributed system becomes even more complex. Especially in the early stage of adoption, managing and operating the Service Mesh itself is a thorny problem. Therefore, when choosing a Service Mesh architecture, we need sufficient technical preparation and accumulated experience with the specific implementation (such as Istio) to ensure the solution lands smoothly.
With the popularity of microservices and container technology, Serverless has become a new hot spot. Serverless cloud functions let users go live and operate a service without worrying about deploying and maintaining servers: they only develop the core business logic, get distributed disaster recovery built in, scale automatically with load, and are billed by the actual number of calls and execution time.
Using a Serverless architecture removes nearly all operational work, so developers can concentrate on core business development, go live and iterate quickly, and keep pace with business growth. Serverless can be seen as a complement to microservices and containerization, giving users a new option for handling complex and fast-changing requirements, and it is especially attractive to fast-growing startups.
That covers how Service Mesh is implemented and several common Service Mesh implementation solutions. I hope the above content has been of some help; if questions remain, further reading on the individual projects mentioned here is a good next step.