
Istio best practice: grayscale release through the Istio service mesh on K8s

2025-04-03 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/02 Report--

Istio is another major open-source project from Google after Kubernetes; the main participating companies include Google, IBM and Lyft. It provides a complete, non-intrusive micro-service governance solution covering service management, network connectivity and security, and delivers load balancing, inter-service authentication, authorization and monitoring without requiring any code changes. From the perspective of the overall infrastructure, it can be understood as a micro-service management layer that complements the PaaS platform.

Schematic diagram of Istio architecture

Istio and Kubernetes

Kubernetes provides deployment, upgrade and limited traffic management capabilities. Its Service mechanism handles service registration, discovery and forwarding, with basic forwarding and load balancing through kube-proxy. However, it lacks higher-level capabilities such as circuit breaking, rate limiting, degradation and call-chain tracing.

Istio fills this gap in K8s micro-service governance. It is built on top of K8s rather than being a completely new stack like Spring Cloud Netflix, and it is a key part of Google's micro-service governance strategy.

Istio integrates closely with K8s. The sidecar (Envoy proxy) runs in the K8s pod, deployed alongside the business container, and the deployment process is transparent to users. The mesh requires that business programs be unaware of the sidecar's existence, and the pod-based design on K8s makes this especially thorough: users do not even notice the sidecar being deployed. By contrast, deploying such a proxy on a VM would be far less convenient.
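In open-source Istio, this transparent injection is typically enabled per namespace via an admission webhook; labeling a namespace as in the following minimal sketch (namespace name illustrative) causes new pods in it to receive an Envoy sidecar automatically:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    istio-injection: enabled   # the sidecar injector webhook watches this label
```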

Pilot contains a controller that automatically discovers Services and Endpoints in K8s by list/watching kube-apiserver. It registers a controller in Kubernetes to receive events and learn the relationships among Service, Endpoint and Pod objects. At the forwarding layer, however, it does not rely on kube-proxy; instead it converts these mappings into Pilot's own forwarding model and pushes them to Envoy, which performs the actual forwarding.

K8s container orchestration has become the de facto standard. Because micro-services and containers match well in terms of light weight and rapid deployment and operations, running micro-services in containers is becoming standard practice. For cloud native applications, using Kubernetes for micro-service deployment and cluster management, and Istio for service governance, will gradually become the standard configuration for micro-service transformation.

Comparison of using the Istio service mesh: self-managed Istio vs. CCE

Huawei Cloud Container Engine CCE (Cloud Container Engine) provides highly reliable, high-performance enterprise-class container application management, supports native Kubernetes community applications and tools, and simplifies setting up a container runtime environment in the cloud:

- Simple and easy to use: automatic container cluster creation, one-stop deployment and operations of container applications, one-click rolling upgrades

- High performance: self-developed high-performance container networking, auto-scaling in seconds, and support for private clusters of high-performance bare-metal containers

- Enterprise-grade: highly available cluster control plane (HA, cross-AZ), graceful scaling and safe draining of container applications to keep the business online

- Open and compatible: compatible with native Kubernetes/Docker community versions; CNCF-certified Kubernetes service provider and major community contributor

The table below compares self-built Istio with Huawei Cloud CCE across the dimensions of installation, operations management and monitoring:

| Dimension | Self-built | Huawei Cloud CCE |
| --- | --- | --- |
| Istio package management | Users download and manage it themselves | Transparent to users |
| Runtime configuration | Users configure the runtime environment and dependencies themselves | Transparent to users |
| Istio installation | Users explore and install on their own | No details to worry about; enabled as needed when creating a cluster |
| Sidecar injection | Users explore, develop and configure it themselves | Transparent to users |
| Istio upgrade | Users must devise upgrade solutions that do not affect their business | A complete solution upgrades the control plane and data plane as needed |
| Application call chain | Users explore, develop, install and configure it themselves | Integrates with Huawei Cloud APM/AOM to trace and view request call chains |
| Application topology | Users explore, develop, install and configure it themselves | Integrates with Huawei Cloud APM/AOM to view the application topology |
| Performance monitoring | Users explore, develop, install and configure it themselves | Integrates with Huawei Cloud APM/AOM for real-time monitoring of request response latency |

Deployment and management practices of cloud native applications on CCE

Cloud native applications, cloud platforms and micro-service architectures

Cloud native applications are applications designed and developed from the start to be deployed and run on a cloud platform. To be fair, most traditional applications can run on a cloud platform without modification, as long as the platform supports the architecture and operating system they require. However, that mode merely treats the virtual machine as a physical machine and does not really exploit the cloud platform's capabilities.

The core capability of a cloud computing platform is on-demand resource allocation and elastic computing, and the design goal of cloud native applications is to let applications deployed on the platform exploit those capabilities: using computing resources on demand and scaling elastically.

Micro-service architecture is an architectural model for implementing enterprise distributed systems: a complex monolithic application is divided into multiple independently deployable components according to the bounded contexts of the business. These independently deployable components are called micro-services. When discussing the relationship between cloud native applications and micro-service architecture, it can be viewed from two perspectives depending on the context.

1) Macro-level cloud native application, where the whole distributed system is regarded as one application. From this point of view, micro-service architecture is an architectural model for realizing a cloud native application.

2) Micro-level cloud native application, where each micro-service is an application. In this context, each micro-service should follow cloud native design principles (such as the well-known twelve-factor methodology) in order to truly achieve the goal of the micro-service architecture: enabling distributed systems to use computing resources on demand and scale elastically.

In Huawei Cloud CCE Container Service, we call macro-level cloud native applications "applications" and micro-level ones "components", and use these two concepts to manage distributed applications:

Figure: relationship between applications, components, and workloads

Cloud native application management practice on CCE

Create a Kubernetes cluster

Before creating an application, you need to prepare a Kubernetes cluster (version 1.9 or above) and enable Istio service mesh governance. Log in to the CCE console, choose "Resource Management > Virtual Machine Cluster" in the left navigation bar, click "Create Kubernetes Cluster" on the "Virtual Machine Cluster" page, and configure step by step following the wizard:

Figure 1: creating a Kubernetes cluster

Figure 2: enable the service mesh; Istio is installed with one click and sidecar injection is applied automatically:

Other cluster creation steps and configurations are the same as creating virtual machine clusters on existing CCE.

Create cloud native applications

Here we take the bookinfo sample application from the Istio open-source community as an example. It comprises four micro-services: ProductPage, Reviews, Details and Ratings. The topology and network access information are as follows:

Select "Application Management" in the left navigation bar of CCE, click "Create Application", and select "Guided creation" (in the future, one-click creation of micro-service applications and their traffic policy configuration via Helm templates will be supported, which is even more convenient). The configuration is divided into three parts: application basic information, in-mesh component definition, and external access route configuration.

1. First define the application's basic information: name, cluster and namespace:

2. Click "Add component" and add components to the service mesh according to the application topology and network design above:

A. Add the ratings micro-service component (the container listens on port 9080, and the internal port exposed to the service mesh is also configured as 9080; this can be adjusted to match the client configuration)

1) Configure the component's basic information:

2) Select the workload image and set the version number to v1

3) Click "Next". In the advanced load configuration you can optionally configure the upgrade policy, scaling policy, custom monitoring, etc. We leave these unset and click "Add":

You can see that we have added a micro-service component to the mesh for bookinfo

B. Add the reviews micro-service component

Following the same steps used for ratings above, add reviews:

C. Add the details micro-service component

Refer to the steps above to add a details micro-service component:

D. Add the productpage micro-service component

3. Finally, configure the externally accessible route for the application. As the topology design above shows, productpage serves as the access entry:

A. Click "Add application access method"

Select the components to expose externally and configure the open ports

The configured access method information is as follows:

Finally, click "Create" in the lower right corner to launch the application. The newly created distributed micro-service application bookinfo and its component micro-services appear in the application list:

Access productpage through the application's external access entry:

Grayscale release with Istio on CCE

Enable the Istio service mesh on the cluster with one click

If an application in a cluster needs micro-service governance, you only need to enable the service mesh when creating the cluster; there is no need to download Istio images, write YAML, or install and upgrade infrastructure unrelated to the application's business:

Develop and package a new version

Below, we take the development of a new version of the reviews service as an example (initial container image version 1.5.0). The new image version is 1.5.0-v2, and it has been uploaded to Huawei Cloud Container Image Service (SWR) via docker push from the local development machine:
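The local build-and-push step might look like the following sketch; the SWR endpoint, organization name and image name here are assumptions, substitute your own:

```shell
# Build the new reviews image and tag it for SWR (endpoint and org are placeholders)
docker build -t reviews:1.5.0-v2 .
docker tag reviews:1.5.0-v2 swr.cn-north-1.myhuaweicloud.com/myorg/reviews:1.5.0-v2

# Authenticate against the registry, then push
docker login swr.cn-north-1.myhuaweicloud.com
docker push swr.cn-north-1.myhuaweicloud.com/myorg/reviews:1.5.0-v2
```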

Compared with the current version, the new version adds calls to the ratings micro-service and supports star-rating display.

Release the grayscale version and configure the grayscale policy

Now we plan to upgrade the live service smoothly by means of grayscale release. On the application list page, expand the component information under bookinfo and select "Add grayscale version" for the reviews micro-service component:

Start the grayscale version: configure grayscale version v2, confirm the image version (the system selects the latest version by default), and click "Start load" to launch the grayscale version. The container's advanced configuration inherits the existing version by default.

Observe the grayscale version's running status and configure the grayscale policy: allocate the grayscale version's share of the traffic (20% in this example). After the load starts successfully, click "Submit policy":
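Under the hood, a weighted split like this maps onto standard Istio routing objects. A minimal sketch of an equivalent DestinationRule and VirtualService, assuming the subsets are distinguished by a `version` label as in the community bookinfo sample:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 80
    - destination:
        host: reviews
        subset: v2
      weight: 20
```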

Back in the component list, you can see that the micro-service has been released in grayscale:

The traffic comparison before and after the grayscale release of the reviews service is as follows:

Initial version:

Grayscale status: as shown, reviews v2 calls the ratings service to obtain star ratings, and 20% of the traffic is diverted to this version

When you visit productpage, some requests show star ratings while others still render the old version (that is, without the new star-rating feature), in a proportion close to 1:4.
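The observed 1:4 split follows directly from the 20% weight. As a quick sanity check, a small simulation of weight-proportional routing (illustrative code, not platform internals) shows v2 receiving roughly a fifth of requests:

```python
import random

def route(weights, rng):
    """Pick a subset name with probability proportional to its weight."""
    total = sum(w for _, w in weights)
    r = rng.uniform(0, total)
    upto = 0.0
    for subset, w in weights:
        upto += w
        if r <= upto:
            return subset
    return weights[-1][0]  # guard against floating-point edge cases

# 80/20 split between the existing version and the grayscale version,
# matching the 20% traffic share configured in the grayscale policy.
weights = [("v1", 80), ("v2", 20)]
rng = random.Random(0)
hits = {"v1": 0, "v2": 0}
for _ in range(10_000):
    hits[route(weights, rng)] += 1

v2_share = hits["v2"] / 10_000
print(f"v2 received {v2_share:.1%} of requests")  # roughly 20%
```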

Some of the access results are the original pages:

Some of the access results are pages with star rating features:

Continuously observe the running status of the grayscale version and switch the traffic

Next, we continue to observe the running status of the grayscale version. After confirming that business processing and performance meet requirements, we can gradually increase the grayscale version's traffic share and eventually divert all traffic to it:

Observe the health and performance status:

Click "Operation and maintenance Center" on the left navigation bar of CCE to enter AOM service:

Select the "Metrics" -> "Application" menu to continuously observe the health and performance of the reviews grayscale version v2:

Observe the call chain and request response latency: in CCE application management, click the bookinfo application to view its details. CCE provides request call-chain tracing, which helps quickly locate abnormal distributed requests (currently based on open-source Zipkin and Grafana; integration with Huawei Cloud AOM will follow to provide the corresponding capabilities)

You can see that v2 is working normally, so the next step is to gradually increase the traffic ratio and finally direct all traffic to the grayscale version. In the CCE service, click the component name to open the component details page, then click "Take over all traffic":

The system prompts that all traffic will be taken over from the original version; click OK:

All traffic accessing reviews is now directed to v2:
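In Istio terms, taking over all traffic is equivalent to a VirtualService that routes every request to the v2 subset (a sketch, assuming a v2 subset defined in a DestinationRule):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2   # single destination: receives 100% of the traffic
```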

Visiting productpage now, all requests show the star-rating feature:

Finally, we remove the old version (v1) from the platform:

After clicking OK, you can see that only the v2 version of the microservice is running:

More micro-service traffic governance through the istioctl tool

Beyond the rules above, the CCE Istio service mesh also provides the istioctl command-line tool for more comprehensive traffic governance, such as rate limiting, circuit breaking, connection pool management and session affinity. Go to "Resource Management" -> "Virtual Machine Cluster", click the cluster you want to manage, and you will find the istioctl download and usage instructions:
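For instance, connection pool limits and circuit breaking are configured through a DestinationRule's trafficPolicy; the values below are purely illustrative, not recommendations:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-breaker
spec:
  host: reviews
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100        # cap concurrent TCP connections
      http:
        http1MaxPendingRequests: 10
        maxRequestsPerConnection: 1
    outlierDetection:              # circuit breaking: eject failing endpoints
      consecutiveErrors: 5
      interval: 10s
      baseEjectionTime: 30s
      maxEjectionPercent: 50
```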

Summary

Huawei Cloud CCE Container Engine + Istio + Application Operations AOM/APM + Container Image Service SWR together provide full-stack lifecycle management for cloud native applications, from development and deployment to launch and monitoring, making it easier for enterprises to move to the cloud and run more efficiently.

At present, the Istio service mesh capability is in open beta. You can apply quickly through the link below; Huawei Cloud container service experts will do their best to support you.

Application link: https://console.huaweicloud.com/cce2.0/?region=cn-north-1#/app/istio/istioPublicBeta

