

Kubernetes+Docker+Istio Container Cloud practice


With the progress of society and the development of technology, people have an increasingly urgent need to use resources efficiently. In recent years, with the rapid development and maturity of the Internet and mobile Internet, breaking large applications into microservices has attracted keen attention from enterprises, and container cloud solutions based on Kubernetes+Docker have also entered the public's field of vision. Kepler Cloud is a microservice governance solution based on Kubernetes+Docker+Istio.

1. Microservices

1.1 Problems after breaking a large application into microservices

Nowadays all the major enterprises are talking about microservices. Under this general trend, everyone in the technology circle talks about microservices and the various solutions that come after adopting them.

1.2 What are we talking about when we talk about microservices?

There are many good reasons to use a microservice architecture, but there is no such thing as a free lunch. Microservices bring many advantages but also add complexity. As long as the application can benefit from microservices, the team should actively deal with this complexity.

1.2.1 Problems of adopting microservices

- How to split the business
- API rules
- Data consistency guarantees
- Considerations for later scalability

Of course, these are not the main issues discussed in this article, and I will not talk about how to split microservices. Every enterprise and every application is different; the solution that suits you is the best one. We mainly focus on solving some of the problems that microservices bring.

1.2.2 Problems brought by microservices

- How to achieve environment consistency
- How to allocate resources quickly
- How to deploy quickly
- How to do basic monitoring
- Service registration and discovery
- How to do load balancing

These are the basic problems that need to be solved when splitting a large application into microservices. If we still use virtual machines in the traditional way, the resource overhead will be very large. So how do we solve these problems? For example:

- Traffic management
- Service degradation
- Authentication and authorization

Of course, in the face of these problems, our fellow developers already have solutions.

1.3 Service governance

1.3.1 The Java ecosystem

Suppose our application is built on the Java stack; then this is very convenient to solve. For example, we can consider using the Spring Cloud family bucket, or split it up and use:

- Eureka
- Hystrix
- Zuul
- Spring Cloud
- Spring Boot
- ZipKin

Under the Java ecosystem it is very convenient to build the basic parts of our microservices, but environment consistency is still not solved comfortably, and if there are services written in other languages, they will be difficult to integrate.

Let's take a look at a general combination of tools that solves these basic problems regardless of programming language.

1.3.2 Other ecosystems

- Consul
- Kong
- Go-kit
- Jaeger/Zipkin

Suppose we are using the Golang language. Go is practically a language born for microservices, so it is quite convenient: efficient development speed and quite good performance, simple and powerful.

Digression aside, we can also build a good microservice architecture using the tools above:

- Consul: as service discovery and configuration center
- Kong: as a service gateway
- Jaeger: for link tracing
- Go-kit: as a development toolkit

But this kind of solution also has a problem: it is too intrusive to the services. Each service needs to embed a lot of code, which is still a headache.

2. Docker & Kubernetes

A practical scheme for building a platform based on Docker + k8s.

2.1 Docker

Docker is a very powerful container technology.

- Improved resource utilization
- Environment consistency and portability
- Rapid expansion and scaling
- Version control

After using Docker, we found there was more to play with and more flexibility. Not only does resource utilization improve and environment consistency get guaranteed, but version control also becomes more convenient.

In the past we built with Jenkins, and when we needed to roll back, we had to go through the Jenkins build process again, which was very troublesome. For a Java application, a build takes a very long time.

After using Docker, all of this becomes simple: just pull the image of a certain version and start it (if the image is cached locally, that version starts directly). This improvement is very efficient.

(Image source: Internet)

Now that we use Docker containers as the basis for services, we certainly need to orchestrate the containers; without orchestration it would be terrible. For Docker container orchestration we have several options: Docker Swarm, Apache Mesos and Kubernetes. Among these orchestration tools, we chose Kubernetes, the king of container orchestration.

2.1.1 Docker VS VM

- VM: it takes 1 minute to create a virtual machine, 3 minutes to deploy the environment, and 2 minutes to deploy the code.
- Docker: the container starts within 30 seconds.

2.2 Why choose Kubernetes

Let's compare these three container orchestration tools.

2.2.1 Apache Mesos

The goal of Mesos is to build an efficient and scalable system that can support a variety of frameworks, both current and future. This is also a big problem today: frameworks like Hadoop and MPI are independent, which makes it impossible to do some fine-grained sharing between frameworks.

But its primary language is not Golang and it is not in our technology stack, so our maintenance costs would increase; we therefore ruled it out first.

2.2.2 Docker Swarm

Docker Swarm is a scheduling framework developed by Docker. One of the benefits of being developed by Docker itself is the use of standard Docker API. The architecture of Swarm consists of two parts:

(Image source: Internet)

Its use will not be introduced in detail here.

2.2.3 Kubernetes

Kubernetes is an orchestration system for Docker containers. It uses the concepts of label and pod to group containers into logical units. Pods are collections of co-located containers that are deployed and scheduled together to form a service; this is the main difference between Kubernetes and the other two frameworks. Compared with similarity-based container scheduling (such as Swarm and Mesos), this approach simplifies cluster management.

Not only that, it also provides a very rich API, which makes it easy for us to operate it and do more with it. Another major point is that it fits our Golang technology stack and has the backing of large vendors.

The specific usage of Kubernetes will not be introduced here; there is plenty of material on the official website for reference.

2.3 Kubernetes

Kubernetes (k8s) is an open source platform for automating container operations, including deployment, scheduling, and scaling across clusters of nodes.

- Automate container deployment and replication.
- Expand or shrink the number of containers at any time.
- Organize containers into groups and provide load balancing between them.
- Easily upgrade to a new version of an application container.
- Provide container resilience: replace a container if it fails, and so on.
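To make these capabilities concrete, here is a minimal sketch of a Deployment plus a Service; the names, image, and ports are hypothetical, not taken from the article. The replica count covers replication and scaling, the labels group the Pods into a logical unit, and the Service load-balances across them.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                 # hypothetical service name
spec:
  replicas: 3                    # replication; scale up or down by changing this
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app            # label groups the Pods into a logical unit
    spec:
      containers:
        - name: demo-app
          image: registry.example.com/demo-app:1.0.0   # upgrade by changing the tag
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  selector:
    app: demo-app                # the Service load-balances across matching Pods
  ports:
    - port: 80
      targetPort: 8080
```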

2.4 Kubernetes is not enough either

So far, we have solved the following problems:

- Docker: environment consistency, fast deployment.
- Kubernetes: service registration and discovery, load balancing, rapid resource allocation.

And, of course, monitoring; we'll talk about that later. Let's first see how to solve some higher-level problems.

How do we solve problems such as service authentication, link tracing, log management, circuit breaking, traffic management, and fault injection without making intrusive code changes to the services?

A solution has been very popular in the past two years: Service Mesh.

3. Service Mesh

The infrastructure layer that handles inter-service communication for reliable request delivery in the complex service topologies of cloud native applications.

A dedicated infrastructure layer that handles inter-service communication and makes request delivery through complex topologies more reliable. It is deployed alongside the application as a set of lightweight, high-performance network proxies, and the application does not need to be aware of its existence.

Reliably delivering requests in cloud native applications can be very complex, and this complexity can be managed with a series of powerful techniques: circuit breaking, latency awareness, load balancing, service discovery, service renewal, and taking instances offline and evicting them.

There are many Service Mesh frameworks on the market, and we chose Istio, which is currently riding the wave.

3.1 Istio

An open platform for connecting, managing, and protecting microservices.

- Platform support: Kubernetes, Mesos, Cloud Foundry.
- Observability: metrics, logs, traces, dependency visualisation.
- Service Identity & Security: provides a verifiable identity for each service and service-to-service authentication.
- Traffic management: dynamic control of communication between services, ingress/egress routing, fault injection.
- Policy enforcement: precondition checks, quota management between services.

3.2 Why did we choose Istio?

Because it has big vendors behind it; and in fact, the main reason is that it is quite good.

Although it had only just reached version 1.0, we had been trying it since version 0.6 and ran it in the test environment. Then version 0.7.1 came out and we upgraded to it; when 0.8.0 LTS came out, we officially started using 0.8.0 and worked out an upgrade procedure.

At present, the latest version has reached 1.0.4, but we are not going to upgrade it. I want to wait until it is upgraded to 1.2 before starting a formal large-scale application. 0.8.0LTS is fine on a small scale right now.

3.3 Istio Architecture

Let's first take a look at the architecture of Istio.

The Istio control plane is mainly divided into three parts: Pilot, Mixer, and Istio-Auth.

- Pilot: mainly handles service discovery and routing rules, and manages all the Envoys; it consumes a lot of resources.
- Mixer: mainly responsible for policy checks and quota management, as well as tracing; all requests are reported to Mixer.
- Istio-Auth: upgrades traffic (mTLS), authentication, and other functions. We do not currently enable this feature, and the need is not particularly strong because the cluster itself is isolated from the outside.

Each Pod is injected with a Sidecar, and all traffic in and out of the container is redirected to Envoy for processing through iptables.
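The article does not say whether sidecar injection is automatic or manual. Assuming automatic injection via Istio's sidecar injector webhook, a minimal sketch is to label the namespace (the namespace name here is hypothetical); the alternative is manual injection with istioctl kube-inject at deploy time.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kepler-demo            # hypothetical namespace
  labels:
    istio-injection: enabled   # Pods created here get the Envoy sidecar injected
```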

4. Kubernetes & Istio

Istio can be deployed independently, but combining it with Kubernetes is obviously the better choice. Ours is a small-scale architecture based on Kubernetes. Some people worry about its performance; in fact, after production testing, tens of thousands of QPS is no problem at all.

4.1 Kubernetes Cluster

What is our K8s cluster like when resources are scarce?

4.1.1 Master Cluster

Master Cluster:

ETCD, kube-apiserver, kubelet, Docker, kube-proxy, kube-scheduler, kube-controller-manager, Calico, keepalived, IPVS.

4.1.2 Node

Node:

kubelet, kube-proxy, Docker, Calico, IPVS.

(Image source: Internet)

The Master API that we call is managed through keepalived. If one master fails, the VIP drifts smoothly to another master's API without affecting the operation of the whole cluster.

Of course, we also configure two edge nodes.

4.1.3 Edge Node (traffic ingress)

The main function of the edge nodes is to give the cluster nodes that expose services to the outside, so a single edge node does not need to be highly stable on its own. Our IngressGateway is deployed on these two edge nodes and managed through keepalived.

4.2 external service request process

The outermost layer is DNS. Through wildcard resolution, traffic reaches Nginx; Nginx forwards it to the cluster VIP, the VIP goes to the cluster's HAProxy, and HAProxy sends the external traffic to the Gateway on our edge nodes.

Each VirtualService is bound to a Gateway. Through the VirtualService we can do service load balancing, rate limiting, fault handling, routing rules, and canary deployment; the traffic then goes through the Service to the Pods where the service instances run.
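As a rough illustration of that binding (not the authors' actual configuration; hosts, service names, and weights are hypothetical), a Gateway on the IngressGateway plus a weighted VirtualService for a canary might look like this:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: demo-gateway
spec:
  selector:
    istio: ingressgateway          # bind to the Istio IngressGateway pods
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "demo.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: demo-vs
spec:
  hosts:
    - "demo.example.com"
  gateways:
    - demo-gateway
  http:
    - route:
        - destination:
            host: demo-app         # Kubernetes Service of the stable version
          weight: 90
        - destination:
            host: demo-app-canary  # canary Service receives 10% of traffic
          weight: 10
```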

This is the flow without Mixer and policy checks, using only the Istio IngressGateway. It changes if all the Istio components are used, but the main flow stays the same.

4.3 Logging

For log collection we use a low-coupling, scalable scheme that is easy to maintain and upgrade:

- Filebeat on each node collects host logs.
- A Filebeat container injected into each Pod collects business logs.

Filebeat is deployed alongside the application container; the application does not need to know it exists and only needs to specify the directory the logs are written to. The configuration used by Filebeat is read from a ConfigMap, so only the log collection rules need to be maintained.
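A minimal sketch of this pattern (names, image version, and paths are hypothetical; the platform injects this automatically): the Filebeat sidecar shares a log volume with the application and reads its configuration from a ConfigMap.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  volumes:
    - name: app-logs
      emptyDir: {}               # log directory shared by both containers
    - name: filebeat-config
      configMap:
        name: filebeat-config    # hypothetical ConfigMap holding filebeat.yml
  containers:
    - name: demo-app
      image: registry.example.com/demo-app:1.0.0
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app          # the app only needs to write logs here
    - name: filebeat
      image: docker.elastic.co/beats/filebeat:6.4.0
      args: ["-c", "/etc/filebeat/filebeat.yml", "-e"]
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
        - name: filebeat-config
          mountPath: /etc/filebeat
```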

The picture above shows the collected logs as seen in Kibana.

4.4 Prometheus + Kubernetes

- A monitoring system based on time series.
- Seamlessly integrates with Kubernetes at the infrastructure and application levels.
- A powerful key-value data model.
- Backed by big vendors.
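The Kubernetes integration mainly comes from Prometheus' built-in service discovery. A minimal sketch (not the authors' actual configuration) that discovers Pods and scrapes only those annotated with prometheus.io/scrape: "true" looks like this:

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod                # discover scrape targets from the Kubernetes API
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep             # keep only Pods that opt in via the annotation
        regex: "true"
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod
```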

4.4.1 Grafana

4.4.2 Alarm

At present, the alarms we support are Wechat, kplcloud, Email and IM. All alarms can be configured to be sent to various places on the platform.

4.4.3 overall architecture

The whole architecture consists of peripheral services and basic services in the cluster. The external services include:

- Consul is used as the configuration center.
- Prometheus + Grafana is used to monitor the K8s cluster.
- Zipkin provides custom link tracing.
- ELK collects and analyses logs; all logs in our cluster are pushed here.
- Gitlab: the code repository.
- Jenkins is used to build the code, package it into a Docker image, and upload it to the repository.
- Repository: the image repository.

Inside the cluster are:

- HAProxy + keepalived is responsible for traffic forwarding.
- The network is Calico. Calico has beta-level support for kube-proxy's IPVS proxy mode; if Calico detects that kube-proxy is running in this mode, it automatically activates its IPVS support, so we enabled IPVS.
- The DNS within the cluster is CoreDNS.
- We deployed two gateways, mainly using Istio's IngressGateway, with TraefikIngress as a backup. If the IngressGateway goes down, we can quickly switch to TraefikIngress.
- Then there are the relevant Istio components.
- Finally, there are our APP services.
- The cluster collects logs through Filebeat and sends them to an external ES.

Monitoring within the cluster includes:

- State-Metrics: mainly used as the monitoring component for automatic scaling
- Mail&Wechat: self-developed alarm service
- Prometheus+Grafana+AlertManager: cluster monitoring, mainly monitoring services and related basic components
- InfluxDB+Heapster: time-series database that stores the monitoring information of all services

4.5 How to deploy applications with Kubernetes?

4.5.1 R&D packages the image, uploads it to the repository, and manages versions

- Have to learn Docker.
- Have to learn how to configure the repository; manual packaging and uploading is troublesome.
- Have to learn about K8s.

4.5.2 Jenkins is responsible for packaging, pushing the image, and updating the version

- Operation and maintenance work increases a lot.
- If the application needs a configuration change or the service needs a change, you have to find the ops staff.
- You need to manage a bunch of YAML files.

Is there a foolproof solution that is easy to use and does not require learning too much technology?

5. The Kplcloud platform

5.1 Kepler Cloud platform

Kepler Cloud platform is a lightweight PaaS platform.

- Provides a controllable management platform for microservice projects.
- Implements independent deployment, maintenance, and scaling of each service.
- Simplifies the process; no more tedious application procedures, with processing automated as much as possible.
- Realizes rapid release, independent monitoring, and configuration of microservices.
- Achieves zero-intrusion service discovery, service gateway, link tracing, and other functions for microservice projects.
- Provides a configuration center to manage configuration uniformly.
- R&D, product, testing, operations, and even the boss can release their own applications.

5.2 deploy services on the Kepler platform

In order to reduce the learning cost and deployment difficulty, deploying applications on the Kepler platform is easy: only an additional Dockerfile is needed.

Dockerfile reference:

The above is the normal mode: Jenkins builds the code and then Docker builds the image.

This is a relatively free way of deploying; it can be customized according to your own needs, though of course there is a learning cost.

5.2.1 Why not generate Dockerfile automatically?

In fact, it is possible to generate the Dockerfile automatically, but the requirements of each service may differ: some need to add files, some need extra build arguments, and so on. We cannot require all projects to be the same, as that would hinder technical development. So, as the next best thing, we provide templates and developers adapt them to their own needs.

5.3 Tool integration

- The Kepler Cloud platform integrates APIs such as gitlab, Jenkins, repo, k8s, istio, prometheus, email, and WeChat.
- Manages the entire lifecycle of a service.
- Provides service management, creation, release, versioning, monitoring, alarms, logs, and some peripheral extras: message center, configuration center, logging into containers, taking services offline, and so on.
- Allows adjusting the service mode and service type, one-click scaling up and down, rolling back services, API management, storage management, and other operations.

5.4 Release process

Users submit their Dockerfile and code to Gitlab, and then fill in some parameters on the Kepler cloud platform to create their own applications.

After the application is created, a Job is created in Jenkins, which pulls down the code and executes the Docker build (go build or mvn is executed first if multi-stage build is not selected). The packaged Docker image is then pushed to the image repository, and finally the platform API is called back, or K8s is called, to roll out the latest version.

Users only need to manage their applications on the Kepler cloud platform, and all the rest are automated.

5.5 start by creating a service

Let's start by creating a service to introduce the platform.

Main interface of the platform:

Click "create Service" and go to the creation page.

Fill in the basic information:

Fill in the details:

For the basic information, take Golang as an example; the parameters you need to fill in differ slightly when another language is chosen.

If you choose to expose the service, you will enter the third step, which is to fill in the routing rules; if there are no special requirements, you can simply submit the defaults.

5.5.1 Service details

Build and upgrade the application version:

The service invocation mode can be switched between normal mode and Service Mesh mode.

Whether the service provides external access:

Scale up/down and adjust CPU and memory:

Adjust the number of Pods started:

The terminal of the web version:

5.5.2 scheduled tasks

5.5.3 persistent storage

The administrator creates the StorageClass and PersistentVolumeClaim; users only need to select and bind the relevant PVC in their own service.

NFS is used for storage.
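A minimal sketch of what that binding looks like, assuming the administrator has already created an NFS-backed StorageClass named nfs (all names here are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-app-data
spec:
  storageClassName: nfs        # NFS-backed StorageClass created by the administrator
  accessModes:
    - ReadWriteMany            # NFS allows shared read-write access
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: demo-app
      image: registry.example.com/demo-app:1.0.0
      volumeMounts:
        - name: data
          mountPath: /data     # the service only references the bound PVC
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-app-data
```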

5.5.4 Tracing

5.5.5 Consul

Consul is used as a configuration center, and we provide a client for Golang.

$ go get github.com/lattecake/consul-kv-client

It automatically synchronizes the configuration under the Consul directory into memory; you only need to read the configuration directly from memory.

5.5.6 Repository

Github: https://github.com/kplcloud/kplcloud
Document: https://docs.nsini.com
Demo: https://kplcloud.nsini.com

Author: Wang Cong

