How to deploy Spinnaker in a Kubernetes container environment?


If you follow Docker and want to learn about Kubernetes-based continuous delivery (CD) practices, Spinnaker may be the best choice among deployment tools for multi-cloud platforms. This article covers the concepts behind Spinnaker, its installation and the pitfalls encountered along the way, continuous deployment with Spinnaker on Kubernetes, and the selection of an online container service with multi-zone disaster recovery.

1. About Spinnaker

What is Spinnaker? Let's first understand its concept.

Spinnaker is an open source project from Netflix and a continuous delivery platform that aims to deploy products quickly and continuously to a variety of cloud platforms. Spinnaker has two core functions: cluster management and deployment management. It streamlines the deployment process by decoupling the release from the individual cloud platforms, thus reducing the complexity of migrating between platforms or deploying applications across multiple clouds. It has built-in support for cloud platforms such as Google Cloud, AWS EC2, Microsoft Azure, Kubernetes and OpenStack, and it integrates seamlessly with continuous integration (CI) tools and processes such as git, Jenkins, Travis CI, Docker registries and cron schedulers.

Spinnaker's cluster management function is mainly used to display and manage your cloud resources, covering applications, clusters, server groups and so on, and it exposes services to users through load balancers and firewalls.

The official structure is as follows:

Application

Defines a collection of clusters, together with their firewalls and load balancers, and stores all deployment-related information about the service.

Server Group

Defines basic resources such as a VM image or Docker image, plus basic configuration information; once deployed, a server group corresponds to a collection of pods in a Kubernetes environment.

Cluster

An associated collection of server groups (for example, separate server groups can be created for dev and prod within a cluster). It can also be thought of as an application having multiple branches.

Load Balancer

Balances traffic between instances. You can also enable health checks for a load balancer, flexibly define health criteria and specify health check endpoints; it is similar to an Ingress in Kubernetes.

Firewall

Defines network traffic access rules, such as security groups.

The UI, shown below, is relatively concise. On its operation pages you can see the details of a project and its applications, and you can also scale, roll back, modify and delete clusters.

2. The deployment management function of Spinnaker

In the figure above, to the left of Infrastructure is the pipeline design area. Deployment management mainly consists of two parts: pipeline creation and deployment strategy.

Pipeline

1) Powerful pipelines: a pipeline can be arbitrarily complex, and it has a powerful expression language (parameters set in earlier steps can be retrieved in later operations through expressions).

2) Trigger methods: scheduled tasks, manual triggers, Jenkins jobs, Docker images, or other pipelines.

3) Notification methods: email, SMS or HipChat.

4) All operations can be integrated into a pipeline, such as rollback, canary analysis, associated CI, and so on.

3. Pitfalls encountered when installing Spinnaker

Many people find Spinnaker very difficult to install. In fact, the main obstacle is restricted network access (the firewall); once that is solved, everything becomes much easier. The official documentation is very detailed, and the Spinnaker community is very active, so you can ask questions there.

Three installation methods are provided:

1) Halyard installation (the officially recommended method)

> I use the Halyard installation, because later we will integrate many other plug-ins such as GitLab, LDAP and Kayenta, and even multiple Jenkins and Kubernetes services; Halyard is highly configurable.

2) Building the Spinnaker platform with Helm

3) Development version installation

> With the Helm method, customizing anything is complicated and you rely entirely on the original images, while the Development method requires you to be very familiar with Spinnaker and with the correspondence between component versions.

Points for attention in Halyard installation:

Halyard proxy configuration

`

vim /opt/halyard/bin/halyard

DEFAULT_JVM_OPTS='-Dhttp.proxyHost=192.168.102.10 -Dhttps.proxyPort=3128'

`

Deployment machine configuration

Since Spinnaker consists of a large number of applications (described separately below), it requires a large amount of memory. The officially recommended configuration is as follows:

`

18 GB of RAM

A 4 core CPU

Ubuntu 14.04, 16.04 or 18.04

`

4. Spinnaker installation steps

1) Download and install Halyard.

2) Select a cloud provider: I chose Kubernetes Provider V2 (Manifest Based), which requires the machine where Spinnaker is deployed to hold credentials and access rights for the Kubernetes cluster.

3) Choose the deployment environment: I chose the local Debian package installation.

4) Select a storage backend: minio is the suggested option here, and that is what I chose.

5) Choose the version to install: the latest version at that time was 1.8.0.

6) Run the deployment: the initial deployment takes a long time, as the corresponding packages are downloaded through the proxy.

7) After everything has downloaded and completed, check the corresponding logs and access the UI at localhost:9000.

After completing the above steps, you can deploy applications on Kubernetes; a command-line sketch of this flow is given below.
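
As a rough, hedged outline of the steps above (the minio endpoint, access key and account names are placeholders, and 1.8.0 matches the version mentioned earlier), the Halyard commands look roughly like this:

`
# Step 1: download and run the Halyard install script
curl -O https://raw.githubusercontent.com/spinnaker/halyard/master/install/debian/InstallHalyard.sh
sudo bash InstallHalyard.sh

# Step 2: enable the Kubernetes provider (the cluster account itself is registered
# as shown in the Kubernetes provider section further below)
hal config provider kubernetes enable

# Step 3: install Spinnaker as local Debian packages
hal config deploy edit --type localdebian

# Step 4: use minio through the S3-compatible storage settings
hal config storage s3 edit \
  --endpoint http://minio.example.com:9000 \
  --access-key-id MINIO_ACCESS_KEY \
  --secret-access-key            # prompts for the secret key
hal config storage edit --type s3

# Steps 5-6: pick the release and deploy; the first run downloads every component
hal config version edit --version 1.8.0
hal deploy apply

# Step 7: when it finishes, the Deck UI listens on localhost:9000
`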

The following figure shows the relationship between the various components of spinnaker.

Here is the role of each component in Spinnaker:

Deck: the user-facing UI component; it provides an intuitive, concise interface and visualizes the operation, release and deployment process.

API: callers can skip the provided UI and invoke the API directly, and the backend then performs tasks such as releases on their behalf.

Gate: the API gateway component; it can be understood as a proxy through which all requests are forwarded.

Rosco: the component that bakes images; Packer must be configured in order to use it.

Orca: the core orchestration engine component, used to manage pipeline processes.

Igor: used to integrate with other CI systems, such as Jenkins.

Echo: the notification component; it sends email and other messages.

Front50: the storage management component; it needs backends such as Redis or Cassandra configured for use.

Clouddriver: the component that adapts Spinnaker to different cloud platforms, such as Kubernetes, Google Cloud, AWS EC2 and Microsoft Azure.

Fiat: the authentication and authorization component; it manages permissions and supports OAuth, SAML, LDAP, GitHub teams, Azure groups, Google Groups and so on.

5. Continuous deployment with Spinnaker on Kubernetes

Here is a deployment example built around a well-designed pipeline.

The pipeline below is designed as follows: the code is merged into a branch, an image is built through Jenkins, and the build is released to the test environment; after automated and manual verification, a manual judgment decides whether to release to production or to roll back. If you choose to publish, the build is released to the prod environment, completing the automated flow for prod; if you choose to roll back, the previous version is restored, completing the automated flow for dev.

We analyze each stage of the pipeline above:

Stage-Configuration has the following settings:

Automated Triggers

Setting the trigger method, defining global variables and configuring execution notifications form the starting point of the pipeline.

Here we can specify a variety of trigger methods: scheduled tasks (cron), git, Jenkins, Docker Registry, Travis, other pipelines, webhooks, etc.

Parameters

The global variables defined here can be retrieved throughout the pipeline with ${parameters['branch']}, which makes our pipeline designs far more reusable.

Notifications

The supported notification channels are SMS, email, HipChat, etc.

After the stage completes, the result is shown below. We can view the build information and the details executed by this stage; clicking the build number jumps to the corresponding Jenkins job to view the task details.

Stage-Jenkins: this stage is very powerful and combines perfectly with Jenkins.

It calls Jenkins to perform the related tasks. The Jenkins service information is stored in the hal configuration (a sketch is shown below). Spinnaker automatically synchronizes Jenkins jobs and their parameters, and after a run you can see the corresponding job ID and status:
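
For reference, registering a Jenkins master with hal typically looks like the following sketch; the master name, address and credentials are illustrative placeholders rather than values from this setup.

`
# Enable Jenkins integration (Igor) and register a master
hal config ci jenkins enable
hal config ci jenkins master add my-jenkins \
  --address http://jenkins.example.com:8080 \
  --username deploy-bot \
  --password               # prompts for the password or API token
hal deploy apply           # re-apply so Spinnaker picks up the new master
`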

Stage-deploy

Because we configured Spinnaker with the Kubernetes Provider V2, releases are defined with YAML manifests; the files can be stored in GitHub or configured directly on the page. The YAML files also support extensive parameterization, which greatly simplifies daily use.

Halyard configuration for the Kubernetes provider

Since the cloud service we use is Kubernetes, the machine where Spinnaker is deployed must be able to authenticate against the Kubernetes cluster when the provider is configured; a hedged configuration sketch follows.
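
The provider configuration could look roughly like this; the account name and kubeconfig path are placeholders for whatever credentials your deployment machine actually uses.

`
# Register the cluster with the V2 (manifest-based) provider using a kubeconfig file
hal config provider kubernetes account add my-k8s-v2-account \
  --provider-version v2 \
  --kubeconfig-file /home/spinnaker/.kube/config

# Artifact support is needed if manifests are pulled from git/GitHub
hal config features edit --artifacts true
hal deploy apply
`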

Spinnaker release information display:

The Manifest Source here is essentially the YAML deployment file we would normally write, except that it additionally supports parameterization.
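
As an illustration (the application name, registry and port below are made up), a parameterized Manifest Source is just an ordinary manifest with pipeline expressions embedded in it; Spinnaker substitutes ${parameters["branch"]} at execution time. Written out as a file, it could look like this:

`
# Hypothetical parameterized manifest; it could be pasted into Manifest Source or kept in git
cat > demo-app-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: registry.example.com/demo-app:${parameters["branch"]}
        ports:
        - containerPort: 8080
EOF
`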

Stage-Webhook

Webhook

With a webhook we can perform simple environment verification and call other services; it also provides result-verification features and supports a variety of request types.

Stage-Manual Judgment

This adds a manual logical judgment to increase control over the pipeline (for example, requiring tester sign-off and leadership approval before publishing to production); the judgment content supports multiple expression syntaxes.

Stage-Check Preconditions

The Manual Judgment stage above is configured with two Judgment Inputs, and we then build two Check Preconditions stages to test each of those inputs. If a precondition check succeeds, the corresponding downstream stages are executed. For example, in the run above, choosing to publish to prod executes the release branch, while choosing to roll back executes the rollback branch; a hedged expression example is given below.
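
As an illustration only (the stage name and option strings are assumptions, not read from the screenshots), the precondition on the prod branch could be an expression such as:

`
${ #judgment("Manual Judgment") == 'deploy to prod' }
`

and the rollback branch would test the same judgment against 'rollback'; whichever expression evaluates to false blocks its branch.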

If I select rollback in the figure above, the prod-branch check fails, which blocks the subsequent stages on that branch.

Stage-Undo Rollout (Manifest)

If we find a problem with a release, we can also design a rollback stage; Spinnaker makes rolling back extremely convenient. In our daily deployments, each version has a corresponding deployment record.

In the rollback stage configuration, we need to select the corresponding deployment and the number of revisions to roll back.

Example of run result

Stage-Canary Analysis

Canary deployment: when we release a new version, part of the traffic is directed to the canary service. Kayenta reads the metrics collected by the Prometheus instance configured in Spinnaker, such as memory, CPU, interface timeouts and failure rates, analyzes and judges them with the NetflixACAJudge, stores the analysis results in S3, and finally produces the result for the analysis period.

6. Design and display of operation results

1) We need to enable the canary configuration for the application.

2) Create the baseline and canary deployments; both point to the same service.

3) We read metrics from Prometheus here, so the Prometheus configuration needs to be added to hal (a sketch follows below); a metric in the canary configuration can directly match a Prometheus metric.
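
A minimal sketch of that hal wiring (the account name and Prometheus URL are placeholders) might look like this:

`
# Enable Kayenta and point it at Prometheus for canary metrics
hal config canary enable
hal config canary prometheus enable
hal config canary prometheus account add my-prometheus \
  --base-url http://prometheus.example.com:9090
# the S3 bucket that stores analysis results is configured under "hal config canary aws ..."
hal deploy apply
`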

Execution record of each analysis:

The results are shown below; since we set a target score of 75, the pipeline result is judged a failure.

7. Selection of online container services and multi-zone disaster recovery

We adopted a cloud container service mainly for the following reasons:

1. A self-built Kubernetes cluster has to implement an overlay network, whereas on a good cloud platform the network is already a software-defined VPC, so container networking can be as fast as the native VM network without any performance sacrifice.

2. The application load balancer is associated with the Kubernetes Ingress, so services that need external access can be created quickly.

3. Cloud storage can be managed by Kubernetes, which makes persistence easy.

4. Cloud monitoring and alerting provide services and interfaces for a better view of node- and pod-level metrics.

5. The cloud log service integrates well with containers and makes log collection and retrieval easy.

In the case shared in this article, two clusters, in Beijing Zone 2 and Beijing Zone 3, are used for disaster recovery. If grayscale verification is needed, the weight of the online Beijing Zone 3 cluster is set to 0, so that the new version of the application is reached through the grayscale load balancer. In daily use, Zone 2 and Zone 3 serve traffic at the same time. The design ideas and model are as follows:

Pipeline execution result display

Display of a specific application:

8. Related questions

Q: Why isn't it combined with CI? What are the advantages of using the heavier Spinnaker?

A: It can be combined with CI, for example through webhooks or Jenkins scheduling.

The advantage is that it integrates with many cloud platforms, supports more than just containers, and has a mature pipeline with a high degree of parameterization.

Q: How are the security aspects of Spinnaker controlled?

A: Fiat is Spinnaker's authentication and authorization component. It manages permissions and supports OAuth, SAML, LDAP, GitHub teams, Azure groups and Google Groups. We use LDAP, and the logged-in user is displayed in the UI after login; a hedged configuration sketch follows.
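
The LDAP wiring through hal could look roughly like this; the URL and DN pattern are placeholders for your own directory layout.

`
# Enable LDAP authentication for Spinnaker's gateway (Gate)
hal config security authn ldap edit \
  --url ldaps://ldap.example.com:636/dc=example,dc=com \
  --user-dn-pattern uid={0},ou=users
hal config security authn ldap enable
hal deploy apply
`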

Q: Can Spinnaker fetch a deployment YAML over an HTTP API within a pipeline? The YAML may be generated dynamically.

A: There are two ways to deploy a service:

1) Enter the Manifest Source directly in the Spinnaker UI; it is just the corresponding deployment YAML, except that pipeline parameters can be written into it.

2) Pull the corresponding file from GitHub for deployment, as long as the git address and project are configured.

Q: Currently, on the IaaS side only OpenStack and foreign public cloud vendors are supported. What would a domestic cloud provider need to do to be supported (managing cloud VMs rather than containers)? Is it easy to do yourself? Where would we start?

A: At present we mainly use Spinnaker to manage containers, and its support for domestic cloud vendors is not very good; for example, load balancers and security policies cannot be controlled from Spinnaker. If you want to investigate, look at the Clouddriver component.

Q: Personally, I would still need Jenkins to handle builds and the image repository, and Spinnaker only completes the later delivery, right?

A: Yes, Spinnaker is a continuous delivery tool; we also use Jenkins to build the image and push it to the registry.

Q: The earlier Spinnaker screenshot shows features deployed for specific users. Do you still need Istio after adopting Spinnaker?

A: The two serve different purposes. The per-user features are achieved by creating different Services and Ingresses; the routing capability is certainly not as powerful as Istio's, and there is no circuit breaking. We provide this offline environment mainly to help developers deploy and debug quickly.

Q: Can deploy, test and rollback be integrated with Helm charts?

A: I think so. A crude approach could always be implemented with the help of Jenkins, but Spinnaker's rollback and deployment capabilities are so powerful that you can roll back or deploy a version with a single click on the page.

Q: Why does the outermost layer use an NGINX cluster instead of a cloud load balancer?

A: The traffic entry point is a cloud load balancer, but we implement some policies in NGINX behind it.

(This article is based on a community sharing session by Ji Banghua.)
