Step by Step! Kubernetes Continuous Deployment Guide


This article walks through a continuous deployment workflow for Kubernetes, built up step by step from scratch through personal practice. It covers everything from preparing the tools, to forking the repository, testing, building the image, and finally deploying through the pipeline. It should be a useful reference for anyone who wants a fully automated continuous delivery pipeline.

In a job a long time ago, my task was to migrate an old LAMP stack to Kubernetes. My boss, always chasing the newest technology, thought the switch would take only a few days. Since we barely knew how containers worked at the time, that was a bold assumption.

After reading the official documentation and a lot of other material, we began to feel overwhelmed. There were many new concepts to learn: pods, containers, replicas, and so on. Back then, Kubernetes seemed to me to be designed only for a team of very smart developers.

Then I did what I usually do in this situation: learn by doing. A simple example is a good way to untangle intricate problems, so I worked through the whole deployment process step by step.

In the end, we did it, though nowhere near the requested week: it took us nearly a month to create three clusters for development, testing, and production.

In this article, I will explain in detail how to deploy the application to Kubernetes. After reading this article, you will have an efficient workflow for Kubernetes deployment and continuous delivery.

Continuous integration and delivery

Continuous integration is the practice of building and testing the application every time it is updated. By working in small increments, errors are detected earlier and can be resolved immediately.

Once integration is complete and all the tests pass, we can add continuous delivery to automate the release and deployment. Projects that use CI/CD can release more frequently and reliably.

We will use Semaphore, a fast, powerful, and easy-to-use continuous integration and delivery (CI/CD) platform that automates all processes:

1. Install project dependencies

2. Run the unit tests

3. Build a Docker image

4. Push the image to Docker Hub

5. One-click Kubernetes deployment

For the application, we have a Ruby Sinatra microservice that exposes some HTTP endpoints. The project already contains almost everything needed for deployment, but a few pieces still need to be set up.

Preparatory work

Before you start, you need to log in to your GitHub and Semaphore accounts. In addition, to make pulling and pushing Docker images easier later on, you should also log in to Docker Hub.

Next, you need to install some tools on your computer:

Git: to handle the code

curl: the Swiss Army knife of networking

kubectl: to control your cluster remotely

Of course, don't forget Kubernetes. Most cloud providers offer this service in one form or another, so just choose the one that suits your needs. The smallest machine configuration and cluster size are enough to run our example app. I like to start with a three-node cluster, but a single-node cluster will do.

When the cluster is ready, download the kubeconfig file from your provider. Some let you download it directly from their web console, while others require a helper tool. We need this file to connect to the cluster.
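
As a quick sanity check, you can point kubectl at the downloaded file and list the nodes. A minimal sketch; the file name and path below are just examples, so substitute whatever your provider gave you:

$ export KUBECONFIG=$HOME/Downloads/dok8s.yaml   # example path to the downloaded kubeconfig
$ kubectl get nodes                              # should list the nodes of your cluster
$ kubectl cluster-info                           # prints the API server address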

With that, we can get started. The first thing to do is to fork the repository.

Fork repository

Fork the demo application we will use in this article.

1. Visit the semaphore-demo-ruby-kubernetes repository and click the Fork button at the top right

2. Click the Clone or download button and copy the address

3. Clone the repository: $ git clone https://github.com/your_repository_path...

Connect the new repository to Semaphore

1. Log in to your Semaphore account

2. Click the link in the sidebar to create a new project

3. Click the [Add Repository] button next to your repository
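
If you prefer the command line, Semaphore also provides a sem CLI that can connect to your organization and initialize a project from inside the cloned repository. A rough sketch, assuming the CLI is installed and that the organization name and API token below are placeholders for your own:

$ sem connect your-organization.semaphoreci.com YOUR_API_TOKEN   # authenticate the CLI against your organization
$ cd semaphore-demo-ruby-kubernetes
$ sem init                                                       # register the repository as a Semaphore project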

Testing with Semaphore

Continuous integration makes testing interesting and efficient. A complete CI pipeline creates a fast feedback loop that catches errors before they can do any damage. Our project comes with some ready-made tests.

Open the initial pipeline file located at .semaphore/semaphore.yml and take a quick look. This pipeline describes all the steps that Semaphore should follow to build and test the application. It starts with the version and the name.

version: v1.0
name: CI

Next comes the agent, the virtual machine that powers the jobs. We can choose from three machine types:

agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804

Blocks, tasks, and jobs define the actions to be performed at each step of the pipeline. In Semaphore, blocks run sequentially, while the jobs within a block run in parallel. The pipeline contains two blocks: one to install the libraries and one to run the tests.

The first block downloads and installs Ruby gems.

blocks:
  - name: Install dependencies
    task:
      jobs:
        - name: bundle install
          commands:
            - checkout
            - cache restore gems-$SEMAPHORE_GIT_BRANCH-$(checksum Gemfile.lock),gems-$SEMAPHORE_GIT_BRANCH,gems-master
            - bundle install --deployment --path .bundle
            - cache store gems-$SEMAPHORE_GIT_BRANCH-$(checksum Gemfile.lock) .bundle

checkout clones the code from GitHub. Since each job runs on a completely isolated machine, we must rely on the cache to store and retrieve files between job runs.


The second block runs the tests. Notice that we reuse the checkout and cache commands to bring the initial files into the job. The last command starts the RSpec test suite.

  - name: Tests
    task:
      jobs:
        - name: rspec
          commands:
            - checkout
            - cache restore gems-$SEMAPHORE_GIT_BRANCH-$(checksum Gemfile.lock),gems-$SEMAPHORE_GIT_BRANCH,gems-master
            - bundle install --deployment --path .bundle
            - bundle exec rspec

In the last part, let's take a look at promotions. Promotions can connect pipelines under certain conditions to create complex workflows. Once all the jobs are complete, we use auto_promote_on to start the next pipeline.

promotions:
  - name: Dockerize
    pipeline_file: docker-build.yml
    auto_promote_on:
      - result: passed

The workflow continues to the next pipeline.

Build a Docker image

We can run anything on Kubernetes as long as it is packaged in a Docker image. In this section, we will learn how to build an image.

Our Docker image will contain the application's code, Ruby, and all the libraries. Let's first take a look at Dockerfile:

FROM ruby:2.5
RUN apt-get update -qq && apt-get install -y build-essential
ENV APP_HOME /app
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
ADD Gemfile* $APP_HOME/
RUN bundle install --without development test
ADD . $APP_HOME
EXPOSE 4567
CMD ["bundle", "exec", "rackup", "--host", "0.0.0.0", "-p", "4567"]

Dockerfile is like a detailed recipe that contains all the steps and commands needed to build a container image:

1. Start from the pre-built Ruby image

2. Install the build tools using apt-get

3. Copy the Gemfile, since it lists all the dependencies

4. Install the dependencies with bundle

5. Copy the app's source code

6. Define the listening port and the start command

We will bake our production image in a Semaphore environment. However, if you want to do a quick test on your computer, type:

$ docker build . -t test-image

To start the server locally, run the image with Docker and expose internal port 4567:

$ docker run -p 4567:4567 test-image

You can now test an available HTTP endpoint:

$ curl -w "\n" localhost:4567
hello world :)

Add a Docker Hub account to Semaphore

Semaphore has a secure mechanism for storing sensitive information such as passwords, tokens, or keys. To be able to push images to your Docker Hub repository, you need to create a secret with your username and password:

1. Open your Semaphore account

2. In the left navigation bar, click [Secrets]

3. Click [Create New Secret]

Name the secret dockerhub, enter your Docker Hub username and password, and save it.
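
For reference, the same secret can also be created with the sem CLI. A sketch under the assumption that the CLI is already connected to your organization; the placeholder values stand in for your real Docker Hub credentials:

$ sem create secret dockerhub \
    -e DOCKER_USERNAME=your-dockerhub-username \
    -e DOCKER_PASSWORD=your-dockerhub-password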

Build Docker pipeline

This pipeline builds the image and pushes it to Docker Hub. It has only one block and one job:

This time we need more computing power, because Docker builds tend to be resource-intensive. We chose e1-standard-4, a mid-range machine with four CPUs, 8 GB of RAM, and 35 GB of disk space:

version: v1.0
name: Docker build
agent:
  machine:
    type: e1-standard-4
    os_image: ubuntu1804

The build block starts by logging in to Docker Hub; the username and password are imported from the secret we just created. Once logged in, Docker can access the image repository directly.

The next command is docker pull, which attempts to pull the latest image. If an image is found, Docker may be able to reuse some of its layers to speed up the build. If there is no latest image yet, don't worry; the build just takes a little longer.

Finally, we push the new image. Note that we use the SEMAPHORE_WORKFLOW_ID variable to tag the image.

blocks:
  - name: Build
    task:
      secrets:
        - name: dockerhub
      jobs:
        - name: Docker build
          commands:
            - echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
            - checkout
            - docker pull "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:latest || true
            - docker build --cache-from "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:latest -t "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:$SEMAPHORE_WORKFLOW_ID .
            - docker images
            - docker push "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:$SEMAPHORE_WORKFLOW_ID

When the image is ready, we enter the delivery phase of the project. We will extend our Semaphore pipeline with manual promotion.

promotions:
  - name: Deploy to Kubernetes
    pipeline_file: deploy-k8s.yml

To trigger the first automatic build, make a push:

$ touch test-build
$ git add test-build
$ git commit -m "initial run on Semaphore"
$ git push origin master

After the image preparation is complete, we can move on to the deployment phase.

Deploy to Kubernetes

Automatic deployment is one of Kubernetes's strengths. All we need to do is tell the cluster our final expected state, and it will be responsible for the rest.

However, you must upload the kubeconfig file to Semaphore before deployment.

Upload Kubeconfig to Semaphore

We need a second secret: the cluster's kubeconfig. This file grants administrative access to the cluster, so we do not want to check it into the repository.

Create a secret named do-k8s and upload the kubeconfig file to /home/semaphore/.kube/dok8s.yaml:
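
The kubeconfig secret can also be created from the command line by attaching the file with the sem CLI; a sketch, where the local path is a placeholder for wherever you saved the file:

$ sem create secret do-k8s \
    -f /path/to/your/dok8s.yaml:/home/semaphore/.kube/dok8s.yaml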

Deployment manifest

Although Kubernetes is a container orchestration platform, we do not manage containers directly. The smallest deployable unit is actually the pod. A pod is like a group of inseparable friends who always go everywhere together: the containers in a pod are guaranteed to run on the same node and share the same IP. They start and stop in unison, and since they run on the same machine, they can share resources.

The problem with pods is that they can be started and stopped at any moment, and we cannot know in advance which IP a pod will be assigned. To forward users' HTTP traffic, we also need a public IP and a load balancer, which tracks the pods and forwards client traffic to them.

Open the file deployment.yml. This is the manifest for deploying our application, and it is split into two resources separated by three dashes. First, the deployment resource:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: semaphore-demo-ruby-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: semaphore-demo-ruby-kubernetes
  template:
    metadata:
      labels:
        app: semaphore-demo-ruby-kubernetes
    spec:
      containers:
        - name: semaphore-demo-ruby-kubernetes
          image: $DOCKER_USERNAME/semaphore-demo-ruby-kubernetes:$SEMAPHORE_WORKFLOW_ID

Here are a few concepts that need to be clarified:

Resources have a name and several labels for organization

spec defines the final desired state, and template is the model used to create the pods

replicas sets the number of pod copies to create. We often set it to the number of nodes in the cluster. Since we are using three nodes, I will change this line to replicas: 3

The second resource is the service. It binds to port 80 and forwards HTTP traffic to the pods in the deployment:

---
apiVersion: v1
kind: Service
metadata:
  name: semaphore-demo-ruby-kubernetes-lb
spec:
  selector:
    app: semaphore-demo-ruby-kubernetes
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 4567

Kubernetes matches the selector against the pod labels to connect the service to the pods. This way, we can have many services and deployments in the same cluster and connect them as needed.
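
Once the deployment is live, you can see the matching in action with kubectl; both commands below use the names defined in the manifests above:

$ kubectl get pods -l app=semaphore-demo-ruby-kubernetes    # the pods selected by the app label
$ kubectl get endpoints semaphore-demo-ruby-kubernetes-lb   # the pod addresses the service forwards to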

Deployment pipeline

We are now entering the final phase of CI/CD configuration. At this point, we have a CI pipeline defined in semaphore.yml and a Docker pipeline defined in docker-build.yml. In this step, we will deploy to Kubernetes.

Open the deployment pipeline at .semaphore/deploy-k8s.yml:

version: v1.0
name: Deploy to Kubernetes
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804

Two jobs make up the final pipeline:

Job 1 performs the deployment. After importing the kubeconfig file, envsubst replaces the placeholder variables in deployment.yml with their actual values. kubectl apply then sends the manifest to the cluster.

blocks:
  - name: Deploy to Kubernetes
    task:
      secrets:
        - name: do-k8s
        - name: dockerhub
      env_vars:
        - name: KUBECONFIG
          value: /home/semaphore/.kube/dok8s.yaml
      jobs:
        - name: Deploy
          commands:
            - checkout
            - kubectl get nodes
            - kubectl get pods
            - envsubst < deployment.yml | tee deployment.yml
            - kubectl apply -f deployment.yml
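
If you want to preview what this job will send to the cluster before promoting, you can run the same substitution locally. A rough sketch; the exported values are placeholders, and --dry-run=client requires a reasonably recent kubectl:

$ export DOCKER_USERNAME=your-dockerhub-username
$ export SEMAPHORE_WORKFLOW_ID=any-tag-you-pushed
$ envsubst < deployment.yml | kubectl apply --dry-run=client -f -   # validates the manifest without changing the cluster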

Job 2 tags the image as latest so that we can use it as a cache on the next run.

  - name: Tag latest release
    task:
      secrets:
        - name: dockerhub
      jobs:
        - name: docker tag latest
          commands:
            - echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
            - docker pull "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:$SEMAPHORE_WORKFLOW_ID
            - docker tag "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:$SEMAPHORE_WORKFLOW_ID "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:latest
            - docker push "${DOCKER_USERNAME}"/semaphore-demo-ruby-kubernetes:latest

This is the last step in the workflow.

Deploy the application

Let's teach our Sinatra app to sing. Add the following code to the App class in app.rb:

Get "/ sing" do "And now, the end is near And so I face the final curtain..." end

Push the modified files to GitHub:

$ git add .semaphore/*
$ git add deployment.yml
$ git add app.rb
$ git commit -m "test deployment"
$ git push origin master

When the Docker build pipeline is complete, you can check the progress in Semaphore.

Then it's time to deploy: click the Promote button and see if it works.

We're off to a good start; now it's up to Kubernetes. We can check the deployment status with kubectl. The initial state shows three desired pods and zero available:

$ kubectl get deployments
NAME                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
semaphore-demo-ruby-kubernetes   3         0         0            0           15m

A few seconds later, the pods are started and the reconciliation is complete:

$ kubectl get deployments
NAME                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
semaphore-demo-ruby-kubernetes   3         3         3            3           15m
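
Instead of polling get deployments, you can also let kubectl wait until the rollout finishes:

$ kubectl rollout status deployment/semaphore-demo-ruby-kubernetes   # blocks until all replicas are available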

Use get all to see the overall state of the cluster, which shows the pods, services, deployment, and replica set:

$ kubectl get all
NAME                                                  READY   STATUS    RESTARTS   AGE
pod/semaphore-demo-ruby-kubernetes-7d985f8b7c-454dh   1/1     Running   0          2m
pod/semaphore-demo-ruby-kubernetes-7d985f8b7c-4pdqp   1/1     Running   0          119s
pod/semaphore-demo-ruby-kubernetes-7d985f8b7c-9wsgk   1/1     Running   0          2m34s

NAME                                        TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
service/kubernetes                          ClusterIP      10.12.0.1     <none>         443/TCP        24m
service/semaphore-demo-ruby-kubernetes-lb   LoadBalancer   10.12.15.50   35.232.70.45   80:31354/TCP   17m

NAME                                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/semaphore-demo-ruby-kubernetes   3         3         3            3           17m

NAME                                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/semaphore-demo-ruby-kubernetes-7d985f8b7c   3         3         3       2m34s

The service IPs are shown after the pods. For me, the load balancer was assigned the external IP 35.232.70.45. Replace it with the one assigned by your provider (you can look it up with kubectl, as shown below), and then let's try the new server.
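
To look up the address your provider assigned, query the service directly; the jsonpath expression is one way to print just the IP (some providers expose a hostname instead, in which case this field will be empty):

$ kubectl get service semaphore-demo-ruby-kubernetes-lb
$ kubectl get service semaphore-demo-ruby-kubernetes-lb -o jsonpath='{.status.loadBalancer.ingress[0].ip}'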

$ curl -w "\n" http://YOUR_EXTERNAL_IP/sing

Now, it's not far from the end.

Victory is close at hand

When you use the right CI/CD solution, it's not that difficult to deploy to Kubernetes. You now have a fully automated continuous delivery pipeline for Kubernetes.

Here are a few suggestions for playing with your fork of semaphore-demo-ruby-kubernetes:

Create a staging cluster

Build a deployment container and run tests in it

Expand projects with more microservices
