Author | Wu Chongbin, Operations Development Engineer at VPGAME
Introduction: VPGAME is a comprehensive e-sports service platform integrating event operation, media information, big data analysis, player community, merchandise and more. It is headquartered in Hangzhou, China, with an e-sports big data R&D center in Shanghai and an AI R&D center in Seattle, USA. This article describes how VPGAME migrated its services to Kubernetes.
Background
With container technology becoming increasingly mature, the company plans to migrate its services to a container environment in the near future and to schedule, orchestrate, and manage the containers with Kubernetes. We also took this opportunity to standardize our services, optimize the entire CI/CD process, and improve the efficiency of service deployment.
Selection of CI/CD tools
For the CI/CD tool we chose GitLab CI. GitLab CI is the continuous integration system used with GitLab; after code is pushed it handles dependency installation, compilation, unit testing, lint, image building, and release.
GitLab CI integrates seamlessly with GitLab: to use it you only need to install and configure gitlab-runner. Once registered with GitLab, GitLab Runner provides the environment in which CI/CD runs; it pulls the code from GitLab and executes the commands defined in the .gitlab-ci.yml in the code repository to carry out the CI/CD work.
Compared with Jenkins, GitLab CI only requires a .gitlab-ci.yml file in the project to complete the CI/CD process; there is no need to configure a webhook callback address or to create a separate build configuration for the project in Jenkins. Personally, I also find the GitLab CI/CD interface cleaner than Jenkins'. Of course, with its rich plugin ecosystem, Jenkins offers many features that GitLab CI does not have. For our current requirements, GitLab CI is easy to use and functionally sufficient.
Service runtime environment
Advantages of the container environment
In traditional service deployment, application dependencies are installed in the operating system and the application service is then installed on top of them. The drawback of this approach is that the service's program, configuration, dependent libraries, and life cycle are tightly coupled to the host operating system, which makes upgrading, scaling, and migrating the service inconvenient.
Container-based deployment is image-centric: when the code is compiled and built, the application and the dependencies it needs at runtime are packaged into an image, and in the deployment phase the service is run by creating container instances from that image. This makes management application-centric, and container isolation provides resource isolation. Because the container does not depend on the host's operating system environment, consistency across development, testing, and production environments is guaranteed. In addition, because the built images are immutable and can be versioned through tags, they enable reliable and frequent image builds and deployments, as well as quick and convenient rollbacks.
Kubernetes platform function
Kubernetes (k8s for short), as a container scheduling, orchestration and management platform, can schedule and run application containers on clusters of physical or virtual machines, providing a container-centric infrastructure. Through Kubernetes we can orchestrate and manage containers to:
- Deploy services rapidly and predictably
- Scale services instantly
- Release new features through rolling upgrades
- Optimize hardware usage and reduce costs

Alibaba Cloud Container Service
For the service migration we chose Alibaba Cloud Container Service, which adapts and enhances native Kubernetes: it simplifies cluster creation and scaling and integrates Alibaba Cloud's virtualization, storage, network, and security capabilities to provide an excellent environment for running containerized Kubernetes applications in the cloud. In terms of convenience, clusters can be created, upgraded, and scaled out with one click through the web console. Functionally, it integrates with Alibaba Cloud resources for networking, storage, load balancing, and monitoring, which minimizes the impact of the migration.
In addition, when creating the cluster we chose managed Kubernetes: we only need to create the Worker nodes, while the Master nodes are created and managed by the container service. This way we retain autonomy and flexibility in planning and isolating the Worker nodes, while operators no longer need to manage the cluster's Master nodes and can focus more on the application services.
GitLab Runner deployment
GitLab CI workflow
Basic concepts of GitLab CI
Before introducing GitLab CI, let's briefly introduce some basic concepts in GitLab CI, as follows:
- Pipeline: in GitLab CI, each code submission triggers the generation of a Pipeline.
- Stage: each Pipeline consists of multiple Stages, and the Stages run in a defined order.
- Job: the smallest unit of work in GitLab CI, responsible for one task such as compiling, testing, or building an image. Each Job must specify a Stage, so the execution order of Jobs is controlled by assigning them to different Stages.
- GitLab Runner: the concrete environment that executes a Job; each Runner can only execute one Job at a time.
- Executor: each Runner specifies an Executor when it registers with GitLab, which determines what kind of environment is used to complete the Job.

The workflow of GitLab CI
When code is pushed to GitLab, a Pipeline is triggered. Compilation, testing, image building, and other operations are then performed, each of them a Job. In the CD phase, the artifacts built in the CI phase are deployed to the test or production environment as appropriate.
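To make these concepts concrete, a minimal .gitlab-ci.yml with one stage and one Job could look like the sketch below. This is an illustrative example only, not our actual configuration:

stages:
  - test

unittest:
  stage: test          # the Stage this Job belongs to
  image: alpine:latest # the image the Runner uses to run this Job
  script:
    - echo "run unit tests here"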
Introduction to GitLab Runner
GitLab Runner classification
There are three types of Runner in GitLab, which are:
- Shared: available to all projects
- Group: available to all projects in a group
- Specific: available to specified projects
We can register different types of Runner with GitLab as needed; the registration procedure is the same for all of them.
Gitlab Runner working process
A Runner first sends a registration request to GitLab containing its token, tags, and other information. After the registration succeeds, GitLab returns a token to the Runner, which the Runner includes in all subsequent requests.
After registering successfully, the Runner keeps polling GitLab for Jobs at an interval of 3 seconds. If no Job is available, GitLab returns 204 No Content. If a Job is available, GitLab returns the Job information; after receiving it, the Runner sends an acknowledgment request to GitLab and updates the task status. The Runner then starts executing the Job and periodically sends intermediate output to GitLab via Patch requests.
Executor of GitLab Runner
When a Runner actually executes a Job, it does so by calling an Executor. When registering, a Runner can choose from different types of Executor, such as SSH, Shell, Docker, Docker-SSH, VirtualBox, and Kubernetes, to meet different scenarios and requirements.
Among these, the ones we commonly use are the Shell and Docker Executors. The Shell Executor runs Jobs directly in the environment of the host on which the Runner runs. With the Docker Executor, at the start of each Job an image is pulled and a container is created, the Job is completed inside the container, and the container is destroyed when the Job finishes. Because Docker is isolated, lightweight, and cleaned up automatically, we chose the Docker Executor to execute Jobs: as long as we build the Docker image of the environment a Job needs in advance and specify that image in the Job, the corresponding environment is available, which is very convenient.
GitLab Runner installation and configuration
Docker installation
Since we want to use the Docker Executor, we first need to install Docker on the server that runs the Runner. The specific steps are as follows (CentOS environment):
Install the required packages; yum-utils provides the yum-config-manager utility, and the other two are dependencies of the devicemapper storage driver:
yum install -y yum-utils device-mapper-persistent-data lvm2
Set up the yum source:
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install Docker:
yum install -y docker-ce
Start Docker and enable it to start on boot:
systemctl start docker
systemctl enable docker

GitLab Runner installation and startup
Execute the following command to install and start GitLab Runner:
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.rpm.sh | sudo bash
sudo yum install -y gitlab-runner
gitlab-runner start

GitLab Runner registration and configuration update
After starting GitLab Runner, you still need to register it with GitLab, and before registering you need to obtain a token from GitLab. Different types of Runner obtain their token from different places: a Shared Runner requires an account with admin permission, and its token is obtained from the GitLab admin area.
The other two types can obtain their token from Settings -> CI/CD -> Runners on the corresponding page (the group or project home page). There are two ways to register a Runner: interactive and non-interactive. In interactive mode, after entering the gitlab-runner register command you follow the prompts to enter the information needed for registration, such as the GitLab URL, the token, and the Runner name. I personally recommend the non-interactive command, which can be prepared in advance for one-step registration, and the non-interactive mode also provides more registration options to meet more varied needs.
Follow the example below to complete the registration of a Runner:
gitlab-runner register --non-interactive \
  --url "http://git.xxxx.cn" \
  --registration-token "xxxxxxxxxxx" \
  --executor "docker" \
  --docker-image alpine:latest \
  --description "base-runner-docker" \
  --tag-list "base-runner" \
  --run-untagged="true" \
  --docker-privileged="true" \
  --docker-pull-policy "if-not-present" \
  --docker-volumes /etc/docker/daemon.json:/etc/docker/daemon.json \
  --docker-volumes /etc/gitlab-runner/key/docker-config.json:/root/.docker/config.json \
  --docker-volumes /etc/gitlab-runner/find_diff_files:/usr/bin/find_diff_files \
  --docker-volumes /etc/gitlab-runner/key/id_rsa:/root/.ssh/id_rsa \
  --docker-volumes /etc/gitlab-runner/key/test-kube-config:/root/.kube/config
We can use --docker-pull-policy to specify the Docker image pull policy when the Executor runs a Job, and --docker-volumes to specify file mounts between the container and the host (that is, the server the Runner runs on). The files mounted above are mainly the keys the Runner needs when executing Jobs, including the keys for accessing GitLab, Docker Harbor, and the Kubernetes clusters. If other files need to be shared with the container, they can also be specified with --docker-volumes.
The /etc/docker/daemon.json file is mainly used to allow HTTP access to our Docker Harbor:
{"insecure-registries": ["http://docker.vpgame.cn"]}
After completing the registration, restart Runner:
gitlab-runner restart
After the deployment is complete, the information of different Runner can be found in the Web management interface of GitLab.
In addition, if one server needs to run multiple Runners, you can increase the concurrent value in /etc/gitlab-runner/config.toml to raise the Runner's concurrency. The Runner also needs to be restarted after the change.
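For reference, a minimal sketch of the relevant part of /etc/gitlab-runner/config.toml follows; the URL and token are placeholders, and the per-runner section is normally generated by gitlab-runner register:

concurrent = 4
check_interval = 0

[[runners]]
  name = "base-runner-docker"
  url = "http://git.xxxx.cn"
  token = "xxxxxxxxxxx"
  executor = "docker"
  [runners.docker]
    image = "alpine:latest"
    privileged = true
    pull_policy = "if-not-present"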
Building Docker base images
To meet the needs of different services for their running environments, we prepare base images for services in different languages in advance, for use in the image-build phase. The tool images required by CI/CD also need to be built, as the Docker images the Runner's containers use when executing Jobs. All images are managed in GitLab in the form of Dockerfiles, and we wrote a .gitlab-ci.yml so that every time a Dockerfile is added or modified, a Pipeline is triggered to build the image and upload it to Harbor. Managing images this way has the following advantages:
- Images are built automatically according to fixed rules, so images can be created and updated quickly and easily.
- The Dockerfile corresponding to an image can be located from those same rules, so the exact composition of every image is clear to team members.
- Team members can freely build the images they need by submitting a Merge Request.

Image classification
- Runtime base image: provides the tools necessary for each language runtime and the corresponding packages.
- CI image: based on the runtime base image, with unit testing, lint, static analysis, and other tools added; used in the test part of the CI/CD process.
- Packaging and release images: used in the build and deploy stages of the CI/CD process.

Dockerfile directory structure
Each folder has Dockerfile to describe the basics of images, including runtime base images and CI images in different languages such as Java, PHP, Node, and Go, as well as Dockerfile for tools such as docker-kubectl.
Take PHP image as an example:
php/
├── 1.0
│   ├── Dockerfile
│   ├── ci-1.0
│   │   └── Dockerfile
│   ├── php.ini
│   ├── read-zk-config
│   ├── start_service.sh
│   └── www.conf
└── nginx
    ├── Dockerfile
    ├── api.vpgame.com.conf
    └── nginx.conf
The 1.0 folder under this directory contains the Dockerfile used to build the PHP-FPM runtime base image. It is based on php:7.1.16-fpm-alpine3.4, to which we add our own customized files and specify the working directory and the container's initial command.
FROM php:7.1.16-fpm-alpine3.4
RUN sed -i 's@dl-cdn.alpinelinux.org@mirrors.aliyun.com@g' /etc/apk/repositories \
    && apk upgrade --update && apk add --no-cache --virtual build-dependencies $PHPIZE_DEPS \
       tzdata postgresql-dev libxml2-dev libmcrypt libmcrypt-dev libmemcached-dev cyrus-sasl-dev autoconf \
    && apk add --no-cache freetype libpng libjpeg-turbo freetype-dev libpng-dev libjpeg-turbo-dev libmemcached-dev \
    && cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
    && echo "Asia/Shanghai" > /etc/timezone \
    && docker-php-ext-configure gd \
       --with-gd \
       --with-freetype-dir=/usr/include/ \
       --with-png-dir=/usr/include/ \
       --with-jpeg-dir=/usr/include/ \
    && docker-php-ext-install gd pdo pdo_mysql bcmath opcache \
    && pecl install memcached apcu redis \
    && docker-php-ext-enable memcached apcu redis \
    && apk del build-dependencies \
    && apk del tzdata \
    && rm -rf /var/cache/apk/* \
    && rm -rf /tmp/* \
    && rm -rf /working/* \
    && rm -rf /usr/local/etc/php-fpm.d/*
COPY start_service.sh /usr/local/bin/start_service.sh
COPY read-zk-config /usr/local/bin/read-zk-config
COPY php.ini /usr/local/etc/php/php.ini
COPY www.conf /usr/local/etc/php-fpm.d/www.conf
WORKDIR /work
CMD ["start_service.sh"]
There is also a Dockerfile in 1.0/ci-1.0, which is used to build the CI image that PHP uses for unit testing and lint operations. You can see that it builds on the base runtime image above by adding other tools.
FROM docker.vpgame.cn/infra/php-1.0
ENV PATH="/root/.composer/vendor/bin:${PATH}"
ENV COMPOSER_ALLOW_SUPERUSER=1
RUN mkdir -p /etc/ssh && echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config
RUN apk --update add --no-cache make libc-dev autoconf gcc openssh-client git bash && \
    echo "apc.enable_cli=1" >> /usr/local/etc/php/conf.d/docker-php-ext-apcu.ini
RUN pecl install xdebug && docker-php-ext-enable xdebug && \
    echo -e "\nzend_extension=xdebug.so" >> /usr/local/etc/php/php.ini
RUN wget https://vp-infra.oss-cn-beijing.aliyuncs.com/gitlab-ci/software/download/1.6.5/composer.phar -O /bin/composer && \
    chmod +x /bin/composer && \
    composer config -g -q repo.packagist composer https://packagist.laravel-china.org
RUN composer global require -q phpunit/phpunit:~5.0 squizlabs/php_codesniffer:~3.0
WORKDIR /
CMD ["/bin/bash"]
In addition, Dockerfile is also available in the Nginx directory to customize the Nginx image needed by our PHP project.
Whenever a Dockerfile is added or changed in GitLab, a Pipeline is triggered that automatically builds the image and uploads it to our private Docker Harbor.
Basic principle of automatic image building
Since every image is managed through a Dockerfile, whenever a new merge lands on the Master branch we can use the git diff command to find the Dockerfiles that were added or updated by the merge, build images from those Dockerfiles according to fixed naming rules, and upload them to Docker Harbor.
for FILE in `bash ./find_diff_files | grep Dockerfile | sort`; do
    DIR=`dirname "$FILE"`
    IMAGE_NAME=`echo $DIR | sed -e 's/\//-/g'`
    echo $CI_REGISTRY/$HARBOR_DIR/$IMAGE_NAME
    docker build -t $CI_REGISTRY/$HARBOR_DIR/$IMAGE_NAME -f $FILE $DIR
    docker push $CI_REGISTRY/$HARBOR_DIR/$IMAGE_NAME
done
The find_diff_files script in the command above uses git diff to find the files that differ before and after the merge.
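The script itself is not reproduced in this article; a minimal sketch of the idea, assuming the repository is checked out at the merge commit, might look like this (the real script may differ):

#!/bin/bash
# List the files changed by the most recent merge: compare the merge commit
# (HEAD) with its first parent (the branch tip before the merge).
git diff --name-only HEAD^1 HEAD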
Acceleration tips
- Alpine Linux package management (APK) mirror: use http://mirrors.aliyun.com.
- Some software downloads from overseas sources are slow, so download them in advance and upload them to Alibaba Cloud OSS; the Dockerfile then downloads them from OSS, which reduces image build time.

CI/CD process based on .gitlab-ci.yml
Having prepared the GitLab Runner and the Docker base images, we can proceed with the CI/CD process that performs unit testing, lint, compilation, image packaging, and deployment after every code update. Running CI/CD through GitLab CI only requires editing and maintaining a .gitlab-ci.yml file in the code repository; whenever the code is updated, GitLab CI reads this file and generates a Pipeline to carry out the CI/CD work.
The syntax of .gitlab-ci.yml is relatively simple and based on YAML. We split the tasks to be completed in the CI/CD process into Jobs in this file; as long as each Job is clearly defined, they form an appropriate, efficient, and reusable CI/CD process.
Define stages
stages is a very important concept: it is defined globally in .gitlab-ci.yml, and each Job specifies one of its values to indicate the stage the Job belongs to. The order of the elements in stages defines the execution order of Jobs: all Jobs in the same stage run in parallel, and only after they all complete successfully do the Jobs of the subsequent stage start.
For example, define the following stages:
stages:
  - build
  - test
  - deploy

First, all Jobs in build are executed in parallel; when they have all succeeded, all Jobs in test are executed in parallel; if those all succeed, all Jobs in deploy are executed in parallel; and if all Jobs in deploy succeed, the Pipeline is marked as passed. When any Job in a stage fails, the Pipeline is marked as failed and the Jobs of subsequent stages are not executed.

Description of Job
Job is the most important component of the .gitlab-ci.yml file: every task performed in the CI/CD process is implemented by defining a Job. Concretely, each Job is described by keywords. Since Jobs have many keywords with fairly rich usage, here we explain one of the Jobs we actually use.
unittest:
  stage: test
  image: docker.vpgame.cn/infra/php-1.0-ci-1.1
  services:
    - name: docker.vpgame.cn/infra/mysql-5.6-multi
      alias: mysql
    - name: redis:4.0
      alias: redis_default
  script:
    - mv .env.tp .env
    - composer install --no-dev
    - phpunit -v --coverage-text --colors=never --coverage-html=coverage --stderr
  artifacts:
    when: on_success
    paths:
      - vendor/
      - coverage/
    expire_in: 1 hour
  coverage: '/^\s*Lines:\s*\d+.\d+\%/'
  only:
    - branches
    - tags
  tags:
    - base-runner
The above Job performs the unit-testing step; its name is defined on the first line. Let's go through the meaning of each keyword in the Job.
stage: defines the stage the Job belongs to; the value must be one of those defined in the global stages.
image: specifies the image the Runner uses to run the Job. Here it is one of the base images we built earlier; the container started from this image is the environment the Job runs in.
services: the additional containers the Job depends on. Here MySQL and Redis are defined; when the Job runs, it connects to the containers created from these two images.
script: the concrete commands the Job runs, written as shell commands. The script in this Job performs code compilation and unit testing.
artifacts: packages and saves results produced by this Job. when specifies under which condition to save them, paths defines the file paths to save, and expire_in specifies how long the results are kept. Its counterpart is the dependencies keyword: if another Job needs this Job's artifacts, it only needs the following definition:
dependencies:
  - unittest
The only keyword specifies the conditions under which the Job is triggered. In this example it states that the Job is triggered only by branches or tags.
The opposite of only is the except keyword, which excludes certain cases from triggering the Job. In addition, only supports regular expressions, for example:
job:
  only:
    - /^issue-.*$/
  except:
    - branches
In this example, only tags starting with issue- trigger the Job. Without the except parameter, both branches and tags starting with issue- would trigger it.
The tags keyword specifies which type of Runner runs the Job. In our setup, the Runners used to deploy the test environment and the production environment are different and are identified and distinguished by different tags.
So, in the Job definition, we select the desired Runner by setting the value of tags.
As you can see, the definition of a Job is very clear and flexible, and Jobs offer far more functionality than shown here. For more detailed usage, please refer to the official GitLab CI/CD documentation.
CI/CD process choreography
Having seen how to describe a Job, we form a Pipeline by defining and orchestrating individual Jobs. Because trigger conditions can be set on Jobs, different conditions trigger different Pipelines. In the process of moving our PHP projects onto Kubernetes, we stipulated that a merge into the Master branch runs four Jobs: lint, unittest, build-test, and deploy-test.
After verification in the test environment, we release to the production environment by pushing a tag. That Pipeline contains three Jobs: unittest, build-prod, and deploy-prod.
In the .gitlab-ci.yml file, the test stage consists of two Jobs, lint and unittest, whose definitions differ between languages. Below we focus on the Job descriptions for the build and deploy stages.
Build stage:
# Build stage
.build-op:
  stage: build
  dependencies:
    - unittest
  image: docker.vpgame.cn/infra/docker-kubectl-1.0
  services:
    - name: docker:dind
      entrypoint: ["dockerd-entrypoint.sh"]
  script:
    - echo "Image name:" ${DOCKER_IMAGE_NAME}
    - docker build -t ${DOCKER_IMAGE_NAME} .
    - docker push ${DOCKER_IMAGE_NAME}
  tags:
    - base-runner

build-test:
  extends: .build-op
  variables:
    DOCKER_IMAGE_NAME: ${DOCKER_REGISTRY_PREFIX}/${CI_PROJECT_PATH}:${CI_COMMIT_REF_SLUG}-${CI_COMMIT_SHORT_SHA}
  only:
    - /^testing/
    - master

build-prod:
  extends: .build-op
  variables:
    DOCKER_IMAGE_NAME: ${DOCKER_REGISTRY_PREFIX}/${CI_PROJECT_PATH}:${CI_COMMIT_TAG}
  only:
    - tags
In the build stage, the basic image-packaging operation is the same for the test and production environments: the image is built from the Dockerfile and pushed to the image repository. The extends keyword is used here, which avoids repeating Job descriptions and keeps them concise and clear.
We first define a Job named .build-op, and then build-test and build-prod both inherit from it through extends; configuration in .build-op can be added to or overridden by redefining keywords. For example, build-prod redefines the variable DOCKER_IMAGE_NAME (variables) and changes the trigger condition (only) to tags.
Note that when defining DOCKER_IMAGE_NAME we reference some of GitLab CI's own variables, such as CI_COMMIT_TAG, which is the tag name of the project's commit. When defining Job variables we may refer to such built-in variables; for a description of them, please refer to the GitLab CI/CD Variables documentation.
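For illustration, a throwaway Job like the following (not part of our actual pipeline) would print the values of a few of these built-in variables for a given commit:

show-vars:
  stage: build
  script:
    - echo "${CI_PROJECT_PATH}"       # group/project path, e.g. group-name/project-name
    - echo "${CI_COMMIT_REF_SLUG}"    # branch or tag name, lowercased and slugified
    - echo "${CI_COMMIT_SHORT_SHA}"   # first 8 characters of the commit SHA
    - echo "${CI_COMMIT_TAG}"         # tag name, only set in tag pipelines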
Deploy stage:
# Deploy stage
.deploy-op:
  stage: deploy
  image: docker.vpgame.cn/infra/docker-kubectl-1.0
  script:
    - echo "Image name:" ${DOCKER_IMAGE_NAME}
    - echo ${APP_NAME}
    - sed -i "s~__NAMESPACE__~${NAMESPACE}~g" deployment.yml service.yml
    - sed -i "s~__APP_NAME__~${APP_NAME}~g" deployment.yml service.yml
    - sed -i "s~__PROJECT_NAME__~${CI_PROJECT_NAME}~g" deployment.yml
    - sed -i "s~__PROJECT_NAMESPACE__~${CI_PROJECT_NAMESPACE}~g" deployment.yml
    - sed -i "s~__GROUP_NAME__~${GROUP_NAME}~g" deployment.yml
    - sed -i "s~__VERSION__~${VERSION}~g" deployment.yml
    - sed -i "s~__REPLICAS__~${REPLICAS}~g" deployment.yml
    - kubectl apply -f deployment.yml
    - kubectl apply -f service.yml
    - kubectl rollout status -f deployment.yml
    - kubectl get all,ing -l app=${APP_NAME} -n $NAMESPACE

# Deploy test environment
deploy-test:
  variables:
    REPLICAS: 2
    VERSION: ${CI_COMMIT_REF_SLUG}-${CI_COMMIT_SHORT_SHA}
  extends: .deploy-op
  environment:
    name: test
    url: http://example.com
  only:
    - /^testing/
    - master
  tags:
    - base-runner

# Deploy prod environment
deploy-prod:
  variables:
    REPLICAS: 3
    VERSION: ${CI_COMMIT_TAG}
  extends: .deploy-op
  environment:
    name: prod
    url: http://example.com
  only:
    - tags
  tags:
    - pro-deploy
Similar to the build stage, we first define a Job named .deploy-op, and deploy-test and deploy-prod both inherit from it through extends.
.deploy-op mainly substitutes variables into the Kubernetes Deployment and Service template files, and then deploys to Kubernetes using the generated Deployment and Service files.
deploy-test and deploy-prod define different variables (variables) and trigger conditions (only). In addition, deploy-prod uses the tags keyword to select a different Runner, which points the deployment target at the production-environment Kubernetes cluster.
The environment keyword also needs to be specified: once environment is defined, we can view information about each deployment in GitLab, and besides viewing that information we can conveniently redeploy and roll back.
As you can see, by configuring the keywords of Job, we can flexibly orchestrate the CI/CD process we need to meet a variety of scenarios.
Deployment and Service configuration
After completing the packaging task of the Docker image in the CI/CD process, you need to deploy the image corresponding to the service to the Kubernetes cluster. Kubernetes provides a variety of resource objects that can be scheduled. First, let's take a brief look at some of the basic resources in Kubernetes.
Overview of basic Kubernetes resource objects
Pod
Pod is one of the most commonly used resource objects and the running entity of stateless applications. It is the smallest basic unit of scheduling in Kubernetes and contains one or more closely related containers; these containers share storage, network, and namespace, as well as the specification of how to run.
In Kubernetes, Pod is not persistent and will be destroyed and rebuilt because of node failure or network failure. Therefore, we generally do not create a separate Pod directly in Kubernetes, but provide services through multiple Pod.
ReplicaSet
ReplicaSet is the replica controller in Kubernetes; it controls the Pods it manages so that the number of Pod replicas stays at a preset count. ReplicaSets can be used independently, but in most scenarios they are used by Deployments as the mechanism that coordinates Pod creation, deletion, and updates.
Deployment
Deployment provides a declarative way to define Pods and ReplicaSets. By describing the target state in the Deployment, the Deployment controller changes the actual state of the Pods and ReplicaSet to the declared target state. Typical Deployment application scenarios include:
- Defining a Deployment to create Pods and a ReplicaSet
- Rolling upgrades and rollbacks of an application
- Scaling an application up and down
- Pausing and resuming a Deployment

Service
In Kubernetes, Pod can be created or destroyed at any time. Each Pod has its own IP, and these IP cannot persist, so Service is required to provide service discovery and load balancing capabilities.
Service is an abstraction that defines a policy for accessing a set of Pods; it determines the backend Pods through a Label Selector and thus provides an entry point for clients to access the service. Each Service has a ClusterIP within the cluster, through which the service can be accessed from inside the cluster. If the service needs to be exposed outside the cluster, NodePort or LoadBalancer can be used.
Deployment.yml configuration
The deployment.yml file is used to define Deployment. First familiarize yourself with the Deployment configuration format through a simple deployment.yml configuration file.
A deployment.yml can be divided into eight parts, described below (a minimal example is sketched after the list):
- apiVersion: the version of the configuration format used;
- kind: the resource type, here of course Deployment;
- metadata: the metadata of the resource, of which name is a required item; labels can also be specified to tag the resource;
- spec: the specification of the Deployment;
- spec.replicas: the number of Pod replicas;
- spec.template: the basic definition of the Pod, specified through spec.template.metadata and spec.template.spec;
- spec.template.metadata: the metadata of the Pod; at least one label must be defined so that the Service can identify the Pods to forward to, and labels are specified as key-value pairs;
- spec.template.spec: the specification of the Pod, defining the attributes of each container in the Pod, of which name and image are required.
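The following is an illustrative sketch of such a deployment.yml; it is not one of our actual files, and the nginx image and all values are placeholders. Note that with apps/v1beta2, a spec.selector matching the template labels is also required in addition to the eight parts above:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
        ports:
        - containerPort: 80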
In practical applications there are more flexible and personalized configurations. We formulated the relevant specifications in our Kubernetes deployment practice, configured them on top of the basic structure above, and obtained deployment.yml files that meet our actual needs.
In the migration practice of Kubernetes, we have standardized the configuration of Deployment in the following aspects:
File templating
First, our deployment.yml configuration file is a template file with variables, as shown below:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    app: __APP_NAME__
    group: __GROUP_NAME__
  name: __APP_NAME__
  namespace: __NAMESPACE__
Variables of the form __APP_NAME__, __GROUP_NAME__, and __NAMESPACE__ are replaced during the CI/CD process with values derived from each GitLab project. The purpose is to reuse the same deployment.yml file across projects, so it can be copied quickly during the Kubernetes migration and improve efficiency.
Service name
The names of the Services and Deployments running in Kubernetes are composed of the GitLab group name and project name, that is {{group_name}}-{{project_name}}, for example microservice-common. This name, denoted app_name, serves as the unique identifier of each service in Kubernetes. These values can be obtained from GitLab CI's built-in variables, so no extra per-project configuration is needed.
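As an illustration of how these values might be derived from the built-in variables (a sketch only; our pipelines may define them differently, and CI_PROJECT_NAMESPACE contains the full group path for nested groups):

variables:
  GROUP_NAME: ${CI_PROJECT_NAMESPACE}
  APP_NAME: ${CI_PROJECT_NAMESPACE}-${CI_PROJECT_NAME}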
The label used to identify the service is kept consistent with the Deployment name and is uniformly set to app: {{app_name}}.

Resource allocation
Node scheduling policy: which nodes a project's Pods run on is decided per project group, and the Pods of projects in the same group run on the same batch of nodes. Concretely, each node is labeled group:__GROUP_NAME__, and the following setting in the deployment.yml file selects the nodes the Pods run on:
...
spec:
  ...
  template:
    ...
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: group
                operator: In
                values:
                - __GROUP_NAME__
...
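For reference, the node label mentioned above can be applied with a command along these lines (node name and group are placeholders):

kubectl label nodes <node-name> group=<group-name>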
Resource requests and limits: for important online applications, limits and requests are set to the same values, so that when resources run short Kubernetes gives priority to keeping these Pods running. To improve resource utilization, for non-core applications that do not occupy resources for long periods, the Pod's request can be reduced appropriately; this allows the Pod to be scheduled onto nodes with less spare capacity, but such Pods will also be evicted or OOM-killed first when a node runs out of resources.
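A sketch of how this looks in the deployment.yml, with placeholder values (the actual numbers depend on the service):

...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - name: fpm
        resources:
          requests:
            cpu: 500m
            memory: 512Mi
          limits:
            cpu: 500m
            memory: 512Mi
...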
Health check (Liveness/Readiness) configuration
Liveness is used to detect whether the container is alive; if the check fails, the container is restarted. Readiness checks whether the container is serving normally and decides whether the Pod should be added to the Service's forwarding list to receive request traffic. Readiness plays an important role during upgrades: it prevents an abnormal new-version Pod from replacing an old-version Pod and leaving the whole application unable to serve.
Each service must provide an interface that can be accessed normally, and configure the corresponding monitoring and detection policy in the deployment.yml file.
...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - name: fpm
        livenessProbe:
          httpGet:
            path: /__PROJECT_NAME__
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /__PROJECT_NAME__
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 5
...

Upgrade policy configuration
For the upgrade strategy we chose RollingUpdate: during an upgrade, new-version Pods are created gradually and, as they start up normally, old-version Pods are gradually killed, until eventually all old Pods have been replaced by new ones.
We can also set maxSurge and maxUnavailable to control the maximum percentage of extra Pods that can be created during the upgrade and the maximum percentage of Pods that may be unavailable during the upgrade.
...
spec:
  ...
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
...

Log configuration
Logtail is used to collect container logs, and the logs of all services are reported to a Logstore of Alibaba Cloud Log Service. It is configured in the deployment.yml file as follows:
...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - name: fpm
        env:
        - name: aliyun_logs_vpgame
          value: stdout
        - name: aliyun_logs_vpgame_tags
          value: topic=__APP_NAME__
...

The environment variables specify the Logstore to upload to and the corresponding tag: the name part of aliyun_logs_vpgame identifies the Logstore (here vpgame), and the logs of different services are distinguished by the topic field.
Monitoring configuration
By adding annotations to Deployment, Prometheus can obtain the business monitoring data of each Pod. Examples of configurations are as follows:
...
spec:
  ...
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "80"
        prometheus.io/path: /{{project_name}}/metrics
...

Here prometheus.io/scrape: "true" indicates that the Pod can be scraped by Prometheus, prometheus.io/port is the port on which the monitoring data is exposed, and prometheus.io/path is the path from which the monitoring data is fetched.
Service.yml configuration
The service.yml file mainly describes Service.
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/alicloud-loadbalancer-address-type: intranet
  labels:
    app: __APP_NAME__
  name: __APP_NAME__
  namespace: __NAMESPACE__
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: __APP_NAME__
  type: LoadBalancer
The definition of a Service is much simpler than that of a Deployment: spec.ports specifies the port the Service exposes and the backend Pod port it forwards to, and spec.selector specifies, by label, the Pods to forward to.
In addition, we expose services through a load balancer, which is achieved by setting spec.type to LoadBalancer. By adding the metadata.annotations entry service.beta.kubernetes.io/alicloud-loadbalancer-address-type: intranet, an Alibaba Cloud intranet SLB is created together with the Service to serve as the entry point for the Service's request traffic.
In the Service's status, the EXTERNAL-IP field shows the IP of this SLB.
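For reference, this can be seen in the output of kubectl get service, where the EXTERNAL-IP column holds the SLB address (all values below are placeholders):

$ kubectl get service -n <namespace>
NAME                  TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)        AGE
microservice-common   LoadBalancer   172.21.5.12   192.168.10.20   80:31234/TCP   3d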
Summary
On the basis of the above work, we divided our services into several categories (currently essentially by language) and developed a unified CI/CD process for the services in each category through .gitlab-ci.yml. Services in the same category also share a Deployment template and a Service template. In this way we can migrate services to the Kubernetes environment quickly and efficiently.
Of course, this is only the first step in the migration practice, and the stability, performance, and automatic scaling of services in Kubernetes need to be further explored and studied.
"Alibaba Cloud's native Wechat official account (ID:Alicloudnative) focuses on micro-services, Serverless, containers, Service Mesh and other technology areas, focuses on cloud native popular technology trends, and large-scale cloud native landing practices, and is the technical official account that best understands cloud native developers."