
What is the CI/CD process based on GitLab?


What is the CI/CD process based on GitLab? This article addresses that question in detail with the corresponding analysis and solution, in the hope of helping more readers who want to solve this problem find a simple, workable approach.

Recently, I wanted to migrate the company's Jenkins-based automated builds to GitLab, mainly because our Jenkins had no permission control: everyone used the same account, so source code leaked across project groups. In addition, Jenkins occupied a dedicated server, which felt like a waste of resources.

Configure the GitLab Runner

First, get the Runner registration token from GitLab. There are three ways:

By project

Project -> Settings -> CI/CD -> Runners

By group

Group -> Settings -> CI/CD -> Runners

Globally

Admin Area -> Overview -> Runners

Run Runner

Download the corresponding helm package from gitee.

Modify the gitlabUrl, runnerRegistrationToken, and tags fields in the values file.

Execute the following commands:

$ helm package .
$ helm install --namespace gitlab --name gitlab-runner *.tgz

Description:

gitlabUrl is the address of your private GitLab server.

runnerRegistrationToken is the registration token obtained in step 1.

tags identifies the runner; a job or stage selects the corresponding runner based on its tags.
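For reference, a minimal sketch of what these fields look like in the values file; the exact layout depends on the chart version, and the values shown are placeholders rather than the author's configuration:

gitlabUrl: https://gitlab.example.com/        # your private GitLab server address
runnerRegistrationToken: "xxxxxxxx"           # token obtained in step 1
runners:
  tags: "k8s"                                 # jobs select this runner by tag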

Chart.yaml defines the basic information of the Helm chart. Note that the name field must be the same as the name of the current directory.

templates/pvc.yaml defines a dynamically provisioned PVC. The storageClassName used is Aliyun's alicloud-disk-efficiency (our K8s is Aliyun's container service). It automatically creates a corresponding PV along with a cloud disk instance. This can be used to share cache files between multiple jobs.

runners.kubernetes.volumes.pvc in the templates/configmap.yaml file references the PVC by name, i.e. the PVC name defined in the previous step. If you modify one, remember to keep the other in sync.
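As an illustration, a minimal sketch of such a PVC; only the storageClassName comes from the setup above, while the name and requested size are assumptions, and the name must match the runners.kubernetes.volumes.pvc reference in the configmap:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gitlab-runner-cache                    # hypothetical name; must match the configmap reference
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: alicloud-disk-efficiency   # Aliyun dynamic-provisioning class from the text
  resources:
    requests:
      storage: 20Gi                            # assumed size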

At this point, the gitlab-runner should be able to register with the corresponding GitLab server.
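To verify, a couple of optional sanity checks (helm 2 syntax, matching the --name flag used above):

$ helm status gitlab-runner        # release status
$ kubectl -n gitlab get pods       # the runner pod should be Running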

Configure environment variables

Private information used during the build should be configured as environment variables on the GitLab server rather than committed to the repository. Here we mainly configure three variables:

REGISTRY_PASSWORD: password of the Harbor private registry

REGISTRY_USERNAME: user name of the Harbor private registry

KUBE_CONFIG: the account, certificate, and other information kubectl needs. The string can be obtained with the following command:

$ echo $(cat ~/.kube/config | base64) | tr -d " "
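A quick optional check that the string round-trips, assuming you have exported the command's output into a local KUBE_CONFIG variable (the decoding side mirrors what the deploy job does later):

$ echo $KUBE_CONFIG | base64 -d | diff - ~/.kube/config    # no output means the config survived intact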

The environment variables here are the CI/CD variables configured on the group's settings page.

The following containers are configured by hand; it would be better to write the steps into a Dockerfile. Doing it manually here is just to better illustrate how the base images are produced.

Configure the Node container

The Node container is mainly used to compile front-end projects, generally using yarn to download dependencies and npm to compile and package, so the Node container needs to contain these two commands.

$ docker pull node                                           # pull the latest node image
$ docker run -it --rm --name node node /bin/sh               # run the node image and enter it
$ yarn config set registry https://registry.npm.taobao.org   # configure yarn to use the Taobao registry
$ yarn config list                                           # view the configuration
--------
info yarn config
{ 'version-tag-prefix': 'v',
  'version-git-tag': true,
  'version-commit-hooks': true,
  'version-git-sign': false,
  'version-git-message': 'v%s',
  'init-version': '1.0.0',
  'init-license': 'MIT',
  'save-prefix': '^',
  'bin-links': true,
  'ignore-scripts': false,
  'ignore-optional': false,
  registry: 'https://registry.yarnpkg.com',
  'strict-ssl': true,
  'user-agent': 'yarn/1.16.0 npm/? node/v12.5.0 linux x64',
  version: '1.16.0' }
info npm config
{ version: '1.16.0' }
Done in 0.07s.
--------
$ docker commit node harbor_url/tools/node-taobao            # in another window, commit the modified node image
$ docker push harbor_url/tools/node-taobao                   # push the image to the Harbor private registry

Configure the Java container

The Java container is mainly used to compile Java projects, mainly using the JDK and Maven.

$ docker pull alpine                                         # pull the latest alpine image
$ docker run -it --rm --name java alpine                     # run and enter the image
$ mkdir -p /opt/java                                         # create the java directory
$ mkdir -p /opt/maven                                        # create the maven directory
$ docker cp jdk-8u211-linux-x64.tar.gz java:/opt/java/       # copy the JDK from the host into the container
$ docker cp apache-maven-3.6.1-bin.tar.gz java:/opt/maven/   # copy Maven from the host into the container
/opt/maven $ tar -xzvf apache-maven-3.6.1-bin.tar.gz         # extract Maven inside the container
/opt/maven $ rm -rf apache-maven-3.6.1-bin.tar.gz            # remove the Maven package
/opt/java $ tar -xzvf jdk-8u211-linux-x64.tar.gz             # extract the JDK inside the container
/opt/java $ rm -rf jdk-8u211-linux-x64.tar.gz                # delete the JDK package
$ vi /etc/profile                                            # configure the environment variables
--------
export JAVA_HOME=/opt/java/jdk1.8.0_211
export M2_HOME=/opt/maven/apache-maven-3.6.1
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$JAVA_HOME/bin:$M2_HOME/bin
--------
# Java depends on the GNU Standard C library (glibc), while Alpine is based on musl libc
# (a mini libc), so the glibc library must be installed.
# Reference: https://blog.csdn.net/Dannyvon/article/details/80092834
$ echo http://mirrors.ustc.edu.cn/alpine/v3.10/main > /etc/apk/repositories
$ echo http://mirrors.ustc.edu.cn/alpine/v3.10/community >> /etc/apk/repositories
$ apk update
$ apk --no-cache add ca-certificates
$ wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.27-r0/glibc-2.27-r0.apk   # glibc 2.27 package; install per the referenced article
$ source /etc/profile                                        # make the environment variables take effect
$ java -version                                              # view the Java version
--------
java version "1.8.0_211"
Java(TM) SE Runtime Environment (build 1.8.0_211-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.211-b12, mixed mode)
--------
$ mvn -v                                                     # view the Maven version
--------
Apache Maven 3.6.1 (d66c9c0b3152b2e69ee9bac180bb8fcc8e6af555; 2019-04-04T19:00:29Z)
Maven home: /opt/maven/apache-maven-3.6.1
Java version: 1.8.0_211, vendor: Oracle Corporation, runtime: /opt/java/jdk1.8.0_211/jre
Default locale: en_US, platform encoding: ANSI_X3.4-1968
OS name: "linux", version: "4.9.125-linuxkit", arch: "amd64", family: "unix"
--------
$ docker commit java harbor_url/tools/java:1.8               # in another window, commit the image
$ docker push harbor_url/tools/java:1.8                      # push the image to the Harbor private registry

Note that the source /etc/profile command only takes effect in the current session. After committing the image, it has to be executed again whenever a new container is started from the image. How to make it take effect permanently I have not figured out; if any expert knows, please let me know.
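One answer, for what it is worth: building the image from a Dockerfile instead of docker commit sidesteps the problem, because ENV entries are baked into the image metadata and apply to every container without sourcing anything. A minimal sketch, assuming the JDK and Maven archives have already been extracted on the build host (the glibc installation described above is omitted here but still required):

FROM alpine
# paths match the manual steps above; the pre-extracted directories are assumptions
COPY jdk1.8.0_211 /opt/java/jdk1.8.0_211
COPY apache-maven-3.6.1 /opt/maven/apache-maven-3.6.1
# ENV persists in the image, so no "source /etc/profile" is needed at container start
ENV JAVA_HOME=/opt/java/jdk1.8.0_211
ENV M2_HOME=/opt/maven/apache-maven-3.6.1
ENV PATH=$PATH:$JAVA_HOME/bin:$M2_HOME/bin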

Configure the Curl&Git container

This container simply reflects our company's situation: our front-end packaging does not actually need a Node container. The front-end process is to compile and package locally, producing the files in the dist directory, then compress them and upload the archive to an OSS bucket on the intranet. The script executed on Jenkins then just downloads and decompresses the archive from the private-network OSS and builds the business image according to the Dockerfile; that step needs the curl and git commands. The whole arrangement solves two problems: the online packaging environment may be inconsistent with local development, and front-end packaging spends a long time downloading dependencies and compiling. With this process, a front-end build is very short and can be done in a few seconds.
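The article does not show that download script, but the flow it describes looks roughly like the following sketch; the OSS address, path layout, and flag handling are placeholders, not the company's actual script:

#!/bin/sh
# hypothetical build.sh: fetch the locally built front-end package from intranet OSS and
# unpack it, so the subsequent docker build only copies files instead of running yarn/npm
set -e
ENV_NAME="$2"                                      # invoked as: sh ./build.sh -b development
OSS_URL="http://oss-internal.example.com/frontend" # placeholder intranet OSS address
curl -fSL "$OSS_URL/$ENV_NAME/dist.tar.gz" -o dist.tar.gz
tar -xzf dist.tar.gz                               # produces the dist/ directory used by the Dockerfile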

$ docker run -it --rm --name curl-git alpine
$ echo http://mirrors.ustc.edu.cn/alpine/v3.10/main > /etc/apk/repositories
$ echo http://mirrors.ustc.edu.cn/alpine/v3.10/community >> /etc/apk/repositories
$ apk update
$ apk add curl
$ apk add git
$ curl -V
--------
curl 7.61.1 (x86_64-alpine-linux-musl) libcurl/7.61.1 LibreSSL/2.6.5 zlib/1.2.11 libssh2/1.8.2
Release-Date: 2018-09-05
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IPv6 Largefile NTLM NTLM_WB SSL libz UnixSockets HTTPS-proxy
--------
$ git version
--------
git version 2.15.3
--------
$ docker commit curl-git harbor_url/tools/curl-git     # in another window, commit the image
$ docker push harbor_url/tools/curl-git                # push the image to the Harbor private registry

Configure the kubectl container

The kubectl container is used to deploy applications to the K8s cluster, which requires the config information. This information is private, however, and is not baked directly into the container: the container only carries the kubectl client, and the concrete configuration file is written to ~/.kube/config at run time.

$ docker run -it --rm --name kubectl alpine      # run and enter the container
$ docker cp kubectl kubectl:/usr/bin/            # copy the kubectl client into the container; it can be taken from a K8s master node, or the matching version downloaded from github
$ chmod +x /usr/bin/kubectl                      # make it executable (inside the container)
$ kubectl version                                # view the version; since no certificate is configured, the server information cannot be printed
--------
Client Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.6-aliyun.1", GitCommit:"4304b26", GitTreeState:"", BuildDate:"2019-04-08T08:50:29Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
--------
$ docker commit kubectl harbor_url/tools/kubectl:1.12.6   # in another window, commit the image
$ docker push harbor_url/tools/kubectl:1.12.6             # push the image to the Harbor private registry

Configure .gitlab-ci.yml

# if a stage does not specify an image, this default image is used
image: $HARBOR_URL/tools/alpine

stages:
  - build
  - deploy

# global variables
variables:
  # image name; the tag uses the pipeline id by default
  IMAGE_NAME:
  # application name, i.e. deployment_name, container_name, service_name, etc.
  APP_NAME:
  # application port; generally consistent with the service port
  APP_PORT: 80
  # namespace; replaced with the real namespace depending on the packaging command
  NAMESPACE: dev
  # Harbor private registry address
  HARBOR_URL:

docker_build_job:
  # image containing the curl and git tools
  image: $HARBOR_URL/tools/curl-git
  stage: build
  # only a push or merge event on the dev branch triggers the job
  only:
    refs:
      - dev
  tags:
    -
  # dind mode: connect to a service container that starts the docker daemon
  services:
    - $HARBOR_URL/tools/docker-dind:18.09.7
  variables:
    # docker daemon startup parameters
    DOCKER_DRIVER: overlay
    DOCKER_HOST: tcp://localhost:2375
  # commands executed by this stage:
  # 1. run the build script build.sh
  # 2. log in to the Harbor private registry
  # 3. build the image
  # 4. push the image to Harbor
  script:
    # build.sh builds according to the Dockerfile and configures different environments
    # via its parameters; the main flow is to download the compressed development package,
    # decompress it, then replace the variables
    - sh ./build.sh -b development
    - docker login -u $REGISTRY_USERNAME -p $REGISTRY_PASSWORD $HARBOR_URL
    - docker build -t $IMAGE_NAME:$CI_PIPELINE_ID .
    - docker push $IMAGE_NAME:$CI_PIPELINE_ID

k8s_deploy_job:
  # image containing the kubectl tool
  image: $HARBOR_URL/tools/kubectl:1.12.6
  stage: deploy
  only:
    refs:
      - dev
  tags:
    -
  # commands executed by this stage:
  # 1. create the ~/.kube/config file
  # 2. write the K8s certificate information into ~/.kube/config
  # 3. replace each variable in the deployment.yaml file
  # 4. deploy to the k8s cluster
  script:
    - mkdir -p ~/.kube
    - touch ~/.kube/config
    # the KUBE_CONFIG environment variable is configured on the gitlab server
    - echo $KUBE_CONFIG | base64 -d > ~/.kube/config
    # replace the placeholders in deployment.yaml with the corresponding variables
    - sed -i "s?IMAGE_TAG?$CI_PIPELINE_ID?g" deployment.yaml
    - sed -i "s?IMAGE_NAME?$IMAGE_NAME?g" deployment.yaml
    - sed -i "s?APP_NAME?$APP_NAME?g" deployment.yaml
    - sed -i "s?NAMESPACE?$NAMESPACE?g" deployment.yaml
    - sed -i "s?APP_PORT?$APP_PORT?g" deployment.yaml
    - kubectl apply -f deployment.yaml

The .gitlab-ci.yml files based on the node and java images have less configuration, and they are much the same, so they are not repeated in full. The node image usually produces a dist directory after packaging; at that point you can add a step that compresses the dist directory and then define artifacts, so that the archive is uploaded to gitlab-server after the current stage finishes. The next stage then downloads the archive automatically, so we can extract it and build according to the Dockerfile, again using dind mode. The same principle applies to the java image: mvn compiles and packages a jar or war, passes it to the next stage, and the image is built there (a sketch of such an artifact-consuming build job follows the job definitions below). However, if you use Maven's docker plugin, there is no need to split this into two stages: just add a services definition to the java job, and you can run the mvn docker:build docker:push command directly. Note that with Maven's docker plugin the image is defined in the pom.xml file, which must be kept in sync with the image name defined in the external .gitlab-ci.yml file.

The node image part defines:

node_build_job:
  image: $HARBOR_URL/tools/node-taobao
  stage: package
  only:
    refs:
      - dev
  tags:
    -
  script:
    # download dependencies
    - yarn
    # compile and package
    - npm run build -- qa
    # compress the dist directory
    - tar -czvf dist.tar.gz dist/
  # define artifacts; dist.tar.gz is uploaded to gitlab-server
  artifacts:
    paths:
      - dist.tar.gz

The java image part defines:

java_build_job:
  image: $HARBOR_URL/tools/java:1.8
  stage: package
  only:
    refs:
      - dev
  # services definition using dind mode: the docker container is linked to the java
  # image through the link mechanism, so the java job can use the docker command
  services:
    - $HARBOR_URL/tools/docker-dind:18.09.7
  variables:
    DOCKER_DRIVER: overlay
    DOCKER_HOST: tcp://localhost:2375
  tags:
    -
  script:
    # re-source so that the JAVA_HOME and M2_HOME environment variables take effect
    - source /etc/profile
    - mvn -P$NAMESPACE clean package
    - cd
    - mvn -P$NAMESPACE docker:build docker:push
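To make the two-stage variant concrete, here is a hedged sketch of a follow-on job that consumes the dist.tar.gz artifact; the names are assumed from the jobs above, and it presumes a package stage is added before build in the global stages list. Artifacts from earlier stages are downloaded automatically; dependencies merely narrows the source:

docker_build_job:
  image: $HARBOR_URL/tools/curl-git
  stage: build
  dependencies:
    - node_build_job               # fetch only this job's artifact (dist.tar.gz)
  services:
    - $HARBOR_URL/tools/docker-dind:18.09.7
  variables:
    DOCKER_DRIVER: overlay
    DOCKER_HOST: tcp://localhost:2375
  script:
    - tar -xzf dist.tar.gz         # unpack so the Dockerfile can COPY dist/
    - docker build -t $IMAGE_NAME:$CI_PIPELINE_ID .
    - docker push $IMAGE_NAME:$CI_PIPELINE_ID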

Configure deployment.yaml

The deployment.yaml file contains two parts: the K8s Deployment object and the Service object.

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    app: APP_NAME
  name: APP_NAME
  namespace: NAMESPACE
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: APP_NAME
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: APP_NAME
    spec:
      affinity: {}
      containers:
        - image: 'IMAGE_NAME:IMAGE_TAG'
          imagePullPolicy: Always
          name: APP_NAME
          # a front-end project uses an nginx base image; generally few resources are needed
          resources:
            limits:
              cpu: '1'
              memory: 64Mi
            requests:
              cpu: '0'
              memory: 32Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: APP_NAME
  namespace: NAMESPACE
spec:
  ports:
    - port: APP_PORT
      protocol: TCP
      targetPort: APP_PORT
  selector:
    app: APP_NAME
  sessionAffinity: None
  type: ClusterIP
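Once a pipeline has run, a quick way to verify the result from any machine with cluster access (the namespace and label are whatever the sed substitutions produced; the dev defaults are shown as an assumption):

$ kubectl -n dev get deployment,service -l app=my-app    # "my-app" stands in for APP_NAME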

At this point, a complete GitLab-based CI/CD process is up and running. Because it runs on K8s, getting the whole build working was still bumpy. For example, if you use the java image but need to run the docker command, the services definition is confusing until you read the documentation. Likewise, commands such as yarn and mvn install download dependency packages from the public network, and caching those packages so the next build can use them directly involves the PV and PVC concepts of K8s, as sketched below.
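One way to wire that caching up, on top of the PVC-backed volume described earlier, is GitLab CI's cache keyword; the key and paths below are assumptions for illustration, and the cache directory must live on the PVC-backed volume to survive across builds:

node_build_job:
  cache:
    key: "$CI_PROJECT_NAME-node"   # one cache per project
    paths:
      - node_modules/              # reused by yarn on the next build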

In addition, some content in .gitlab-ci.yml is still hard-coded, such as the namespace variables; another script is needed to replace the corresponding variables according to the packaging command. This still needs to be optimized.

Running through this whole process, it feels like I have learned something new.

The answer to the question of what the GitLab-based CI/CD process looks like is shared above. I hope the content is of some help to you.
