This article introduces how to deploy OpenShift applications with the Service Catalog and the oc command, covering basic deployment concepts and processes, storage expansion, cleaning up OpenShift objects, and more. Using the Spring Boot and Angular projects from "Angular 6 integrating Spring Boot 2, Spring Security, JWT and CORS" as an example, it explains the S2I and Pipeline deployment methods in detail.
OKD version: 3.11. Spring Boot project source code: heroes-api. Angular project source code: heroes-web.
First look at OpenShift deployment
Service Catalog
The initial installation of OpenShift includes some sample apps for learning and experimentation, among them Apache HTTP Server and Apache HTTP Server (httpd). What is the difference between them? Click into each to find out:
Apache HTTP Server is deployed using a template (the template name is httpd-example).
Apache HTTP Server (httpd) is deployed using a builder image (the image stream name is httpd).
The Service Catalog samples therefore use both the template and the builder image (image + source) deployment methods. Enter the openshift project in the Application Console to view the templates and images.
To view templates, click Resources -> Other Resources -> Template:
To view image streams, click Builds -> Images:
Other deployment methods
In the Service Catalog, besides selecting an item directly from the catalog, there are three other ways:
Deploy Image deploys an application directly from an image or image stream:
Import YAML / JSON creates resources from YAML or JSON, such as an image stream or template:
Select from Project deploys an application from a template in the specified project:
Deploy Apache HTTP Server
The two deployment methods for Apache HTTP Server are essentially the same: the build strategy is S2I (Source-to-Image), and the Docker image built by S2I is used to deploy the application. Both use the Apache HTTP Server (httpd) S2I Sample Application as the source and the Apache HTTP Server Container Image as the Docker base image (builder image). The httpd-example template defines the overall deployment process and parameterizes it.
The following is the BuildConfig section of the httpd-example template:
- apiVersion: v1
  kind: BuildConfig
  metadata:
    annotations:
      description: Defines how to build the application
      template.alpha.openshift.io/wait-for-ready: 'true'
    name: '${NAME}'
  spec:
    output:
      to:
        kind: ImageStreamTag
        name: '${NAME}:latest'
    source:
      contextDir: '${CONTEXT_DIR}'
      git:
        ref: '${SOURCE_REPOSITORY_REF}'
        uri: '${SOURCE_REPOSITORY_URL}'
      type: Git
    strategy:
      sourceStrategy:
        from:
          kind: ImageStreamTag
          name: 'httpd:2.4'
          namespace: '${NAMESPACE}'
      type: Source
    triggers:
      - type: ImageChange
      - type: ConfigChange
      - github:
          secret: '${GITHUB_WEBHOOK_SECRET}'
        type: GitHub
      - generic:
          secret: '${GENERIC_WEBHOOK_SECRET}'
        type: Generic
Parameter definition and default value:
parameters:
  - description: The name assigned to all of the frontend objects defined in this template.
    displayName: Name
    name: NAME
    required: true
    value: httpd-example
  - description: The OpenShift Namespace where the ImageStream resides.
    displayName: Namespace
    name: NAMESPACE
    required: true
    value: openshift
  - description: Maximum amount of memory the container can use.
    displayName: Memory Limit
    name: MEMORY_LIMIT
    required: true
    value: 512Mi
  - description: The URL of the repository with your application source code.
    displayName: Git Repository URL
    name: SOURCE_REPOSITORY_URL
    required: true
    value: 'https://github.com/openshift/httpd-ex.git'
  - description: >-
      Set this to a branch name, tag or other ref of your repository if you are not using the default branch.
    displayName: Git Reference
    name: SOURCE_REPOSITORY_REF
  - description: >-
      Set this to the relative path to your project if it is not in the root of your repository.
    displayName: Context Directory
    name: CONTEXT_DIR
  - description: >-
      The exposed hostname that will route to the httpd service, if left blank a value will be defaulted.
    displayName: Application Hostname
    name: APPLICATION_DOMAIN
...
Let's first deploy Apache using the builder image to understand the overall deployment process:
Click "advanced options" to set the git branch, context dir, secret, custom Route, Build Configuration, Deployment Configuration, Resource Limits and so on. After completing the basic fields, click Create to create the app, then enter the Project Overview from the success page:
Service, Route, Build, Deployment and Image objects are created automatically during deployment. Enter Applications and Builds in the Application Console to view the details. Three pods are created: httpd-1-build, httpd-1-deploy and httpd-1-xxxxx; httpd-1-deploy is deleted automatically once the deployment finishes.
After the deployment succeeds, access Apache Server at the hostname defined by the Route; the page looks like this:
The basic concepts involved are explained below.
Basic concept
Service (Kubernetes Service)
An internal load balancer used on the OpenShift internal network; it can be accessed by its Service ClusterIP or hostname.
apiVersion: v1
kind: Service
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftWebConsole
  creationTimestamp: '2019-03-26T02:12:50Z'
  labels:
    app: httpd
  name: httpd
  namespace: my-project
  resourceVersion: '3004428'
  selfLink: /api/v1/namespaces/my-project/services/httpd
  uid: a81c759f-4f6c-11e9-9a7d-02fa2ffc40e6
spec:
  clusterIP: 172.30.225.159
  ports:
    - name: 8080-tcp
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    deploymentconfig: httpd
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
The selector defines the labels used to find the containers (pods) to load balance across.
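As a quick check of how the selector resolves to endpoints, a minimal sketch (assuming the httpd service from the example above; the curl would be run from another pod inside the cluster):

$ oc get svc httpd -n my-project
$ oc get endpoints httpd -n my-project     # pod IPs selected by deploymentconfig=httpd
# from any pod in the cluster, the service is also reachable by its DNS name:
$ curl http://httpd.my-project.svc:8080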
Route
Defines a hostname that exposes the Service so that external clients can access it. The default hostname is [app-name]-[project-name].[openshift_master_default_subdomain].
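A minimal sketch of exposing the service and checking the generated hostname (assuming the apps.itrunner.org subdomain used elsewhere in this article):

$ oc expose service httpd -n my-project
$ oc get route httpd -n my-project
# HOST/PORT should show something like httpd-my-project.apps.itrunner.org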
Build
Builds the application image: S2I builds the app image from the builder image and the source code. By default, a rebuild is triggered automatically when the builder image or the build configuration changes.
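Besides the automatic triggers, a build can also be started and followed by hand; a hedged sketch:

$ oc start-build httpd -n my-project      # start a new build from the BuildConfig
$ oc logs -f bc/httpd -n my-project       # follow the latest build log
$ oc get builds -n my-project             # list builds and their status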
Looking at the YAML of Builds -> httpd -> #1, you can see that the build process is FetchInputs -> Assemble -> CommitContainer -> PushImage:
...
status:
  completionTimestamp: '2019-03-26T02:13:14Z'
  config:
    kind: BuildConfig
    name: httpd
    namespace: my-project
  duration: 4000000000
  output:
    to:
      imageDigest: 'sha256:5c1f20f20baaa796f4518d11ded13c6fac33e7a377774cfec77aa1e6e6a7cbb2'
  outputDockerImageReference: 'docker-registry.default.svc:5000/my-project/httpd:latest'
  phase: Complete
  stages:
    - durationMilliseconds: 3434
      name: FetchInputs
      startTime: '2019-03-26T02:12:56Z'
      steps:
        - durationMilliseconds: 3434
          name: FetchGitSource
          startTime: '2019-03-26T02:12:56Z'
    - durationMilliseconds: 3426
      name: Assemble
      startTime: '2019-03-26T02:13:10Z'
      steps:
        - durationMilliseconds: 3426
          name: AssembleBuildScripts
          startTime: '2019-03-26T02:13:10Z'
    - durationMilliseconds: 2127
      name: CommitContainer
      startTime: '2019-03-26T02:...'
    - durationMilliseconds: 16143
      name: PushImage
      startTime: '2019-03-26T02:...'
  startTimestamp: '2019-03-26T02:...'
Build Strategy
OpenShift supports four kinds of Build Strategy: Source-to-Image, Docker, Pipeline and Custom.
strategy:
  sourceStrategy:
    from:
      kind: "ImageStreamTag"
      name: "builder-image:latest"

strategy:
  dockerStrategy:
    from:
      kind: "ImageStreamTag"
      name: "debian:latest"

spec:
  source:
    git:
      uri: "https://github.com/openshift/ruby-hello-world"
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfilePath: some/repo/dir/filename

strategy:
  customStrategy:
    from:
      kind: "DockerImage"
      name: "openshift/sti-image-builder"
Deployment
Deploys the application image. A Deployment consists of three objects: DeploymentConfig, ReplicationController, and Pod. The DeploymentConfig contains the deployment strategy, image configuration, environment variables, and so on; the ReplicationController contains replication-related information. A redeployment is triggered automatically when the app image or the deployment configuration changes.
Enter Deployments -> httpd -> #1 and edit Replicas, or adjust the number of pods, to add or delete pods:
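The same adjustment can be made from the CLI; a minimal sketch (assuming the default deploymentconfig label that OpenShift puts on the pods):

$ oc scale dc/httpd --replicas=3 -n my-project
$ oc get pods -l deploymentconfig=httpd -n my-project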
Deployment Strategy
The deployment strategy defines how the application is redeployed when it is modified or upgraded. The deployment configuration (DeploymentConfig) supports three strategies: Rolling, Recreate, and Custom. Blue/green and A/B deployments can be implemented by modifying the Route.
Rolling (default): scales down the old pods only after the new pods reach Ready status, so old and new versions may run side by side for a while.
Recreate: terminates all old pods (scales the previous deployment down to zero) before deploying the new pods.
Custom: user-defined deployment behavior.
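To switch a DeploymentConfig between these strategies from the CLI, a hedged sketch (the next rollout then uses the new strategy):

$ oc patch dc/httpd -p '{"spec":{"strategy":{"type":"Recreate"}}}' -n my-project
# switch back to the default:
$ oc patch dc/httpd -p '{"spec":{"strategy":{"type":"Rolling"}}}' -n my-project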
ImageStream
The way OpenShift manages container images. An image stream defines the mapping between the dockerImageReference, the image stream tags and each version of the docker image. An image stream is created automatically when a build succeeds.
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftWebConsole
  creationTimestamp: '2019-03-26T02:12:50Z'
  generation: 1
  labels:
    app: httpd
  name: httpd
  namespace: my-project
  resourceVersion: '3004571'
  selfLink: /apis/image.openshift.io/v1/namespaces/my-project/imagestreams/httpd
  uid: a81b14bf-4f6c-11e9-9a7d-02fa2ffc40e6
spec:
  lookupPolicy:
    local: false
status:
  dockerImageRepository: 'docker-registry.default.svc:5000/my-project/httpd'
  tags:
    - items:
        - created: '2019-03-26T02:13:...'
          dockerImageReference: >-
            docker-registry.default.svc:5000/my-project/httpd@sha256:5c1f20f20baaa796f4518d11ded13c6fac33e7a377774cfec77aa1e6e6a7cbb2
          generation: 1
          image: >-
            sha256:5c1f20f20baaa796f4518d11ded13c6fac33e7a377774cfec77aa1e6e6a7cbb2
      tag: latest
Template
Define the overall deployment process and implement parameterization, including Service, Route, ImageStream, BuildConfig, DeploymentConfig, parameters and so on.
With the basic concepts above, the httpd-example template is easy to understand, and you can deploy your own test with it; this will not be repeated here.
OC Tool
Deploy applications using oc new-app
Before continuing, delete the previously created test project or create a new project.
$ oc delete project my-project
$ oc new-project my-project
In the Service Catalog section, we mentioned three ways to deploy applications: template, builder image (image+source) and image. The corresponding commands are as follows:
$ oc new-app httpd-example -p APPLICATION_DOMAIN=httpd-example.apps.itrunner.org
$ oc new-app openshift/httpd:2.4~https://github.com/openshift/httpd-ex.git --name=httpd-ex
$ oc new-app my-project/httpd-ex --name=httpd
Description:
The syntax of image + source is [image]~[source]. The image used in the third method is the one generated by the second method. With the latter two methods the Route is not created automatically and needs to be created manually:
$ oc expose service httpd-ex --name httpd-ex --hostname=httpd-ex.apps.itrunner.org
$ oc expose service httpd --name httpd --hostname=httpd.apps.itrunner.org
To create a resource from JSON/YAML:
$ oc create -f <file_name> -n <project_name>
With the oc command you can also create applications directly from local or remote source code:
$ oc new-app /path/to/source/code
$ oc new-app https://github.com/sclorg/cakephp-ex
You can specify subdirectories:
$ oc new-app https://github.com/sclorg/s2i-ruby-container.git --context-dir=2.0/test/puma-test-app
You can specify a branch:
$ oc new-app https://github.com/openshift/ruby-hello-world.git#beta4
OpenShift automatically inspects the code root or the specified directory: it uses the Docker build strategy if a Dockerfile exists, the Pipeline build strategy if a Jenkinsfile exists, and otherwise the Source build strategy (S2I).
The following example uses the Source build policy:
$ oc new-app https://github.com/sclorg/cakephp-ex
When using the Source build strategy, new-app determines the language builder by inspecting the files in the root or specified directory:
Language: Files
jee: pom.xml
nodejs: app.json, package.json
perl: cpanfile, index.pl
php: composer.json, index.php
python: requirements.txt, setup.py
ruby: Gemfile, Rakefile, config.ru
scala: build.sbt
golang: Godeps, main.go
It then looks for an image that matches the detected language on the OpenShift server or Docker Hub registry.
You can also specify a policy, as follows:
$ oc new-app /home/user/code/myapp --strategy=docker
View template and image stream
View all templates and image streams:
$ oc new-app --list
View templates or image streams separately:
$ oc get templates -n openshift
$ oc get imagestreams -n openshift
View httpd-example template details:
$ oc describe template httpd-example -n openshift
View httpd image stream details:
$ oc describe imagestream httpd -n openshift
View the YAML definition of httpd-example template:
$ oc new-app --search --template=httpd-example --output=yaml
Look for "httpd" in all template, image stream, docker image:
$oc new-app-- search httpd talk about Route again
In the previous examples the Route uses the http protocol. How can https be enabled?
In the Web Console, edit the route and enable Secure route:
There are three types of TLS termination: edge, passthrough and reencrypt.
edge: clients access the route over https; traffic from the router to the internal network is unencrypted; if no certificate is configured, the router's default certificate is used.
reencrypt: the whole path is encrypted; the router terminates TLS and re-encrypts traffic to the backend.
passthrough: encrypted traffic is sent straight to the destination; the router does not provide TLS termination.
Use the oc command to create a route:
$ oc create route edge httpd-ex --service httpd-ex --hostname httpd-ex.apps.itrunner.org --path / --insecure-policy Redirect -n my-project
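The other two termination types can be created the same way; a hedged sketch with hypothetical service, route and certificate names:

$ oc create route passthrough secure-app-pass --service secure-app --hostname secure-app.apps.itrunner.org
$ oc create route reencrypt secure-app-reenc --service secure-app --hostname secure-app.apps.itrunner.org --dest-ca-cert=ca.crt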
S2I
Source-to-Image (S2I) is a framework that makes it easy to produce a new docker image, taking the application source code as input.
When building an image with S2I, a large number of complex operations can be performed during the assemble step, yet they all produce only a single new layer, which speeds up the process. With S2I, software engineers do not have to care about producing the docker image itself; they only write scripts such as assemble and run. It also prevents engineers from performing inappropriate operations, such as arbitrary yum installs, while the image is being built. In short, S2I simplifies producing docker images.
S2I requires the following three basic elements:
builder image - the base image from which S2I builds the new image
sources - the application source code
S2I scripts
During the build, S2I first takes the sources and scripts, packages them into a tar file and streams it into the builder image. Before executing the assemble script, S2I unpacks the tar file into the directory specified by io.openshift.s2i.destination; the default is /tmp (unpacked into the /tmp/src and /tmp/scripts directories respectively).
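The same assemble/run flow can also be exercised locally with the s2i command-line tool before wiring it into OpenShift; a hedged sketch reusing the httpd example (assumes the s2i binary from the source-to-image project is installed and that centos/httpd-24-centos7 is the local equivalent of the httpd builder image):

$ s2i build https://github.com/openshift/httpd-ex.git centos/httpd-24-centos7 httpd-ex-test
$ docker run -p 8080:8080 httpd-ex-test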
S2I Scripts
S2I scripts can be located in the following places, in order of priority from high to low:
1. A location specified in the BuildConfig:
strategy:
  sourceStrategy:
    from:
      kind: "ImageStreamTag"
      name: "builder-image:latest"
    scripts: "http://somehost.com/scripts_directory"
2. The .s2i/bin directory of the application source code
3. The location defined in the Dockerfile of the builder image (this is what the sample Apache HTTP Server uses):
LABEL io.openshift.s2i.scripts-url="image:///usr/libexec/s2i"
Both io.openshift.s2i.scripts-url and the BuildConfig scripts field accept locations in the following forms:
image:///path_to_scripts_dir - absolute path inside the image
file:///path_to_scripts_dir - absolute or relative path on the host
http(s)://path_to_scripts_dir - URL
S2I Scripts:
assemble (required): fetch the source code, compile and package it. In an incremental build, if save-artifacts is defined, the saved artifacts are restored first.
run (required): run the application.
save-artifacts (optional): collect dependencies (for example .m2) to speed up subsequent builds.
usage (optional): print help on how to use the image.
test/run (optional): check whether the image works properly.
Example assemble script
#!/bin/bash
# restore build artifacts
if [ "$(ls /tmp/artifacts/ 2>/dev/null)" ]; then
    mv /tmp/artifacts/* $HOME/.
fi
# move the application source
mv /tmp/s2i/src $HOME/src
# build application artifacts
pushd ${HOME}
make all
# install the artifacts
make install
popd
Example run script
#!/bin/bash
# run the application
/opt/application/run.sh
Example save-artifacts script
#!/bin/bash
# Besides the tar command, all other output to standard out must
# be suppressed. Otherwise, the tar stream will be corrupted.
pushd ${HOME} > /dev/null
if [ -d deps ]; then
    # all deps contents to tar stream
    tar cf - deps
fi
popd > /dev/null
Note: save-artifacts can only have tar stream output and cannot contain any other output.
Example usage script
#!/bin/bash
# inform the user how to use the image
cat <<EOF
...
EOF

uid_entrypoint script (used as the ENTRYPOINT of the builder image so an arbitrary UID gets an /etc/passwd entry):

#!/bin/bash
if ! whoami &> /dev/null; then
    if [ -w /etc/passwd ]; then
        echo "${USER_NAME:-default}:x:$(id -u):0:${USER_NAME:-default} user:${HOME}:/sbin/nologin" >> /etc/passwd
    fi
fi
exec "$@"
Compile builder image and upload it to Registry
# docker build -t heroes-api-centos7:v1.0.0 .
# docker tag heroes-api-centos7:v1.0.0 registry.itrunner.org/heroes-api-centos7:v1.0.0
# docker push registry.itrunner.org/heroes-api-centos7:v1.0.0
Import image
$ oc import-image heroes-api-centos7:v1.0.0 -n heroes --confirm --insecure --from='registry.itrunner.org/heroes-api-centos7:v1.0.0'
The -n heroes parameter in the import command specifies that the image is imported into the heroes project. After a successful import you can see the image under Builds -> Images of the project, but it is still not visible in the Service Catalog. You need to modify the Image Stream definition and add annotations:
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  annotations:
    openshift.io/image.dockerRepositoryCheck: '2019-03-27T08:...'
  selfLink: /apis/image.openshift.io/v1/namespaces/heroes/imagestreams/heroes-api-centos7
  uid: e280929e-506b-11e9-a2ec-0288bf58ecc2
spec:
  lookupPolicy:
    local: false
  tags:
    - annotations:
        description: build heroes-api on CentOS 7
        iconClass: icon-spring
        openshift.io/display-name: Heroes API
        openshift.io/provider-display-name: itrunner
        sampleRepo: 'https://github.com/sunjc/heroes-api.git'
        supports: itrunner
        tags: 'builder,Java'
        version: '1.0.0'
      from:
        kind: DockerImage
        name: 'registry.itrunner.org/heroes-api-centos7:v1.0.0'
...
To be displayed in the Service Catalog, the tag annotations must contain "builder". Only images imported into the openshift project are globally visible; otherwise the image is visible only in the catalog of its own project. Refresh the page if it does not show up.
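A minimal sketch of making and verifying that change (oc edit opens the image stream in your editor so the tag annotations can be added by hand):

$ oc edit is/heroes-api-centos7 -n heroes
$ oc describe is heroes-api-centos7 -n heroes   # check that the tag now carries the builder annotation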
S2I Scripts
assemble
#!/bin/bash -e
# restore build artifacts
if [ -d /tmp/artifacts/.m2 ]; then
    echo "restore build artifacts"
    mv /tmp/artifacts/.m2 $HOME/.
fi
# move the application source
mv /tmp/src $HOME/src
# build the application artifacts
pushd $HOME/src
mvn clean package -Pdev -Dmaven.test.skip=true
popd
# move the artifacts
mv $HOME/src/target/heroes-api-1.0.0.jar $HOME/
rm -rf $HOME/src
run
#!/bin/bash
java -jar $HOME/heroes-api-1.0.0.jar
save-artifacts
#!/bin/bash
# Besides the tar command, all other output to standard out must be suppressed.
# Otherwise, the tar stream will be corrupted.
pushd ${HOME} > /dev/null
if [ -d .m2 ]; then
    # all .m2 contents to tar stream
    tar cf - .m2
fi
popd > /dev/null
usage
#!/bin/bash
# inform the user how to use the image
cat <<EOF
...
EOF
The heroes-web builder image is prepared in the same way; its save-artifacts script collects node_modules instead of .m2:
#!/bin/bash
pushd ${HOME} > /dev/null
if [ -d node_modules ]; then
    # all node_modules contents to tar stream
    tar cf - node_modules
fi
popd > /dev/null
The heroes-web Image Stream definition (tags latest and v1.0.0):
...
  tags:
    - annotations:
        description: build heroes-web on CentOS 7. WARNING: By selecting this tag, your application will automatically update to use the latest version.
        iconClass: icon-angularjs
        openshift.io/display-name: Heroes Web (Latest)
        openshift.io/provider-display-name: itrunner
        sampleRepo: 'https://github.com/sunjc/heroes-web.git'
        supports: itrunner
        tags: 'builder,Javascript'
      from:
        kind: DockerImage
        name: 'docker-registry.default.svc:5000/heroes/heroes-web-centos7:latest'
      generation: 2
      importPolicy: {}
      name: latest
      referencePolicy:
        type: Source
    - annotations:
        description: build heroes-web on CentOS 7
        iconClass: icon-angularjs
        openshift.io/display-name: Heroes Web 1.0.0
        openshift.io/provider-display-name: itrunner
        sampleRepo: 'https://github.com/sunjc/heroes-web.git'
        supports: itrunner
        tags: 'builder,Javascript'
        version: 1.0.0
      from:
        kind: DockerImage
        name: 'registry.itrunner.org/heroes-web-centos7:v1.0.0'
      generation: 1
      importPolicy:
        insecure: true
      name: v1.0.0
      referencePolicy:
        type: Source
status:
  dockerImageRepository: 'docker-registry.default.svc:5000/heroes/heroes-web-centos7'
  tags:
    - items:
        - created: '2019-04-08T07:37:11Z'
          dockerImageReference: >-
            docker-registry.default.svc:5000/heroes/heroes-web-centos7@sha256:7e4126ec9ec0d4158d962936a38f255806731d33d6fe03b29d95d82759823fcd
          generation: 2
          image: >-
            sha256:7e4126ec9ec0d4158d962936a38f255806731d33d6fe03b29d95d82759823fcd
      tag: latest
    - items:
        - created: '2019-04-01T08:02:37Z'
          dockerImageReference: >-
            registry.itrunner.org/heroes-web-centos7@sha256:7e4126ec9ec0d4158d962936a38f255806731d33d6fe03b29d95d82759823fcd
          generation: 1
          image: >-
            sha256:7e4126ec9ec0d4158d962936a38f255806731d33d6fe03b29d95d82759823fcd
      tag: v1.0.0
Reference Policy
When using an image imported from an external registry, the reference policy specifies where the image is pulled from. There are two options, Local and Source:
Local: pull the image through the OpenShift internal registry
Source: pull the image directly from the external registry
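The reference policy can also be set when tagging; a hedged sketch using oc tag. With Local, nodes pull through the internal registry and do not need direct access to the external registry:

$ oc tag --reference-policy=local registry.itrunner.org/heroes-api-centos7:v1.0.0 heroes-api-centos7:v1.0.0 -n heroes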
Query Image Stream information
$ oc describe is/heroes-web-centos7
$ oc describe istag/heroes-web-centos7:latest
Add a tag for an external image
$ oc tag docker.io/openshift/base-centos7:latest base-centos7:latest
Add a tag to an Image Stream
Add a new tag pointing to an existing tag:
$ oc tag heroes-api-centos7:v1.0.0 heroes-api-centos7:latest
Update a tag
$ oc tag heroes-api-centos7:v1.0.1 heroes-api-centos7:latest
Note: unlike adding a tag, when updating a tag both tags should already exist.
Delete a tag
$ oc tag -d heroes-api-centos7:v1.0.0
or
$ oc delete istag/heroes-api-centos7:v1.0.0
Delete an Image Stream
$ oc delete is base-centos7
Update a tag periodically
You can use --scheduled:
$ oc tag docker.io/python:3.6.0 python:3.6 --scheduled
You can also set importPolicy.scheduled to true in the tag definition:
apiVersion: v1
kind: ImageStream
metadata:
  name: python
spec:
  tags:
    - from:
        kind: DockerImage
        name: docker.io/python:3.6.0
      name: latest
      importPolicy:
        scheduled: true
The period defaults to 15 minutes.
Binary Build
Binary builds are mainly used for testing and in Jenkins pipeline scenarios. If a developer wants to test before committing source code, the app image can be built from local source. A binary build cannot be triggered automatically; it can only be started manually.
A binary build uses the oc start-build command, which requires an existing BuildConfig or build, and supports building the app image from the following sources:
--from-file: a single file, such as a Dockerfile
--from-dir: a local directory, packaged by start-build and uploaded to OpenShift
--from-archive: an archive such as tar, tar.gz or zip
--from-repo: a local source repository; start-build packages the latest commit and uploads it to OpenShift
From a directory
$ oc start-build heroes-web --from-dir="." --follow
or
$ oc start-build --from-build=heroes-web-1 --from-dir="." --follow
From a Git repository
$ git commit -m "My changes"
$ oc start-build heroes-web --from-repo="." --follow
Jenkins Pipeline
... stage ("Build Image") {steps {dir ('heroes-web/dist') {sh' oc start-build heroes-web-- from-dir.-- follow'}}... Pipeline deployment
Jenkins is a widely used CI tool that most engineers have experience with and are used to for deploying applications. Using a Jenkins pipeline to deploy OpenShift applications does not affect the original process: testing, code quality checks, compiling and packaging can run as usual; you only need to call oc start-build in the deployment step.
Install Jenkins
OpenShift provides two Jenkins templates, jenkins-ephemeral and jenkins-persistent, one using ephemeral storage and one using persistent storage; both use the jenkins image stream (docker.io/openshift/jenkins-2-centos7:v3.11). The Jenkins image has the OpenShift Client, OpenShift Login, OpenShift Sync, Kubernetes, Kubernetes Credentials and other plug-ins installed. After installation, jobs can be run either in OpenShift or in Jenkins.
jenkins-persistent is configured with dynamic storage; the default name of the PVC is jenkins (JENKINS_SERVICE_NAME):
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: '${JENKINS_SERVICE_NAME}'
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: '${VOLUME_CAPACITY}'
    storageClassName: glusterfs-storage-block
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    annotations:
      template.alpha.openshift.io/wait-for-ready: 'true'
    name: '${JENKINS_SERVICE_NAME}'
  spec:
    ...
    template:
      metadata:
        labels:
          name: '${JENKINS_SERVICE_NAME}'
      spec:
        containers:
          - capabilities: {}
            ...
            volumeMounts:
              - mountPath: /var/lib/jenkins
                name: '${JENKINS_SERVICE_NAME}-data'
        ...
        volumes:
          - name: '${JENKINS_SERVICE_NAME}-data'
            persistentVolumeClaim:
              claimName: '${JENKINS_SERVICE_NAME}'
To use GlusterFS, modify the template first and specify storageClassName:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: '${JENKINS_SERVICE_NAME}'
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: '${VOLUME_CAPACITY}'
    storageClassName: glusterfs-storage-block
Install with jenkins-persistent using the following command:
$ oc project heroes
$ oc new-app jenkins-persistent -p VOLUME_CAPACITY=2Gi -p MEMORY_LIMIT=2Gi
--> Deploying template "openshift/jenkins-persistent" to project heroes

     Jenkins
     ---------
     Jenkins service, with persistent storage.

     NOTE: You must have persistent volumes available in your cluster to use this template.

     A Jenkins service has been created in your project. Log into Jenkins with your OpenShift account. The tutorial at https://github.com/openshift/origin/blob/master/examples/jenkins/README.md contains more information about using this template.

     * With parameters:
        * Jenkins Service Name=jenkins
        * Jenkins JNLP Service Name=jenkins-jnlp
        * Enable OAuth in Jenkins=true
        * Memory Limit=2Gi
        * Volume Capacity=2Gi
        * Jenkins ImageStream Namespace=openshift
        * Disable memory intensive administrative monitors=false
        * Jenkins ImageStreamTag=jenkins:2
        * Fatal Error Log File=false

--> Creating resources ...
    route.route.openshift.io "jenkins" created
    persistentvolumeclaim "jenkins" created
    deploymentconfig.apps.openshift.io "jenkins" created
    serviceaccount "jenkins" created
    rolebinding.authorization.openshift.io "jenkins_edit" created
    service "jenkins-jnlp" created
    service "jenkins" created
--> Success
    Access your application via route 'jenkins-heroes.apps.itrunner.org'
    Run 'oc status' to view your app.
Note: if MEMORY_LIMIT is set too low, the Jenkins master node is reported with architecture Linux (i386).
View storage
After the installation is successful, you can view the storage from the node where Jenkins is installed:
$ oc get pods -o wide
NAME              READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE
jenkins-1-hw5q5   1/1     Running   0          5m    10.131.0.26   app2.itrunner.org

$ oc get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
jenkins   Bound    pvc-bf3ff63d-6f1c-11e9-9dd9-02ef509f23d0   2Gi        RWO            glusterfs-storage   5m

# mount | grep pvc-bf3ff63d-6f1c-11e9-9dd9-02ef509f23d0
10.188.12.116:vol_0e157791c95b65a94011aed789d2037b on /var/lib/origin/openshift.local.volumes/pods/c12d7625-6f1c-11e9-ad9d-02499a450338/volumes/kubernetes.io~glusterfs/pvc-bf3ff63d-6f1c-11e9-9dd9-02ef509f23d0 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
# cd /var/lib/origin/openshift.local.volumes/pods/c12d7625-6f1c-11e9-ad9d-02499a450338/volumes/kubernetes.io~glusterfs/pvc-bf3ff63d-6f1c-11e9-9dd9-02ef509f23d0
Expand storage
When the originally configured storage capacity no longer meets demand, the storage can be expanded.
First, the allowVolumeExpansion property of the StorageClass must be set to true:

$ oc edit sc/glusterfs-storage
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: 2019-04-28T02:58:06Z
  name: glusterfs-storage
  resourceVersion: "1903911"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/glusterfs-storage
  uid: 723320e6-6961-11e9-b13d-02947d98b66e
parameters:
  resturl: http://heketi-storage.app-storage.svc:8080
  restuser: admin
  secretName: heketi-storage-admin-secret
  secretNamespace: app-storage
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
View glusterfs-storage:
$ oc describe sc glusterfs-storage
Name:                  glusterfs-storage
IsDefaultClass:        No
Annotations:
Provisioner:           kubernetes.io/glusterfs
Parameters:            resturl=http://heketi-storage.app-storage.svc:8080,restuser=admin,secretName=heketi-storage-admin-secret,secretNamespace=app-storage
AllowVolumeExpansion:  True
MountOptions:
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:

Then update the PVC spec.resources.requests.storage:

$ oc edit pvc/jenkins
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
  creationTimestamp: 2019-05-05T10:01:26Z
  finalizers:
    - kubernetes.io/pvc-protection
  labels:
    app: jenkins-persistent
    template: jenkins-persistent-template
  name: jenkins
  namespace: heroes
  resourceVersion: "1904277"
  selfLink: /api/v1/namespaces/heroes/persistentvolumeclaims/jenkins
  uid: bf3ff63d-6f1c-11e9-9dd9-02ef509f23d0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: glusterfs-storage
  volumeName: pvc-bf3ff63d-6f1c-11e9-9dd9-02ef509f23d0
...
Install / update plug-ins
Log in to Jenkins with an OpenShift user.
Go to Manage Jenkins -> Manage Plugins to install or update plug-ins. In order to compile Angular, you need to install the NodeJS plug-in (alternatively, you can use a node agent).
Configure global tools
JDK
Maven
NodeJS
When they are first used, Jenkins installs the global tools automatically; the pipeline configuration is as follows:
tools {
    jdk 'jdk8'
    maven 'maven-3.6'
}

tools {
    nodejs 'nodejs-10.15'
}
Create a profile
The following still uses the deployment of the Spring Boot and Angular projects as an example to introduce the use of Pipeline. Before you begin, please recreate the heroes project.
Deploy Spring Boot App
Build Builder Image
Jenkins is now responsible for compiling and packaging, so Maven is no longer required in the builder image, which is modified as follows:
Dockerfile
# jdk8-centos7
FROM centos:latest

RUN yum -y update && yum clean all

# Set the labels that are used for OpenShift to describe the builder image.
LABEL maintainer="Sun Jingchuan" \
      io.k8s.description="Oracle JDK 1.8.0_202 based on CentOS 7" \
      io.k8s.display-name="Oracle JDK 1.8.0_202" \
      io.openshift.expose-services="8080:http" \
      io.openshift.tags="jdk8"

ENV JAVA_HOME=/usr/lib/jdk1.8.0_202 \
    APP_ROOT=/opt/app-root
ENV PATH=${JAVA_HOME}/bin:${APP_ROOT}/bin:${PATH} \
    HOME=${APP_ROOT}

COPY lib/jdk1.8.0_202 ${JAVA_HOME}
COPY bin ${APP_ROOT}/bin

RUN chmod -R u+x ${APP_ROOT}/bin && \
    chgrp -R 0 ${APP_ROOT} && \
    chmod -R g=u ${APP_ROOT} /etc/passwd

USER 10001
WORKDIR ${APP_ROOT}

ENTRYPOINT ["uid_entrypoint"]

EXPOSE 8080
Compile and upload builder image
This time we upload it to the OpenShift Docker Registry. Note that you need to run docker login before pushing.
# docker build -t jdk8-centos7 .
# docker tag jdk8-centos7:latest docker-registry-default.apps.itrunner.org/heroes/jdk8-centos7:latest
# docker push docker-registry-default.apps.itrunner.org/heroes/jdk8-centos7:latest
Create App Image
The app image is based on the builder image; you only need to copy the jar compiled by Jenkins, as follows:
Dockerfile.app
# heroes-api
FROM heroes/jdk8-centos7:latest
COPY heroes-api-1.0.0.jar $HOME
CMD java -jar $HOME/heroes-api-1.0.0.jar
Create a BuildConfig from the Dockerfile
$ cat Dockerfile.app | oc new-build -D - --name heroes-api
Part of the generated BuildConfig is as follows:
spec:
  failedBuildsHistoryLimit: 5
  nodeSelector: null
  output:
    to:
      kind: ImageStreamTag
      name: 'heroes-api:latest'
  postCommit: {}
  resources: {}
  runPolicy: Serial
  source:
    dockerfile: "FROM heroes/jdk8-centos7:latest\r\n\r\nCOPY heroes-api-1.0.0.jar $HOME\r\nCMD java -jar $HOME/heroes-api-1.0.0.jar"
    type: Dockerfile
  strategy:
    dockerStrategy:
      from:
        kind: ImageStreamTag
        name: 'jdk8-centos7:latest'
    type: Docker
A build is launched automatically, but it fails because no jar is available yet; the jar is supplied when the pipeline runs.
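Before wiring up the pipeline, the same binary build can be tested by hand from a locally built jar; a hedged sketch (assuming the jar ends up in target/):

$ mvn clean package -Pdev -Dmaven.test.skip=true
$ oc start-build heroes-api --from-dir=target --follow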
Create Pipeline BuildConfig
pipeline.yml
apiVersion: v1
kind: BuildConfig
metadata:
  name: heroes-api-pipeline
spec:
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        pipeline {
          agent any
          tools {
            jdk 'jdk8'
            maven 'maven-3.6'
          }
          stages {
            stage("Clone Source") {
              steps {
                checkout([$class: 'GitSCM',
                          branches: [[name: '*/master']],
                          extensions: [[$class: 'RelativeTargetDirectory', relativeTargetDir: 'heroes-api']],
                          userRemoteConfigs: [[url: 'https://github.com/sunjc/heroes-api.git']]])
              }
            }
            stage("Build Backend") {
              steps {
                dir('heroes-api') {
                  sh 'mvn clean package -Pdev -Dmaven.test.skip=true'
                }
              }
            }
            stage("Build Image") {
              steps {
                dir('heroes-api/target') {
                  sh 'oc start-build heroes-api --from-dir=. --follow'
                }
              }
            }
          }
        }
    type: JenkinsPipeline
Pipeline can be embedded in BuildConfig, or you can reference Jenkinsfile in git (recommended):
apiVersion: v1
kind: BuildConfig
metadata:
  name: heroes-api-pipeline
spec:
  source:
    git:
      uri: "https://github.com/sunjc/heroes-api"
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfilePath: Jenkinsfile
    type: JenkinsPipeline
Create a pipeline:
$ oc create -f ./pipeline.yml
If Jenkins is not installed in the project, jenkins-ephemeral will be automatically deployed after pipeline is created.
Start pipeline
$ oc start-build heroes-api-pipeline
It can also be started from Jenkins or from Application Console -> Builds -> Pipelines.
Deploy App
$ oc new-app heroes-api
$ oc create route edge heroes-api --service heroes-api --hostname heroes.apps.itrunner.org --path /api --insecure-policy Redirect
Deploy Angular App
Build Builder Image
Dockerfile
# httpd-centos7
FROM centos/httpd:latest

RUN yum -y update && yum clean all

LABEL maintainer="Sun Jingchuan" \
      io.k8s.description="Apache Httpd 2.4" \
      io.k8s.display-name="Apache Httpd 2.4" \
      io.openshift.expose-services="8080:http" \
      io.openshift.tags="httpd"

ENV APP_ROOT=/opt/app-root
ENV PATH=${APP_ROOT}/bin:${PATH} \
    HOME=${APP_ROOT} \
    HTTPD_MAIN_CONF_PATH=/etc/httpd/conf

COPY bin ${APP_ROOT}/bin
COPY .s2i/bin/run ${APP_ROOT}/bin/run

RUN chmod -R u+x ${APP_ROOT}/bin && \
    chgrp -R 0 ${APP_ROOT} && \
    chmod -R g=u ${APP_ROOT} /etc/passwd /var/www/html /run/httpd && \
    chown -R root:root /run/httpd /etc/httpd && \
    sed -i -e "s/^User apache/User default/" ${HTTPD_MAIN_CONF_PATH}/httpd.conf && \
    sed -i -e "s/^Group apache/Group root/" ${HTTPD_MAIN_CONF_PATH}/httpd.conf && \
    sed -i -e "s/Listen 80/Listen 8080/" ${HTTPD_MAIN_CONF_PATH}/httpd.conf && \
    sed -ri "s!^(\s*CustomLog)\s+\S+!\1 |/usr/bin/cat!g" ${HTTPD_MAIN_CONF_PATH}/httpd.conf && \
    sed -ri "s!^(\s*ErrorLog)\s+\S+!\1 |/usr/bin/cat!g" ${HTTPD_MAIN_CONF_PATH}/httpd.conf

USER 10001
WORKDIR ${APP_ROOT}

ENTRYPOINT ["uid_entrypoint"]

EXPOSE 8080
Compile and upload Builder Image
# docker build -t httpd-centos7 .
# docker tag httpd-centos7:latest docker-registry-default.apps.itrunner.org/heroes/httpd-centos7:latest
# docker push docker-registry-default.apps.itrunner.org/heroes/httpd-centos7:latest
Create App Image
Dockerfile.app
# heroes-web
FROM heroes/httpd-centos7:latest
COPY heroes /var/www/html/heroes
CMD $HOME/bin/run
Create a BuildConfig from the Dockerfile
$ cat Dockerfile.app | oc new-build -D - --name heroes-web
A build is started after execution; since no content is passed in at this point, the build fails.
Create Pipeline BuildConfig
pipeline.yml
apiVersion: v1
kind: BuildConfig
metadata:
  name: heroes-web-pipeline
spec:
  source:
    git:
      uri: "https://github.com/sunjc/heroes-web"
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfilePath: Jenkinsfile
    type: JenkinsPipeline
Jenkinsfile
pipeline {
    agent any
    tools {
        nodejs 'nodejs-10.15'
    }
    stages {
        stage("Clone Source") {
            steps {
                checkout([$class: 'GitSCM',
                          branches: [[name: '*/master']],
                          extensions: [[$class: 'RelativeTargetDirectory', relativeTargetDir: 'heroes-web']],
                          userRemoteConfigs: [[url: 'https://github.com/sunjc/heroes-web']]])
            }
        }
        stage("Build Angular") {
            steps {
                dir('heroes-web') {
                    sh 'npm install'
                    sh 'ng config -g cli.warnings.versionMismatch false'
                    sh 'ng build --prod --base-href=/heroes/'
                }
            }
        }
        stage("Build Image") {
            steps {
                dir('heroes-web/dist') {
                    sh 'oc start-build heroes-web --from-dir=. --follow'
                }
            }
        }
    }
}
Create a pipeline:
$ oc create -f ./pipeline.yml
Start pipeline
$ oc start-build heroes-web-pipeline
Deploy App
$ oc new-app heroes-web
$ oc create route edge heroes-web --service heroes-web --hostname heroes.apps.itrunner.org --path /heroes --insecure-policy Redirect --port 8080-tcp -n heroes
OpenShift DSL
The OpenShift Jenkins image installs the OpenShift Jenkins Client plug-in, which supports defining pipelines with the OpenShift Domain Specific Language (DSL).
def templatePath = 'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json'
def templateName = 'nodejs-mongodb-example'
pipeline {
  agent {
    node {
      label 'nodejs'
    }
  }
  options {
    timeout(time: 20, unit: 'MINUTES')
  }
  stages {
    stage('preamble') {
      steps {
        script {
          openshift.withCluster() {
            openshift.withProject() {
              echo "Using project: ${openshift.project()}"
            }
          }
        }
      }
    }
    stage('cleanup') {
      steps {
        script {
          openshift.withCluster() {
            openshift.withProject() {
              openshift.selector("all", [template: templateName]).delete()
              if (openshift.selector("secrets", templateName).exists()) {
                openshift.selector("secrets", templateName).delete()
              }
            }
          }
        }
      }
    }
    stage('create') {
      steps {
        script {
          openshift.withCluster() {
            openshift.withProject() {
              openshift.newApp(templatePath)
            }
          }
        }
      }
    }
    stage('build') {
      steps {
        script {
          openshift.withCluster() {
            openshift.withProject() {
              def builds = openshift.selector("bc", templateName).related('builds')
              timeout(5) {
                builds.untilEach(1) {
                  return (it.object().status.phase == "Complete")
                }
              }
            }
          }
        }
      }
    }
    stage('deploy') {
      steps {
        script {
          openshift.withCluster() {
            openshift.withProject() {
              def rm = openshift.selector("dc", templateName).rollout().latest()
              timeout(5) {
                openshift.selector("dc", templateName).related('pods').untilEach(1) {
                  return (it.object().status.phase == "Running")
                }
              }
            }
          }
        }
      }
    }
    stage('tag') {
      steps {
        script {
          openshift.withCluster() {
            openshift.withProject() {
              openshift.tag("${templateName}:latest", "${templateName}-staging:latest")
            }
          }
        }
      }
    }
  }
}
Please refer to OpenShift Pipeline Builds for details.
Clean up objects
As applications are continuously built and deployed, builds, deployments, images and other objects accumulate; administrators should regularly clean up objects that are no longer needed.
OpenShift provides the oc adm prune command to clean up objects, supporting auth, groups, builds, deployments, images, and so on.
Pruning builds

$ oc adm prune builds [<options>]

--confirm: Indicate that pruning should occur, instead of performing a dry-run
--orphans: Prune all builds whose build config no longer exists, status is complete, failed, error, or canceled
--keep-complete=N: Per build config, keep the last N builds whose status is complete (default 5)
--keep-failed=N: Per build config, keep the last N builds whose status is failed, error, or canceled (default 1)
--keep-younger-than=duration: Do not prune any object that is younger than duration relative to the current time (default 60m)

$ oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm

Pruning deployments

$ oc adm prune deployments [<options>]

--confirm: Indicate that pruning should occur, instead of performing a dry-run
--orphans: Prune all deployments whose deployment config no longer exists, status is complete or failed, and replica count is zero
--keep-complete=N: Per deployment config, keep the last N deployments whose status is complete and replica count is zero (default 5)
--keep-failed=N: Per deployment config, keep the last N deployments whose status is failed and replica count is zero (default 1)
--keep-younger-than=duration: Do not prune any object that is younger than duration relative to the current time (default 60m); valid units include nanoseconds (ns), microseconds (us), milliseconds (ms), seconds (s), minutes (m), and hours (h)

$ oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm

Pruning images

$ oc adm prune images [<options>]
The system:admin user cannot perform image pruning; you must log in as a regular user, and that user must have the system:image-pruner role.
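A cluster administrator can grant that role; a minimal sketch with a hypothetical user name:

$ oc adm policy add-cluster-role-to-user system:image-pruner developer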
$ oc login https://openshift.itrunner.org:8443 --token=xxxx

--all: Include images that were not pushed to the registry, but have been mirrored by pullthrough. This is on by default. To limit the pruning to images that were pushed to the integrated registry, pass --all=false
--certificate-authority: The path to a certificate authority file to use when communicating with the OKD-managed registries. Defaults to the certificate authority data from the current user's configuration file. If provided, a secure connection is initiated
--confirm: Indicate that pruning should occur, instead of performing a dry-run. This requires a valid route to the integrated container image registry. If this command is run outside of the cluster network, the route needs to be provided using --registry-url
--force-insecure: Use caution with this option. Allow an insecure connection to the Docker registry that is hosted via HTTP or has an invalid HTTPS certificate
--keep-tag-revisions=N: For each image stream, keep up to at most N image revisions per tag (default 3)
--keep-younger-than=duration: Do not prune any image that is younger than duration relative to the current time, nor any image referenced by another object that is younger than duration relative to the current time (default 60m)
--prune-over-size-limit: Prune each image that exceeds the smallest limit defined in the same project. This flag cannot be combined with --keep-tag-revisions nor --keep-younger-than
--registry-url: The address to use when contacting the registry. The command will attempt to use a cluster-internal URL determined from managed images and image streams. If that fails (the registry cannot be resolved or reached), an alternative working route needs to be provided using this flag. The registry host name may be prefixed by https:// or http://, which enforces that connection protocol
--prune-registry: In conjunction with the conditions stipulated by the other options, this option controls whether the data in the registry corresponding to the OKD Image API Objects is pruned. By default, image pruning processes both the Image API Objects and the corresponding data in the registry. This option is useful when you only want to remove etcd content, possibly to reduce the number of image objects, but do not need to clean up registry storage, or intend to do that separately by Hard Pruning the Registry, possibly during an appropriate maintenance window for the registry
When using --keep-younger-than, an image is not pruned if:
it was created within the --keep-younger-than window
it is referenced by a pod or image stream created within the --keep-younger-than window
it is referenced by a running or pending pod
it is referenced by any replication controller, deployment configuration, build configuration, or build
When using --prune-over-size-limit to prune images exceeding the defined limit, an image is not pruned if:
it is referenced by a running or pending pod
it is referenced by any replication controller, deployment configuration, build configuration, or build
Example:
$ oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm
$ oc adm prune images --prune-over-size-limit --confirm
The registry cache is not updated after pruning images. To clean up the cache, you can redeploy the registry:
$ oc rollout latest dc/docker-registry -n default
docker prune
Docker related prune commands:
# docker container prune
# docker image prune
# docker volume prune
Use these with caution; images such as openshift/origin-docker-builder and openshift/origin-deployer may be deleted.
Recommended document
Pan Xiaohua Michael-OpenShift
Reference documentation
OKD Latest Documentation
Source-to-image