
Kubernetes deployment (12): helm deployment harbor enterprise image repository

2025-01-19 Update From: SLTechnology News&Howtos


Related content:

Kubernetes deployment (1): architecture and function description

Kubernetes deployment (2): initialization of system environment

Kubernetes deployment (3): CA certificate making

Kubernetes deployment (4): ETCD cluster deployment

Kubernetes deployment (5): Haproxy and Keepalived deployment

Kubernetes deployment (6): Master node deployment

Kubernetes deployment (7): Node node deployment

Kubernetes deployment (8): Flannel network deployment

Kubernetes deployment (9): CoreDNS, Dashboard, and Ingress deployment

Kubernetes deployment (10): GlusterFS and Heketi storage deployment

Kubernetes deployment (11): Helm and Rancher deployment

Kubernetes deployment (12): helm deployment harbor enterprise image repository

Introduction to Harbor

Harbor official github: https://github.com/goharbor

Harbor is an enterprise-class registry server for storing and distributing Docker images. It extends the open source Docker Distribution with the features enterprises typically need, such as security, identity, and management. Keeping the registry close to the build and run environment improves image transfer efficiency. Harbor supports replicating images between registries and provides advanced security features such as user management, access control, and activity auditing.

Harbor's main features:

Cloud native registry: Harbor supports both container images and Helm charts, and can serve as the registry for cloud native environments such as container runtimes and orchestration platforms.

Role-based access control: users and repositories are organized into "projects", and a user can have different permissions on the images within a project.

Policy-based image replication: images can be replicated (synchronized) between multiple registry instances based on policies with multiple filters (repository, tag, and label). Harbor automatically retries a replication if it hits an error. Ideal for load balancing, high availability, multi-datacenter, hybrid, and multi-cloud scenarios.

Vulnerability scanning: Harbor scans images periodically and warns users about vulnerabilities.

LDAP / AD support: Harbor integrates with existing enterprise LDAP / AD for user authentication and management, and supports importing LDAP groups into Harbor and assigning them appropriate project roles.

Image deletion and garbage collection: images can be deleted and their space reclaimed.

Notary: ensures image authenticity.

Graphical user portal: users can easily browse and search repositories and manage projects.

Auditing: all repository operations are tracked.

RESTful API: a RESTful API covers most management operations and is easy to integrate with external systems.

Easy to deploy: online and offline installers are available.

Prerequisites:

Kubernetes cluster 1.10+
Helm 2.8.0+

Harbor deployment

1. Add domain name resolution.

Create A records pointing h.cnlinux.club and n.cnlinux.club to my load balancer IP 10.31.90.200; these hostnames will be used by the Ingress.
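In a lab without a real DNS server, the same mapping can be sketched with hosts entries. The scratch file below stands in for /etc/hosts so the sketch is safe to run; the file path is illustrative, not part of the original setup:

```shell
# Point the Harbor (h.cnlinux.club) and Notary (n.cnlinux.club) hostnames at
# the load balancer. We write to a scratch file rather than the real /etc/hosts.
HOSTS_FILE=/tmp/hosts.demo
LB_IP=10.31.90.200
: > "$HOSTS_FILE"    # start from a clean scratch file
for h in h.cnlinux.club n.cnlinux.club; do
  echo "$LB_IP $h" >> "$HOSTS_FILE"
done
cat "$HOSTS_FILE"
```

With real DNS in place, these entries are unnecessary; they only mimic the two A records described above.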

2. Download Harbor's chart package.

```shell
[root@node-01 harbor]# wget https://github.com/goharbor/harbor-helm/archive/1.0.0.tar.gz -O harbor-helm-v1.0.0.tar.gz
```

3. Modify the configuration file.

Extract the values.yaml file from harbor-helm-v1.0.0.tar.gz and place it in the same directory as the tarball.
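The extraction step can be sketched as follows. This builds a throwaway archive with the nesting a GitHub release tarball is assumed to have (files under a top-level harbor-helm-1.0.0/ directory), then pulls out just values.yaml with tar's --strip-components; the demo directory and the one-line values.yaml are fabricated for the sketch:

```shell
# Simulate the chart tarball in a scratch directory.
DEMO=/tmp/chart-demo
rm -rf "$DEMO" && mkdir -p "$DEMO" && cd "$DEMO"
mkdir -p harbor-helm-1.0.0
echo "harborAdminPassword: Harbor12345" > harbor-helm-1.0.0/values.yaml
tar -czf harbor-helm-v1.0.0.tar.gz harbor-helm-1.0.0
# Extract only values.yaml into the current directory, next to the tarball:
tar -xzf harbor-helm-v1.0.0.tar.gz --strip-components=1 harbor-helm-1.0.0/values.yaml
ls
```

On the real tarball the same tar invocation leaves values.yaml beside harbor-helm-v1.0.0.tar.gz, which is the layout the next step assumes.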

Modify values.yaml. In my configuration I changed the following fields:

Note that if the K8s cluster has a StorageClass, you can use it directly: specify the StorageClass name in the persistence.persistentVolumeClaim.XXX.storageClass fields and multiple PVCs will be created automatically. However, to avoid the management overhead of multiple PVCs, I created a single PVC before deployment and use it for all of Harbor's services. For details on what each field does, see the official documentation at https://github.com/goharbor/harbor-helm.
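The single-PVC layout described above can be visualized: every component mounts the same volume but with a different subPath, so each component's data lands in its own directory on the shared filesystem. A minimal sketch, using a temp directory in place of the GlusterFS mount:

```shell
# Simulate the pvc-harbor volume: one shared filesystem, one directory per
# subPath configured in values.yaml.
MNT=/tmp/pvc-harbor-demo
rm -rf "$MNT" && mkdir -p "$MNT"
for sub in registry chartmuseum jobservice database redis; do
  mkdir -p "$MNT/$sub"   # roughly what the kubelet does for each subPath mount
done
ls "$MNT"
```

This is why one 50Gi claim suffices: the per-component sizes in values.yaml partition the claim logically, not physically.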

expose.ingress.hosts.core
expose.ingress.hosts.notary
externalURL
persistence.persistentVolumeClaim.registry.existingClaim
persistence.persistentVolumeClaim.registry.subPath
persistence.persistentVolumeClaim.chartmuseum.existingClaim
persistence.persistentVolumeClaim.chartmuseum.subPath
persistence.persistentVolumeClaim.jobservice.existingClaim
persistence.persistentVolumeClaim.jobservice.subPath
persistence.persistentVolumeClaim.database.existingClaim
persistence.persistentVolumeClaim.database.subPath
persistence.persistentVolumeClaim.redis.existingClaim
persistence.persistentVolumeClaim.redis.subPath

```yaml
expose:
  type: ingress
  tls:
    enabled: true
    secretName: ""
    notarySecretName: ""
    commonName: ""
  ingress:
    hosts:
      core: h.cnlinux.club
      notary: n.cnlinux.club
    annotations:
      ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
  clusterIP:
    name: harbor
    ports:
      httpPort: 80
      httpsPort: 443
      notaryPort: 4443
  nodePort:
    name: harbor
    ports:
      http:
        port: 80
        nodePort: 30002
      https:
        port: 443
        nodePort: 30003
      notary:
        port: 4443
        nodePort: 30004

externalURL: https://h.cnlinux.club

persistence:
  enabled: true
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      existingClaim: "pvc-harbor"
      storageClass: ""
      subPath: "registry"
      accessMode: ReadWriteOnce
      size: 5Gi
    chartmuseum:
      existingClaim: "pvc-harbor"
      storageClass: ""
      subPath: "chartmuseum"
      accessMode: ReadWriteOnce
      size: 5Gi
    jobservice:
      existingClaim: "pvc-harbor"
      storageClass: ""
      subPath: "jobservice"
      accessMode: ReadWriteOnce
      size: 1Gi
    database:
      existingClaim: "pvc-harbor"
      storageClass: ""
      subPath: "database"
      accessMode: ReadWriteOnce
      size: 1Gi
    redis:
      existingClaim: "pvc-harbor"
      storageClass: ""
      subPath: "redis"
      accessMode: ReadWriteOnce
      size: 1Gi
  imageChartStorage:
    type: filesystem
    filesystem:
      rootdirectory: /storage

imagePullPolicy: IfNotPresent
logLevel: debug
harborAdminPassword: "Harbor12345"
secretKey: "not-a-secure-key"

nginx:
  image:
    repository: goharbor/nginx-photon
    tag: v1.7.0
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}

portal:
  image:
    repository: goharbor/harbor-portal
    tag: v1.7.0
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}

core:
  image:
    repository: goharbor/harbor-core
    tag: v1.7.0
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}

adminserver:
  image:
    repository: goharbor/harbor-adminserver
    tag: v1.7.0
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}

jobservice:
  image:
    repository: goharbor/harbor-jobservice
    tag: v1.7.0
  replicas: 1
  maxJobWorkers: 10
  jobLogger: file
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}

registry:
  registry:
    image:
      repository: goharbor/registry-photon
      tag: v2.6.2-v1.7.0
  controller:
    image:
      repository: goharbor/harbor-registryctl
      tag: v1.7.0
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}

chartmuseum:
  enabled: true
  image:
    repository: goharbor/chartmuseum-photon
    tag: v0.7.1-v1.7.0
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}

clair:
  enabled: true
  image:
    repository: goharbor/clair-photon
    tag: v2.0.7-v1.7.0
  replicas: 1
  httpProxy:
  httpsProxy:
  updatersInterval: 12
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}

notary:
  enabled: true
  server:
    image:
      repository: goharbor/notary-server-photon
      tag: v0.6.1-v1.7.0
    replicas: 1
  signer:
    image:
      repository: goharbor/notary-signer-photon
      tag: v0.6.1-v1.7.0
    replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}

database:
  type: internal
  internal:
    image:
      repository: goharbor/harbor-db
      tag: v1.7.0
    password: "changeit"
    nodeSelector: {}
    tolerations: []
    affinity: {}
  podAnnotations: {}

redis:
  type: internal
  internal:
    image:
      repository: goharbor/redis-photon
      tag: v1.7.0
    nodeSelector: {}
    tolerations: []
    affinity: {}
  podAnnotations: {}
```

4. Create a storage volume.

Because Harbor needs a database, and its data would be lost if the database pod were rescheduled, we store the data on a GlusterFS storage volume.

```shell
[root@node-01 harbor]# vim pvc-harbor.yaml
```

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-harbor
spec:
  storageClassName: gluster-heketi
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
```

```shell
[root@node-01 harbor]# kubectl apply -f pvc-harbor.yaml
```

5. Install Harbor.

```shell
[root@node-01 harbor]# helm install --name harbor harbor-helm-v1.0.0.tar.gz -f values.yaml
```

If the installation fails, you can delete the release with `helm del --purge harbor` and reinstall.

6. Demo

After a while, all of Harbor's pods should be Running, and we can access the portal. The default credentials are admin/Harbor12345; the default username and password can be changed in values.yaml.

```shell
[root@node-01 ~]# kubectl get pod
NAME                                           READY   STATUS    RESTARTS   AGE
harbor-harbor-adminserver-7fffc7bf4d-vj845     1/1     Running   1          15d
harbor-harbor-chartmuseum-bdf64f899-brnww      1/1     Running   0          15d
harbor-harbor-clair-8457c45dd8-9rgq8           1/1     Running   1          15d
harbor-harbor-core-7fc454c6d8-b6kvs            1/1     Running   1          15d
harbor-harbor-database-0                       1/1     Running   0          15d
harbor-harbor-jobservice-7895949d6b-zbwkf      1/1     Running   1          15d
harbor-harbor-notary-server-57dd94bf56-txdkl   1/1     Running   0          15d
harbor-harbor-notary-signer-5d64c5bf8d-kppts   1/1     Running   0          15d
harbor-harbor-portal-648c56499f-g28rz          1/1     Running   0          15d
harbor-harbor-redis-0                          1/1     Running   0          15d
harbor-harbor-registry-5cd9c49489-r92ph        2/2     Running   0          15d
```
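A quick way to confirm every pod is up is to scan the STATUS column of that output. The check below runs against a saved two-line sample instead of a live cluster; the file path and sample rows are illustrative:

```shell
# Save a sample of `kubectl get pod` output (without the header line).
cat > /tmp/harbor-pods.txt <<'EOF'
harbor-harbor-core-7fc454c6d8-b6kvs   1/1   Running   1   15d
harbor-harbor-database-0              1/1   Running   0   15d
EOF
# Column 3 of headerless `kubectl get pod` output is STATUS.
awk '$3 != "Running" {bad=1} END {print (bad ? "NOT READY" : "ALL RUNNING")}' /tmp/harbor-pods.txt
# → ALL RUNNING
```

Against a real cluster you would pipe `kubectl get pod --no-headers` into the same awk filter.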

Next, we create a private project named test for testing.

Because the Harbor registry we created is served over HTTPS, we must add Harbor's certificate to Docker's configuration directory before we can docker pull or push images; otherwise Docker cannot log in to Harbor. Enter the test project and click "Registry Certificate" to download Harbor's CA certificate.

Create the certificate directory on each node (images may be pushed from the masters later, so this time I created it on the master nodes too), then copy the downloaded Harbor CA certificate into /etc/docker/certs.d/h.cnlinux.club on each node:

```shell
# Create the directory on every node
for n in `seq -w 01 06`; do ssh node-$n "mkdir -p /etc/docker/certs.d/h.cnlinux.club"; done
# Copy the downloaded Harbor CA certificate to each node
for n in `seq -w 01 06`; do scp ca.crt node-$n:/etc/docker/certs.d/h.cnlinux.club/; done
```

Log in to Harbor on a node; after a successful login the credentials are saved in .docker/config.json in the current user's home directory:

```shell
[root@node-06 ~]# docker login h.cnlinux.club
Username: admin
Password:
Login Succeeded
[root@node-06 ~]# cat .docker/config.json
{
        "auths": {
                "h.cnlinux.club": {
                        "auth": "YWRtaW46SGFyYm9yMTIzNDU="
                }
        }
}
```

Pull an nginx image from the official Docker registry, tag it, and push it to the Harbor registry. Afterwards you can see the nginx image under Harbor's test project:

```shell
[root@node-06 ~]# docker pull nginx:latest
[root@node-06 ~]# docker tag nginx:latest h.cnlinux.club/test/nginx:latest
[root@node-06 ~]# docker push h.cnlinux.club/test/nginx:latest
```
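The `seq -w 01 06` in the loops above is what generates the zero-padded suffixes: -w equalizes the widths of the printed numbers so they match hostnames node-01 through node-06.

```shell
# -w pads every number to the same width with leading zeros:
seq -w 01 06
```

Without -w the loop would produce node-1 … node-6 and the ssh/scp targets would not resolve.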

Question: if there are many nodes in the K8s cluster, does every node have to log in before it can pull images from the Harbor repository? Isn't that troublesome?

In fact, Kubernetes has a secret type, kubernetes.io/dockerconfigjson, that solves exactly this problem.

First, convert the Docker login information to base64:

```shell
[root@node-06 ~]# cat .docker/config.json | base64
ewoJImF1dGhzIjogewoJCSJoLmNubGludXguY2x1YiI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZTR0Z5WW05eU1USXpORFU9IgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOC4wNi4xLWNlIChsaW51eCkiCgl9Cn0=
```

Then create the secret:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: harbor-registry-secret
  namespace: default
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSJoLmNubGludXguY2x1YiI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZTR0Z5WW05eU1USXpORFU9IgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOC4wNi4xLWNlIChsaW51eCkiCgl9Cn0=
type: kubernetes.io/dockerconfigjson
```

```shell
[root@node-01 ~]# kubectl create -f harbor-registry-secret.yaml
secret/harbor-registry-secret created
```

Next, create an nginx demo that uses the nginx image from Harbor, and add a DNS record pointing nginx.cnlinux.club to the load balancer 10.31.90.200:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: h.cnlinux.club/test/nginx:latest
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: harbor-registry-secret
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - name: nginx
    protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    # nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: nginx.cnlinux.club
    http:
      paths:
      - path:
        backend:
          serviceName: nginx
          servicePort: 80
```

You can see that nginx is running on three nodes, which proves the image was pulled from Harbor successfully.
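Note that the "auth" value inside config.json (and therefore inside the secret) is only base64 of username:password, not encryption, which is why access to the secret should be restricted. Decoding the value shown above makes this obvious:

```shell
# Decode the auth field from the config.json shown earlier:
echo 'YWRtaW46SGFyYm9yMTIzNDU=' | base64 -d
# → admin:Harbor12345
```

Anyone who can read the secret can recover the Harbor admin credentials, so use RBAC to limit who can get secrets in the namespace.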
```shell
[root@node-01 ~]# kubectl get pod -o wide | grep nginx
deploy-nginx-647f9649f5-88mkt   1/1   Running   0   2m41s   10.34.0.5   node-06
deploy-nginx-647f9649f5-9z842   1/1   Running   0   2m41s   10.40.0.5   node-04
deploy-nginx-647f9649f5-w44ck   1/1   Running   0   2m41s   10.46.0.6   node-05
```

Finally, we visit http://nginx.cnlinux.club, and everything works.

I will keep updating the rest of the K8s-related documents. If you found this useful, please follow and like; if you have any questions, leave me a message below. Thank you!
