How to quickly deploy Harbor to Kubernetes Cluster

2025-01-16 Update From: SLTechnology News&Howtos

This article explains how to quickly deploy Harbor to a Kubernetes cluster. The content is straightforward and easy to follow.

1. Quick installation

Harbor supports both standalone container deployment and deployment on Kubernetes. Starting with Harbor 1.6, the project provides an official Helm chart (https://github.com/goharbor/harbor-helm) for cluster deployment, so Harbor can be installed quickly with Helm and scheduled dynamically by the Kubernetes cluster. The steps are as follows:

git clone https://github.com/goharbor/harbor-helm
cd harbor-helm
git checkout master
helm install --namespace harbor --name harbor .

2. Deployment tips

However, for a multi-node Kubernetes cluster, there are several issues that need to be addressed:

Image download. Harbor uses as many as 10 container images (the registry component alone uses several), and its pods are scheduled across multiple nodes by the cluster. Every node therefore needs the required images, which generates a lot of download traffic and makes it take a long time before everything is running. It is best to download the images on one node and distribute them to the others; a script for this is shown in section 2.1.

Network storage. In a Kubernetes cluster, pods can be rescheduled to other nodes, so the containers must still be able to reach their storage after they move.

Login problem. Kubernetes offers several ways to expose a service, but at present authentication through NodePort and similar service types is inconsistent with the backend service, so login fails with the prompt "incorrect user name or password".

2.1 Image download

Use the following script to pre-download the images:

echo "Pull images for Harbor:dev"
docker pull goharbor/harbor-core:dev
docker pull goharbor/harbor-portal:dev
docker pull goharbor/harbor-jobservice:dev
docker pull goharbor/clair-photon:dev
docker pull goharbor/notary-server-photon:dev
docker pull goharbor/notary-signer-photon:dev
docker pull goharbor/registry-photon:dev
docker pull goharbor/harbor-registryctl:dev
docker pull goharbor/chartmuseum-photon:dev
docker pull goharbor/harbor-db:dev
echo "Finished."
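To cut down on repeated downloads across the cluster, one option is to pull once, export the images with docker save, and import them on the other nodes with docker load. A minimal sketch, assuming SSH access to the worker nodes; the node names node1 and node2 are placeholders for your own hosts:

echo "Save the Harbor images to a tar archive"
docker save -o harbor-images.tar \
    goharbor/harbor-core:dev goharbor/harbor-portal:dev \
    goharbor/harbor-jobservice:dev goharbor/clair-photon:dev \
    goharbor/notary-server-photon:dev goharbor/notary-signer-photon:dev \
    goharbor/registry-photon:dev goharbor/harbor-registryctl:dev \
    goharbor/chartmuseum-photon:dev goharbor/harbor-db:dev

# Copy the archive to each worker node and load it there (node names are placeholders)
for node in node1 node2; do
    scp harbor-images.tar ${node}:/tmp/
    ssh ${node} "docker load -i /tmp/harbor-images.tar"
done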

Run the installation command for Helm (namespace harbor):

helm install --namespace harbor --name harbor .

To view the installed pods, run:

kubectl get pod -n harbor

The running pods are as follows:

NAME                                           READY   STATUS    RESTARTS   AGE
harbor-harbor-chartmuseum-5d8895d9dc-c76mx     1/1     Running   1          9h
harbor-harbor-clair-7995586c44-8p98g           1/1     Running   1          9h
harbor-harbor-core-9999c79ff-db2fl             1/1     Running   0          9h
harbor-harbor-database-0                       1/1     Running   0          9h
harbor-harbor-jobservice-65f6dbdc78-h82nb      1/1     Running   1          9h
harbor-harbor-notary-server-77774bb46f-jzsgx   1/1     Running   2          9h
harbor-harbor-notary-signer-5c94f5844c-8gpp8   1/1     Running   2          9h
harbor-harbor-portal-85dbb47c4f-xbnzz          1/1     Running   0          9h
harbor-harbor-redis-0                          1/1     Running   0          9h
harbor-harbor-registry-b8bd76fc7-744fs         2/2     Running   0          9h

However, right after installation many of these pods failed because of the storage and login problems described below. The output above shows the pods after those problems were solved.

2.2 Network Storage

Harbor can use local storage, external storage, or network storage.

Local Storage

If you use local storage, you need to pin the Harbor pods to the node where the storage is located (or use a single-node Kubernetes cluster).
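One way to pin a workload to a node is a nodeSelector patch. A minimal sketch, assuming the chart created a Deployment named harbor-harbor-registry and that the target node carries the standard kubernetes.io/hostname label with value node1 (both names are placeholders to check against your cluster):

kubectl -n harbor patch deployment harbor-harbor-registry \
    -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname":"node1"}}}}}'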

Refer to https://github.com/openthings/kubernetes-tools/harbor/hostpath for the specific configuration files. An example for Redis is given below:

Create a PV (PersistentVolume) manifest for Redis:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-harbor-harbor-redis-0
  namespace: harbor
spec:
  capacity:
    storage: 8Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /home/supermap/harbor/data-harbor-harbor-redis-0

Create a PVC (PersistentVolumeClaim) manifest for Redis:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-harbor-harbor-redis-0
  namespace: harbor
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 8Gi

Network Storage

Here I create the PVCs directly on GlusterFS; you can also use other network-accessible storage such as NFS or Ceph, including well-performing NAS/iSCSI/IP-SAN systems.

First create the Endpoints object, then create the PV and PVC; a sketch is shown after the reference link below.

This works much like local storage, but the volumes remain accessible to the pods from any node in the cluster.

For more information on the configuration file, please see https://github.com/openthings/kubernetes-tools/harbor/zettariver.
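A minimal sketch of the Endpoints, Service, and PV objects for such a GlusterFS volume, assuming a volume named gvzr00 (the name used by the script below); the brick IP addresses are placeholders for your own GlusterFS servers, and the PV name matches one of the chart's PVCs:

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-gvzr00
  namespace: harbor
subsets:
- addresses:
  - ip: 10.1.1.101
  - ip: 10.1.1.102
  ports:
  - port: 1
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-gvzr00
  namespace: harbor
spec:
  ports:
  - port: 1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-harbor-registry
spec:
  capacity:
    storage: 8Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-gvzr00
    path: gvzr00
    readOnly: false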

The recommended approach is to pass the storage parameters to helm install, or to edit the values.yaml file to define how storage is used. I did not modify the original parameters here; instead, after the pods are created, I delete all the PVCs, recreate them against my own PVs, and wait for the pods to restart successfully. The script is as follows:

echo "Delete pvc..."
kubectl delete -n harbor pvc/data-harbor-harbor-redis-0
kubectl delete -n harbor pvc/database-data-harbor-harbor-database-0
kubectl delete -n harbor pvc/harbor-harbor-chartmuseum
kubectl delete -n harbor pvc/harbor-harbor-jobservice
kubectl delete -n harbor pvc/harbor-harbor-registry

echo "Create pvc..."
kubectl apply -f 0a-glusterfs-gvzr00-endpoint.yaml
kubectl apply -f 0b-glusterfs-gvzr00-service.yaml
kubectl apply -f 1a-pv-data-harbor-harbor-redis-0.yaml
kubectl apply -f 1b-pvc-data-harbor-harbor-redis-0.yaml
kubectl apply -f 2a-pv-database-data-harbor-harbor-database-0.yaml
kubectl apply -f 2b-pvc-database-data-harbor-harbor-database-0.yaml
kubectl apply -f 3a-pv-harbor-harbor-chartmuseum.yaml
kubectl apply -f 3b-pvc-harbor-harbor-chartmuseum.yaml
kubectl apply -f 4a-pv-harbor-harbor-jobservice.yaml
kubectl apply -f 4b-pvc-harbor-harbor-jobservice.yaml
kubectl apply -f 5a-pv-harbor-harbor-registry.yaml
kubectl apply -f 5b-pvc-harbor-harbor-registry.yaml
echo "Finished."

A record of several issues encountered:

When accessing GlusterFS through the Endpoints method, some of the pods never started successfully.

Reference: volume where GlusterFS is mounted in Kubernetes

Mount the GlusterFS volume on the host and use it as a local volume, then create PVs/PVCs on it for Harbor (you need to set nodeSelector to pin the pods to a fixed node).

Reference: installation of the latest version of distributed storage system GlusterFS

However, the container harbor-harbor-database-0 is an exception: its data must be kept on a non-GlusterFS storage device, for the reasons below.

Testing showed that harbor-harbor-database-0 cannot run with its data PVC (database-data-harbor-harbor-database-0) on a GlusterFS volume:

PostgreSQL error, https://stackoverflow.com/questions/46852123/error-in-performance-test-postgresql-and-glusterfs

GlusterFS does not support structured data, https://docs.gluster.org/en/latest/Install-Guide/Overview/#is-gluster-going-to-work-for-me-and-what-i-need-it-to-do

Because GlusterFS is not suitable for structured data such as a PostgreSQL database:

Temporary solution: place the database-data-harbor-harbor-database-0 volume on a local physical disk and deploy the container harbor-harbor-database-0 on that same node; a sketch of such a PV follows.
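A minimal sketch of such a hostPath PV, following the same pattern as the Redis example above; the path is a placeholder, and the 16Gi size matches the PVC listing shown later:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: database-data-harbor-harbor-database-0
  namespace: harbor
spec:
  capacity:
    storage: 16Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /home/supermap/harbor/database-data-harbor-harbor-database-0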

Deploying GlusterFS with Heketi (for dynamic volume provisioning) has not been tested yet.

Reference: fast application of GlusterFS- dynamic volumes

Deploying Harbor on NFS, or on the NFS service interface of GlusterFS, has not been tested yet.

Reference: client access and NFS settings for GlusterFS

The following lists the five PVCs used by Harbor once it is running normally:

kubectl get pvc -n harbor
NAME                                     STATUS   VOLUME                                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-harbor-harbor-redis-0               Bound    data-harbor-harbor-redis-0               8Gi        RWX                           9h
database-data-harbor-harbor-database-0   Bound    database-data-harbor-harbor-database-0   16Gi       RWX                           9h
harbor-harbor-chartmuseum                Bound    harbor-harbor-chartmuseum                8Gi        RWX                           9h
harbor-harbor-jobservice                 Bound    harbor-harbor-jobservice                 8Gi        RWX                           9h
harbor-harbor-registry                   Bound    harbor-harbor-registry                   8Gi        RWX                           9h
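If one of these PVCs stays Pending, or a pod cannot mount its volume, describing the objects usually reveals the binding or mount error. For example:

kubectl -n harbor describe pvc database-data-harbor-harbor-database-0
kubectl describe pv database-data-harbor-harbor-database-0
kubectl -n harbor describe pod harbor-harbor-database-0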

"if pv/pvc fails in Kubernetes, you need to check that the storage capacity and path are set correctly, and modify it to ensure that the storage is fully available."

2.3 Login problem

The default installation uses Ingress to provide the service entry point. Although the documentation says the type of the harbor-harbor-portal service can be set to NodePort or LoadBalancer, in practice logging in that way with the account admin and password Harbor12345 always results in the prompt "incorrect user name or password", making it impossible to log in to the system normally.

For problem reports and solutions, see:

Invalid user name or password, https://github.com/goharbor/harbor-helm/issues/75

I use Ingress (an Ingress controller needs to be installed in advance) to expose the services, and the Ingress rules also need to be modified.

To install the Nginx Ingress controller, execute:

For scripts and configurations, see: github.com/openthings/kubernetes-tools/ingress

helm install ./nginx-ingress --name nginx-ingress \
    --namespace ingress \
    --set controller.stats.enabled=true

In the Kubernetes Dashboard, open "Discovery and Load Balancing", select the Harbor rule under "Ingresses", and click "Edit".
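If you prefer the command line to the Dashboard, the same rule can be edited with kubectl; the Ingress name below is the one created in my deployment, so list the Ingresses first to confirm yours:

kubectl get ingress -n harbor
kubectl edit ingress harbor-harbor-ingress -n harbor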

Change the host to your own domain name, as follows:

"rules": [{"host": "localhost", "http": {"paths": [{"path": "/", "backend": {"serviceName": "harbor-harbor-portal", "servicePort": 80} {"path": "/ api/", "backend": {"serviceName": "harbor-harbor-core", "servicePort": 80}}, {"path": "/ service/" "backend": {"serviceName": "harbor-harbor-core", "servicePort": 80}}, {"path": "/ v2 /", "backend": {"serviceName": "harbor-harbor-core" "servicePort": 80}}, {"path": "/ chartrepo/", "backend": {"serviceName": "harbor-harbor-core", "servicePort": 80} {"path": "/ c /", "backend": {"serviceName": "harbor-harbor-core", "servicePort": 80}}]}}

Note:

The host above is set to localhost, so the portal can only be accessed from that machine and not from other machines.

You can set it to an external domain name instead, or add a host name (or use the IP address directly) to /etc/hosts on the client machines to provide external access.

The IP address is that of the Ingress node.
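For example, a hypothetical /etc/hosts entry on a client machine, mapping the chart's default domain to the Ingress node's IP (both values are placeholders for your environment):

192.168.1.10   core.harbor.domain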

3. Docker client access

First, in the Harbor management interface go to "System Management" > "Configuration Management" > "System Settings", choose "Registry Root Certificate" > "Download", and save the downloaded file on each client that needs to access the Harbor service. Then complete the Docker certificate configuration and log in to the Harbor server, after which you can push and pull your own images.

On the client:

Copy ca.crt into /etc/docker/certs.d/yourdomain.com on the Docker client, where yourdomain.com is the registry server's domain name (or IP address). For example:

# Get the ca.crt file. If the target directory does not exist, create it manually first.
sudo scp user@192.168.1.8:~/docker/ca.crt /etc/docker/certs.d/192.168.1.8/

Restart Docker.

sudo systemctl restart docker

Tag the container image with the Harbor registry name using docker tag. For example:

docker tag goharbor/harbor-portal:dev core.harbor.domain/library/harbor-portal:dev

Log in to the Harbor server from the command line (use the account admin/Harbor12345, or an account you created):

docker login core.harbor.domain

Push the image to the Harbor server:

docker push core.harbor.domain/library/harbor-portal:dev

For detailed configuration of HTTPS services and clients, please refer to: https://my.oschina.net/u/2306127/blog/785281

Thank you for reading. The above covers how to quickly deploy Harbor to a Kubernetes cluster. This article should have given you a deeper understanding of the process; the details still need to be verified in practice.
