Harbor is VMware's repackaging of Docker Registry: it adds a number of extra components on top of the registry and provides a polished web interface.
Description
In the previous article, "Enterprise installation and configuration of Harbor Image Management system", we briefly covered the configuration of the stand-alone version of harbor. A single-node deployment obviously cannot meet production needs, so we have to ensure the application's high availability.
There are two mainstream solutions to this problem:
Double-master replication
Multiple harbor instances share back-end storage
Double-master replication
Master-slave synchronization
Harbor officially provides a master-slave replication solution to handle image synchronization. Through replication, we can synchronize images from a test-environment harbor repository to the production-environment harbor in near real time.
In actual production operations, images often need to be published to dozens or hundreds of cluster nodes. At that point a single Registry can no longer satisfy the download demands of so many nodes, so multiple Registry instances must be configured for load balancing. Manually maintaining images across multiple Registry instances would be tedious. Harbor supports a one-master, multi-slave publishing mode, which solves the problem of large-scale image distribution.
An image only needs to be published to one Registry, and it is synchronized to the other Registries "like a fairy scattering flowers", efficiently and reliably.
If the cluster is geographically distributed, you can also adopt hierarchical publishing, for example synchronizing from group headquarters to the provincial companies, and from each provincial company to the municipal companies.
However, master-slave synchronization alone cannot solve the single point of failure of the harbor master node.
Double-master replication description
In fact, so-called double-master replication just reuses master-slave synchronization to achieve two-way synchronization between two harbor nodes and keep their data consistent; a load balancer in front of the two harbor instances then distributes incoming requests between them. As soon as a new image appears on one instance, it is automatically replicated to the other. This gives us load balancing, avoids a single point of failure, and achieves high availability for Harbor to some extent.
One problem with this scheme is that the data in the two Harbor instances can become inconsistent. Suppose instance A goes down and a new image is pushed in the meantime; the new image then exists only on instance B. Even after the failed instance A is restored, instance B will not automatically synchronize the image to it. The only remedy is to manually disable and then re-enable the replication policy on instance B, so that its data is synchronized over and the two instances are consistent again.
In addition, I have to complain that in actual production use, this master-slave replication has proven quite unreliable.
Therefore, the scheme described below is the recommended one.
Multiple harbor instances sharing back-end storage
Description
Sharing back-end storage is a fairly standard solution: multiple Harbor instances share the same back-end storage, so an image persisted to storage by any instance can be read by all the others. Requests coming in through the LB can be routed to any instance, which achieves load balancing and avoids a single point of failure.
There are three issues to consider when this scenario is deployed in a real production environment:
Choice of shared storage: Harbor's back-end storage currently supports AWS S3, OpenStack Swift, Ceph, and so on. In our experimental environment, we simply use nfs.
Session sharing across instances: this is no longer a problem. In recent harbor releases, the session is stored in redis by default, so we only need to split redis out. Redis availability can be ensured with redis sentinel or redis cluster (a minimal sentinel sketch follows this list); in our experimental environment, a single redis is still used.
The database used by multiple harbor instances: we just need to split the database out of harbor and deploy it independently, letting all instances share one external database. The database's availability can in turn be guaranteed by the database's own high-availability scheme.
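For the redis sentinel option mentioned above, which we do not use in this experiment, a minimal sketch of a sentinel configuration might look like the following; the master name mymaster, the quorum of 2, and the timeouts are illustrative assumptions, not values from this article:

# Sketch only: one sentinel watching the single redis master from our environment.
# In practice you would run at least three sentinels plus one or more replicas.
cat > /etc/redis/sentinel.conf <<'EOF'
port 26379
sentinel monitor mymaster 192.168.198.136 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
EOF
redis-sentinel /etc/redis/sentinel.conf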
Environment description
Experimental environment:
ip               role
192.168.198.133  harbor
192.168.198.135  harbor
192.168.198.136  redis, mysql, nfs
It should be emphasized that the configuration of the load balancer itself is not included in our environment; please refer to the relevant documentation for that.
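That said, a minimal sketch of what such a load balancer could look like with nginx TLS passthrough follows; nginx itself and the stream-module approach are assumptions on my part, not part of this article's setup:

# Sketch only: append a stream block at the top level of nginx.conf (outside
# the http block); requires nginx built with the stream module.
cat >> /etc/nginx/nginx.conf <<'EOF'
stream {
    upstream harbor_backend {
        server 192.168.198.133:443;
        server 192.168.198.135:443;
    }
    server {
        listen 443;
        proxy_pass harbor_backend;
    }
}
EOF
nginx -t && nginx -s reload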
Configuration instructions
Install nfs
# install nfs
apt install nfs-kernel-server nfs-common
# edit the /etc/exports file
/data *(rw,no_root_squash)
chmod -R 777 /data
systemctl start nfs-server
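Before going further, it is worth verifying from one of the harbor nodes that the export is actually visible; this is a quick check, assuming nfs-common is also installed on the harbor nodes:

# Run on a harbor node; the output should list /data
showmount -e 192.168.198.136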
Install redis and mysql
Here we install both directly through docker; the docker-compose.yml file is as follows:
version: '3'
services:
  mysql-server:
    hostname: mysql-server
    container_name: mysql-server
    image: mysql:5.7
    network_mode: host
    volumes:
      - /mysql57/data:/var/lib/mysql
    command: --character-set-server=utf8
    environment:
      MYSQL_ROOT_PASSWORD: 123456
  redis:
    hostname: redis-server
    container_name: redis-server
    image: redis:3
    network_mode: host

Launch it:
docker-compose up -d
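To confirm that both services came up, a quick sketch using the clients that ship inside the two images (the root password 123456 comes from the compose file above):

docker-compose ps
# Should print "mysqld is alive"
docker exec mysql-server mysqladmin -uroot -p123456 ping
# Should print "PONG"
docker exec redis-server redis-cli ping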
Import the registry database
Once mysql is configured, the harbor registry database also needs to be imported into it. In "Enterprise installation and configuration of Harbor Image Management system" we installed a stand-alone harbor, which started a mysql containing the registry database; we simply export it from there and import it into the new database:
# Export the database:
docker exec -it harbor_db /bin/bash
mysqldump -uroot -p --databases registry > registry.dump
# Copy registry.dump out of the container onto the host:
docker cp harbor_db:/registry.dump ./
# Copy registry.dump from the host into the separate mysql container:
docker cp ./registry.dump mysql-server:/registry.dump
# Import the registry database inside the separate mysql container:
docker exec -it mysql-server /bin/bash
mysql -uroot -p
mysql> source /registry.dump
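To confirm the import succeeded, you can list the tables of the new registry database from outside the container (a sketch, again using the root password from the compose file):

docker exec mysql-server mysql -uroot -p123456 -e 'SHOW TABLES FROM registry;'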
Configure harbor
Mount the nfs directory
On each harbor node, mount the nfs directory:
mount -t nfs 192.168.198.136:/data /data
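Note that this mount will not survive a reboot; if you want it to persist, a sketch of the matching /etc/fstab entry (the defaults options are an assumption, adjust to taste):

echo '192.168.198.136:/data /data nfs defaults 0 0' >> /etc/fstab
# Verify the fstab entry mounts cleanly
mount -a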
Modify the harbor.cfg configuration
On each harbor node, download the harbor installation package, generate a self-signed certificate, and modify the prepare file; you can follow "Enterprise installation and configuration of Harbor Image Management system" directly. The difference is that in harbor.cfg the database and redis configuration must be changed as follows:
db_host = 192.168.198.136
db_password = 123456
db_port = 3306
db_user = root
redis_url = 192.168.198.136:6379
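Before running the installer, it is worth confirming that the harbor node can actually reach the external mysql and redis; a quick sketch with nc (assuming nc is installed on the node):

nc -zv 192.168.198.136 3306
nc -zv 192.168.198.136 6379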
Modify the docker-compose.yml configuration
Compared with the stand-alone harbor, the cluster configuration no longer needs to start mysql and redis, so docker-compose.yml has to be adjusted accordingly. In fact, the ha directory inside the harbor installation directory already provides the docker-compose.yml file we need; just copy it out. That directory also provides the keepalived configuration for using lvs as the load balancer.
cp ha/docker-compose.yml ./
./prepare
./install.sh
After the installation completes on both harbor nodes, we can verify the load-balancing effect by binding the domain name in hosts to each node in turn.
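Concretely, that check could look like the following sketch; reg.example.com is a hypothetical domain standing in for whatever name your self-signed certificate was issued for:

# Point reg.example.com at 192.168.198.133 in /etc/hosts, then:
docker login reg.example.com
docker pull busybox
docker tag busybox reg.example.com/library/busybox:test
docker push reg.example.com/library/busybox:test
# Re-point /etc/hosts at 192.168.198.135 and confirm the same image is visible:
docker pull reg.example.com/library/busybox:test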
Original article: https://www.linuxprobe.com/harbor-high-availability.html