Building a Harbor private registry
For this step, start a new virtual machine: CentOS 7-2, 192.168.18.134 (the network card can be set to a static IP).
Deploy the Docker engine:

```bash
[root@harbor ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@harbor ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@harbor ~]# yum install -y docker-ce
[root@harbor ~]# systemctl stop firewalld.service
[root@harbor ~]# setenforce 0
[root@harbor ~]# systemctl start docker.service
[root@harbor ~]# systemctl enable docker.service
```

Check whether the relevant processes are running:

```bash
[root@harbor ~]# ps aux | grep docker
root   4913  0.8  3.6 565612 68884 ?     Ssl  12:23  0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root   5095  0.0  0.0 112676   984 pts/1 R+   12:23  0:00 grep --color=auto docker
```

Configure the image acceleration service and enable IP forwarding:

```bash
[root@harbor ~]# tee /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://w1ogxqvl.mirror.aliyuncs.com"]
}
EOF
[root@harbor ~]# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
[root@harbor ~]# sysctl -p
[root@harbor ~]# service network restart
Restarting network (via systemctl):  [ OK ]
[root@harbor ~]# systemctl restart docker
```

Install docker-compose and unpack the Harbor offline installer:

```bash
[root@harbor ~]# mkdir /aaa
[root@harbor ~]# mount.cifs //192.168.0.105/rpm /aaa
Password for root@//192.168.0.105/rpm:
[root@harbor ~]# cd /aaa/docker/
[root@harbor docker]# cp docker-compose /usr/local/bin/
[root@harbor docker]# cd /usr/local/bin/
[root@harbor bin]# ls
docker-compose
[root@harbor bin]# docker-compose -v
docker-compose version 1.21.1, build 5a3f1a3
[root@harbor bin]# cd /aaa/docker/
[root@harbor docker]# tar zxvf harbor-offline-installer-v1.2.2.tgz -C /usr/local/
[root@harbor docker]# cd /usr/local/harbor/
[root@harbor harbor]# ls
common                    docker-compose.yml     harbor.v1.2.2.tar.gz  NOTICE
docker-compose.clair.yml  harbor_1_1_0_template  install.sh            prepare
docker-compose.notary.yml harbor.cfg             LICENSE               upgrade
```

Configure the Harbor parameter file:

```bash
[root@harbor harbor]# vim harbor.cfg
5  hostname = 192.168.18.134             # line 5: change this to your local IP address
59 harbor_admin_password = Harbor12345   # line 59: the default account's password; don't forget it when logging in
# press Esc to exit insert mode after modifying, then type :wq to save and exit
[root@harbor harbor]# ./install.sh
...   # multiple lines of output omitted here
Creating harbor-log ... done
Creating harbor-adminserver ... done
Creating harbor-db ... done
Creating registry ... done
Creating harbor-ui ... done
Creating nginx ... done
Creating harbor-jobservice ... done
✔ ----Harbor has been installed and started successfully.----
Now you should be able to visit the admin portal at http://192.168.18.134.
For more details, please visit https://github.com/vmware/harbor.
```
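Once the installer finishes, Harbor runs as a set of containers managed by docker-compose. A quick sanity check, run from the harbor directory, is a minimal sketch (container names may vary slightly by Harbor version):

```bash
# From the Harbor install directory, list the state of every Harbor
# container; each one should show "Up" in the State column.
cd /usr/local/harbor
docker-compose ps
```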
Step 1: log in to the Harbor private registry
Enter 192.168.18.134 in the address bar of the host's browser, enter the default account admin and password Harbor12345, and click Log In.
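Before opening the browser, you can confirm the portal answers over HTTP; a minimal sketch (the UI is served by Harbor's nginx container on port 80):

```bash
# Expect an HTTP 200 (or a redirect code) from the Harbor portal.
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.18.134/
```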
Step 2: create a new project and make it private
Click "+ Project" in the project interface to add a new project, enter the project name, click create, and then click the three dots on the left side of the new project to make the project private.
Configure the two node servers to connect to the private registry (note the comma that must be added at the end of the existing line; a safer way to make this edit is sketched after this step).

node2:

```bash
[root@node2 ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://w1ogxqvl.mirror.aliyuncs.com"],   # add the comma at the end
  "insecure-registries": ["192.168.18.134"]                       # add this line
}
[root@node2 ~]# systemctl restart docker
```

node1:

```bash
[root@node1 ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://w1ogxqvl.mirror.aliyuncs.com"],   # add the comma at the end
  "insecure-registries": ["192.168.18.134"]                       # add this line
}
[root@node1 ~]# systemctl restart docker
```

Step 3: log in to the Harbor private registry on the node

node2:

```bash
[root@node2 ~]# docker login 192.168.18.134
Username: admin    # enter the account admin
Password:          # enter the password Harbor12345
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded    # successfully logged in
```

Download the tomcat image, tag it, and push it:

```bash
[root@node2 ~]# docker pull tomcat
...   # multiple lines omitted here
Status: Downloaded newer image for tomcat:latest
docker.io/library/tomcat:latest
[root@node2 ~]# docker images
REPOSITORY   TAG      IMAGE ID       CREATED      SIZE
tomcat       latest   aeea3708743f   3 days ago   529MB
[root@node2 ~]# docker tag tomcat 192.168.18.134/project/tomcat   # tag the image
[root@node2 ~]# docker push 192.168.18.134/project/tomcat         # upload the image
```

You can now see the pushed tomcat image in the web interface of the Harbor private registry.
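Hand-editing daemon.json is exactly where the missed-comma mistake warned about above happens: one JSON syntax error and the Docker daemon refuses to start. A minimal sketch of the same edit done programmatically, assuming jq is installed on the node:

```bash
# Merge the insecure-registries key into the existing daemon.json
# (preserving the registry-mirrors entry), then restart Docker.
jq '. + {"insecure-registries": ["192.168.18.134"]}' /etc/docker/daemon.json > /tmp/daemon.json \
  && mv /tmp/daemon.json /etc/docker/daemon.json \
  && systemctl restart docker
```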
Problem: if we try to pull the tomcat image from the private registry on the other node, node1, an error occurs indicating the request is rejected (that is, a login is required):

```bash
[root@node1 ~]# docker pull 192.168.18.134/project/tomcat
Using default tag: latest
Error response from daemon: pull access denied for 192.168.18.134/project/tomcat,
repository does not exist or may require 'docker login': denied: requested access
to the resource is denied    # error: missing registry credentials
```

Download the tomcat image on node1:

```bash
[root@node1 ~]# docker pull tomcat:8.0.52
[root@node1 ~]# docker images
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
tomcat       8.0.52   b4b762737ed4   19 months ago   356MB
```

Step 4: operate on master1

```bash
[root@master1 demo]# vim tomcat01.yaml
```

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-tomcat
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-tomcat
    spec:
      containers:
      - name: my-tomcat
        image: docker.io/tomcat:8.0.52
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-tomcat
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: my-tomcat
```

Create the resources:

```bash
[root@master1 demo]# kubectl create -f tomcat01.yaml
deployment.extensions/my-tomcat created
service/my-tomcat created
```

View the resources:

```bash
[root@master1 demo]# kubectl get pods,deploy,svc
NAME                                    READY   STATUS    RESTARTS   AGE
pod/my-nginx-d55b94fd-kc2gl             1/1     Running   1          2d
pod/my-nginx-d55b94fd-tkr42             1/1     Running   1          2d
pod/my-tomcat-57667b9d9-8bkns           1/1     Running   0          84s
pod/my-tomcat-57667b9d9-kcddv           1/1     Running   0          84s
pod/mypod                               1/1     Running   1          8h
pod/nginx-6c94d899fd-8pf48              1/1     Running   1          3d
pod/nginx-deployment-5477945587-f5dsm   1/1     Running   1          2d23h
pod/nginx-deployment-5477945587-hmgd2   1/1     Running   1          2d23h
pod/nginx-deployment-5477945587-pl2hn   1/1     Running   1          2d23h
NAME                                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/my-nginx           2         2         2            2           2d
deployment.extensions/my-tomcat          2         2         2            2           84s
deployment.extensions/nginx              1         1         1            1           8d
deployment.extensions/nginx-deployment   3         3         3            3           2d23h
NAME                       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
service/kubernetes         ClusterIP   10.0.0.1     <none>        443/TCP          10d
service/my-nginx-service   NodePort    10.0.0.210   <none>        80:40377/TCP     2d
service/my-tomcat          NodePort    10.0.0.86    <none>        8080:41860/TCP   84s
service/nginx-service      NodePort    10.0.0.242   <none>        80:40422/TCP     3d10h
# internal port 8080, external port 41860
[root@master1 demo]# kubectl get ep
NAME               ENDPOINTS                                 AGE
kubernetes         192.168.18.128:6443,192.168.18.132:6443   10d
my-nginx-service   172.17.32.4:80,172.17.40.3:80             2d
my-tomcat          172.17.32.6:8080,172.17.40.6:8080         5m29s
nginx-service      172.17.40.5:80                            3d10h
```

At this point my-tomcat has been scheduled onto the two nodes. To verify, enter 192.168.18.148:41860 and 192.168.18.145:41860 (the node addresses plus the exposed port number) in the host browser and check that both reach tomcat's home page.
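Instead of clicking through a browser, the NodePort can be checked from any shell; a minimal sketch using the node IPs and the exposed port from the output above:

```bash
# Expect HTTP 200 from tomcat's home page on both nodes.
for ip in 192.168.18.148 192.168.18.145; do
  curl -s -o /dev/null -w "$ip -> %{http_code}\n" http://$ip:41860/
done
```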
After verifying that access succeeds, delete these resources; later we will recreate them from the image in the private registry:

```bash
[root@master1 demo]# kubectl delete -f tomcat01.yaml
deployment.extensions "my-tomcat" deleted
service "my-tomcat" deleted
```

If you encounter resources stuck in the Terminating status that cannot be deleted:

```bash
[root@localhost demo]# kubectl get pods
NAME                        READY   STATUS        RESTARTS   AGE
my-tomcat-57667b9d9-8bkns   1/1     Terminating   0          84s
my-tomcat-57667b9d9-kcddv   1/1     Terminating   0          84s
```

In this case, you can use the force-delete command. Format: kubectl delete pod [pod name] --force --grace-period=0 -n [namespace]

```bash
[root@localhost demo]# kubectl delete pod my-tomcat-57667b9d9-8bkns --force --grace-period=0 -n default
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "my-tomcat-57667b9d9-8bkns" force deleted
[root@localhost demo]# kubectl delete pod my-tomcat-57667b9d9-kcddv --force --grace-period=0 -n default
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "my-tomcat-57667b9d9-kcddv" force deleted
[root@localhost demo]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
mypod                               1/1     Running   1          8h
nginx-6c94d899fd-8pf48              1/1     Running   1          3d
nginx-deployment-5477945587-f5dsm   1/1     Running   1          2d23h
nginx-deployment-5477945587-hmgd2   1/1     Running   1          2d23h
nginx-deployment-5477945587-pl2hn   1/1     Running   1          2d23h
```
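When several pods are stuck at once, deleting them one by one gets tedious. A minimal sketch that force-deletes every Terminating pod in a namespace (use it with care, for the same reason the warning above gives):

```bash
# List pods in the default namespace, keep only those whose STATUS
# column reads Terminating, and force-delete each of them.
kubectl get pods -n default --no-headers \
  | awk '$3 == "Terminating" {print $1}' \
  | xargs -r -n1 kubectl delete pod --force --grace-period=0 -n default
```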
Step 5: operate on node2 (the node that previously logged in to the Harbor registry)
First, we need to delete the project/tomcat image that we uploaded to the private registry.
The previously tagged image on node2 also needs to be deleted:

```bash
[root@node2 ~]# docker images
REPOSITORY                      TAG      IMAGE ID       CREATED      SIZE
192.168.18.134/project/tomcat   latest   aeea3708743f   3 days ago   529MB
[root@node2 ~]# docker rmi 192.168.18.134/project/tomcat
Untagged: 192.168.18.134/project/tomcat:latest
Untagged: 192.168.18.134/project/tomcat@sha256:8ffa1b72bf611ac305523ed5bd6329afd051c7211fbe5f0b5c46ea5fb1adba46
```

Tag the image:

```bash
[root@node2 ~]# docker tag tomcat:8.0.52 192.168.18.134/project/tomcat
```

Upload the image to Harbor:

```bash
[root@node2 ~]# docker push 192.168.18.134/project/tomcat
# the newly uploaded image is now visible in the private registry
```

View the login credentials:

```bash
[root@node2 ~]# cat .docker/config.json
{
  "auths": {
    "192.168.18.134": {                    # the registry address accessed
      "auth": "YWRtaW46SGFyYm9yMTIzNDU="   # the credential
    }
  },
  "HttpHeaders": {                         # header information
    "User-Agent": "Docker-Client/19.03.5 (linux)"
  }
}
```

Generate the credential as base64 without a trailing newline:

```bash
[root@node2 ~]# cat .docker/config.json | base64 -w 0
ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjE4LjEzNCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZTR0Z5WW05eU1USXpORFU9IgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOS4wMy41IChsaW51eCkiCgl9Cn0=
```
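The auth field is simply base64 of username:password, which makes it easy to double-check that the credential embedded in config.json belongs to the expected account; a minimal sketch:

```bash
# Decode the auth value from config.json; it should print
# admin:Harbor12345, the account configured in harbor.cfg.
echo "YWRtaW46SGFyYm9yMTIzNDU=" | base64 -d
```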
Special note: the image's download count in the private registry is currently 0. Once we create resources from the image in the private registry, the pull will download it again and this value will change.
Step 6: create a yaml file for the secret on master1

```bash
[root@master1 demo]# vim registry-pull-secret.yaml
```

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry-pull-secret
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjE4LjEzNCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZTR0Z5WW05eU1USXpORFU9IgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOS4wMy41IChsaW51eCkiCgl9Cn0=
type: kubernetes.io/dockerconfigjson
```

Create the secret resource:

```bash
[root@master1 demo]# kubectl create -f registry-pull-secret.yaml
secret/registry-pull-secret created
```

View the secret resource:

```bash
[root@master1 demo]# kubectl get secret
NAME                   TYPE                                  DATA   AGE
default-token-pbr9p    kubernetes.io/service-account-token   3      10d
registry-pull-secret   kubernetes.io/dockerconfigjson        1      25s
```

Point the deployment at the private registry:

```bash
[root@master1 demo]# vim tomcat01.yaml
```

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-tomcat
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-tomcat
    spec:
      imagePullSecrets:                        # the credential used to pull the image
      - name: registry-pull-secret             # the secret's name
      containers:
      - name: my-tomcat
        image: 192.168.18.134/project/tomcat   # change the image to the private registry location
        ports:
        - containerPort: 80
# the following lines are unchanged; press Esc to exit insert mode, then type :wq to save and exit
```

Create the tomcat01 resources:

```bash
[root@master1 demo]# kubectl create -f tomcat01.yaml
deployment.extensions/my-tomcat created
service/my-tomcat created
[root@master1 demo]# kubectl get pods,deploy,svc,ep
NAME                                    READY   STATUS    RESTARTS   AGE
pod/my-nginx-d55b94fd-kc2gl             1/1     Running   1          2d1h
pod/my-nginx-d55b94fd-tkr42             1/1     Running   1          2d1h
pod/my-tomcat-7c5b6db486-bzjlv          1/1     Running   0          56s
pod/my-tomcat-7c5b6db486-kw8m4          1/1     Running   0          56s
pod/mypod                               1/1     Running   1          9h
pod/nginx-6c94d899fd-8pf48              1/1     Running   1          3d1h
pod/nginx-deployment-5477945587-f5dsm   1/1     Running   1          3d
pod/nginx-deployment-5477945587-hmgd2   1/1     Running   1          3d
pod/nginx-deployment-5477945587-pl2hn   1/1     Running   1          3d
NAME                                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/my-nginx           2         2         2            2           2d1h
deployment.extensions/my-tomcat          2         2         2            2           56s
deployment.extensions/nginx              1         1         1            1           8d
deployment.extensions/nginx-deployment   3         3         3            3           3d
NAME                       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
service/kubernetes         ClusterIP   10.0.0.1     <none>        443/TCP          10d
service/my-nginx-service   NodePort    10.0.0.210   <none>        80:40377/TCP     2d1h
service/my-tomcat          NodePort    10.0.0.235   <none>        8080:43654/TCP   56s
service/nginx-service      NodePort    10.0.0.242   <none>        80:40422/TCP     3d11h
# the external port is 43654
NAME                         ENDPOINTS                                 AGE
endpoints/kubernetes         192.168.18.128:6443,192.168.18.132:6443   10d
endpoints/my-nginx-service   172.17.32.4:80,172.17.40.3:80             2d1h
endpoints/my-tomcat          172.17.32.6:8080,172.17.40.6:8080         56s
endpoints/nginx-service      172.17.40.5:80                            3d11h
```

Next we need to verify that the resources loaded correctly: does the image really come from our Harbor private registry?
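One way to answer that from the cluster itself, before looking at Harbor's download counter, is to ask each pod where its image came from; a minimal sketch (the pod name is taken from the output above and will differ on each run):

```bash
# Show the image reference recorded for the new pod; it should read
# 192.168.18.134/project/tomcat rather than docker.io/library/tomcat.
kubectl describe pod my-tomcat-7c5b6db486-bzjlv | grep -i "image:"
```

As a side note, the same secret can be produced without hand-assembling base64: `kubectl create secret docker-registry registry-pull-secret --docker-server=192.168.18.134 --docker-username=admin --docker-password=Harbor12345` builds an equivalent kubernetes.io/dockerconfigjson secret (very old kubectl releases may additionally require --docker-email).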
Here we need to pay attention to the number of downloads of images in our private repository.
Result: it shows that the number of downloads has changed from 0 to 2, which means that the two resource images we created were downloaded from the private repository!
Then we use the host browser to verify that both node addresses, 192.168.18.148:43654 and 192.168.18.145:43654, can still reach tomcat's home page.
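The same check can be done from a shell; a minimal sketch that fetches the page title from each node (assuming the default Tomcat welcome page, whose title includes the Tomcat version):

```bash
# Print the <title> of the page served on each node via the new NodePort.
for ip in 192.168.18.148 192.168.18.145; do
  echo -n "$ip: "
  curl -s http://$ip:43654/ | grep -o "<title>[^<]*</title>"
done
```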