Containers, images, and repositories are known as the three basic components of the container world. If you run K8S, you cannot avoid building an image repository, and I don't think the necessity of a private image repository needs restating here. Today's article walks through the full process of deploying a private Harbor image repository in K8S in an experimental environment.
Does K8S have to use Harbor as its image repository? Of course not, but once you compare the alternatives you will find that Harbor has become almost the only serious choice, just as K8S has become the de facto standard for container orchestration with no real second option.
That is also why the author took great pains to deploy it successfully and write this article for readers.
Enough preamble; back to business. First, the experimental environment:
1. CentOS 7 minimal
2. Single-node K8S 1.15.5, master only (because 1.16 introduced large breaking changes, the latest 1.15.x release is used)
3. Helm 2.15
4. Harbor
Helm deployment
I. Helm client installation
Helm can be installed in many ways; a binary-based installation is used here. For other installation methods, please refer to Helm's official documentation.
Method 1: use the official script to install with one click
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
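Method 2: the article only demonstrates the script; as a hedged sketch of the binary installation mentioned above (assuming the v2.15.0 Linux amd64 release published on the official get.helm.sh host):
# Download the client release tarball; the version matches the Helm 2.15 used in this article.
curl -LO https://get.helm.sh/helm-v2.15.0-linux-amd64.tar.gz
tar zxvf helm-v2.15.0-linux-amd64.tar.gz
# Put the client binary on the PATH and confirm the client version.
mv linux-amd64/helm /usr/local/bin/helm
helm version --client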
II. Install Tiller, the Helm server
Note: first install the socat package (yum install -y socat) on every node in the K8S cluster, otherwise the following error will be reported:
Error forwarding port 44134 to pod dc6da4ab99ad9c497c0cef1776b9dd18e0a612d507e2746ed63d36ef40f30174, uid: unable to do port forwarding: socat not found.
Error: cannot connect to Tiller
CentOS 7 installs socat by default, so I skip this step here; please confirm it is present on your nodes.
Tiller is deployed in the Kubernetes cluster as a Deployment, and the installation can be completed with the following command:
helm init
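The default stable chart repository used by helm init can be unreachable from some networks; if the command hangs at that step, a hedged workaround is to point it at a mirror. The mirror URL below is an assumption, substitute any mirror you trust:
# Initialize Tiller while overriding the default stable chart repository.
helm init --stable-repo-url http://mirror.azure.cn/kubernetes/charts/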
III. Authorize Tiller
Because Tiller, the Helm server, runs as a Deployment in the kube-system namespace in Kubernetes, it connects to the API server to create and delete applications in the cluster.
Since Kubernetes 1.6, the API server has had RBAC authorization enabled. The current Tiller deployment does not define an authorized ServiceAccount by default, which results in its requests to the API server being denied. So we need to explicitly grant authorization to the Tiller deployment.
Create a Kubernetes ServiceAccount for Tiller and bind it to a cluster role:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
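For readers who prefer to keep RBAC objects in version control, here is a hedged declarative equivalent of the two commands above (the object names match the commands; everything else is standard Kubernetes YAML):
# Apply the same ServiceAccount and ClusterRoleBinding from an inline manifest.
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-cluster-rule
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
EOF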
Update the API object with kubectl patch:
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
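As a hedged alternative to patching after the fact: if Tiller is not yet installed, you can create the ServiceAccount and binding first and let helm init wire the account up in one step (a real Helm 2 flag, shown here as an option rather than the article's method):
# Install Tiller already bound to the tiller ServiceAccount.
helm init --service-account tiller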
Check whether the authorization succeeded:
kubectl get deploy --namespace kube-system tiller-deploy --output yaml | grep serviceAccount
      serviceAccount: tiller
      serviceAccountName: tiller
Verify that Tiller is installed successfully:
kubectl -n kube-system get pods | grep tiller
tiller-deploy-6d68f5c78f-nql2z   1/1   Running   0   5m
helm version
Client: &version.Version{SemVer:"v2.15.0", GitCommit:"c2440264ca6c078a06e088a838b0476d2fc14750", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.15.0", GitCommit:"c2440264ca6c078a06e088a838b0476d2fc14750", GitTreeState:"clean"}
Harbor installation
For details, you can take a look at the official introduction: https://github.com/goharbor/harbor-helm
Add a helm repository:
helm repo add harbor https://helm.goharbor.io
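After adding the repository, refreshing the local chart index ensures the search in the next step sees it (a standard helm command):
# Refresh the locally cached chart indexes for all configured repositories.
helm repo update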
The official tutorial assumes you are already an expert (to which I silently send my regards), so the basic steps are spelled out here:
First, search for the harbor chart project:
helm search harbor
Second, download it locally to make it easy to modify values.yaml:
helm fetch harbor/harbor
Extract the downloaded chart package, enter the extracted directory, and modify the values.yaml file:
tar zxvf harbor-1.2.1.tgz
cd harbor
vim values.yaml
You can refer to the official introduction when modifying parameters, but for beginners, leave everything at its default except data persistence; once you are more familiar, tune the parameters one by one:
Change every storageClass entry in values.yaml to storageClass: "nfs"; I deployed this nfs storage class in advance.
If you missed that step, you can go back to my earlier tutorial "A First Look at Kubernetes Dynamic Volume Storage (NFS)" and set it up: https://blog.51cto.com/kingda/2440315
Of course, you can make this change to the file directly with a single command:
sed -i 's#storageClass: ""#storageClass: "nfs"#g' values.yaml
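A quick way to confirm the substitution took effect before installing; every matching line should now read "nfs":
# List every storageClass line in values.yaml with its line number.
grep -n 'storageClass' values.yaml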
Leave everything else at the default, and then start the installation:
helm install --name harbor-v1 . --wait --timeout 1500 --debug --namespace harbor
Since automatic creation of the PVs and PVCs may not be as fast as you expect, many pods will report errors at first; be patient and let them restart a few times on their own.
The installation command above may appear to hang for a long time; helm waits until all pods report a healthy status before it finishes, so be patient.
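While waiting, you can follow the progress from another terminal; these are standard kubectl commands against the harbor namespace used in the install above:
# Watch the harbor pods come up; early errors and restarts are expected
# while the PVCs are still being provisioned.
kubectl -n harbor get pods -w
# Confirm the PVCs were bound by the nfs storage class.
kubectl -n harbor get pvc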
Because we installed with the default settings, helm exposes the harbor service through an Ingress by default. If you have not installed an ingress controller in advance, harbor itself still runs normally, but you cannot access it.
So, here's how to install the ingress controller:
Rather than retracing the official K8S sources, the one-click installation manifest is pasted directly here:
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  # replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
Save the manifest above to a file and install it with kubectl.
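For example, assuming you saved it as mandatory.yaml (the filename is only a convention):
# Apply the ingress controller manifest.
kubectl apply -f mandatory.yaml
# Confirm the controller DaemonSet pods are running on the nodes.
kubectl -n ingress-nginx get pods -o wide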
If you have resolved the default ingress access domain name to any node in the K8S cluster, you can then log in directly with the default account and password.
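For a quick test, a hedged sketch: core.harbor.domain and admin / Harbor12345 are the harbor chart's defaults (confirm them in your values.yaml), and the node IP below is a placeholder for one of your own nodes:
# Resolve the chart's default external hostname to a K8S node.
echo "192.168.1.100 core.harbor.domain" >> /etc/hosts
# Log in with the chart's default admin credentials.
# Note: with the default self-signed certificate, docker login also requires
# trusting Harbor's CA or adding the host to Docker's insecure-registries.
docker login core.harbor.domain -u admin -p Harbor12345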