2025-04-06 Update From: SLTechnology News & Howtos > Servers
Shulou (Shulou.com) 06/02 Report
Related content:
Kubernetes deployment (1): architecture and function description
Kubernetes deployment (2): initialization of the system environment
Kubernetes deployment (3): making the CA certificates
Kubernetes deployment (4): etcd cluster deployment
Kubernetes deployment (5): HAProxy and Keepalived deployment
Kubernetes deployment (6): master node deployment
Kubernetes deployment (7): node deployment
Kubernetes deployment (8): Flannel network deployment
Kubernetes deployment (9): CoreDNS, Dashboard, and Ingress deployment
Kubernetes deployment (10): storage, GlusterFS and Heketi deployment
Kubernetes deployment (11): Helm and Rancher deployment
Kubernetes deployment (12): deploying the Harbor enterprise image registry with Helm
Helm deployment
Official helm download page: https://github.com/helm/helm/releases
Official chart list: https://hub.kubeapps.com
All of the software and configuration files are saved in the Baidu netdisk share mentioned in the previous articles.
Introduction to helm
Helm is a tool that simplifies installing and managing Kubernetes applications; think of it as apt/yum/homebrew for Kubernetes.
Helm has two parts: a client (helm) and a server (Tiller).
Tiller runs inside your Kubernetes cluster and manages releases (installations) of charts.
Helm can be run on your laptop or anywhere.
A chart is a Helm package that contains at least two things: a description of the package (Chart.yaml) and one or more templates containing Kubernetes manifest files. Charts can be stored on disk or fetched from a remote chart repository (analogous to Debian or RedHat packages).

Core terms:
Chart: a Helm package.
Repository: a charts repository, served over HTTP/HTTPS.
Release: an instance of a specific chart deployed on the target cluster.

Program architecture:
helm: the client; manages the local chart repository and charts, and interacts with the Tiller server (sends charts, installs instances, queries, uninstalls, etc.).
Tiller: the server; receives charts and config from helm and merges them to generate a release.

Deploying helm
helm can be deployed on any machine, not necessarily a Kubernetes node, but kubectl must be installed, i.e. a kube config file must exist in the user's home directory, because helm needs to communicate with the apiserver.

[root@node-01 ~]# ll .kube/
total 12
drwxr-xr-x 3 root root   23 Dec 25 11:28 cache
-rw------- 1 root root 6264 Dec 25 16:15 config
drwxr-xr-x 3 root root 4096 Jan  2 15:09 http-cache

Start deploying:

[root@node-01 k8s]# wget https://storage.googleapis.com/kubernetes-helm/helm-v2.12.1-linux-amd64.tar.gz
[root@node-01 k8s]# tar zxf helm-v2.12.1-linux-amd64.tar.gz
[root@node-01 k8s]# cd linux-amd64/
[root@node-01 linux-amd64]# mv helm /usr/bin/
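To make the chart structure described above concrete, here is a minimal sketch of creating a chart by hand. The chart name demo-app and its contents are illustrative, not from this series:

```shell
# Create a minimal Helm chart layout by hand (hypothetical chart "demo-app").
mkdir -p demo-app/templates

# Chart.yaml: the required package description.
cat > demo-app/Chart.yaml <<'EOF'
apiVersion: v1
name: demo-app
version: 0.1.0
description: A minimal example chart
EOF

# One template containing a Kubernetes manifest; {{ .Release.Name }}
# is filled in by Tiller when the chart is installed as a release.
cat > demo-app/templates/configmap.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  greeting: hello
EOF

ls -R demo-app
```

With helm installed, running helm install ./demo-app would deploy this chart as a release.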
Because the tiller image cannot be pulled directly from Google's registry, download the image archive tiller-image-v2.12.1.tar.gz from the netdisk share mentioned above, and then load the image on every node:
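The steps below load the image on a single node; assuming the other nodes are reachable over SSH (the hostnames here are illustrative), the copy-and-load steps can be scripted. This sketch only prints the commands as a dry run; remove the echo wrappers to actually run them:

```shell
# Sketch: distribute and load the Tiller image tarball on every node.
# Hostnames are assumptions for illustration; replace with your own.
NODES="node-04 node-05 node-06"
IMAGE_TAR="tiller-image-v2.12.1.tar.gz"

# Dry run: write the commands that would be executed to a file.
for node in $NODES; do
  echo "scp $IMAGE_TAR $node:/tmp/"
  echo "ssh $node 'docker load < /tmp/$IMAGE_TAR'"
done > load-image-cmds.txt

cat load-image-cmds.txt
```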
[root@node-04 ~]# docker load < tiller-image-v2.12.1.tar.gz

Create the RBAC configuration (rbac-config.yaml):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

[root@node-01 helm]# kubectl create -f rbac-config.yaml
[root@node-01 helm]# helm init --service-account tiller
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

Verify that Tiller is running and that the client and server versions match:

[root@node-01 helm]# kubectl -n kube-system get pod | grep tiller
tiller-deploy-85744d9bfb-cm5jz   1/1   Running   0   11m
[root@node-01 helm]# helm version
Client: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}

Common helm commands:
Release management: install, delete, upgrade/rollback, list, history (release history), status (get release status information).
Chart management: create, fetch, get, inspect, package, verify.

At this point helm is fully deployed. Below we will use helm to install a Kubernetes management platform, which also demonstrates how helm is used.
Rancher deployment

Introduction to Rancher
Rancher is an enterprise-grade multi-cluster Kubernetes management platform. From Rancher, users can provision and manage Kubernetes services hosted on public clouds (such as GKE, EKS, AKS, Alibaba Cloud, Huawei Cloud, etc.), and can also import existing clusters into Rancher. For all Kubernetes clusters and services, users can perform centralized authentication in Rancher (including GitHub, AD/LDAP, SAML, etc.).

Add the chart repository
The official helm repository does not carry a Rancher chart, so we need to add the official Rancher chart repository.

[root@node-01 helm]# helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
"rancher-stable" has been added to your repositories
[root@node-01 helm]# helm search rancher-stable/rancher
NAME                     CHART VERSION   APP VERSION   DESCRIPTION
rancher-stable/rancher   2018.12.4       v2.1.4        Install Rancher Server to manage Kubernetes clusters acro...

Install cert-manager
After a successful installation, all created resources are listed in detail.

[root@node-01 helm]# helm install stable/cert-manager --name cert-manager --namespace kube-system
NAME: cert-manager
LAST DEPLOYED: Thu Jan  3 15:35:22 2019
NAMESPACE: kube-system
STATUS: DEPLOYED
RESOURCES:
==> v1/ServiceAccount
NAME           SECRETS   AGE
cert-manager   1         1s
==> v1beta1/ClusterRole
NAME           AGE
cert-manager   1s
==> v1beta1/ClusterRoleBinding
NAME           AGE
cert-manager   1s
==> v1beta1/Deployment
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
cert-manager   1         1         1            0           1s
==> v1/Pod (related)
NAME                            READY   STATUS              RESTARTS   AGE
cert-manager-7d4bfc44ff-5flvg   0/1     ContainerCreating   0          0s
NOTES:
cert-manager has been deployed successfully!
In order to begin issuing certificates, you will need to set up a ClusterIssuer or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).
More information on the different types of issuers and how to configure them can be found in our documentation: https://cert-manager.readthedocs.io/en/latest/reference/issuers.html
For information on how to configure cert-manager to automatically provision Certificates for Ingress resources, take a look at the `ingress-shim` documentation: https://cert-manager.readthedocs.io/en/latest/reference/ingress-shim.html

Install the Rancher server
[root@node-01 helm]# helm install rancher-stable/rancher --name rancher --namespace cattle-system --set hostname=rancher.cnlinux.club
By default, Rancher automatically generates a CA root certificate and uses cert-manager to issue the certificate. This is why hostname=rancher.cnlinux.club is set above; the UI can later only be accessed through this domain name.
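Since the UI is only reachable via the domain name, a workstation without internal DNS needs a hosts entry for it. A minimal sketch, using the example IP from this article and writing to a demo file instead of the real /etc/hosts:

```shell
# Append a hosts entry so a browser can resolve the Rancher hostname.
# Writing to a scratch file here; on a real workstation the target is /etc/hosts.
HOSTS_FILE="./hosts.example"   # assumption: demo file, not the real /etc/hosts
echo "10.31.90.200 rancher.cnlinux.club" >> "$HOSTS_FILE"
cat "$HOSTS_FILE"
```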
Add host aliases for the agent pods (optional)
If you do not have an internal DNS server and instead specify the Rancher server domain name through /etc/hosts entries, then no matter how you create the K8s cluster (custom, import, host driver, etc.), once the cluster is running the cattle-cluster-agent and cattle-node-agent pods cannot resolve the Rancher server through DNS records and therefore cannot communicate with it.
Adding host aliases (/etc/hosts entries) to the cattle-cluster-agent and cattle-node-agent pods lets them communicate normally, as long as the IP addresses are reachable.
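The patch commands below add a standard hostAliases entry to the agent pod templates. The resulting fragment of the patched pod template looks like this (the domain and IP are the example values used throughout this article):

```yaml
# Fragment of the agent pod template after patching: a static
# /etc/hosts entry mapping the Rancher hostname to its IP.
spec:
  template:
    spec:
      hostAliases:
        - ip: "10.31.90.200"
          hostnames:
            - "rancher.cnlinux.club"
```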
Note: replace the domain name and IP in the following commands with your own.
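The inline JSON in the patch commands is easy to mistype. One option, sketched here with an illustrative file name, is to build the patch in a file and validate it before handing it to kubectl:

```shell
# Write the hostAliases patch to a file so it can be validated and reused.
DOMAIN="rancher.cnlinux.club"
IP="10.31.90.200"

cat > host-alias-patch.json <<EOF
{"spec":{"template":{"spec":{"hostAliases":[{"hostnames":["$DOMAIN"],"ip":"$IP"}]}}}}
EOF

# Fail early if the JSON is malformed.
python3 -m json.tool host-alias-patch.json > /dev/null && echo "patch OK"

# Then apply it to both agents (requires a running cluster):
#   kubectl -n cattle-system patch deployments cattle-cluster-agent --patch "$(cat host-alias-patch.json)"
#   kubectl -n cattle-system patch daemonsets  cattle-node-agent    --patch "$(cat host-alias-patch.json)"
```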
Patch the cattle-cluster-agent pod:

[root@node-01 helm]# kubectl -n cattle-system patch deployments cattle-cluster-agent --patch '{"spec":{"template":{"spec":{"hostAliases":[{"hostnames":["rancher.cnlinux.club"],"ip":"10.31.90.200"}]}}}}'

Patch the cattle-node-agent pods:

[root@node-01 helm]# kubectl -n cattle-system patch daemonsets cattle-node-agent --patch '{"spec":{"template":{"spec":{"hostAliases":[{"hostnames":["rancher.cnlinux.club"],"ip":"10.31.90.200"}]}}}}'

Access Rancher
Open https://rancher.cnlinux.club/ in a browser; when the following page appears, set the admin password.
If the following interface then appears, Rancher is running properly.
At this point, you can manage pods, ingresses, services, and other resources through Rancher.
Rancher can also create new K8s clusters. To manage an existing K8s cluster, choose to import it, as shown in the figure below.
All the Kubernetes-related documents will be updated one after another. If you found this useful, please follow along. Thank you very much!