2025-04-02 Update From: SLTechnology News&Howtos
Shulou (Shulou.com), 06/02 report
I. Environmental preparation
First, my three Ubuntu cloud VMs are configured as follows:

CPUs    Memory    Disk    OS
2       8 GB      20 GB   Ubuntu 18.04 LTS
All three machines can reach the public network. Google Cloud VMs are used here, so public network access is not a concern; if you are building the cluster locally, see my other post, "Build a Kubernetes cluster locally with kubeadm": https://blog.51cto.com/13670314/2397626.
All commands here are run as the root user.
II. Installation
1. Install Docker and kubeadm on all nodes
root@instance-ubuntu-1:~# apt-get install -y curl
root@instance-ubuntu-1:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
After kubeadm init finishes on the master, it prints a join command; run it on each worker node:
root@instance-ubuntu-2:~# kubeadm join 10.168.0.6:6443 --token k0jhnn.a7l33i18ehbl1aze --discovery-token-ca-cert-hash sha256:064420e731f201b1601bb0bb39ccfef0e581a83449a043b60036cfb4537e5c67
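Once each worker has run the join command, a quick sanity check from the master (a sketch; the node names follow the VMs above) is:

```shell
# On the master: all three nodes should appear and eventually report Ready
kubectl get nodes -o wide
```

These are cluster-side commands; the exact names in the output depend on your hostnames.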
As mentioned earlier, the Master node does not run user Pods by default. You can adjust this policy through Kubernetes's Taint/Toleration mechanism. The principle is simple: once a node is given a Taint, that is, it is "tainted", no Pod can be scheduled onto it, because Kubernetes Pods are "clean freaks". Only a Pod that declares it can "tolerate" this taint, by declaring a Toleration, can run on the node. The command to taint a node is:
root@instance-ubuntu-1:~# kubectl taint nodes instance-ubuntu-1 foo=bar:NoSchedule
At this point, a key-value Taint, foo=bar:NoSchedule, is added to the instance-ubuntu-1 node. The NoSchedule effect means the Taint only applies when scheduling new Pods; Pods already running on the node are unaffected, even if they have no matching Toleration.
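As a sketch of the other half of the mechanism, a Pod that declares a matching Toleration can still be scheduled onto the tainted node (the Pod name and image here are illustrative, not from the original post):

```shell
# Create a Pod that tolerates the foo=bar:NoSchedule taint
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: taint-demo          # illustrative name
spec:
  containers:
  - name: app
    image: nginx            # illustrative image
  tolerations:
  - key: "foo"
    operator: "Equal"
    value: "bar"
    effect: "NoSchedule"
EOF
```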
Now let's go back to the cluster we've built. If you check the Taint field of the Master node through kubectl describe, you will see something like this:
root@instance-ubuntu-1:~# kubectl describe node master
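If you want a single-node cluster where the master also runs user Pods, the usual approach for the kubeadm versions this post targets (where the taint key is node-role.kubernetes.io/master) is to delete that taint; a trailing minus sign removes a taint:

```shell
# Remove the default master taint from all nodes; the trailing "-" deletes it
kubectl taint nodes --all node-role.kubernetes.io/master-
```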
5. Deploy the Dashboard visualization plug-in
In the Kubernetes community, there is a popular Dashboard project that provides users with a visual Web interface to view various information about the current cluster. Not surprisingly, its deployment is also quite simple.
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
After the deployment is complete, we can check the status of the Pod corresponding to Dashboard:
root@instance-ubuntu-1:~# kubectl get pods -n kube-system
kubernetes-dashboard-6948bdb78-f67xk 1/1 Running 0 1m
Note that because Dashboard is a web server, people often inadvertently expose its port on the public cloud, creating a security risk. Since version 1.7, the Dashboard can by default only be accessed locally through a proxy. For details, see the Dashboard project's official documentation. If you want to access the Dashboard from outside the cluster, you need an Ingress.
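The local proxy access mentioned above can be sketched as follows; the service URL assumes the default recommended.yaml deployment into kube-system:

```shell
# Start a local proxy to the API server (listens on 127.0.0.1:8001 by default)
kubectl proxy
# Then open, in a browser on the same machine:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
```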
6. Deploy the container storage plug-in
Why deploy Rook?
Usually when we run a container, we mount a host directory as a data volume into the container's Mount Namespace, so the host and the container can share files. But if you later start the same container on another machine, it has no way to see the data written to the volumes by the container on the first machine. This is the typical characteristic of containers: they are stateless.
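The host-directory binding just described can be sketched with a hostPath volume (all names here are illustrative); data written to /data inside the container lives only on the one node the Pod happens to run on:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo        # illustrative name
spec:
  containers:
  - name: app
    image: busybox           # illustrative image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: local-data
      mountPath: /data
  volumes:
  - name: local-data
    hostPath:
      path: /var/local-data  # exists only on this node
EOF
```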
At this point, the container needs persistent storage to save its state. A storage plug-in mounts a remote data volume into the container, based on a network or other mechanism, so that files created in the container are actually saved on a remote storage server, or distributed across multiple nodes, with no binding to the current host. Then no matter on which host you start a new container, you can request that the specified persistent volume be mounted and access the contents saved in it. This is what is called persistence.
The Rook project is a Ceph-based Kubernetes storage plug-in (it will also add support for more storage implementations later). However, unlike the simple encapsulation of Ceph, Rook adds a large number of enterprise-level functions such as horizontal extension, migration, disaster backup, monitoring and so on to its implementation, making the project a complete, production-level container storage plug-in.
Deploy Ceph storage backend
kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/operator.yaml
kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/cluster.yaml
After the deployment is complete, you can see that the Rook project has placed its own Pods in two Namespaces that it manages:
root@instance-ubuntu-1:~# kubectl get pods -n rook-ceph-system
NAME                                   READY   STATUS    RESTARTS   AGE
rook-ceph-agent-7cm42                  1/1     Running   0          18s
rook-ceph-operator-78d4587c68c-7fj72   1/1     Running   0          44s
rook-discover-2ctcv                    1/1     Running   0          18s
root@instance-ubuntu-1:~# kubectl get pods -n rook-ceph
NAME                                   READY   STATUS    RESTARTS   AGE
rook-ceph-mon0-kxrgh                   1/1     Running   0          13s
rook-ceph-mon1-7dk2t                   1/1     Running   0          5s
In this way, a Rook-based persistent storage cluster is running as containers, and any Pod created in the Kubernetes cluster from now on can mount the data volumes provided by Ceph through a Persistent Volume (PV) and Persistent Volume Claim (PVC).
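As a sketch, a workload claims Ceph-backed storage through a PVC; the StorageClass name rook-ceph-block assumes the block-storage example from the Rook repository has also been applied (it is not created by the two YAMLs above):

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-claim                     # illustrative name
spec:
  storageClassName: rook-ceph-block    # assumes Rook's block StorageClass exists
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
```

A Pod then references ceph-claim under spec.volumes via persistentVolumeClaim, and the data survives the Pod moving to another node.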