First of all, you should know the two basic tools used with a K8s cluster:
kubeadm: a tool for automating the deployment of K8s clusters.
kubectl: the K8s command-line tool, which receives the commands entered by the user and sends them to the cluster.
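As a quick sketch of how the two fit together (the flags shown are illustrative; the pod network segment matches the 10.244.x.x addresses that appear later in this article):
[root@master ~]# kubeadm init --pod-network-cidr=10.244.0.0/16   // kubeadm bootstraps the control plane
[root@master ~]# kubectl get nodes                               // kubectl then sends commands to the cluster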
What is Kubernetes made of?
At the hardware level, a Kubernetes cluster consists of many nodes, which fall into two types:
Master nodes: they host the control plane, which Kubernetes uses to control and manage the whole cluster.
Worker nodes: they run the applications actually deployed by the user.
Control plane (master)
The control plane controls the cluster and makes it work. It consists of multiple components, which can run on a single master node or be replicated across multiple master nodes to ensure high availability.
The components in the master are:
Note: by default the master node does not run user workloads. You can allow it to if necessary, but this is generally not recommended: the master node controls and manages the cluster, so keeping it free of ordinary workloads is important.
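kubeadm enforces this with a taint on the master node. A minimal sketch of toggling it, assuming the node is simply named master (check kubectl get nodes for the real name):
[root@master ~]# kubectl taint nodes master node-role.kubernetes.io/master:NoSchedule-   // allow ordinary pods on the master
[root@master ~]# kubectl taint nodes master node-role.kubernetes.io/master=:NoSchedule   // restore the default behaviour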
APIserver: the front-end interface of the K8s cluster. Client tools and the other K8s components manage all of the cluster's resources through it.
Scheduler: decides which node each pod should run on. During scheduling it considers the state of the cluster's nodes, the current load on each node, and the application's high-availability and performance requirements.
Controller manager: responsible for managing the various resources of the K8s cluster and ensuring each resource stays in the state the user desires.
etcd: a distributed key-value store responsible for saving the configuration information of the K8s cluster and the status of its resources. When data changes, etcd notifies the other components of the K8s cluster.
Pod: the smallest unit in a K8s cluster. Each pod runs one or more containers (usually just one).
The components on a Node are:
kubelet: the agent on each Node. When the Scheduler decides to run a pod on a node, it sends the pod's specific configuration (image, volumes, etc.) to that node's kubelet; the kubelet creates and runs the containers based on this information and reports the running status back to the master.
Self-healing: if a container on a node dies, the kubelet automatically kills it and recreates a container in its place.
kube-proxy (load balancer): a service logically represents multiple backend pods, and outside clients reach the pods through the service.
How are requests received by a service forwarded to the pods? That is kube-proxy's job: the load balancing is implemented through iptables rules.
How do the components interact with each other?
First, the user issues a deployment command through kubectl, which sends it to the APIserver of the cluster. Having received the instruction, the APIserver notifies the Controller Manager to create the Deployment resource object. The instruction then passes back through the APIserver, which communicates with etcd; etcd stores and serves the resource information of the cluster. Next, the Scheduler performs scheduling and decides which nodes of the cluster the pods should run on. Finally, the kubelet on each chosen node creates and runs the pods as required.
K8s basic operation
The yaml files for the K8s control-plane components live on the master at /etc/kubernetes/manifests/ (in a kubeadm deployment).
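On a kubeadm-deployed master you can confirm this directly; the four manifests below are the standard set:
[root@master ~]# ls /etc/kubernetes/manifests/
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml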
Kubernetes has four default namespaces: default, kube-system, kube-public, and kube-node-lease.
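You can list them directly (the AGE values below are illustrative):
[root@master ~]# kubectl get namespaces
NAME              STATUS   AGE
default           Active   1h
kube-node-lease   Active   1h
kube-public       Active   1h
kube-system       Active   1h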
1) Create a controller and deploy a Deployment resource object
[root@master ~]# kubectl run nginx-deploy --image=nginx --port=80 --replicas=2
Parameter explanation:
kubectl run: runs a resource object; it is followed by a custom name.
--image: specifies the image, that is, the service you want to deploy.
--port: specifies the port of the service.
--replicas: creates 2 replicas.
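A hedged note for readers on newer clusters: in recent kubectl versions, kubectl run only creates a bare Pod and no longer accepts --replicas; there, the equivalent Deployment is created with kubectl create deployment:
[root@master ~]# kubectl create deployment nginx-deploy --image=nginx --port=80 --replicas=2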
// View the Deployment resource object:
[root@master ~]# kubectl get deployments. -o wide
Parameter explanation:
-o wide: widens the output with extra columns.
READY: shows ready/desired; 2/2 means both of the 2 expected replicas are available.
AVAILABLE: the number of available replicas.
The nodes download the image (here the nginx image) automatically; alternatively, you can upload the image to the servers in advance so it can be loaded locally.
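A minimal sketch of loading the image in advance, assuming Docker is the container runtime (as the docker commands later in this article suggest):
[root@master ~]# docker save nginx > nginx.tar     // export the image on a machine that already has it
[root@master ~]# scp nginx.tar node01:/root/       // copy the archive to the node
[root@node01 ~]# docker load < nginx.tar           // load it into the node's local image store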
// Check which node each pod runs on (this also displays the pod's IP address):
[root@master ~]# kubectl get pod -o wide
The IP addresses assigned to the pods above come from the pod network segment we specified when initializing the cluster.
There are two kinds of containers in a pod: the infrastructure (pause) container and the application containers. Among the containers of a pod:
the USR, MNT, and PID namespaces are isolated from each other;
the UTS, IPC, and NET namespaces are shared.
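A minimal sketch to observe the shared NET namespace (the pod name ns-demo and the sidecar container are made up for illustration):
[root@master yaml]# vim ns-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ns-demo
spec:
  containers:
  - name: web
    image: nginx
  - name: sidecar
    image: busybox
    command: ["sleep", "3600"]
[root@master yaml]# kubectl apply -f ns-demo.yaml
// The sidecar reaches nginx on localhost because the two containers share the NET namespace:
[root@master yaml]# kubectl exec ns-demo -c sidecar -- wget -qO- localhost:80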
2) Service: exposing the resource (exposing a port to the outside world)
# If the outside world is to access the services provided inside K8s, a service resource object must be created.
[root@master ~]# kubectl expose deployment nginx-deploy --name=myweb --port=80 --type=NodePort
service/myweb exposed
Parameter explanation:
expose: exposes the resource.
nginx-deploy: the name of the resource object being exposed.
--name: the custom name, here myweb.
--port: specifies port 80.
--type: specifies the type, here NodePort.
# in fact, the above is equivalent to creating a service.
// View the service resource object:
[root@master ~]# kubectl get service
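A sketch of typical output (the CLUSTER-IP shown is hypothetical; the port pair matches the explanation below):
NAME    TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
myweb   NodePort   10.99.110.81   <none>        80:32326/TCP   2m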
Explanation:
CLUSTER-IP: a unified cluster entry point; this is the address used for communication inside the cluster.
80:32326/TCP: 80 is the service port; the port after the colon is the one exposed to the outside (randomly generated from the range 30000-32767).
// From outside, test access to the cluster's web page through the exposed port:
Url: http://172.16.1.30:30400/
What you need to know is that the service can be reached through any host in the cluster, not just the master.
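For example (only the master's IP 172.16.1.30 appears in this article; the node IPs below are assumptions, substitute your own):
[root@master ~]# curl http://172.16.1.30:30400/   // via the master
[root@master ~]# curl http://172.16.1.31:30400/   // via node01 (hypothetical IP)
[root@master ~]# curl http://172.16.1.32:30400/   // via node02 (hypothetical IP)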
3) Manually delete a container on a node, then look at the Deployment resource object again: does the number of Pods stay at the count the user expects? Does the IP address change?
// View which node each pod was assigned to:
[root@master ~]# kubectl get pod -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-deploy-59f5f764bb-h7pv2   1/1     Running   0          44m   10.244.1.2   node01   <none>           <none>
nginx-deploy-59f5f764bb-x2cwj   1/1     Running   0          44m   10.244.2.2   node02   <none>           <none>
// Delete the container on node01:
[root@node01 ~]# docker ps
[root@node01 ~]# docker rm -f 47e17e93d911
// Check the Deployment resource object again:
[root@master ~]# kubectl get deployments. -o wide
NAME           READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS     IMAGES   SELECTOR
nginx-deploy   2/2     2            2           48m   nginx-deploy   nginx    run=nginx-deploy
// View the pods:
[root@master ~]# kubectl get pod -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-deploy-59f5f764bb-h7pv2   1/1     Running   0          50m   10.244.1.2   node01   <none>           <none>
nginx-deploy-59f5f764bb-x2cwj   1/1     Running   0          50m   10.244.2.2   node02   <none>           <none>
You can see that the pods stay at the desired number and keep the same IP addresses: when you delete a container on a node, a replacement is generated automatically. Why is that?
It is the controller manager component of the cluster that keeps each resource in its desired state. If you defined 2 replicas, it makes sure 2 pods keep running, and creates more whenever there are fewer.
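The same mechanism responds to explicit changes of the desired state. A quick sketch (the pod names in the new listing will differ):
[root@master ~]# kubectl scale deployment nginx-deploy --replicas=4   // raise the desired count to 4
[root@master ~]# kubectl get pod                                      // the controller manager starts 2 more pods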
The underlying principle of load balancing based on kube-proxy
1) First, let's create a deployment+service resource object and define the number of replicas.
[root@master yaml]# vim nginx.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nginx-deploy
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: web-server
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app: web-server
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
    nodePort: 30000
// Execute the yaml file:
[root@master yaml]# kubectl apply -f nginx.yaml
deployment.extensions/nginx-deploy configured
service/nginx-svc created
// View the pods:
[root@master yaml]# kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-56558c8dc7-gq9dt   1/1     Running   0          18s
nginx-deploy-56558c8dc7-jj5fv   1/1     Running   0          18s
nginx-deploy-56558c8dc7-z5sq4   1/1     Running   0          17s
// View the service:
[root@master yaml]# kubectl get service nginx-svc
2) Exec into each pod and modify its default page (make sure each page is different).
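A sketch of one way to do this (the pod name is taken from the listing above; repeat for the other two pods with No2 and No3 so the pages can be told apart):
[root@master yaml]# kubectl exec -it nginx-deploy-56558c8dc7-gq9dt -- /bin/bash
root@nginx-deploy-56558c8dc7-gq9dt:/# echo "nginx-version:No1" > /usr/share/nginx/html/index.html
root@nginx-deploy-56558c8dc7-gq9dt:/# exit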
3) Access the page to verify whether there is a polling effect:
[root@master yaml]# curl 172.16.1.30:30000
nginx-version:No1
[root@master yaml]# curl 172.16.1.30:30000
nginx-version:No2
[root@master yaml]# curl 172.16.1.30:30000
nginx-version:No3
[root@master yaml]# curl 172.16.1.30:30000
nginx-version:No2
[root@master yaml]# curl 172.16.1.30:30000
nginx-version:No1
[root@master yaml]# curl 172.16.1.30:30000
nginx-version:No3
[root@master yaml]# curl 172.16.1.30:30000
nginx-version:No1
[root@master yaml]# curl 172.16.1.30:30000
nginx-version:No2
You can see the polling effect when accessing the page. Although the load balancing relies on the kube-proxy component, underneath it is implemented mainly through iptables rules.
The detailed process is as follows:
// First, let's check the address of the Cluster IP:
[root@master yaml]# kubectl get service nginx-svc
// Next, view the iptables rules:
[root@master yaml]# iptables-save
// Find the forwarding rules for the cluster IP:
## They jump to another chain. Let's check the values there:
The values above are the probabilities kube-proxy computes so that the load is spread evenly. We have three replicas here: the first rule matches with a probability of 1/3 (0.33333). Of the traffic that falls through, the second rule matches with a probability of 1/2 (0.5). The last rule carries no explicit probability because it simply takes all the remaining traffic, so each backend ends up with 1/3 of the requests overall.
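A sketch of what those rules typically look like in iptables-save output (the chain suffixes are hash-like and will differ on your cluster):
-A KUBE-SVC-XXXXXXXXXXXXXXXX -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-AAAAAAAAAAAAAAAA
-A KUBE-SVC-XXXXXXXXXXXXXXXX -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-BBBBBBBBBBBBBBBB
-A KUBE-SVC-XXXXXXXXXXXXXXXX -j KUBE-SEP-CCCCCCCCCCCCCCCC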
// Next, follow the chains the rules above jump to and look at the target IP addresses:
// DNAT is destination address translation. Compare these addresses with the pod information:
[root@master yaml]# kubectl get pod -o wide
You can see that the load is distributed evenly across the pods on each node, which reflects the underlying principle described above.
-this is the end of this article. Thank you for reading-