2025-04-04 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/03 Report --
In robotics and automation, a control loop is an endless cycle that regulates the state of a system.

A familiar example of a control loop is the thermostat in a room: when you set a temperature, you tell the thermostat your "desired state"; the actual room temperature is the "current state". By switching the heating equipment on and off, the thermostat drives the current state ever closer to the desired state.

In the same way, a Kubernetes controller watches the shared state of the cluster through the Kubernetes API server and works to move the current state toward the desired state.
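The reconciliation idea described above can be sketched in a few lines of Python. This is a toy model of the thermostat analogy, not Kubernetes code; the function names (`reconcile`, `run_control_loop`) are purely illustrative:

```python
# A minimal sketch of a control loop, using the thermostat analogy:
# each iteration compares current state to desired state and nudges
# the current state one step closer.

def reconcile(current: float, desired: float, step: float = 1.0) -> float:
    """Move the current state one step toward the desired state."""
    if abs(desired - current) <= step:
        return desired
    return current + step if desired > current else current - step

def run_control_loop(current: float, desired: float, max_iterations: int = 100) -> float:
    for _ in range(max_iterations):
        if current == desired:
            break          # desired state reached; a real loop never exits
        current = reconcile(current, desired)
    return current

# Room starts at 18 degrees, thermostat is set to 22.
print(run_control_loop(18.0, 22.0))  # -> 22.0
```

A real controller never terminates: it keeps observing and reconciling forever, which is what lets it react when the state drifts again later.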
The Pod Controller in Kubernetes

A Controller is the Kubernetes component that manages Pods. Through a controller, Pods are always kept in the state the user originally defined or expects. If a node goes down, or a Pod dies for some other reason, an identical Pod is created on another node to replace it.
Common built-in controller types, all of which interact with the cluster's API server:

ReplicaSet: the upgraded version of the Replication Controller; the main difference is its support for set-based selectors.
Deployment: manages ReplicaSets and provides declarative updates to Pods; recommended for managing ReplicaSets unless you need custom update orchestration.
DaemonSet: ensures that every node in the cluster runs exactly one copy of a Pod; usually used for system-level background tasks.
StatefulSet: commonly used to manage stateful applications.
Job: runs a one-off task to completion.
CronJob: runs tasks on a schedule, like crontab.
With imagePullPolicy: IfNotPresent, Kubernetes tries to use a locally available image before pulling one; depending on where the Pod is scheduled, images can be imported and exported on that node with docker save / docker load. An example ReplicaSet manifest:

```yaml
[root@node1 controllers]# cat replicaset-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      name: myapp-httpd
      labels:
        app: myapp
        release: canary
        environment: qa
    spec:
      containers:
      - name: myapp-container
        image: httpd
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
```
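The CronJob controller listed above (called "Crontab" in some older writeups) can be sketched as a minimal manifest. The schedule, name, and image here are illustrative assumptions, not from the original article:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello-cron
spec:
  schedule: "*/5 * * * *"          # every 5 minutes, standard cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["echo", "hello from a scheduled job"]
          restartPolicy: OnFailure
```

Each time the schedule fires, the CronJob controller creates a Job, which in turn runs a Pod to completion.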
The Pod is the smallest running unit in Kubernetes; to understand what a Pod is, you first need to understand what a container is.
A container is essentially a process: a process with an isolated view of the system and restricted resources.
A container itself is designed around a "single-process" model. This does not mean that only one process can run in a container; it means the container's lifecycle equals the lifecycle of its PID 1 process, which is the only process the container actually manages (the containerized application is, in effect, that process). You can start additional processes, but only PID 1 is managed by the container; the others run in an unmanaged, effectively orphaned state. If the PID 1 process fails or is killed, the container is gone, and nobody watches, manages, or reclaims the resources of the remaining processes.
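A toy model, in Python, of the single-process semantics just described. The `ToyContainer` class is purely illustrative and is not a real container API:

```python
# A toy model of "single-process" container semantics: the container's
# lifecycle is tied to PID 1 only; extra processes are not managed.

class ToyContainer:
    def __init__(self):
        self.processes = {1: "entrypoint"}   # PID 1 is the only managed process
        self.next_pid = 2

    def spawn(self, name: str) -> int:
        """Extra processes may be started, but the container does not manage them."""
        pid = self.next_pid
        self.processes[pid] = name
        self.next_pid += 1
        return pid

    def kill(self, pid: int):
        self.processes.pop(pid, None)

    @property
    def running(self) -> bool:
        # Only PID 1 determines whether the container counts as "alive".
        return 1 in self.processes

c = ToyContainer()
worker = c.spawn("worker")
c.kill(1)                       # PID 1 dies...
print(c.running)                # -> False: the container is considered dead,
print(worker in c.processes)    # -> True: yet the worker lingers, unmanaged
```

The point of the sketch: killing the worker would go unnoticed, and once PID 1 dies, the worker's resources belong to nobody.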
That is why it is often difficult to run a complex, multi-process application inside a single container, and why the arrival of Kubernetes controllers, at least for now, secured Kubernetes' place in history for running multiple cooperating applications.
What is Pod?
The basic component that makes a Kubernetes cluster run is called a Pod.
A Pod is the basic execution unit of a Kubernetes application: the smallest and simplest unit created or deployed in the Kubernetes object model. A Pod represents a process (or set of processes) running on the cluster. A Pod encapsulates one or more application containers, storage resources, a unique network IP, and options that control how the containers should run. A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which may consist of a single container or of a small number of tightly coupled containers that share resources. Docker is the most commonly used container runtime in a Kubernetes Pod, but Pods also support other container runtimes.
Pod = "process group"
In Kubernetes, a Pod is essentially the Kubernetes project's abstraction of a process group.

Put simply, in Kubernetes you define multiple applications as multiple containers and run those containers inside one Pod resource; you could also say that a combination of containers is what we call a Pod. When Kubernetes runs the containers defined in a Pod, you will see multiple containers running, and they share some underlying system resources (the same network, UTS, and IPC namespaces, among others); those shared resources belong to the Pod.

A Pod is only a logical unit in Kubernetes, and it is the unit in which Kubernetes allocates resources. Because the containers inside it share resources, the Pod is also the atomic scheduling unit of Kubernetes.
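Because containers in a Pod share the same network namespace, they can reach each other over localhost. A sketch of what that looks like in a manifest (the names and images here are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-net-demo
spec:
  containers:
  - name: web
    image: nginx                 # serves on port 80
  - name: client
    image: busybox
    # The client can reach nginx via localhost because both containers
    # share the Pod's network namespace.
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 10; done"]
```

No Service or Pod IP is needed for this traffic; sharing the namespace is what makes the containers one logical unit.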
The working characteristics of Pod
Autonomous Pods manage themselves.
A Pod joins multiple containers into one abstract encapsulation; the containers in a Pod share the same underlying UTS, IPC, network, and other namespaces.
A Pod is modeled on a traditional virtual machine; running only one container per Pod is recommended.
Shared storage volumes belong to the Pod rather than to the individual containers.
Which node a Pod runs on depends on the node's tolerations.
Pod controllers: ReplicationController, ReplicaSet, Deployment, StatefulSet, DaemonSet, Job.
The Operation of Pod in Kubernetes
Running a single-container Pod
The "one-container-per-Pod" model is the most common Kubernetes use case. In this case you can think of the Pod as a wrapper around a single container, and Kubernetes manages the Pod directly rather than the container.

Running a Pod with multiple cooperating containers
A Pod may encapsulate an application composed of multiple tightly coupled containers that need to share resources: the sidecar pattern. The Pod wraps this set of tightly coupled, resource-sharing, co-located containers into a single manageable entity.
For example:
This self-contained container design follows the sidecar pattern; the application image can be packaged and uploaded (for example, to GitHub).

```yaml
[root@node1 controllers]# cat pod-tomcat-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-2
  namespace: default
spec:
  initContainers:
  - image: ik8s.io/sample:v2
    imagePullPolicy: IfNotPresent
    name: war
    command: ["cp", "/sample.war", "/app"]
    volumeMounts:
    - mountPath: /app
      name: app-volume
  containers:
  - image: ik8s.io/tomcat:8.0
    imagePullPolicy: IfNotPresent
    name: tomcat8
    command: ["sh", "-c", "/root/apache-tomcat-8.0.5/bin/start.sh"]
    volumeMounts:
    - mountPath: /root/apache-tomcat-8.0.5/webapps
      name: app-volume
    ports:
    - containerPort: 8080
      hostPort: 8008
  volumes:
  - name: app-volume
    emptyDir: {}
```
Using the Deployment controller for rolling updates and canary (grayscale) releases
Creating any application with a Deployment requires three core components: the user's desired Pod replica count, a label selector, and a Pod template (used to create new Pods whenever the number of existing Pods falls short of the desired replica count).
Command help:

```shell
[root@node1 controllers]# kubectl explain deploy
[root@node1 controllers]# kubectl explain deploy.spec
[root@node1 controllers]# kubectl explain deploy.spec.strategy
```
This example shows how to implement rolling updates, version rollback, Pod scaling, and more for an application.
```yaml
[root@node1 controllers]# cat deployment-myapp-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
```

Creating, deleting, listing, viewing logs, and describing containers
An introduction to the official kubectl commands:

```shell
[root@node1 controllers]# kubectl apply -f deployment-myapp-demo.yaml
deployment.apps/myapp-deployment created
[root@node1 controllers]# kubectl delete -f deployment-myapp-demo.yaml
deployment.apps "myapp-deployment" deleted
[root@node1 controllers]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
myapp-deployment-5b776d9cf7-29s7b   1/1     Running   0          9m26s
myapp-deployment-5b776d9cf7-8hb8c   1/1     Running   0          9m26s
[root@node1 controllers]# kubectl logs myapp-deployment-5b776d9cf7-8hb8c
[root@node1 controllers]# kubectl describe pods myapp-deployment-5b776d9cf7-8hb8c
...
Events:
  Type    Reason     Age  From               Message
  ----    ------     ---  ----               -------
  Normal  Scheduled       default-scheduler  Successfully assigned default/myapp-deployment-5b776d9cf7-8hb8c to node2
  Normal  Pulled     10h  kubelet, node2     Container image "nginx" already present on machine
  Normal  Created    10h  kubelet, node2     Created container myapp
  Normal  Started    10h  kubelet, node2     Started container myapp
```

Updating the number of Pods (rolling update)
```shell
# Scale the current replica count to 5
[root@node1 controllers]# kubectl patch deployment myapp-deployment -p '{"spec": {"replicas": 5}}'
# Watch the status update in real time
[root@node1 controllers]# kubectl get pods -w
```

Updating the image version (rolling update)
Commands used: kubectl set, kubectl edit, kubectl apply, kubectl rollout

```shell
# Update the container image to the latest version
[root@node1 controllers]# kubectl set image deployment/myapp-deployment myapp=nginx:latest
# View the rolling update history
[root@node1 controllers]# kubectl rollout history deployment myapp-deployment
# View the image version status of the ReplicaSets
[root@node1 controllers]# kubectl get rs -l app=myapp -o wide
# View the Image field
[root@node1 controllers]# kubectl describe pods myapp-deployment-5b776d9cf7-8hb8c | grep 'Image'
# See how the update proceeded and what was done
[root@node1 controllers]# kubectl describe deployment/myapp-deployment
```

Simulating a canary release
Change the maxSurge and maxUnavailable update strategy:

```shell
# Command help
[root@node1 controllers]# kubectl explain deploy.spec.strategy.rollingUpdate
# First patch the update strategy to simulate a canary release
[root@node1 controllers]# kubectl patch deployment myapp-deployment -p '{"spec": {"strategy": {"rollingUpdate": {"maxSurge": 1, "maxUnavailable": 0}}}}'
# View the result
[root@node1 controllers]# kubectl describe deploy myapp-deployment
# Then update the image and pause the rollout (the canary release)
[root@node1 controllers]# kubectl set image deployment myapp-deployment myapp=nginx:v1 && kubectl rollout pause deployment myapp-deployment
Waiting for deployment "myapp-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
# View the version history; the command used for each revision is only
# recorded if "--record" was passed when it was run
[root@node1 controllers]# kubectl rollout history deployment myapp-deployment
# Roll back; by default this rolls back to the previous version
[root@node1 controllers]# kubectl rollout undo deployment myapp-deployment
# Use "--to-revision=[N]" to roll back to a specific revision
[root@node1 controllers]# kubectl rollout undo deploy myapp-deployment --to-revision=1
# If the canary looks good, resume the paused rollout
[root@node1 controllers]# kubectl rollout resume deploy myapp-deployment
```
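The interaction of maxSurge and maxUnavailable can be checked with simple arithmetic. This sketch follows the standard definitions (during a rolling update the total Pod count may rise to replicas + maxSurge, and the available count may not drop below replicas - maxUnavailable); the function name is illustrative:

```python
def rolling_update_bounds(replicas: int, max_surge: int, max_unavailable: int):
    """Return (max total Pods, min available Pods) during a rolling update."""
    return replicas + max_surge, replicas - max_unavailable

# The canary-style policy patched above: maxSurge=1, maxUnavailable=0.
# With 5 replicas, at most 6 Pods exist at once and all 5 stay available,
# so no traffic capacity is lost while the new version rolls out.
print(rolling_update_bounds(5, 1, 0))  # -> (6, 5)
```

maxSurge=0 with maxUnavailable=1 would instead update in place, briefly dropping capacity by one Pod but never exceeding the replica count.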
Using a DaemonSet to run a Pod replica on each node, with a host system directory as the storage volume