This article explains how to create and verify a Deployment in Kubernetes. The content is straightforward and easy to follow; work through the commands below to see how Deployments are created, updated, and verified.
Kubernetes usually does not create pods directly. Instead, pods are managed through controllers, which provide replica management, rolling upgrades, and cluster-level self-healing. Controllers include Deployment, ReplicaSet, DaemonSet, StatefulSet, Job, and others.
1. A pod created with kubectl run has no self-healing capability, because it is not managed by a controller.
2. Deployment is the most commonly used controller. It is used to deploy stateless services; it manages ReplicaSets and, through them, updates pods.
3. As soon as a Deployment is created, the Deployment controller creates a ReplicaSet, and that ReplicaSet creates the required pods. When the Deployment is updated, the Deployment controller creates a new ReplicaSet for it and gradually creates pods in the new ReplicaSet while removing pods from the old one, achieving a rolling update.
4. A new rollout of a Deployment is triggered if and only if the content of the Deployment's pod template field changes, as shown in the sketch below.
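A minimal sketch of point 4, assuming the nginx Deployment created in the next section (the image tag 1.25 is only illustrative): changing the container image modifies the pod template, so a new ReplicaSet and a new revision are created, which can be watched with rollout status.
kubectl set image deployment nginx nginx=nginx:1.25
kubectl rollout status deployment nginx
kubectl get rs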
Create deployment
# run directly from the command line
kubectl create deployment nginx --image=nginx
# or create it via a yaml configuration file
kubectl create deployment nginx --image=nginx --dry-run=client -o yaml > dep.yaml
kubectl apply -f dep.yaml
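The generated dep.yaml looks roughly like the following (trimmed; empty placeholder fields such as creationTimestamp, strategy, and status are omitted and the exact output varies by kubectl version):
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx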
# delete the deployment
kubectl delete deployment nginx
kubectl delete -f dep.yaml
# verify that a pod managed by a deployment is self-healing
Create one pod with kubectl run (a bare pod) and one with kubectl create deployment (a pod managed by a controller).
kubectl get pods -o wide   # check which nodes the pods are scheduled on
kubectl drain node2        # evict the pods running on node2
kubectl get pods -o wide   # node2 is now marked unschedulable; the controller-managed pod has been recreated on node1, while the bare pod created by kubectl run has simply been deleted
kubectl get rs             # shows the corresponding ReplicaSet created by the deployment controller
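Note that on recent kubectl versions, draining a node that also runs DaemonSet pods or bare pods (such as the one from kubectl run) usually requires extra flags, and scheduling has to be re-enabled on the node afterwards; the node name node2 follows the example above.
kubectl drain node2 --ignore-daemonsets --force
kubectl uncordon node2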
# view the detailed configuration of the deployment
kubectl edit deployment nginx
# scale the deployment up to 2 pods
kubectl scale deployment nginx --replicas=2
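As a quick check (names as above), scaling only changes the desired count of the existing ReplicaSet; it does not create a new ReplicaSet or a new rollout revision, because the pod template is unchanged.
kubectl get rs
kubectl rollout history deployment nginx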
# view the rolling update status of deployment nginx
kubectl rollout status deployment nginx
# view deployments
kubectl get deployments
# view replicasets (and watch for changes)
kubectl get rs -w
# view the events of the deployment
kubectl describe deployment
# view the update history
kubectl rollout history deployment nginx
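To inspect what a particular revision contains before rolling back, rollout history accepts a --revision flag (the revision number 2 here is illustrative):
kubectl rollout history deployment nginx --revision=2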
# roll back to the previous version, or to a specific revision; a paused Deployment cannot be rolled back until it is resumed
kubectl rollout undo deployment nginx
kubectl rollout undo deployment nginx --to-revision=2
# pause updates
kubectl rollout pause deployment nginx
# resume updates
kubectl rollout resume deployment nginx
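Pause and resume are typically used together so that several changes land in a single rollout; a sketch assuming the same nginx Deployment (the image tag and resource limits are illustrative):
kubectl rollout pause deployment nginx
kubectl set image deployment nginx nginx=nginx:1.25
kubectl set resources deployment nginx -c=nginx --limits=cpu=200m,memory=128Mi
kubectl rollout resume deployment nginx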
# export the configuration file of a deployment (replace deploy_name with the deployment's name, e.g. nginx)
kubectl get deployment deploy_name -o yaml > deployment.yaml
# when deleting a ReplicaSet with --cascade=false, its dependent objects (the pods) are not deleted
kubectl delete replicaset my-repset --cascade=false
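On newer kubectl releases (v1.20+) the boolean form of --cascade is deprecated in favour of a deletion-policy value, so the equivalent orphaning delete is:
kubectl delete replicaset my-repset --cascade=orphan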
Expose the pods with a Service
kubectl expose deployment nginx --port=80 --type=NodePort
# view the service
kubectl get svc
# check the endpoints behind the service
kubectl get endpoints
# verification (replace the placeholders with the values shown by kubectl get svc)
curl http://<service-clusterIP>
curl http://<nodeIP>:32038
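The NodePort (32038 above) is assigned by the cluster; if it is not at hand, it can be read back from the Service created by kubectl expose, for example:
kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'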
The nginx stack deployed with the commands above can also be described declaratively with the following configuration file:
vi nginx-dep-service.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: web-nginx
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - name: web-nginx
    protocol: TCP
    port: 80
    nodePort: 32600
    targetPort: 80
  type: NodePort
kubectl apply -f nginx-dep-service.yaml
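After applying the file, the result can be verified the same way as before (the node IP is a placeholder; 32600 is the nodePort fixed in the Service above):
kubectl get deployment web-nginx
kubectl get svc web-nginx
curl http://<nodeIP>:32600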
Thank you for reading. This has been an overview of how to create and verify a Deployment in Kubernetes; the commands above are best confirmed by trying them out in practice.