2025-01-16 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/03 Report--
The Deployment object, as its name implies, is used to deploy applications. It is one of the most commonly used objects in Kubernetes: it provides a declarative way to create ReplicaSets and Pods, so there is no need to create ReplicaSet and Pod objects by hand as in the previous two articles. A Deployment is preferred over creating a ReplicaSet directly because it offers features a ReplicaSet lacks, such as rolling upgrades and rollbacks.
Deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: skx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: skx_server
    spec:
      containers:
      - name: httpd-test
        image: 192.168.1.10:5000/httpd:v1
        ports:
        - containerPort: 80
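The rolling upgrades mentioned above can be tuned through the Deployment's strategy field. A minimal sketch (the maxSurge/maxUnavailable values here are illustrative, not from the original file):

```yaml
spec:
  strategy:
    type: RollingUpdate       # replace Pods gradually instead of all at once
    rollingUpdate:
      maxSurge: 1             # at most 1 Pod above the desired replica count
      maxUnavailable: 1       # at most 1 Pod below the desired replica count
```

With replicas: 3 this means the upgrade proceeds Pod by Pod, never dropping below 2 ready Pods and never running more than 4.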
PS: note that in the Deployment resource object the ports field may be added, but it is purely informational for readers of the manifest and has no actual effect, like this:
ports:
- containerPort: 80
SERVICE
kind: Service
apiVersion: v1
metadata:
  name: skx-svc
spec:
  selector:
    app: skx_server
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Expose it on the hosts (NodePort):
kind: Service
apiVersion: v1
metadata:
  name: skx-svc
spec:
  type: NodePort
  selector:
    app: skx_server
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30123
Change the page served by the Pod:
[root@master ~]# kubectl exec -it skx-694cc5db89-45nvk /bin/bash
root@skx-694cc5db89-45nvk:/usr/local/apache2# echo no.1 > htdocs/index.html
root@skx-694cc5db89-45nvk:/usr/local/apache2# exit
View the rules:
[root@master ~]# iptables-save
SNAT: Source NAT (source address translation); DNAT: Destination NAT (destination address translation); MASQ: masquerade (dynamic source address translation).
The load balancing a Service performs is implemented with iptables rules (or with IPVS on clusters configured to use it).
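How those iptables rules spread traffic shows up in the numbers kube-proxy writes into them: for n endpoints it emits a chain of `-m statistic --mode random` rules with probabilities 1/n, 1/(n-1), and so on, with the last rule matching unconditionally, so each endpoint receives an equal 1/n share overall. A small sketch of that arithmetic for 3 Pods (illustrative, not output from a cluster):

```shell
# kube-proxy load-balancing sketch: rule i (0-based) of n matches with
# probability 1/(n-i); traffic that does not match falls through to the
# next rule, so every endpoint ends up with an overall 1/n share.
n=3
for i in $(seq 0 $((n - 1))); do
  awk -v n="$n" -v i="$i" 'BEGIN { printf "rule %d: probability %.5f\n", i + 1, 1 / (n - i) }'
done
# rule 1: probability 0.33333
# rule 2: probability 0.50000
# rule 3: probability 1.00000
```

Rule 1 takes a third of all traffic; rule 2 takes half of the remaining two thirds (another third); rule 3 takes everything left (the final third).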
From `kubectl get svc`, take the Service's ClusterIP (here 10.107.64.232), match it to its KUBE-SVC chain (KUBE-SVC-QDLMDMK46RWAY7QJ), then follow that chain down to the KUBE-SEP endpoint rules:
[root@master ~]# kubectl get svc
[root@master ~]# iptables-save | grep 10.107.64.232
[root@master ~]# iptables-save | grep KUBE-SVC-QDLMDMK46RWAY7QJ
[root@master ~]# iptables-save | grep KUBE-SEP-YPYQNHI3JGSZCBF5
Compare with the Pod list:
[root@master ~]# kubectl get pod -o wide
The endpoint addresses in the KUBE-SEP rules are the same as the Pod IPs.
Roll back to the specified version:
Delete the previously created resources:
[root@master ~]# kubectl delete -f skx-svc.yaml
service "skx-svc" deleted
[root@master ~]# kubectl delete -f skx.yaml
deployment.extensions "skx" deleted
[root@master ~]# kubectl get deployments
No resources found.
[root@master ~]# vim skx.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: skx
spec:
  revisionHistoryLimit: 10    # added
  replicas: 3
  template:
    metadata:
      labels:
        app: skx_server
    spec:
      containers:
      - name: httpd-test
        image: 192.168.1.10:5000/httpd:v1
        ports:
        - containerPort: 80
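The effect of revisionHistoryLimit can be sketched without a cluster: the Deployment keeps only the newest N old ReplicaSets for rollback and prunes the rest. With a limit of 10 and 12 accumulated revisions (both numbers illustrative), revisions 1 and 2 would no longer be available to roll back to:

```shell
# Sketch of revisionHistoryLimit pruning: keep only the newest $limit
# of 12 revisions; the oldest ones are garbage-collected.
limit=10
seq 1 12 | tail -n "$limit"   # revisions 3..12 survive; 1 and 2 are pruned
```

Raising the limit keeps more rollback targets at the cost of extra (scaled-to-zero) ReplicaSet objects in the cluster.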
Prepare three versions of the private image to simulate a different image for each upgrade.
[root@master ~]# mv skx.yaml skx1.yaml
[root@master ~]# cp skx1.yaml skx2.yaml
[root@master ~]# cp skx1.yaml skx3.yaml
[root@master ~]# vim skx1.yaml
Line 15:
image: 192.168.1.10:5000/httpd:v1
[root@master ~]# vim skx2.yaml
Line 15:
image: 192.168.1.10:5000/httpd:v2
[root@master ~]# vim skx3.yaml
Line 15:
image: 192.168.1.10:5000/httpd:v3
The three yaml files here specify different versions of the image.
Run the service and record its version information:
[root@master ~]# kubectl apply -f skx1.yaml --record
deployment.extensions/skx created
[root@master ~]# kubectl apply -f skx2.yaml --record
deployment.extensions/skx configured
[root@master ~]# kubectl apply -f skx3.yaml --record
deployment.extensions/skx configured
Check which revisions were recorded:
[root@master ~]# kubectl rollout history deployment skx
deployment.extensions/skx
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=skx1.yaml --record=true
2         kubectl apply --filename=skx2.yaml --record=true
3         kubectl apply --filename=skx3.yaml --record=true
Run and upgrade the Deployment resource, recording the version information:
[root@master ~]# kubectl apply -f skx2.yaml --record
deployment.extensions/skx configured
At this point you can bring up the associated Service to verify whether the upgrade succeeded:
[root@master ~]# kubectl apply -f skx-svc.yaml
service/skx-svc created
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
httpd-svc    NodePort    10.97.81.154    <none>        80:31194/TCP   43h
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        5d17h
skx-svc      NodePort    10.96.174.199   <none>        80:30123/TCP   16s
[root@master ~]# curl 10.96.174.199
Songkaixiong | test-web | httpd | v3
Roll back to the specified version:
[root@master ~]# kubectl rollout undo deployment skx --to-revision=1
deployment.extensions/skx rolled back
[root@master ~]# curl 10.96.174.199
Songkaixiong | test-web | httpd | v1
Using labels to control Pod placement
Put a label on node03:
[root@master ~]# kubectl label nodes node03 disk=ssd
node/node03 labeled
View node03's labels specifically:
[root@master ~]# kubectl get nodes --show-labels | grep node03
node03   Ready   <none>   5d17h   v1.15.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node03,kubernetes.io/os=linux
Or view the labels of all nodes without filtering:
[root@master ~]# kubectl get nodes --show-labels
NAME     STATUS   ROLES    AGE     VERSION   LABELS
master   Ready    master   5d17h   v1.15.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
node02   Ready    <none>   5d17h   v1.15.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux
node03   Ready    <none>   5d17h   v1.15.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node03,kubernetes.io/os=linux
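The LABELS column above is just a comma-separated list of key=value pairs, which is why a plain grep over the whole line works. To isolate a single label, split on commas first (the label string below is copied from the node03 output):

```shell
# Split a node's label string into one key=value pair per line and pick
# out the disk label added with `kubectl label`.
labels="beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node03,kubernetes.io/os=linux"
echo "$labels" | tr ',' '\n' | grep '^disk='   # prints: disk=ssd
```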
Delete the skx1 resources:
[root@master ~]# kubectl delete -f skx1.yaml
deployment.extensions "skx" deleted
[root@master ~]# kubectl delete -f skx-svc.yaml
service "skx-svc" deleted
Modify the skx1.yaml configuration file:
[root@master ~]# vim skx1.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: skx
spec:
  revisionHistoryLimit: 10
  replicas: 3
  template:
    metadata:
      labels:
        app: skx_server
    spec:
      containers:
      - name: httpd-test
        image: 192.168.1.10:5000/httpd:v1
        ports:
        - containerPort: 80
      nodeSelector:    # add a node selector
        disk: ssd      # must match the node label
[root@master ~]# kubectl apply -f skx1.yaml
deployment.extensions/skx created
All three Pods run on node03:
[root@master ~]# kubectl get pod -o wide
NAME                   READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
skx-55c4dc6dbc-7ztl9   1/1     Running   0          73s   10.244.2.27   node03   <none>           <none>
skx-55c4dc6dbc-jsms7   1/1     Running   0          73s   10.244.2.28   node03   <none>           <none>
skx-55c4dc6dbc-rfss7   1/1     Running   0          73s   10.244.2.26   node03   <none>           <none>
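For placement rules more expressive than the exact-match nodeSelector used above, the same pinning can be written with nodeAffinity. A sketch, not part of the original walkthrough:

```yaml
# Equivalent pinning to disk=ssd nodes via nodeAffinity.
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: disk
                operator: In     # also supports NotIn, Exists, DoesNotExist
                values:
                - ssd
```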
Delete the label:
View node03's labels:
[root@master ~]# kubectl get nodes --show-labels node03
NAME     STATUS   ROLES    AGE     VERSION   LABELS
node03   Ready    <none>   5d17h   v1.15.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node03,kubernetes.io/os=linux
Delete the node03 label:
[root@master ~]# kubectl label nodes node03 disk-
node/node03 labeled
Look at node03's labels again:
[root@master ~]# kubectl get nodes --show-labels node03
NAME     STATUS   ROLES    AGE     VERSION   LABELS
node03   Ready    <none>   5d17h   v1.15.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node03,kubernetes.io/os=linux
Delete the resource:
[root@master ~]# kubectl delete deployments skx
deployment.extensions "skx" deleted
Regenerate it:
[root@master ~]# kubectl apply -f skx1.yaml
deployment.extensions/skx unchanged
Since no node now carries the disk=ssd label, the scheduler cannot place the Pods and they stay Pending:
[root@master ~]# kubectl get pod -o wide
NAME                   READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
skx-55c4dc6dbc-8rpl6   0/1     Pending   0          11s   <none>   <none>   <none>           <none>
skx-55c4dc6dbc-c2blp   0/1     Pending   0          11s   <none>   <none>   <none>           <none>
skx-55c4dc6dbc-zk7gw   0/1     Pending   0          11s   <none>   <none>   <none>           <none>
[root@master ~]# kubectl get pod -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-52cdm         1/1     Running   3          5d17h
coredns-5c98db65d4-sl96w         1/1     Running   4          5d17h
etcd-master                      1/1     Running   3          5d17h
kube-apiserver-master            1/1     Running   3          5d17h
kube-controller-manager-master   1/1     Running   3          5d17h
kube-flannel-ds-amd64-9vnsc      1/1     Running   4          5d17h
kube-flannel-ds-amd64-tdzrm      1/1     Running   2          5d17h
kube-flannel-ds-amd64-tvl2q      1/1     Running   5          5d17h
kube-proxy-492jr                 1/1     Running   2          5d17h
kube-proxy-gccnb                 1/1     Running   3          5d17h
kube-proxy-klznh                 1/1     Running   2          5d17h
kube-scheduler-master            1/1     Running   4          5d17h
To troubleshoot, view the scheduler log on the master:
[root@master ~]# kubectl logs -n kube-system kube-scheduler-master
View the Pod's detailed description:
[root@master ~]# kubectl describe pod skx-55c4dc6dbc-8rpl6
View the kubelet log:
[root@master ~]# less /var/log/messages | grep kubelet