The following is an overview of Deployment and its usage scenarios, which we hope will help you in practical application. Load balancing touches on many topics but involves little pure theory, and plenty of material is already available online; here we draw on accumulated industry experience to walk through it in practice.
Deployment introduction
Deployment is a concept introduced in Kubernetes 1.2 to solve the problem of orchestrating Pods. A Deployment can be understood as an upgraded version of RC (Replication Controller + Replica Set). Its distinguishing feature is that you can know the deployment progress of the Pods at any time, that is, it shows the progress of the complete process of creating the Pods, scheduling them, binding them to nodes and starting their containers.
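As a small illustration of watching that progress, the standard rollout status subcommand blocks until the Deployment finishes rolling out (xgp-web is the Deployment name used later in this article):

[root@master ~]# kubectl rollout status deployment xgp-web
# prints the rollout progress and returns once all replicas are updated and available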
Usage scenarios
Create a Deployment object to generate the corresponding Replica Set and complete the creation of the Pod replicas.
Check the status of the Deployment to confirm that the deployment has completed (whether the number of Pod replicas has reached the expected value).
Update the Deployment to create new Pods (for example, an image upgrade scenario).
If the current Deployment is unstable, roll back to a previous Deployment revision.
Pause or resume a Deployment.
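As a rough sketch of how these scenarios map to kubectl commands (xgp-web and xgp.yaml are the names used in the experiment below; this is illustrative, not exhaustive):

[root@master ~]# kubectl apply -f xgp.yaml --record                                   # create the Deployment (and its Replica Set / Pods)
[root@master ~]# kubectl set image deployment xgp-web web=192.168.1.21:5000/web:v2    # update (image upgrade)
[root@master ~]# kubectl rollout undo deployment xgp-web                              # roll back to the previous revision
[root@master ~]# kubectl rollout pause deployment xgp-web                             # pause
[root@master ~]# kubectl rollout resume deployment xgp-web                            # resume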
Service introduction
A Service defines an access entry address for a service; through this address, front-end applications access the group of cluster instances made up of the Pod replicas behind it. A Service is "seamlessly joined" to its back-end group of Pod replicas through a Label Selector, and the RC ensures that the number of Pod replicas behind the Service stays at the expected level.
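To see which Pod replicas a Service has actually picked up through its Label Selector, you can list the Service's endpoints; a minimal example, assuming the xgp-svc Service created later in this article:

[root@master ~]# kubectl get endpoints xgp-svc
# lists the Pod IP:port pairs that currently match the Service's label selector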
Problems with external systems accessing a Service

IP type      Description
Node IP      IP address of a Node
Pod IP       IP address of a Pod
Cluster IP   IP address of a Service

Environment introduction:
Host      IP address      Service
master    192.168.1.21    k8s
node01    192.168.1.22    k8s
node02    192.168.1.23    k8s
This experiment continues from the one based on https://blog.51cto.com/14320361/2464655.
I. Simple use of Deployment and Service

1. Write a yaml file that uses your own private image and three replicas.

[root@master ~]# vim xgp.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: xgp-web
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: xgp-server
    spec:
      containers:
      - name: web
        image: 192.168.1.21:5000/web:v1

(1) Apply it
[root@master ~]# kubectl apply -f xgp.yaml --record
(2) Check
[root@master ~]# kubectl get pod
(3) visit [root@master ~] # curl 10.244.2.16
(4) Update the yaml file and add one replica
[root@master ~]# vim xgp.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: xgp-web
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: xgp-server
    spec:
      containers:
      - name: web
        image: 192.168.1.21:5000/web:v1

Apply it
[root@master ~]# kubectl apply -f xgp.yaml --record
Check
[root@master ~]# kubectl get pod
The number of replicas has increased by one. If the replicas value in the yaml file is 0, the replica count keeps its previous state and is not updated.
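The replica count can also be changed without editing the yaml file; a minimal sketch using the standard scale subcommand and the Deployment name from above:

[root@master ~]# kubectl scale deployment xgp-web --replicas=4
# scales the existing Deployment to 4 replicas imperatively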
2. Write a Service file

[root@master ~]# vim xgp-svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: xgp-svc
spec:
  selector:
    app: xgp-server
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

(1) Apply it
[root@master ~]# kubectl apply -f xgp-svc.yaml
(2) Check
[root@master ~]# kubectl get svc
(3) visit [root@master ~] # curl 10.107.119.49
3. Modify the yaml file

[root@master ~]# vim xgp.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: xgp-web
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: xgp-server
    spec:
      containers:
      - name: web
        image: 192.168.1.21:5000/web:v1
        ports:
        - containerPort: 80    # informational port
Note: in the Deployment resource object you can add this port field, but it is only informational for users to view and does not actually take effect.
Apply it
[root@master ~]# kubectl apply -f xgp.yaml

4. Map a port in the Service file

[root@master ~]# vim xgp-svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: xgp-svc
spec:
  type: NodePort
  selector:
    app: xgp-server
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30123

Apply it
[root@master ~]# kubectl apply -f xgp-svc.yaml
Check
[root@master ~]# kubectl get svc
Visit
[root@master ~]# curl 127.0.0.1:30123
5. Modify the page content of the three Pods

(1) Check the pod information
[root@master ~]# kubectl get pod -o wide
(2) Modify the Pod page content (make the three different)
[root@master ~]# kubectl exec -it xgp-web-8d5f9656f-8z7d9 /bin/bash
# enter the container by pod name and modify the page content
root@xgp-web-8d5f9656f-8z7d9:/usr/local/apache2# echo xgp-v1 > htdocs/index.html
root@xgp-web-8d5f9656f-8z7d9:/usr/local/apache2# exit

Visit
[root@master ~]# curl 127.0.0.1:30123
II. Analyze the principle of k8s load balancing

(1) Check the IP exposed by the Service
[root@master ~]# kubectl get svc
(2) Check the iptables rules
[root@master ~]# iptables-save    # view the configured rules
SNAT: Source NAT (source address translation)
DNAT: Destination NAT (destination address translation)
MASQ: MASQUERADE (dynamic source address translation)
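For context, hand-written rules of the same kind (illustrative only, not the exact rules kube-proxy generates, and assuming a backend Pod at 10.244.2.16) would look like this:

# DNAT: rewrite the destination of traffic arriving on port 30123 to a backend Pod
iptables -t nat -A PREROUTING -p tcp --dport 30123 -j DNAT --to-destination 10.244.2.16:80
# MASQUERADE: rewrite the source address of forwarded traffic to the node's own address
iptables -t nat -A POSTROUTING -d 10.244.2.16 -p tcp --dport 80 -j MASQUERADE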
(3) According to the IP exposed by the Service, view the corresponding iptables rules
[root@master ~]# iptables-save | grep 10.107.119.49
[root@master ~] # iptables-save | grep KUBE-SVC-ESI7C72YHAUGMG5S
(4) check whether the IP is consistent [root@master ~] # iptables-save | grep KUBE-SEP-ZHDQ73ZKUBMELLJB
[root@master ~]# kubectl get pod -o wide
Load balancing implemented by a Service uses iptables rules by default; IPVS can also be used.
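A hedged sketch of how you might check or switch the kube-proxy mode, assuming a kubeadm-installed cluster where kube-proxy runs as a DaemonSet labelled k8s-app=kube-proxy and reads its configuration from the kube-proxy ConfigMap:

[root@master ~]# kubectl -n kube-system edit configmap kube-proxy
# in the configuration:  mode: ""      -> iptables (the default)
#                        mode: "ipvs"  -> IPVS (requires the ip_vs kernel modules to be loaded)
[root@master ~]# kubectl -n kube-system delete pod -l k8s-app=kube-proxy    # restart the kube-proxy pods so the change takes effect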
III. Roll back to a specified version

(1) Delete the previously created deployment and service
[root@master ~]# kubectl delete -f xgp.yaml
[root@master ~]# kubectl delete -f xgp-svc.yaml

(2) Prepare private images for the three versions, to simulate a different image for each upgrade
[root@master ~]# vim xgp1.yaml    (the three file names are different)
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: xgp-web
spec:
  revisionHistoryLimit: 10
  replicas: 3
  template:
    metadata:
      labels:
        app: xgp-server
    spec:
      containers:
      - name: web
        image: 192.168.1.21:5000/web:v1    # a different version in each of the three files
        ports:
        - containerPort: 80
Here three yaml files specify different versions of the image
(3) Run the three versions and record each one
[root@master ~]# kubectl apply -f xgp-1.yaml --record
[root@master ~]# kubectl apply -f xgp-2.yaml --record
[root@master ~]# kubectl apply -f xgp-3.yaml --record

(4) Check which revisions are available
[root@master ~]# kubectl rollout history deployment xgp-web
(5) Run the previous service file
[root@master ~]# kubectl apply -f xgp-svc.yaml

(6) View the port exposed by the Service
[root@master ~]# kubectl get svc
(7) Test access
[root@master ~]# curl 127.0.0.1:30123
(8) Roll back to the specified version
[root@master ~]# kubectl rollout undo deployment xgp-web --to-revision=1
# what is specified here is the revision number from the version history

Visit
[root@master ~]# curl 127.0.0.1:30123
Check what version information is available [root@master ~] # kubectl rollout history deployment xgp-web
Revision 1 no longer appears in the history (the list now starts at revision 2); rolling back to it produced a new revision 4.
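To confirm which image a given revision corresponds to, the details of a single revision can be inspected; a small sketch using the revision number 4 from above:

[root@master ~]# kubectl rollout history deployment xgp-web --revision=4
# shows the pod template (including the image) recorded for that revision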
IV. Use labels to control where Pods are placed
By default the scheduler places Pods on any available Node, but in some situations we want to deploy a Pod to a specified Node; for example, deploying a Pod with heavy disk I/O to a Node configured with SSDs, or running a Pod that requires a GPU on a Node that has a GPU installed.
Kubernetes implements this function through labels.
A label is a key-value pair. Labels can be set on all kinds of resources and can flexibly add various custom attributes. For example, execute the following command to mark k8s-node1 as a node configured with SSDs.
First, give the node02 node an ssd label
[root@master ~]# kubectl label nodes node02 disk=ssd

(1) View the label
[root@master ~]# kubectl get nodes --show-labels | grep node02
(2) Delete the previous deployment and service
[root@master ~]# kubectl delete -f xgp-1.yaml
deployment.extensions "xgp-web" deleted
[root@master ~]# kubectl delete svc xgp-svc

(3) Modify the yaml file
[root@master ~]# vim xgp-1.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: xgp-web
spec:
  revisionHistoryLimit: 10
  replicas: 3
  template:
    metadata:
      labels:
        app: xgp-server
    spec:
      containers:
      - name: web
        image: 192.168.1.21:5000/web:v1
        ports:
        - containerPort: 80
      nodeSelector:        # add a node selector
        disk: ssd          # must match the node's label

(4) Apply it
[root@master ~]# kubectl apply -f xgp-1.yaml
Check
[root@master ~]# kubectl get pod -o wide
Now pod runs on node02.
(5) Delete the label
[root@master ~]# kubectl label nodes node02 disk-
Check
[root@master ~]# kubectl get nodes --show-labels | grep node02
There is no disk tag
V. Small experiment

1) Deploy a Deployment resource object using the private image v1, with 3 Pod replicas, and create a Service resource object associated with it. All three Pod replicas must run on the node01 node, and one revision must be recorded.

(1) Use a label to control where the Pods are placed
[root@master ~]# kubectl label nodes node01 disk=ssd

(2) Write the Deployment yaml file
[root@master ~]# vim xgp.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: xgp-web
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: xgp-server
    spec:
      containers:
      - name: web
        image: 192.168.1.21:5000/web:v1
        ports:
        - containerPort: 80
      nodeSelector:
        disk: ssd

(3) Write the Service file
[root@master ~]# vim xgp-svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: xgp-svc
spec:
  type: NodePort
  selector:
    app: xgp-server
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30123

(4) Apply the yaml file to create the controller, and apply the service file to create the port mapping
[root@master ~]# kubectl apply -f xgp.yaml --record
[root@master ~]# kubectl apply -f xgp-svc.yaml

(5) Check the pod nodes
[root@master ~]# kubectl get pod -o wide
(6) record a version [root@master ~] # kubectl rollout history deployment xgp-web > pod.txt
(7) visit
2) Based on the Deployment above, upgrade to v2 and record a revision.

(1) Modify the image version in the yaml file
[root@master ~]# vim xgp.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: xgp-web
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: xgp-server
    spec:
      containers:
      - name: web
        image: 192.168.1.21:5000/web:v2    # changed to v2
        ports:
        - containerPort: 80
      nodeSelector:
        disk: ssd

(2) Re-apply the yaml file
[root@master ~]# kubectl apply -f xgp.yaml --record

(3) Access
(4) record a version [root@master ~] # kubectl rollout history deployment xgp-web > pod.txt
3) Finally, upgrade to v3, then check the Service association and analyze the details of how the access traffic is load balanced.

(1) Modify the image version in the yaml file
[root@master ~]# vim xgp.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: xgp-web
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: xgp-server
    spec:
      containers:
      - name: web
        image: 192.168.1.21:5000/web:v3    # changed to v3
        ports:
        - containerPort: 80
      nodeSelector:
        disk: ssd

(2) Re-apply the yaml file
[root@master ~]# kubectl apply -f xgp.yaml --record

(3) Access
(4) Analyze the load balancing of the access traffic and check the port mapped by the Service.
Starting from the IP, analyze the details of how the access traffic is load balanced.
Load balancing implemented by a Service uses iptables rules by default; IPVS can also be used.
[root@master ~]# iptables-save | grep 10.107.27.229    # view the iptables rules corresponding to the IP exposed by the Service
[root@master ~] # iptables-save | grep KUBE-SVC-ESI7C72YHAUGMG5S
The load ratio across the backends is shown here.
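For reference, kube-proxy's rules in such a KUBE-SVC chain use the iptables statistic module to split traffic roughly evenly across the endpoints; the lines below are only an illustration of that shape (the KUBE-SEP names are placeholders, not output copied from this cluster):

-A KUBE-SVC-ESI7C72YHAUGMG5S -m statistic --mode random --probability 0.33333 -j KUBE-SEP-AAAAAAAAAAAAAAAA
-A KUBE-SVC-ESI7C72YHAUGMG5S -m statistic --mode random --probability 0.50000 -j KUBE-SEP-BBBBBBBBBBBBBBBB
-A KUBE-SVC-ESI7C72YHAUGMG5S -j KUBE-SEP-CCCCCCCCCCCCCCCC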
Check whether the IP is consistent [root@master ~] # iptables-save | grep KUBE-SEP-VDKW5WQIWOLZMJ6G
[root@master ~]# kubectl get pod -o wide
4) Roll back to the specified version v1 and verify.

Roll back to the specified version
[root@master ~]# kubectl rollout undo deployment xgp-web --to-revision=1
# what is specified here is the revision number from the version history

Visit
[root@master ~]# curl 127.0.0.1:30123
Troubleshooting approach
[root@master ~]# less /var/log/messages | grep kubelet
[root@master ~]# kubectl logs -n kube-system kube-scheduler-master
[root@master ~]# kubectl describe pod xgp-web-7d478f5bb7-bd4bj
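A couple of additional commands that are often useful when tracking down scheduling or rollout problems (generic kubectl usage, not specific to this experiment):

[root@master ~]# kubectl get events --sort-by=.metadata.creationTimestamp    # recent cluster events in time order
[root@master ~]# kubectl describe deployment xgp-web                         # conditions and rollout state of the Deployment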
After reading this overview of Deployment and its usage scenarios, if there is anything else you would like to know, you can look it up in the industry information or ask our professional technical engineers, who have more than ten years of experience in the industry.