This article analyzes a number of advanced K8S (Kubernetes) application examples and walks through the corresponding commands and solutions, in the hope of giving readers a simple, workable approach to each problem.
One. Rolling image updates
1 Update the image directly with a command
Command: kubectl set image deployment/[x-deployment] [container name]=[image:version]
Example: create an nginx Deployment using image version nginx:1.10
cat nginx-deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80

kubectl create -f nginx-deployment.yaml
Use curl to access the Pod IP and check the nginx version
curl -I <pod IP>
Perform the image update, upgrading the nginx image to nginx:1.11
kubectl set image deployment/nginx-deployment nginx=nginx:1.11
View real-time update status
kubectl rollout status deployment/[x-deployment]   Note: replace [x-deployment] with the name of your own Deployment
You can also check the describe output, which records the complete replacement history.
kubectl describe deploy/[x-deployment]
Note that after the image update the Pod IPs will also change.
2 Update the image version by editing the configuration
Command: kubectl edit deploy [x-deployment]
Modify the image field, for example change image: nginx:1.10 to the new version
Then verify again through access.
3 Update by patching
Example: scale the Deployment out to 5 replicas: kubectl patch deployment [x-deployment] -p '{"spec": {"replicas": 5}}'
4 update the rolling update strategy by patching
Change the update strategy so that at most 1 extra Pod is created and 0 Pods are unavailable during a rollout: kubectl patch deployment [x-deployment] -p '{"spec": {"strategy": {"rollingUpdate": {"maxSurge": 1, "maxUnavailable": 0}}}}'
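The same strategy can also be declared directly in the Deployment manifest instead of being patched in; a minimal hedged sketch of the relevant section, with values matching the patch above:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra Pod above the desired replica count during a rollout
      maxUnavailable: 0  # no Pod may be unavailable while the update is in progress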
5 Canary release (pausing an update)
Taking set image as an example, start the update and pause it immediately: kubectl set image deploy/[x-deployment] [container name]=[image:version] && kubectl rollout pause deployment/[x-deployment]. To continue the update while it is paused: kubectl rollout resume deployment/[x-deployment]
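A concrete hedged sketch of the flow, using the nginx-deployment from above and a hypothetical nginx:1.12 target image:

# start the update and pause the rollout right away, so that typically only part of the Pods are replaced
kubectl set image deploy/nginx-deployment nginx=nginx:1.12 && kubectl rollout pause deployment/nginx-deployment
# inspect the canary Pods that already run the new image
kubectl get pod -o wide
# once they look healthy, let the rollout finish and watch it complete
kubectl rollout resume deployment/nginx-deployment
kubectl rollout status deployment/nginx-deployment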
6 View the ReplicaSets (rs)
kubectl get rs -o wide
Two. View historical versions
Command: kubectl rollout history deployment [x-deployment]
Specify a revision number to view detailed image information for that revision
Command: kubectl rollout history deployment [x-deployment] --revision=n (--revision specifies the historical revision number)
Three. Version rollback
1 Roll back to the previous version
Command: kubectl rollout undo deployment [x-deployment]
You can use describe to view rollback information
2 Roll back to a specified version
Command: kubectl rollout undo deployment [x-deployment] --to-revision=n, where n is the revision number shown by kubectl rollout history deployment [x-deployment]. If you don't know which revision to roll back to, check with that command first.
You can also use describe to view rollback information
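A short worked sketch, assuming the nginx-deployment from section one and a hypothetical target revision 1:

# list the recorded revisions
kubectl rollout history deployment nginx-deployment
# inspect the image recorded for revision 1
kubectl rollout history deployment nginx-deployment --revision=1
# roll back to that revision and watch the rollout
kubectl rollout undo deployment nginx-deployment --to-revision=1
kubectl rollout status deployment/nginx-deployment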
Four. Automatic scaling (expansion and reduction)
Automatic scaling is driven by a CPU threshold; a minimum and a maximum number of Pods can be specified.
Command: kubectl autoscale deployment [x-deployment] --min=n --max=n --cpu-percent=n
[x-deployment]: the Deployment to autoscale
--min: the minimum number of Pods
--max: the maximum number of Pods
--cpu-percent: the CPU threshold, as a percentage
View the autoscaler: kubectl get hpa
Delete the autoscaler: kubectl delete hpa [x-deployment]
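For example (a hedged sketch, assuming the nginx-deployment created earlier and that a metrics source such as Heapster or metrics-server is available for CPU data):

# keep between 3 and 10 replicas, targeting 80% average CPU utilization
kubectl autoscale deployment nginx-deployment --min=3 --max=10 --cpu-percent=80
# check the HorizontalPodAutoscaler that was created
kubectl get hpa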
Five. Service and Pod
A Service is associated with Pods through labels. A newly created Pod is normally not reachable from outside and can only be accessed through its Pod IP. Publishing a Service maps the ports exposed by the Pods to the host, so they can be reached externally through the host IP; it is the labels that tie the Service and the Pods together.
Service types: ExternalName, ClusterIP, NodePort, and LoadBalancer
ExternalName: used to access applications outside the cluster; the internal DNS can resolve the name
ClusterIP: the Service IP is a virtual IP (VIP); behind the Service sits the group of Pods
NodePort: exposes a port on the nodes, so the Service can be reached through any K8S cluster node
LoadBalancer: on cloud platforms, the Service and its Pods can be reached through the cloud provider's native load balancer
ClusterIP headless mode: simply set clusterIP to None
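A minimal hedged sketch of a headless Service (the nginx-headless name is illustrative; the selector reuses the app: nginx label from the examples in this article). DNS then returns the individual Pod IPs instead of a single VIP:

apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None      # headless: no virtual IP is allocated
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80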
1 Create an nginx Deployment first, then publish a Service to expose the nginx port
Pod deployment:
cat nginx-deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80

kubectl create -f nginx-deployment.yaml

cat nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  ports:
  - port: 88
    targetPort: 80
  selector:
    app: nginx

kubectl create -f nginx-service.yaml
2 View and verify
kubectl get svc or kubectl get svc nginx-service
You will see that the Service has been assigned a virtual IP (the cluster IP) together with an access port; on a cluster node, test access with VIP:port
3 View the Pods and exposed Pod ports associated with the Service
kubectl get ep [x-service]
4 You can also make requests from the same source address always land on the same Pod
kubectl patch svc myapp -p '{"spec": {"sessionAffinity": "ClientIP"}}'
5 Resource records (when CoreDNS is deployed)
The record format is SVC_NAME.NS_NAME.DOMAIN.LTD, where the default cluster domain is svc.cluster.local. Example: a redis Service in the default namespace gets the record redis.default.svc.cluster.local.
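A quick hedged check from inside the cluster, assuming the nginx-service created earlier in this section and a running cluster DNS:

# run a throwaway busybox Pod and resolve the Service name
kubectl run -it --rm --restart=Never dns-test --image=busybox:1.28 -- nslookup nginx-service.default.svc.cluster.local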
Six. Restart policy
There are three policies:
Always: always restart the container when it terminates and exits. This is the default policy.
OnFailure: restart the container only when the container exits abnormally (the exit status code is not 0).
Never: never restart the container when it terminates and exits.
Define using the restartPolicy field
Example: create a Pod that runs a command in a loop. Enter the container and manually kill the process to see whether it is restarted.
cat testrestart.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-test
  labels:
    test: centos
spec:
  containers:
  - name: hello
    image: centos:6
    command: ["bash", "-c", "while true; do date; sleep 1; done"]
  restartPolicy: OnFailure

kubectl create -f testrestart.yaml
Test steps:
1 find the node where the pod is located
kubectl get pod -o wide
2 View the process on the node
ps -ef | grep bash
3 Manual kill
kill -9 <process ID>
4 check the number of restarts
kubectl get pod -o wide (check the restart count in the RESTARTS column) or kubectl describe pod/pod-test (check the Restart Count field). Note: whether describe is followed by pod or deployment depends on the kind in the yaml file.
Seven. Pod management: health checks
1 Pod health check mechanisms
There are two types of Probe mechanisms available:
LivenessProbe
If the check fails, the container will be killed, and then the restart will be decided according to Pod's restart policy.
ReadinessProbe
If the check fails, Kubernetes removes the Pod from the Service proxy's backend endpoints
Probe supports the following three inspection methods:
httpGet: sends an HTTP request; a status code of at least 200 and below 400 counts as success.
exec: executes a shell command in the container; an exit status of 0 counts as success.
tcpSocket: initiates a TCP connection to the container; the check succeeds if the connection can be established.
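For reference, minimal hedged sketches of the exec and tcpSocket forms (the file name, port, and timing values are only illustrative), written as container-level fields just like the httpGet example below:

livenessProbe:
  exec:
    command: ["cat", "/tmp/healthy"]   # succeeds as long as the file exists
  initialDelaySeconds: 5
  periodSeconds: 5

readinessProbe:
  tcpSocket:
    port: 80                           # ready once the TCP port accepts connections
  initialDelaySeconds: 5
  periodSeconds: 10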
Example: a health check using httpGet
vim health-httpget-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.10
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /index.html
        port: 80

kubectl create -f health-httpget-test.yaml
Test whether the container is restarted once requests for the page return 404
1 enter the container and delete the index.html file
kubectl exec nginx-pod -it bash
cd /usr/share/nginx/html/
rm -rf index.html
2 View description information
kubectl describe pod/nginx-pod
The describe output shows that the original container was killed by the health check mechanism and a new container was created. However, kubectl get pod -o wide shows that although the container was recreated, the Pod IP did not change.
Eight. Pod NodePort management
The ports in K8S fall into three kinds: port, targetPort, and nodePort
port: the port through which other containers in the cluster access the Service
targetPort: the port exposed by the container in the Pod
nodePort: the port exposed on the node
A host port can therefore be mapped to a Pod for external access.
Example: map the Pod's container port to port 20088 of the host node
Create a Pod:
cat port.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx10-pod
  labels:
    app: nginx10
spec:
  containers:
  - name: nginx10
    image: nginx:1.10
    ports:
    - name: http
      containerPort: 80
      hostIP: 0.0.0.0
      protocol: TCP
    - name: https
      containerPort: 443
      hostIP: 0.0.0.0
      protocol: TCP

kubectl create -f port.yaml
Create a Service:
cat nginx-pod-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx-pod-service
spec:
  type: NodePort
  selector:
    app: nginx10        # matches the label of nginx10-pod created above
  ports:
  - port: 8080
    targetPort: 80
    nodePort: 20088
    protocol: TCP
    name: http
  - port: 8443
    targetPort: 443
    nodePort: 28443
    protocol: TCP
    name: https

kubectl create -f nginx-pod-service.yaml
The yaml file in the example maps host ports 20088 and 28443 to Pod ports 80 and 443 respectively; 8080 and 8443 are the ports on the in-cluster VIP.
Test:
1 Find the node where the Pod is located
kubectl get pod -o wide
2 View svc
kubectl get svc
3 Access the host at IP:20088
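For example (hedged; substitute the node IP reported by kubectl get pod -o wide, 20088 being the nodePort defined above):

curl -I <node IP>:20088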
Nine. Set a fixed cluster IP
A Service needs a VIP for load balancing. If no address is specified, one is assigned automatically (much like DHCP). To set it manually, just add the clusterIP field to the yaml file. The IP must lie within the service IP range configured for the apiserver and should not overlap with other address ranges used in the cluster.
Create a group of web Pods:
cat nginx-deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80
Create the service and set up the cluster IP 10.1.110.111
cat nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  ports:
  - port: 88
    targetPort: 80
  selector:
    app: nginx
  clusterIP: "10.1.110.111"

kubectl create -f nginx-service.yaml
Check whether the cluster IP is set up successfully
kubectl get svc
Test whether WEB can be accessed through the cluster IP on the node
curl -I 10.1.110.111:88
To see which Pods actually served the requests, check the logs.
kubectl logs -l app=nginx (select the Pods by label)
Ten. View the environment variables of a Pod
Command: kubectl exec [Pod name] env
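For example, using the nginx-pod created earlier (newer kubectl versions expect the command to be separated by --):

kubectl exec nginx-pod -- env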
Eleven. Use of data volumes
K8S supports more than a dozen kinds of data volumes, such as emptyDir and hostPath, network volumes including nfs, iSCSI, Flocker, glusterfs, RBD and cephfs, as well as gitRepo, secret, persistentVolumeClaim, projected, and so on.
Today, we will briefly introduce the use of emptyDir, hostPath and nfs in K8s.
1 emptyDir
When a Pod is assigned to a node, an empty volume is created and mounted into the containers of the Pod. The containers can read and write files in this volume. When the Pod is removed from the node, the data in the emptyDir volume is deleted as well.
cat emptyDir.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
  namespace: kube-system
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}

kubectl create -f emptyDir.yaml

View the volume information with kubectl describe pods test-pd -n kube-system
The describe output shows that a volume named cache-volume has been created, that its type is EmptyDir, and that its lifecycle follows the lifecycle of the Pod.
2 hostPath
A hostPath volume mounts a file or directory from the node's file system into a container of the Pod. The mounted path must already exist on the host (it can be a file, a directory, or a socket).
cat hostPath.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx:1.12
    name: test-container
    volumeMounts:
    - mountPath: /data
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /home/test      # directory location on the host
      type: Directory

kubectl create -f hostPath.yaml

/home/test is the host directory (it must already exist) and /data is the mount point inside the container. The details can be viewed with describe.
Verify:
Create a file in the /data directory inside the container
kubectl exec test-pd -it -- touch /data/a.txt
Check the corresponding directory on the host
ls /home/test/
3 Network data volume NFS
Install NFS
yum install -y nfs* && systemctl start nfs
Create an NFS shared directory
mkdir /home/data
Set up the NFS share for /home/data
vim /etc/exports and write: /home/data *(rw)
Start nfs
systemctl start nfs
Authorization
chmod -R 777 /home/data/
Client testing (showmount needs to be installed)
yum install -y showmount
showmount -e <NFS server IP>
If you can see the shared directory, the NFS network directory is ready to use.
Edit the yaml file
cat nfs.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        volumeMounts:
        - name: wwwroot
          mountPath: /var/www/html
        ports:
        - containerPort: 80
      volumes:
      - name: wwwroot
        nfs:
          server: 10.10.25.149
          path: /home/data

kubectl create -f nfs.yaml
Verification
Go to the nfs directory to create a test file
cd /home/data/
touch index.html
Check whether the directory is mounted inside the Pod
kubectl exec nginx-deployment-8669f48cd4-4lvwd -it -- ls /var/www/html
Edit the index.html file
echo 'hello world' > /home/data/index.html
Check whether the index.html file inside the container is the same as what we edited
kubectl exec nginx-deployment-8669f48cd4-4lvwd -it -- cat /var/www/html/index.html
Verify that the data still exists after the Pod is destroyed
kubectl delete -f nfs.yaml, then look at the file: cat /home/data/index.html
Twelve. externalIPs
With externalIPs, a Service listens on the Service port at the specified node IP.
Create deployment
# cat nginx10-deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80

kubectl create -f nginx10-deployment.yaml
Create the service and set the externalIPs:
cat nginx-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx-deploy-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  externalIPs:
  - 10.10.25.150
View svc
kubectl get svc
At this point the Service can be reached through the defined node IP and port. Access through the Service IP inside the cluster is of course unaffected.
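A quick hedged check, where 10.10.25.150 is the externalIPs value and 8080 the Service port from the yaml above:

curl -I 10.10.25.150:8080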
Thirteen. Pod resource limits
Define resource limits for a Pod in the yaml file
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

requests is the amount of resources the scheduler reserves for the Pod, and limits is the maximum it is allowed to consume; after creating the Pod you can confirm the values with kubectl describe pod nginx-pod.

This concludes the analysis of advanced K8S application examples. I hope the content above is helpful. If you still have questions, keep following the industry information channel to learn more.