2025-02-24 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/03 Report --
Overview of Kubernetes
Kubernetes is an open-source container orchestration engine from Google that supports automated deployment, large-scale scaling, and containerized application management. When deploying an application in a production environment, multiple instances of it are usually deployed so that requests can be load-balanced across them.
In Kubernetes, we can create multiple containers, run one application instance in each, and then manage, discover, and access this group of instances through the built-in load-balancing policy; none of these details needs to be configured or handled manually by operations staff.
Characteristics of Kubernetes
Portable: supports public cloud, private cloud, hybrid cloud, and multi-cloud deployments
Extensible: modular, pluggable, mountable, composable
Automated: automatic deployment, automatic restart, automatic replication, automatic scaling
There are two ways to create resources in K8s: the command line and YAML files. This post mainly introduces the YAML-file approach; to create resources from the command line, refer to the post on basic management of K8s resource objects.
In Kubernetes, a YAML file and a resource manifest are the same thing; which term you use is a matter of habit. This post refers to them uniformly as YAML files!
1. YAML file basics
YAML is a language designed for configuration files: very concise, yet powerful. If you already know data formats such as properties, XML, and JSON, you will find it increasingly handy once you get used to it. In effect, YAML combines the strengths of most markup languages into a newer design.
Features of YAML files:
clear hierarchy and structure; easy to read and write; powerful features and rich semantics
Pay special attention: YAML is case-sensitive and has strict indentation requirements.
2. Using YAML files
1) Composition of a YAML file
The YAML file in Kubernetes mainly consists of five first-level fields, which are:
apiVersion: API version information;
kind: the type of resource object to create;
metadata: nested fields that define the name and namespace of the resource object;
spec: the specification, which defines the characteristics the resource should have; controllers ensure that the actual state matches this desired state;
status: the current state of the resource; K8s keeps driving the current state toward the target state to meet user expectations.
2) Getting help when writing YAML files
Even knowing what the first-level fields are, you may still not know how to write them. The following commands can help.
[root@master ~]# kubectl api-versions          # list the API versions supported by the current cluster
[root@master ~]# kubectl api-resources         # list all API resource objects
[root@master ~]# kubectl explain deployment    # view the manifest format of an object: which fields it contains and how to use them
[root@master ~]# kubectl explain deployment.spec
# "kubectl explain" is very important: it can drill down level by level for help on each field
3) Basic format of a YAML file
[root@master ~]# cat web.yaml
kind: Deployment                  # the type of resource object to create
apiVersion: extensions/v1beta1    # the API version corresponding to Deployment
metadata:
  name: web                       # the name of the Deployment
spec:
  replicas: 2                     # the number of replicas
  template:
    metadata:
      labels:                     # the labels of the pod
        app: web_server
    spec:
      containers:
      - name: nginx               # the name of the container running in the pod
        image: nginx              # the image used to run the container
4) apply: create or update
[root@master ~]# kubectl apply -f web.yaml
# "-f" specifies the YAML file; the required resources are generated from its definitions
kubectl apply can be run on the same file multiple times; if the file has changed, the resources are updated accordingly.
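This create-or-update behavior can be sketched as a toy Python model (purely illustrative; this is not how kubectl is implemented): the object is created if absent, reconfigured if the manifest differs, and left alone otherwise.

```python
def apply(cluster, manifest):
    """Toy model of `kubectl apply`'s idempotent behavior: create the object
    if absent, update it if the manifest differs, do nothing otherwise."""
    key = (manifest["kind"], manifest["metadata"]["name"])
    if key not in cluster:
        cluster[key] = dict(manifest)
        return "created"
    if cluster[key] != manifest:
        cluster[key] = dict(manifest)
        return "configured"
    return "unchanged"

cluster = {}
web = {"kind": "Deployment", "metadata": {"name": "web"}, "spec": {"replicas": 2}}
r1 = apply(cluster, web)                  # first apply creates the object
r2 = apply(cluster, web)                  # re-applying the same file changes nothing
web2 = {**web, "spec": {"replicas": 3}}
r3 = apply(cluster, web2)                 # a changed file updates the object
```

Real kubectl prints analogous messages ("created" / "configured" / "unchanged"), which is why re-running apply on an unmodified file is harmless.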
5) delete: remove resources
[root@master ~]# kubectl delete -f web.yaml    # delete the resources defined in the YAML file
6) Verify
[root@master ~]# kubectl get deployments web   # view the deployment generated for web
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
web    2/2     2            2           5m50s
[root@master ~]# kubectl describe deployments web   # view details of the web controller
The result returned is as follows:
In this way, Kubernetes has generated the pod resources we need from the YAML file!
[root@master ~]# kubectl get pod -o wide   # view pod details
NAME                  READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
web-d6ff6c799-7jtvd   1/1     Running   0          17m   10.244.2.2   node02   <none>           <none>
web-d6ff6c799-7tpdc   1/1     Running   0          17m   10.244.1.2   node01   <none>           <none>
K8s cluster internal test access:
3. Create a YAML file so that the pods' service can be accessed from outside the K8s cluster
[root@master ~]# cat web-svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: web-svc
spec:
  type: NodePort        # NodePort allows external access; the default, ClusterIP, is only reachable inside the cluster
  selector:
    app: web_server     # must match the labels of the Deployment resource object
  ports:
  - protocol: TCP
    port: 80            # the port on the cluster IP
    targetPort: 80      # the port in the pod it maps to
    nodePort: 31000     # the port mapped on the host; the allowed range is 30000-32767
[root@master ~]# kubectl apply -f web-svc.yaml   # generate the service (named web-svc in the YAML file)
[root@master ~]# kubectl get svc web-svc         # view the service
NAME      TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
web-svc   NodePort   10.99.32.22   <none>        80:31000/TCP   12m
# TYPE is NodePort, so it is reachable externally
# PORT(S) shows the mapped port, the same one we defined
Test access:
Note: the services provided by the pods can be accessed through any node in the K8s cluster!
4. How the underlying load balancing is implemented
[root@master ~]# kubectl describe svc web-svc   # view the details of the service
The information returned is as follows:
The Endpoints field lists the IP addresses of the backend pods; verify this as follows:
[root@master ~]# kubectl get pod -o wide | awk '{print $6}'   # extract the IP addresses of the backend pods
IP
10.244.2.2
10.244.1.2
# the same as the result of the query above!
We know that a service provides load balancing, but how is it implemented?
In fact, the principle behind it is not so exotic: kube-proxy uses iptables forwarding to achieve load balancing. Packets whose destination IP is the service's cluster IP are matched first, and the "-j" option then jumps them to further iptables chains. Verify as follows:
[root@master ~]# kubectl get svc web-svc | awk '{print $3}'   # first check the cluster IP of the service
CLUSTER-IP
10.99.32.22
[root@master ~]# iptables-save | grep 10.99.32.22   # check the iptables rules related to the cluster IP
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.99.32.22/32 -p tcp -m comment --comment "default/web-svc: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.99.32.22/32 -p tcp -m comment --comment "default/web-svc: cluster IP" -m tcp --dport 80 -j KUBE-SVC-3RBUQ3B6P3MTQ3S7
# as the result shows, traffic whose destination is the cluster IP is forwarded to the KUBE-SVC-3RBUQ3B6P3MTQ3S7 chain
[root@master ~]# iptables-save | grep KUBE-SVC-3RBUQ3B6P3MTQ3S7
:KUBE-SVC-3RBUQ3B6P3MTQ3S7 - [0:0]
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/web-svc:" -m tcp --dport 31000 -j KUBE-SVC-3RBUQ3B6P3MTQ3S7
-A KUBE-SERVICES -d 10.99.32.22/32 -p tcp -m comment --comment "default/web-svc: cluster IP" -m tcp --dport 80 -j KUBE-SVC-3RBUQ3B6P3MTQ3S7
-A KUBE-SVC-3RBUQ3B6P3MTQ3S7 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-E3SP5QDRAUFB55IC
-A KUBE-SVC-3RBUQ3B6P3MTQ3S7 -j KUBE-SEP-3T3LUFAKMOTS5BKN
# the load-balancing effect is visible in these rules; because only two pods were created, the probability is 0.5
This shows how service load balancing takes effect: iptables rules are used by default, although other modes exist; they are not covered here!
To view the load-balancing details, use the cluster IP as the entry point!
At bottom, load balancing is achieved by iptables rules selecting a backend at random (the statistic module's random mode)!
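The probability-based selection can be sketched in Python (a simplified model of kube-proxy's chained iptables rules, not its actual code): for N endpoints, rule i matches with probability 1/(N - i), which makes every endpoint equally likely overall.

```python
import random

def pick_endpoint(endpoints, rng=random.random):
    """Mimic kube-proxy's chained iptables rules: rule i jumps to endpoint i
    with probability 1/(N - i); the final rule matches unconditionally."""
    n = len(endpoints)
    for i, ep in enumerate(endpoints):
        remaining = n - i
        if remaining == 1 or rng() < 1.0 / remaining:
            return ep

random.seed(42)  # deterministic seed for the demonstration
counts = {"10.244.2.2": 0, "10.244.1.2": 0}
for _ in range(100_000):
    counts[pick_endpoint(list(counts))] += 1
# with two backends, the first rule fires with probability 0.5,
# so each pod receives roughly half of the requests
```

This is why the rule for the first endpoint above carries `--probability 0.50000000000` while the second has no probability clause at all.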
5. Rolling the service back to a specified version
From the command-line-based management of K8s resource objects, we know that version upgrade and rollback in Kubernetes work much as in Docker Swarm, and that a plain rollback can only return to the previous version.
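Kubernetes additionally records numbered revisions that can be targeted individually. As a mental model of that bookkeeping, here is a toy Python sketch (the class and its semantics are illustrative only, not a Kubernetes API):

```python
class DeploymentHistory:
    """Toy model of a Deployment's revision bookkeeping: every apply records
    a new revision (bounded by revisionHistoryLimit), and an undo re-records
    the chosen old revision as the newest one."""

    def __init__(self, revision_history_limit=10):
        self.limit = revision_history_limit
        self.revisions = {}   # revision number -> image
        self.next_rev = 1
        self.live = None      # image currently rolled out

    def apply(self, image):
        self.live = image
        self.revisions[self.next_rev] = image
        self.next_rev += 1
        while len(self.revisions) > self.limit:   # prune the oldest history
            del self.revisions[min(self.revisions)]

    def undo_to(self, revision):
        image = self.revisions.pop(revision)  # the old number is retired...
        self.apply(image)                     # ...and becomes the newest revision

h = DeploymentHistory()
for tag in ("httpd:v1", "httpd:v2", "httpd:v3"):
    h.apply(tag)
h.undo_to(1)   # roll back to the image recorded as revision 1
```

Note that rolling back does not rewind history: the restored spec simply becomes the newest revision, which matches how `kubectl rollout history` renumbers entries after an undo.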
Build a private registry, create three versions of a custom image (distinguished by their home-page content), and push them to the registry. Because this is straightforward, the process is omitted here …
Kubernetes can also be rolled back to the specified version as follows:
[root@master yaml]# cat httpd01.deployment.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: httpd
spec:
  revisionHistoryLimit: 10    # keep up to 10 revisions of history
  replicas: 3
  template:
    metadata:
      labels:
        app: httpd-server
    spec:
      containers:
      - name: httpd
        image: 192.168.1.1:5000/httpd:v1   # the three versions are distinguished by image
        ports:
        - containerPort: 80                # informational only; it has no actual effect
[root@master yaml]# cat httpd02.deployment.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: httpd
spec:
  revisionHistoryLimit: 10
  replicas: 3
  template:
    metadata:
      labels:
        app: httpd-server
    spec:
      containers:
      - name: httpd
        image: 192.168.1.1:5000/httpd:v2   # the three versions are distinguished by image
        ports:
        - containerPort: 80
[root@master yaml]# cat httpd03.deployment.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: httpd
spec:
  revisionHistoryLimit: 10
  replicas: 3
  template:
    metadata:
      labels:
        app: httpd-server
    spec:
      containers:
      - name: httpd
        image: 192.168.1.1:5000/httpd:v3   # the three versions are distinguished by image
        ports:
        - containerPort: 80
[root@master yaml]# kubectl apply -f httpd01.deployment.yaml --record
# --record records the revision history
[root@master yaml]# kubectl rollout history deployment httpd   # view the revision history
deployment.extensions/httpd
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=httpd01.deployment.yaml --record=true
# "1" is the revision number; the corresponding YAML file is also shown
[root@master yaml]# kubectl apply -f httpd02.deployment.yaml --record
[root@master yaml]# kubectl apply -f httpd03.deployment.yaml --record
# upgrade twice more from the YAML files
[root@master yaml]# kubectl rollout history deployment httpd
deployment.extensions/httpd
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=httpd01.deployment.yaml --record=true
2         kubectl apply --filename=httpd02.deployment.yaml --record=true
3         kubectl apply --filename=httpd03.deployment.yaml --record=true
# confirm that the upgrade revisions have been recorded
[root@master yaml]# vim httpd-svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: httpd-svc
spec:
  type: NodePort
  selector:
    app: httpd-server
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 31000
[root@master yaml]# kubectl apply -f httpd-svc.yaml   # create a svc for access testing
[root@master yaml]# curl 127.0.0.1:31000              # test the page
hello lvzhenjiang:v3
[root@master yaml]# kubectl rollout undo deployment httpd --to-revision=1
# roll back to revision 1; the number after --to-revision comes from the first column of the history output
[root@master yaml]# curl 127.0.0.1:31000              # test access again
hello lvzhenjiang:v1
6. Using labels to control where pods run
If the location of a pod is not specified, scheduling is handled by default by the scheduler component in K8s, with no human intervention. If the business requires pods to be placed manually, use the following method:
[root@master yaml]# kubectl label nodes node02 disk=ssd   # manually tag node02 with the label disk=ssd
[root@master yaml]# kubectl get nodes --show-labels | grep disk=ssd   # view node labels, filtering for disk=ssd
node02   Ready   <none>   5d22h   v1.15.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux
# as the result shows, only node02 carries this label
[root@master yaml]# vim httpd.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: httpd
spec:
  revisionHistoryLimit: 10
  replicas: 3
  template:
    metadata:
      labels:
        app: httpd-server
    spec:
      containers:
      - name: httpd
        image: 192.168.1.1:5000/httpd:v1
        ports:
        - containerPort: 80
      nodeSelector:        # specify the label selector
        disk: ssd
[root@master yaml]# kubectl apply -f httpd.yaml   # generate the pod resources from the YAML file
[root@master yaml]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
httpd-5895f5548b-6lb97   1/1     Running   0          12s   10.244.2.8    node02   <none>           <none>
httpd-5895f5548b-gh8br   1/1     Running   0          10s   10.244.2.10   node02   <none>           <none>
httpd-5895f5548b-llxh7   1/1     Running   0          12s   10.244.2.9    node02   <none>           <none>
# as the query shows, all three pod resources are running on node02
The requirement above has been met, but wherever there is human intervention mistakes can happen, so consider the following situation: what happens if the label on node02 is deleted?
[root@master yaml]# kubectl label nodes node02 disk-   # delete the disk=ssd label from node02
[root@master yaml]# kubectl get nodes --show-labels | grep disk=ssd   # verify that the label has been deleted
[root@master yaml]# kubectl delete -f httpd.yaml   # delete the original pod resources
[root@master yaml]# kubectl apply -f httpd.yaml    # regenerate the pods (the label is still specified in the YAML file)
[root@master yaml]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
httpd-5895f5548b-7w26q   0/1     Pending   0          65s   <none>   <none>   <none>           <none>
httpd-5895f5548b-c6p6s   0/1     Pending   0          65s   <none>   <none>   <none>           <none>
httpd-5895f5548b-v4s5c   0/1     Pending   0          65s   <none>   <none>   <none>           <none>
# the pods are stuck in Pending, which is clearly abnormal
Even though the label no longer exists on any node, it is still specified in the YAML file. No error occurs when creating the pod resources, but the pods sit in the Pending (waiting) state. Troubleshoot as follows:
[root@master yaml]# kubectl describe pod httpd-5895f5548b-7w26q   # view the details of the pod
The result returned is as follows:
The output shows that the label selector does not match the labels of any node in the cluster!
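The mismatch can be illustrated with a small Python sketch of nodeSelector matching (a simplified model, not the scheduler's actual algorithm):

```python
def schedulable_nodes(node_labels, node_selector):
    """Return the nodes whose labels satisfy every key=value pair in the
    selector -- a simplified view of how nodeSelector constrains placement."""
    return [node for node, labels in node_labels.items()
            if all(labels.get(k) == v for k, v in node_selector.items())]

nodes = {
    "node01": {"kubernetes.io/hostname": "node01"},
    "node02": {"kubernetes.io/hostname": "node02", "disk": "ssd"},
}
before = schedulable_nodes(nodes, {"disk": "ssd"})   # only node02 qualifies
del nodes["node02"]["disk"]                          # like "kubectl label nodes node02 disk-"
after = schedulable_nodes(nodes, {"disk": "ssd"})    # no node qualifies: pods stay Pending
```

With an empty candidate list, the scheduler has nowhere to place the pods, which is exactly the Pending state observed above.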
If the above reveals no error information, you should also check the following:
[root@master yaml]# kubectl logs -n kube-system kube-scheduler-master   # view the logs generated by the scheduler component
[root@master yaml]# less /var/log/messages | grep kubelet   # view kubelet entries in the system log
# check kubelet because it is the component responsible for managing pods
For a detailed introduction of K8s, it is recommended to refer to the Chinese documentation of K8s.