

Summary of kubernetes interview questions


Note: all of the following answers are my own summary. If there are any mistakes, please point them out.

1. What is K8s? Would you please tell me what you know?

A: Kubernetes is an open source system for automatically deploying, scaling, and managing containerized applications. Its main function is container orchestration in production environments.

K8s was launched by Google. It grew out of the Borg system, which Google had used internally for 15 years, and distills the essence of Borg.

2. What is the composition of K8s architecture?

A: like most distributed systems, a K8S cluster requires at least one master node (Master) and multiple compute nodes (Node).

The master node is mainly responsible for exposing the API, scheduling deployments, and managing nodes. The compute nodes run a container runtime, usually docker (rkt is a similar environment), along with a K8s agent (kubelet) that communicates with the master. The compute nodes also run additional components such as logging, node monitoring, and service discovery. The compute nodes are the nodes that do the real work in the K8s cluster.

K8S architecture breakdown:

1. Master node (does not participate in actual work by default):

Kubectl: the client command-line tool, which serves as the operation entrance for the entire K8s cluster;
Api Server: plays the role of a "bridge" in the K8s architecture. As the only entry point for resource operations, it provides mechanisms such as authentication, authorization, access control, and API registration and discovery. Clients and internal K8s components communicate with the cluster through the Api Server;
Controller-manager: responsible for maintaining the state of the cluster, such as fault detection, automatic scaling, and rolling updates;
Scheduler: responsible for resource scheduling, dispatching pods to the appropriate node according to the predetermined scheduling policy;
Etcd: acts as the data center, saving the state of the entire cluster.

2. Node node:

Kubelet: responsible for maintaining the life cycle of containers, as well as Volume and network management. It runs on every node and is the agent of the Node. When the Scheduler decides that a pod should run on a certain node, it sends the pod's specific information (image, volume, etc.) to that node's kubelet; kubelet creates and runs the containers based on this information and reports the running status back to the master. (Auto-repair feature: if a container on a node goes down, kubelet tries to restart it; if restarting fails, it kills the pod and creates a new one.)
Kube-proxy: a Service logically represents multiple backend pods. Kube-proxy is responsible for providing Services with service discovery and load balancing inside the cluster (when outsiders access a service provided by pods through a Service, the requests received by the Service are forwarded to the pods by kube-proxy);
container-runtime: the software responsible for managing and running containers, such as docker;
Pod: the smallest unit in a K8s cluster. Each pod can run one or more containers. If a pod has two containers, their USR (user), MNT (mount point), and PID (process number) namespaces are isolated from each other, while UTS (hostname and domain name), IPC (message queue), and NET (network stack) are shared. I prefer to think of a pod as a pea pod, and the peas are the containers inside it.
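To make the pea-pod analogy concrete, here is a minimal sketch of a pod that runs two containers (the names web and sidecar and the images are hypothetical placeholders). Because the two containers share the pod's network stack, sidecar can reach web on localhost:

apiVersion: v1
kind: Pod
metadata:
  name: two-container-pod          # hypothetical name for illustration
spec:
  containers:
  - name: web                      # first container
    image: nginx:1.25
    ports:
    - containerPort: 80
  - name: sidecar                  # second container; shares UTS/IPC/NET with web
    image: busybox
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 10; done"]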

3. What's the difference between a container and an application deployed directly on a host?

A: the central ideas of containers are second-level startup and "package once, run anywhere", which a host-deployed application cannot achieve; but we should also pay extra attention to the data persistence of containers.

In addition, container deployment can isolate services from each other so that they do not interfere with one another, which is another core concept of containers.

4. Would you please talk about Kubernetes' health monitoring mechanism for pod resource objects?

Answer: for health checking of pod resource objects, K8s provides three types of probes to perform health monitoring of pods:

1) livenessProbe probe

It determines whether the pod is healthy according to user-defined rules. If the livenessProbe probe detects that the container is unhealthy, kubelet decides whether to restart it according to the pod's restart policy. If a container does not define a livenessProbe probe, kubelet assumes that the probe always returns success.

2) readinessProbe probe

It also determines whether the pod is healthy according to user-defined rules. If the probe fails, the controller removes the pod from the endpoint list of the corresponding service, and no requests are dispatched to this pod until the next successful probe.

3) startupProbe probe

A startup check mechanism, applied to slow-starting services, to prevent them from being killed by the two probes above before they have finished starting. The problem can also be solved another way: when defining the above two probe types, set a longer initialization delay.

Each probe method supports the same set of check parameters, which control the timing of the checks:

initialDelaySeconds: the delay before the first probe, used to prevent the health check from failing before the application has started;
periodSeconds: the check interval, i.e. how often the probe runs. Default is 10s;
timeoutSeconds: the check timeout; the probe fails once the timeout is exceeded;
successThreshold: the success threshold, i.e. how many consecutive successful checks are needed to be considered healthy. Default is 1.
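As a minimal sketch of where these parameters sit (the probed file path is a placeholder), they are written alongside the probe definition:

livenessProbe:
  exec:
    command: ["cat", "/tmp/healthy"]
  initialDelaySeconds: 10   # wait 10s after the container starts before the first probe
  periodSeconds: 5          # probe every 5 seconds (default is 10s)
  timeoutSeconds: 2         # fail the probe if it takes longer than 2 seconds
  successThreshold: 1       # one success marks the container healthy again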

The probes above support the following three detection methods:

1) exec: checks whether the service is normal by executing a command. For example, use the cat command to check whether an important configuration file in the pod exists; if it does, the pod is healthy, otherwise it is abnormal.

The yaml syntax for an exec probe is as follows:

spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:             # select the livenessProbe detection mechanism
      exec:                    # execute the following command
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5   # start probing 5 seconds after the container starts
      periodSeconds: 5         # probe every 5 seconds

In the above configuration file, the probe runs every 5 seconds once the container has been running for 5 seconds. If the cat command exits with code 0, the container is healthy; a non-zero exit code indicates an exception.

2) httpGet: checks whether the service is normal by sending an http/https request. If the returned status code is in the range 200-399, the container is healthy (note: an http get is similar to the command curl -I).

The yaml syntax for an httpGet probe is as follows:

spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    livenessProbe:             # use the livenessProbe mechanism
      httpGet:                 # use the httpGet method
        scheme: HTTP           # specify the protocol; HTTPS is also supported
        path: /healthz         # probe whether the healthz page under the web root can be accessed
        port: 8080             # the listening port is 8080
      initialDelaySeconds: 3   # start probing 3 seconds after the container starts
      periodSeconds: 3         # probe every 3 seconds

In the above configuration file, the probe sends an HTTP GET request to the container, requesting the healthz file on port 8080. Any status code greater than or equal to 200 and less than 400 indicates success; any other code indicates an exception.

3) tcpSocket: performs a TCP check against the container's IP and port. If a TCP connection can be established, the container is healthy. This method is similar to the httpGet detection mechanism; a tcpSocket health check is suitable for TCP services.

The yaml syntax for a tcpSocket probe is as follows:

spec:
  containers:
  - name: goproxy
    image: k8s.gcr.io/goproxy:0.1
    ports:
    - containerPort: 8080      # both probes below establish a TCP connection to port 8080 of the container
    readinessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20

In the above yaml configuration file, both types of probes are used. 5 seconds after the container starts, kubelet sends the first readinessProbe probe, which connects to port 8080 of the container. If the probe succeeds, the pod is considered ready, and kubelet probes again 10 seconds later.

In addition to the readinessProbe probe, 15 seconds after the container starts, kubelet sends the first livenessProbe probe, again trying to connect to port 8080 of the container; if the connection fails, the container is restarted.

The result of a probe is always one of the following three:

Success: the container passed the check;
Failure: the container failed the check;
Unknown: the check did not run, so no action is taken (usually, if we do not define a probe, it is treated as successful by default).

If the above does not feel thorough enough, you can consult the official documentation: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/

5. How to control the rolling update process?

A:

You can see the parameters that can be controlled during the update through the following command:

[root@master yaml]# kubectl explain deploy.spec.strategy.rollingUpdate

maxSurge: this parameter controls, during a rolling update, how far the total number of replicas may exceed the expected number of pods. It can be a percentage or an absolute value. The default is 1.

(The effect of the above parameter: during the update, if its value is 3, then without further ado, three new pods are started first to replace old pods, and so on.)

maxUnavailable: this parameter controls how many pods may be unavailable during a rolling update. (This value is independent of the one above. For example: I have ten pods, but during the update I allow at most three of them to be unavailable, so I set this parameter to 3. The update keeps going as long as the number of unavailable pods is less than or equal to 3.)
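Putting the two parameters together, a Deployment's update strategy might look like the following minimal sketch (the replica count and values are placeholders matching the example above):

spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3          # at most 3 pods above the desired count during the update
      maxUnavailable: 3    # at most 3 of the desired pods may be unavailable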

6. What is the download strategy for images in K8s?

Answer: you can view the explanation of the imagePullPolicy field with the command "kubectl explain pod.spec.containers".

K8s has three image download policies: Always, Never, and IfNotPresent.

Always: always pull the image from the specified repository (this is what happens when the image tag is latest);
Never: never download images from the repository, that is, only use local images;
IfNotPresent: download the image from the target repository only if there is no corresponding image locally.

The default image download policy: when the image tag is latest, the default policy is Always; when the image tag is custom (that is, the tag is not latest), the default policy is IfNotPresent.
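As a minimal sketch (the container name and image are placeholders), the policy is set per container:

spec:
  containers:
  - name: web                      # hypothetical container
    image: nginx:1.25              # tag is not latest ...
    imagePullPolicy: IfNotPresent  # ... so this would also be the default policy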

7. What are the states of an image?

Running: the containers needed by the pod have been successfully dispatched to a node and are running;
Pending: the APIserver has created the pod resource object and stored it in etcd, but the pod has not been scheduled yet, or it is still downloading the image from the repository;
Unknown: the APIserver cannot get the state of the pod object normally, usually because it cannot communicate with the kubelet on the pod's work node.

8. What is the restart policy for a pod?

Answer: you can view the restart policy of a pod with the command "kubectl explain pod.spec" (the restartPolicy field).

Always: restart whenever the pod object terminates; this is the default policy.
OnFailure: restart only if the pod object terminated abnormally.
Never: never restart the pod.
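A minimal sketch of where the field goes (the pod name, image, and command are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: one-shot-task          # hypothetical name
spec:
  restartPolicy: OnFailure     # restart the containers only on abnormal exit
  containers:
  - name: task
    image: busybox
    command: ["sh", "-c", "exit 0"]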

9. What is the purpose of a resource object like Service?

Answer: it provides a fixed, unified access entry for a group of identical pod objects; it is commonly used for service discovery and service access.

10. What are the commands related to version rollback?

[root@master httpd-web]# kubectl apply -f httpd2-deploy1.yaml --record    # run the yaml file and record version information
[root@master httpd-web]# kubectl rollout history deployment httpd-devploy1    # view the historical versions of the deployment
[root@master httpd-web]# kubectl rollout undo deployment httpd-devploy1 --to-revision=1    # roll back to the specified version

In the spec field of the yaml file, you can add the following option (to limit the maximum number of recorded historical versions):

spec:
  revisionHistoryLimit: 5    # this field can be found via the command kubectl explain deploy.spec

11. What is the purpose of labels and label selectors?

Label: as resource objects of the same type become more and more numerous, they can be grouped by label for better management, improving the management efficiency of resource objects.

Label selector: a query filter for labels. Currently the API supports two kinds of label selectors:

Equality-based, using "=", "==", "!=" (note: "==" also means equal; the matchLabels field in a yaml file); set-based, using in, notin, exists (the matchExpressions field in a yaml file).

Note: in: in the given set; notin: not in the given set; exists: the key exists (its counterpart, notexists, means the key does not exist).

The operating logic of label selectors:

When multiple set-based label selectors are specified at the same time, the logical relationship between them is "and" (for example, with - {key: name, operator: In, values: [zhangsan, lisi]}, all resources carrying either of these two values are selected);
A label selector with a null value matches every resource object that has the key (for example, if the selector's key is "A", and two resource objects both have key A but with different values, a null-value selector selects both of them);
An empty label selector (note: not the null value above, but empty, with no key name defined at all) cannot select any resource;
In a set-based selector, values may be empty when using the "In" or "NotIn" operator, but an empty values list makes the selector meaningless.

The two ways of writing label selectors (equality-based and set-based):

selector:
  matchLabels:         # equality-based
    app: nginx
  matchExpressions:    # set-based
  - {key: name, operator: In, values: [zhangsan, lisi]}    # the three fields key, operator, and values are fixed
  - {key: age, operator: Exists, values:}                  # with Exists, values must be empty

12. What are the commonly used label categories?

Label categories can be customized, but to keep things clear for others, the following categories are generally used:

Version labels (release): stable (stable version), canary (canary version, which can be regarded as a beta of the beta), beta (beta version);
Environment labels (environment): dev (development), qa (test), production (production), op (operation and maintenance);
Application labels (app): ui, as, pc, sc;
Architecture labels (tier): frontend (front end), backend (back end), cache (cache);
Partition labels (partition): customerA (customer A), customerB (customer B);
Quality-control labels (track): daily (daily), weekly (weekly).

13. How many ways are there to view labels?

A: there are three common ways to view them:

[root@master ~]# kubectl get pod --show-labels    # view pods and display their label content
[root@master ~]# kubectl get pod -L env,tier      # display the values of the specified label keys
[root@master ~]# kubectl get pod -l env,tier      # display only the pods that match the specified label keys ("-L" displays all pods)

14. How to add, modify, and delete labels?

# operations on pod labels
[root@master ~]# kubectl label pod label-pod abc=123               # add a label to the pod named label-pod
[root@master ~]# kubectl label pod label-pod abc=456 --overwrite   # modify the label of the pod named label-pod
[root@master ~]# kubectl label pod label-pod abc-                  # delete the label of the pod named label-pod
[root@master ~]# kubectl get pod --show-labels

# operations on node labels
[root@master ~]# kubectl label nodes node01 disk=ssd               # add a disk label to node node01
[root@master ~]# kubectl label nodes node01 disk=sss --overwrite   # modify the label of node node01
[root@master ~]# kubectl label nodes node01 disk-                  # delete the disk label of node node01

15. What are the characteristics of the DaemonSet resource object?

DaemonSet is a resource object that runs a pod on every node in the K8s cluster, and each node runs exactly one such pod; this is the biggest, and only, difference between it and the Deployment resource object. Therefore, its yaml file does not support the replicas field; apart from that, it is written in the same way as Deployment, RS, and other resource objects.
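A minimal DaemonSet sketch (the name and image are hypothetical; log collection, from the scenarios below, is the typical use case):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector        # hypothetical name
spec:
  selector:
    matchLabels:
      app: log-collector
  template:                  # note: no replicas field; one pod runs per node
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: collector
        image: fluentd       # hypothetical log-collection image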

Its general usage scenarios are as follows:

Collecting logs on each node; monitoring the running status of each node.

16. What do you know about the Job resource object?

A: a Job differs from other service-type containers: a Job is a task container (usually used for one-off tasks). It is not commonly used, so this question can be skimmed.
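A minimal runnable sketch of a one-off Job (the name, image, and command are placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task         # hypothetical name
spec:
  template:
    spec:
      restartPolicy: Never   # a Job's pod must not use the Always restart policy
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "echo done"]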

# ways to improve the efficiency of Job execution:
spec:
  parallelism: 2    # run 2 pods at a time
  completions: 8    # run at most 8 pods in total
  template:
    metadata:

17. Describe the life cycle of a pod?

Pending: the pod has been accepted for creation and is waiting for kube-scheduler to select a suitable node, usually while the image is being prepared;
Running: all containers in the pod have been created, and at least one container is running, starting, or restarting;
Succeeded: all containers terminated successfully and will not be started again;
Failed: all containers in the pod have exited in a non-0 (abnormal) state;
Unknown: the pod's status cannot be read, usually because kube-controller-manager cannot communicate with the pod.

18. What is the process of creating a pod?

A:

1) the client submits the pod's configuration information (which can be defined in a yaml file) to kube-apiserver;

2) after receiving the instruction, the apiserver notifies controller-manager to create the resource object;

3) controller-manager stores the pod's configuration information in the etcd data center through the api-server;

4) when kube-scheduler detects the pod information, it starts scheduling pre-selection: it first filters out the nodes that do not meet the pod's resource requirements, then performs scheduling optimization, mainly selecting the node best suited to run the pod, and finally sends the pod's resource configuration to the kubelet component on that node;

5) kubelet runs the pod according to the resource configuration sent by the scheduler. After the pod runs successfully, its running information is returned to the scheduler, and the returned pod health information is stored in the etcd data center.

19. What happens when you delete a Pod?

A: kube-apiserver receives the delete instruction from the user. By default the pod has 30 seconds to exit gracefully; after 30 seconds it is marked as dead, and its status becomes Terminating. When kubelet sees that the pod is marked as Terminating, it starts to shut the pod down.

The shutdown process is as follows:

1. Pod is removed from service's endpoint list

2. If the pod defines a preStop hook, it is called inside the pod. The stop hook generally defines how to end the process gracefully.

3. The process is sent the TERM signal (kill -15).

4. When the graceful exit time is exceeded, all processes in the Pod are sent the SIGKILL signal (kill -9).
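A minimal sketch showing where the graceful-exit time and the stop hook are configured (the pod name, image, and hook command are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: graceful-pod                    # hypothetical name
spec:
  terminationGracePeriodSeconds: 30     # the 30-second graceful-exit window described above
  containers:
  - name: web
    image: nginx:1.25
    lifecycle:
      preStop:                          # stop hook, called before the TERM signal is sent
        exec:
          command: ["sh", "-c", "nginx -s quit; sleep 5"]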

20. What is the Service of K8s?

A: every time a pod is restarted or redeployed, its IP address changes, which makes communication between pods, and between pods and the outside world, difficult. Therefore a Service is needed to provide a fixed entry point for pods.

A Service's Endpoint list is usually bound to a group of pods with the same configuration, and it distributes external requests across these pods by load balancing.
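A minimal sketch of a Service that binds its endpoint list to a group of pods by label (the name, label, and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: web-svc            # hypothetical name; pods can reach it by this fixed name
spec:
  selector:
    app: web               # binds the endpoint list to all pods labeled app=web
  ports:
  - port: 80               # the Service's fixed port
    targetPort: 8080       # the port the pods actually listen on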

21. How does K8s perform service registration?

A: when a Pod starts, it loads the Service information of the current environment, so that different Pods can communicate with each other by Service name.

22. How can traffic from outside the K8s cluster access a Pod?

A: it can be accessed through a Service of type NodePort. The same port, for example 30000, is listened on by every node, and traffic arriving at a node is redirected to the corresponding Service.
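A minimal NodePort sketch (the name, label, and ports are placeholders); external clients would hit any node's IP on port 30000:

apiVersion: v1
kind: Service
metadata:
  name: web-nodeport       # hypothetical name
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80               # the Service port inside the cluster
    targetPort: 8080       # the pods' port
    nodePort: 30000        # the port opened on every node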

23. What are the ways to persist K8s data?

Answer: 1) emptyDir (empty directory): no directory on the host is specified; a directory inside the Pod is mapped directly onto the host. Similar to a manager volume in docker.

Main usage scenarios:

1) you only need to keep data on disk temporarily, for example during a merge/sort operation;

2) as shared storage between two containers: a content-management container stores the generated data in it, while a webserver container in the same pod serves these pages to the outside.

Features of emptyDir:

Different containers in the same pod share the same persistence directory. When the pod is deleted from the node, the volume's data is deleted along with it; but if only the container is destroyed and the pod remains, the data in the volume is not affected.

To sum up: the life cycle of emptyDir's data persistence is the same as that of the pod using it; it is generally used as temporary storage.
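A minimal emptyDir sketch following the shared-storage scenario above (names, images, and paths are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo          # hypothetical name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}               # lives and dies with the pod
  containers:
  - name: producer             # writes generated pages into the shared directory
    image: busybox
    command: ["sh", "-c", "echo hello > /data/index.html; sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: webserver            # serves the same directory to the outside
    image: nginx:1.25
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html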

2) hostPath: mounts a directory or file that already exists on the host into the container. Similar to the bind mount method in docker.

This method of data persistence is rarely used, because it increases the coupling between the pod and the node.

Generally, this method is used for data persistence of the K8s cluster itself and of docker itself. For reference, see the apiServer yaml files under the /etc/kubernetes/main… directory.
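A minimal hostPath sketch (the pod name, image, and paths are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo          # hypothetical name
spec:
  volumes:
  - name: host-logs
    hostPath:
      path: /var/log/demo      # a directory that already exists on the host
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: host-logs
      mountPath: /logs         # where the host directory appears in the container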

3) PersistentVolume (PV for short):

A PV can be backed by an NFS service or by GFS. Its function is to unify the data persistence directories and make management easier.

In the yaml file of a PV, you can configure the size of the PV.

Specify the access mode for PV:

ReadWriteOnce: can be mounted read-write by a single node only;
ReadOnlyMany: can be mounted read-only by multiple nodes;
ReadWriteMany: can be mounted read-write by multiple nodes.

You can also specify the reclaim policy of the PV:
Recycle: clear the data in the PV, then automatically reclaim it;
Retain: manual reclamation;
Delete: delete the cloud storage resource (dedicated to cloud storage).

# PS: the reclaim policy here refers to whether the source files stored under the PV are deleted after the PV is deleted.

If you want to use a PV, there is another important concept: PVC. A PVC is a request for the capacity of a PV, and a K8s cluster may contain multiple PVs and PVCs. For a PVC to be bound to a PV, the access modes they define must be the same, and the storageClassName they define must also be consistent. If the cluster contains two PVs with the same storageClassName and access mode, the PVC will apply to the PV whose capacity is closest to its request, or choose randomly.
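A minimal sketch of a matching PV and PVC pair based on NFS, as described above (the names, NFS server address, path, and sizes are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs                            # hypothetical name
spec:
  capacity:
    storage: 5Gi                          # the size of the PV
  accessModes:
  - ReadWriteMany                         # must match the PVC below
  persistentVolumeReclaimPolicy: Retain   # manual reclamation
  storageClassName: nfs                   # must match the PVC below
  nfs:
    server: 192.168.1.100                 # hypothetical NFS server
    path: /nfs/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs                           # hypothetical name
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 3Gi                        # the PVC binds to the PV closest in capacity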
