What should you pay attention to before upgrading to Kubernetes 1.18?


This article explains what problems you should pay attention to before upgrading to Kubernetes 1.18. The methods introduced here are simple, fast, and practical, so let's walk through what to watch out for before upgrading.

Using Service Account Tokens as a general authentication method

Kubernetes uses service accounts to authenticate services within the cluster. For example, if you want a Pod to manage other Kubernetes resources, such as a Deployment or a Service, you can associate it with a service account and create the necessary roles and role bindings. A Kubernetes Service Account (KSA) sends a JSON Web Token (JWT) to the API server, which verifies it. This makes the API server the single source of authentication for service accounts.
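As a quick illustration, here is a minimal sketch of that setup; all names are hypothetical, and the Pod would reference the service account via serviceAccountName:

```yaml
# Hypothetical example: a service account allowed to manage Deployments
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deploy-manager
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-editor
  namespace: default
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deploy-manager-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: deploy-manager
  namespace: default
roleRef:
  kind: Role
  name: deployment-editor
  apiGroup: rbac.authorization.k8s.io
```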

But what if a workload needs to authenticate against services outside the cluster? To accept its KSA token, the external authenticator would have to contact the API server to validate the request. But the API server should not be publicly accessible, which in practice forces you to use a separate authentication system and adds complexity. And even when the third-party service sits somewhere that can reach the API server (for example, inside the cluster), validating every token this way adds load on it.

A feature (#1393) was therefore added in Kubernetes 1.18 that enables the API server to serve an OpenID Connect discovery document containing the tokens' public keys along with other metadata. OIDC authenticators can use this data to validate tokens without having to call back to the API server first.
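A minimal sketch of what this looks like from a client's perspective; the endpoint paths below follow the 1.18 alpha documentation for this feature, but treat the exact setup as an assumption:

```sh
# Assumes the ServiceAccountIssuerDiscovery feature gate is enabled (alpha in 1.18)
# and that the caller is authorized to read these endpoints.

# Fetch the OIDC discovery document served by the API server:
kubectl get --raw /.well-known/openid-configuration

# Fetch the JSON Web Key Set (the public keys used to verify service account JWTs):
kubectl get --raw /openid/v1/jwks
```

An external OIDC library can cache these keys and verify KSA-issued JWTs locally instead of calling the API server for every request.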

Configure the HPA scaling rate for specific Pods

The Horizontal Pod Autoscaler (HPA) lets your Kubernetes cluster respond automatically to high or low traffic. With HPA, you can instruct the controller to create more Pods in response to CPU spikes, other metrics, or metrics provided by the application. To optimize costs, HPA also terminates excess Pods when they are no longer needed (for example, once a load spike subsides). HPA adds and removes Pods at a configured rate to avoid churning Pods during unstable periods. Until now, however, this rate could only be configured at the cluster level. In a typical microservice application, some services matter more than others. Suppose you host a web application on Kubernetes that performs the following tasks:

Respond to end-customer requests (the frontend).

Process data submitted by clients (including CPU-intensive operations such as map-reduce).

Handle less important data (e.g., archiving, cleanup, etc.).

From the above, it is clear that the Pods for task 1 need to scale up quickly so the application can handle increased client traffic promptly and efficiently. They should also scale down very slowly, in case another traffic spike follows.

The Pods for task 2 also need to scale up very quickly in response to increased data volume; in mission-critical applications, data processing must not be delayed. However, they should scale down very quickly as well, because once they are no longer needed they consume substantial resources that other services could use.

Because of their importance, we can tolerate some false positives from the Pods belonging to tasks 1 and 2. After all, wasting some resources is better than losing customers.

The Pods serving task 3 need no special arrangement; they can scale up and down in the usual way.

Kubernetes 1.18 introduces a feature (#853) that lets you configure this per workload through the HPA behavior field. Scale-up and scale-down behavior are specified in the scaleUp and scaleDown sections under behavior, respectively.
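As an illustration, a minimal sketch of an HPA for a hypothetical frontend Deployment that scales up aggressively but scales down by at most one Pod per minute after a ten-minute stabilization window:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend            # hypothetical Deployment
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  behavior:
    scaleUp:
      policies:
      - type: Percent         # allow doubling the Pod count every 15 seconds
        value: 100
        periodSeconds: 15
    scaleDown:
      stabilizationWindowSeconds: 600   # wait 10 minutes before scaling down
      policies:
      - type: Pods            # then remove at most 1 Pod per minute
        value: 1
        periodSeconds: 60
```

The Pods for task 2 could use a similar behavior block with fast policies in both directions.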

Define Even Pod Spreading rules at the cluster level

Even Pod Spreading was first introduced in Kubernetes 1.16 to ensure that Pods are scheduled across availability zones (if you run a multi-zone cluster) for the highest availability and resource utilization. It works by specifying a topologySpreadConstraints entry and identifying zones by looking for nodes with the same topologyKey label: nodes that share the same topologyKey label value belong to the same zone, and the constraint distributes Pods evenly across the zones. The drawback, however, is that this setting must be applied at the Pod level; Pods without the configuration are not spread across failure domains.

This Kubernetes 1.18 feature lets you define default spreading constraints for Pods that do not provide their own topologySpreadConstraints. Pods that define the setting themselves override the global defaults.
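For reference, a minimal sketch of the Pod-level form of the constraint; with the 1.18 feature, defaults of this same shape can instead be supplied in the scheduler configuration for Pods that omit it (names below are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-0
  labels:
    app: web
spec:
  topologySpreadConstraints:
  - maxSkew: 1                                # max allowed difference in Pod count between zones
    topologyKey: topology.kubernetes.io/zone  # nodes sharing this label value form one zone
    whenUnsatisfiable: DoNotSchedule          # use ScheduleAnyway for a soft constraint
    labelSelector:
      matchLabels:
        app: web
  containers:
  - name: web
    image: nginx:1.17
```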

Support for Containerd 1.3 on Windows

When we talk about "Kubernetes", we almost immediately think of Linux. Even in tutorials, most books, and the wider literature, Linux is treated as the de facto operating system for running Kubernetes.

However, Microsoft has taken concrete steps to support running Kubernetes on the Windows Server product line, including adding support for containerd runtime version 1.3. Windows Server 2019 includes an updated Host Compute Service (HCS v2), which offers finer control over container management and may improve Kubernetes API compatibility. However, the current version of Docker (EE 18.09) is not yet compatible with HCS v2, so only containerd can take advantage of it. Using the containerd runtime achieves better compatibility between the Windows operating system and Kubernetes and also provides more functionality. This feature (#1001) introduces support for containerd version 1.3 on Windows as a Container Runtime Interface (CRI) implementation.

RuntimeClass label support for multiple Windows versions in the same cluster

Since Microsoft Windows is actively gaining support for Kubernetes features, mixed clusters running both Linux and Windows nodes are no longer uncommon. RuntimeClass was introduced as early as Kubernetes 1.12 and received major enhancements in Kubernetes 1.14. It lets you select the container runtime on which a specific Pod should run. Now, in Kubernetes 1.18, RuntimeClass supports Windows nodes, so you can steer Pods that should run only on Windows to nodes running a specific Windows build.
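A minimal sketch, assuming a cluster whose Windows nodes carry the standard os and windows-build labels; the handler name depends on your CRI configuration and is an assumption here:

```yaml
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: windows-2019           # hypothetical name
handler: runhcs-wcow-process   # assumption: handler as configured in your containerd CRI
scheduling:
  nodeSelector:
    kubernetes.io/os: windows
    node.kubernetes.io/windows-build: "10.0.17763"   # Windows Server 2019
---
apiVersion: v1
kind: Pod
metadata:
  name: iis
spec:
  runtimeClassName: windows-2019   # schedules this Pod only onto matching Windows nodes
  containers:
  - name: iis
    image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
```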

Skip Volume ownership change

By default, when you mount a volume into a container in a Kubernetes cluster, the ownership of every file and directory inside that volume is changed to the fsGroup value provided, so that the volume is readable and writable by that fsGroup. However, this behavior is undesirable in some cases. For example:

Some applications, such as databases, are sensitive to changes in file permissions and ownership and may refuse to start after the volume is mounted.

When the volume is large (> 1 TB) or contains a very large number of files and directories, the chown and chmod operations can take a long time, and in some cases cause Pod startup timeouts.

This feature (#695) adds the fsGroupChangePolicy parameter. Setting it to Always preserves the current behavior; setting it to OnRootMismatch changes the volume's permissions only when the top-level directory does not match the expected fsGroup value.
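A minimal sketch of the Pod-level setting, assuming the ConfigurableFSGroupPolicy feature gate (alpha in 1.18) is enabled; the PVC name is hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  securityContext:
    fsGroup: 1000
    fsGroupChangePolicy: "OnRootMismatch"   # skip recursive chown if the volume root already matches
  containers:
  - name: db
    image: postgres:12
    volumeMounts:
    - name: data
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: db-data   # hypothetical claim
```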

Allow Secret and ConfigMap to be immutable

In the early days of Kubernetes, we used ConfigMaps to inject configuration data into our containers; for sensitive data, we used Secrets. The most common way to expose the data to a container is to mount it as a file. However, when a ConfigMap or Secret changes, the change is pushed immediately to every Pod that mounts it. That may not be the best way to apply changes to a running cluster: if the new configuration is broken, we risk taking the running application down.

When you modify a Deployment, the change is applied through a rolling-update strategy: new Pods are created while the old Pods keep serving until they are deleted. This ensures that if the new Pods fail to start, the application continues running on the old ones. ConfigMap and Secret take a similar approach through the immutable field. When an object is immutable, the API rejects any change to it; to modify it, you must delete and recreate it, and at the same time recreate all the Pods that use it. Combined with Deployment rolling updates, this lets you verify that the new Pods work with the new configuration before the old Pods are deleted, avoiding outages caused by bad configuration changes.

In addition, marking ConfigMaps and Secrets immutable means they no longer need to be watched for changes, which relieves load on the API server. You can enable this feature in Kubernetes 1.18 (#1412) through the ImmutableEphemeralVolumes feature gate. Then set immutable to true in the ConfigMap or Secret resource file; any change to the resource's keys will be rejected, protecting the cluster from accidental bad updates.
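A minimal sketch; the name is hypothetical, and rolling out new configuration means creating a new object (e.g. app-config-v2) and pointing the Deployment at it:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-v1   # hypothetical; version the name, since the data cannot change
data:
  mode: production
immutable: true         # requires the ImmutableEphemeralVolumes feature gate (alpha in 1.18)
```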

Use kubectl debug for more troubleshooting capabilities

As a Kubernetes user, when you need to inspect a running Pod, you have so far been limited to kubectl exec and kubectl port-forward. In Kubernetes 1.18 you can also use the kubectl debug command (a usage sketch follows the list). This command allows you to:

Deploy ephemeral containers into a running Pod. Ephemeral containers are short-lived and typically carry the necessary debugging tools. Because they start inside the same Pod, they have access to the same network and file systems as the other containers, which can go a long way toward diagnosing and tracing problems.

Restart a Pod in place with a modified PodSpec. This lets you do things like change the container's source image or permissions.

Start a privileged container in the host namespace. This lets you troubleshoot node-level problems.
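A minimal sketch of the first use case; in the 1.18 release this ships as an alpha subcommand, and the exact flags are an assumption based on the alpha documentation:

```sh
# Attach an ephemeral busybox container to a running Pod for debugging.
# --target shares the process namespace with the named existing container.
kubectl alpha debug -it my-pod --image=busybox --target=my-container
```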

At this point, I believe you have a deeper understanding of what problems you should pay attention to before upgrading to Kubernetes 1.18. You might as well put it into practice.
