Is your Helm safe?

2025-02-28 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

I. Background

Kubernetes is currently the most popular and de facto standard container cluster management platform. It provides a complete set of capabilities for containerized applications, such as deployment and operation, resource scheduling, service discovery, and dynamic scaling. In Kubernetes, users describe an application's rules through API objects such as Pod, Service, and Deployment. The definitions of these resource objects generally need to be written into a series of YAML files and then deployed with the Kubernetes command-line tool, kubectl. Because an application usually involves multiple Kubernetes API objects, and describing these objects may mean maintaining multiple YAML files at the same time, deploying software on Kubernetes usually raises the following problems:

How to manage, edit, and update these scattered Kubernetes application configuration files

How to manage a set of related configuration files as an application

How to distribute and reuse Kubernetes application configuration

Helm (https://helm.sh) emerged to solve the problems above. It is the official package management tool for Kubernetes, which describes and manages Kubernetes applications through packages called Helm Charts. With Helm, you no longer need to write complex application deployment files; you can find, install, upgrade, roll back, and uninstall applications on Kubernetes in a simple way.

In the commonly used Helm V2 architecture, there is a server-side component called Tiller. Tiller is an in-cluster server that interacts with Helm clients and connects to the Kubernetes API server. However, because Tiller runs with full administrative (cluster-admin) privileges, anyone who can reach it can gain unauthorized access to the Kubernetes cluster, which poses a great risk.

Rimas Mocevicius, an expert at JFrog and a co-founder of Helm, provides an innovative and elegant way to address this issue, thereby protecting users' Kubernetes clusters.

II. The Application Architecture of Helm V2

Starting with Helm V2, Helm's architecture has a server-side component called the Tiller Server. It is an in-cluster server that interacts with Helm clients and connects to the Kubernetes API server. The server is responsible for the following tasks:

Listen for incoming requests from Helm clients

Combine Chart and configuration to create a release version

Install Chart into Kubernetes and track subsequent versions

Upgrade and uninstall Chart by interacting with Kubernetes

In short, the client is responsible for managing Charts, while Tiller is responsible for managing release versions. The architecture is shown in the following figure:

By default, execute the following command to install the Tiller deployment to the Kubernetes cluster:

helm init

You also need to create and deploy an RBAC authorization for Tiller to give it permission to perform these tasks. A common RBAC authorization for Tiller is as follows:
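For reference, a common Helm V2 RBAC manifest of this kind creates a tiller ServiceAccount and binds it to the built-in cluster-admin ClusterRole (the names here follow the usual convention; adjust them to your cluster):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin     # full admin rights -- the security concern discussed below
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```

This is typically applied with kubectl apply -f and then referenced when installing Tiller via helm init --service-account tiller.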

This architecture works very well at present, providing flexibility and convenience for users, but it also has security problems. As you can see from the RBAC file above, Tiller is granted the cluster-admin role; that is, Tiller has full administrative privileges in the user's Kubernetes cluster, which raises the risk of unauthorized access to and control of the cluster.

III. The Tillerless Scheme in Helm V2

In fact, it is not difficult to create a Tillerless architecture in Helm V2, which provides higher security for Helm-based applications. Rimas, the JFrog expert, offers an optimized solution, and this section explains in detail how it works.

First, the Helm client and Tiller can both be deployed on a workstation or run in a CI/CD pipeline, without installing Tiller into the Kubernetes cluster. An example installation is as follows:
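As a sketch, assuming a Linux amd64 workstation and an illustrative Helm V2 release (v2.14.3 here), both binaries can be unpacked from the official release archive, which ships the tiller binary alongside the helm client:

```shell
# Download a Helm V2 release archive; it contains both the helm client
# and the tiller server binary. The version below is illustrative.
curl -LO https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz
tar -xzf helm-v2.14.3-linux-amd64.tar.gz

# Install both binaries locally -- Tiller runs on the workstation or in
# the CI runner, not inside the Kubernetes cluster.
sudo mv linux-amd64/helm linux-amd64/tiller /usr/local/bin/
```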

Under this installation, the operating architecture of Helm is shown in the following figure:

Under this architecture, Tiller connects to the Kubernetes cluster using the same configuration (kubeconfig) as the Helm client, and stores and manages Helm release information in the specified namespace. Users can create as many namespaces as needed and specify which one Tiller should use when it starts. Release data is kept in a Secret / ConfigMap in the namespace specified at startup, and user-defined RBAC rules can be scoped to that namespace, eliminating the need to create and specify a ServiceAccount for Tiller. Another benefit of this architecture is that you are no longer limited to running a single Tiller instance per Kubernetes cluster.

Note that under this architecture, the Helm client must be initialized with the following command; otherwise, Tiller will still be installed into the Kubernetes cluster:

helm init --client-only

IV. The Tillerless Plug-in in Helm V2

So, is there a more convenient way to set this up? Rimas provides a simple Helm Tillerless plug-in; see https://github.com/rimusz/helm-tiller.

Installing the plug-in is very convenient, as follows:
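Per the plug-in's repository, installation is a single command:

```shell
helm plugin install https://github.com/rimusz/helm-tiller
```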

The plug-in provides four simple commands: start, start-ci, stop, and run.

4.1 Using the Tillerless Plug-in Locally

For local development or for accessing a remote Kubernetes cluster, use the helm tiller start command:
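In its simplest form, as documented by the plug-in:

```shell
helm tiller start
```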

This command starts Tiller locally; with Tiller's default settings, Helm releases are stored and managed in the kube-system namespace.

You can also specify the namespace used by Tiller with the following command:
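For example, with a hypothetical namespace name my-tiller-ns:

```shell
helm tiller start my-tiller-ns
```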

The command also opens a new bash shell with default environment variables:

HELM_HOST=localhost:44134

In this way, the Helm client knows how to connect to Tiller.

Now you are ready to deploy or update the release version of Helm.

When all the work is done, you only need to run the following command to shut down Tiller:
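Per the plug-in's command set:

```shell
helm tiller stop
```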

Next, users can repeat the above process with a new namespace to deploy and update releases for other teams.

4.2 Using the Tillerless Plug-in in a CI/CD Pipeline

So how do you use the plug-in in the CI/CD pipeline? There are two ways:

The first is very similar to the process above, except that no bash shell with the default variables is started.

The Helm client then knows where to connect to Tiller without any changes in the CI pipeline. Then execute at the end of the pipeline:

This finishes all the work.
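Put together, a CI job following this approach might look like the sketch below (the release and chart names are hypothetical; 44134 is Tiller's default port):

```shell
helm tiller start-ci                  # start a local Tiller without opening a shell
export HELM_HOST=127.0.0.1:44134      # point the Helm client at the local Tiller
helm upgrade --install myapp ./chart  # ...the pipeline's normal Helm steps...
helm tiller stop                      # shut Tiller down at the end of the pipeline
```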

The second is just as easy: run helm tiller run, which starts Tiller before the specified command and stops it afterwards:

For example:
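Sketched from the plug-in's README (the namespace and chart names are illustrative):

```shell
# Run a single Helm command under an ephemeral local Tiller:
helm tiller run helm list

# Target a specific namespace and chain several commands:
helm tiller run my-tiller-ns -- bash -c 'helm list; helm upgrade --install myapp ./chart'
```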

An additional feature: the plug-in also checks that the installed Helm client and Tiller versions match, and if they do not, it downloads the correct Tiller version.

V. Summary

As the official package management tool of Kubernetes, Helm provides users with a convenient and quick way to deploy and manage their applications in a Kubernetes cluster. However, the Tiller component in the Helm V2 architecture, while operationally convenient, also brings security risks.

This article introduces a Tillerless solution for deploying and using Helm in Helm V2, which preserves the flexibility and convenience of Helm V2 while greatly improving the security of application deployment and management.
