Validating Kubernetes YAML for best practices and policies

2025-01-16 Update From: SLTechnology News&Howtos

Shulou (Shulou.com) 06/01 Report --

Many newcomers are unsure how to validate Kubernetes YAML against best practices and policies. This article summarizes the problem and walks through several solutions; hopefully it helps you solve it yourself.

The most common way to define a Kubernetes workload is a file in YAML format. One of the challenges with YAML is that it is quite difficult to express constraints or relationships between manifest files.

What should you do if you want to check that all images deployed to the cluster are pulled from a trusted registry? How can you prevent Deployments that don't have PodDisruptionBudgets from being submitted to the cluster?

Integrating static checks lets you catch errors and policy violations earlier in the development lifecycle. And because the validity and safety of resource definitions are verified up front, you can trust that production workloads follow best practices.

The ecosystem of tools for statically checking Kubernetes YAML files can be divided into the following categories:

API validators: tools that validate a given YAML manifest against the Kubernetes API schema.

Built-in checkers: tools that bundle opinionated checks for security, best practices, and so on.

Custom validators: tools that let you write custom checks in one of several languages, such as Rego and JavaScript.

Verify Deployment

Before you start comparing tools, you should set a baseline. The following manifest does not follow best practices and has a few problems. How many can you find?

apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-echo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: http-echo
  template:
    metadata:
      labels:
        app: http-echo
    spec:
      containers:
      - name: http-echo
        image: hashicorp/http-echo
        args: ["-text", "hello-world"]
        ports:
        - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: http-echo
spec:
  ports:
  - port: 5678
    protocol: TCP
    targetPort: 5678
  selector:
    app: http-echo

We will use this YAML file to compare different tools.

You can find the above YAML listing (file base-valid.yaml) and the other manifests mentioned in this article in this git repository:

https://github.com/amitsaha/kubernetes-static-checkers-demo

The manifest describes a web application that always replies with a "hello-world" message on port 5678.

You can deploy the application as follows:

kubectl apply -f hello-world.yaml

You can test it with the following command:

kubectl port-forward svc/http-echo 8080:5678

You can visit http://localhost:8080 and confirm that the application works as expected. But does it follow best practices?

Let's find out.

Kubeval

Home page: https://www.kubeval.com/

The premise of kubeval is that any interaction with Kubernetes goes through its REST API. Therefore, you can use the API schema to verify whether a given YAML input conforms to it. Let's look at an example.

You can install kubeval by following the instructions on the project website; the latest version at the time of writing is 0.15.0. After installation, let's run it against the manifest discussed earlier:

kubeval base-valid.yaml
PASS - base-valid.yaml contains a valid Deployment (http-echo)
PASS - base-valid.yaml contains a valid Service (http-echo)

On success, kubeval exits with code 0. You can verify the exit code like this:

echo $?
0

Now, let's test kubeval with another manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-echo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: http-echo
    spec:
      containers:
      - name: http-echo
        image: hashicorp/http-echo
        args: ["-text", "hello-world"]
        ports:
        - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: http-echo
spec:
  ports:
  - port: 5678
    protocol: TCP
    targetPort: 5678
  selector:
    app: http-echo

Can you find the problem?

Let's run kubeval:

kubeval kubeval-invalid.yaml
WARN - kubeval-invalid.yaml contains an invalid Deployment (http-echo) - selector: selector is required
PASS - kubeval-invalid.yaml contains a valid Service (http-echo)

# let's check the return value
echo $?
1

The resource failed validation. A Deployment that uses the apps/v1 API version must include a selector that matches the Pod labels. The manifest above does not include a selector, so running kubeval against it reports an error and a non-zero exit code.
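To make the constraint concrete, here is a rough Python sketch of the relationship kubeval enforces here. It is an illustration only, not the API server's actual validation logic: an apps/v1 Deployment must define spec.selector.matchLabels, and those labels must match the Pod template's labels.

```python
# Illustrative check (not kubeval's implementation): an apps/v1 Deployment
# needs spec.selector.matchLabels, and each of those labels must appear
# on the Pod template.
def selector_matches_template(deployment):
    spec = deployment.get("spec", {})
    match_labels = spec.get("selector", {}).get("matchLabels")
    if not match_labels:
        return False  # selector is required in apps/v1
    template_labels = spec.get("template", {}).get("metadata", {}).get("labels", {})
    return all(template_labels.get(k) == v for k, v in match_labels.items())

invalid = {"spec": {"replicas": 2,
                    "template": {"metadata": {"labels": {"app": "http-echo"}}}}}
valid = {"spec": {"selector": {"matchLabels": {"app": "http-echo"}},
                  "template": {"metadata": {"labels": {"app": "http-echo"}}}}}
print(selector_matches_template(invalid), selector_matches_template(valid))  # False True
```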

You may wonder what happens when you run kubectl apply -f with the manifest above.

Let's have a try:

kubectl apply -f kubeval-invalid.yaml
error: error validating "kubeval-invalid.yaml": error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec; if you choose to ignore these errors, turn validation off with --validate=false

This is exactly the error kubeval warned you about. You can fix the resource by adding a selector, like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-echo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: http-echo
  template:
    metadata:
      labels:
        app: http-echo
    spec:
      containers:
      - name: http-echo
        image: hashicorp/http-echo
        args: ["-text", "hello-world"]
        ports:
        - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: http-echo
spec:
  ports:
  - port: 5678
    protocol: TCP
    targetPort: 5678
  selector:
    app: http-echo

The advantage of a tool like kubeval is that you can catch such errors early in the deployment cycle. In addition, you don't need access to a cluster to run the checks; they can run offline. By default, kubeval validates resources against the latest unreleased Kubernetes API schema. In most cases, however, you will want to validate against a specific Kubernetes version. You can test a specific API version with the --kubernetes-version flag:

kubeval --kubernetes-version 1.16.1 base-valid.yaml

Note that the version should be in the form Major.Minor.Patch. To see which versions are available for validation, check the JSON schemas on GitHub that kubeval uses to perform the validation.

If you need to run kubeval offline, you can download the schemas and point the --schema-location flag at a local directory. Besides single YAML files, you can also run kubeval against directories and standard input. kubeval is also easy to integrate with a continuous integration pipeline. If you want to include checks before submitting manifests to the cluster, it helps that kubeval supports three output formats:

Plain text

JSON

Test Anything Protocol (TAP)

You can use any of these formats to further parse the output and create a custom summary of the results. One limitation of kubeval is that it currently cannot validate Custom Resource Definitions (CRDs); however, you can configure kubeval to ignore them.
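As a sketch of such post-processing, the snippet below summarizes a machine-readable kubeval report in a CI step. The exact JSON shape (filename/kind/status/errors) is an assumption for illustration; check the actual output of your kubeval version before relying on these field names.

```python
import json

# Summarize kubeval JSON results. The result shape below is an assumed
# example, not guaranteed to match every kubeval version.
report = json.loads("""
[
  {"filename": "base-valid.yaml", "kind": "Deployment", "status": "valid", "errors": []},
  {"filename": "kubeval-invalid.yaml", "kind": "Deployment", "status": "invalid",
   "errors": ["selector: selector is required"]}
]
""")

failures = [r for r in report if r["status"] != "valid"]
summary = f"{len(report) - len(failures)} valid, {len(failures)} invalid"
print(summary)
for r in failures:
    # One line per failing manifest, suitable for a CI log
    print(f"{r['filename']} ({r['kind']}): {'; '.join(r['errors'])}")
```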

Although kubeval is an excellent choice for checking and validating resources, note that resources that pass its checks are not guaranteed to follow best practices. For example, using the latest tag in a container image is not considered a best practice, yet kubeval does not report it as an error; it validates the YAML without warnings.

What if you want to score your YAML and catch violations such as use of the latest tag? How can you check your YAML files against best practices?

Kube-score

Home page: https://github.com/zegl/kube-score

Kube-score analyzes YAML manifests and scores them against built-in checks. These checks are based on security recommendations and best practices, such as:

Running the container as a non-root user.

Specifying health checks for pods.

Defining resource requests and limits.

The result of a check can be OK, WARNING, or CRITICAL.

You can try kube-score online or install it locally. At the time of writing, the latest version is 1.7.0. Let's run it against the earlier manifest, base-valid.yaml:

apps/v1/Deployment http-echo
    [CRITICAL] Container Image Tag
        http-echo -> Image with latest tag
            Using a fixed tag is recommended to avoid accidental upgrades
    [CRITICAL] Pod NetworkPolicy
        The pod does not have a matching network policy
            Create a NetworkPolicy that targets this pod
    [CRITICAL] Pod Probes
        Container is missing a readinessProbe
            A readinessProbe should be used to indicate when the service is ready to receive traffic.
            Without it, the Pod is risking to receive traffic before it has booted. It is also used
            during rollouts, and can prevent downtime if a new version of the application is failing.
            More information: https://github.com/zegl/kube-score/blob/master/README_PROBES.md
    [CRITICAL] Container Security Context
        http-echo -> Container has no configured security context
            Set securityContext to run the container in a more secure context.
    [CRITICAL] Container Resources
        http-echo -> CPU limit is not set
            Resource limits are recommended to avoid resource DDOS. Set resources.limits.cpu
        http-echo -> Memory limit is not set
            Resource limits are recommended to avoid resource DDOS. Set resources.limits.memory
        http-echo -> CPU request is not set
            Resource requests are recommended to make sure that the application can start and run
            without crashing. Set resources.requests.cpu
        http-echo -> Memory request is not set
            Resource requests are recommended to make sure that the application can start and run
            without crashing. Set resources.requests.memory
    [CRITICAL] Deployment has PodDisruptionBudget
        No matching PodDisruptionBudget was found
            It is recommended to define a PodDisruptionBudget to avoid unexpected downtime during
            Kubernetes maintenance operations, such as when draining a node.
    [WARNING] Deployment has host PodAntiAffinity
        Deployment does not have a host podAntiAffinity set
            It is recommended to set a podAntiAffinity that stops multiple pods from a deployment
            from being scheduled on the same node. This increases availability in case the node
            becomes unavailable.

The YAML file passed the kubeval checks, but kube-score points out several shortcomings:

Missing readiness probe.

Missing memory and CPU requests and limits.

Missing PodDisruptionBudget.

Missing anti-affinity rules to maximize availability.

The container runs as root.

These are all valid points that you should address to make your deployment more robust and reliable. The kube-score command outputs a human-readable result containing all WARNING and CRITICAL violations, which is great during development. If you plan to use it as part of a continuous integration pipeline, you can get more concise output with the --output-format ci flag, which also prints OK-level checks:

kube-score score base-valid.yaml --output-format ci
[OK] http-echo apps/v1/Deployment
[OK] http-echo apps/v1/Deployment
[CRITICAL] http-echo apps/v1/Deployment: (http-echo) CPU limit is not set
[CRITICAL] http-echo apps/v1/Deployment: (http-echo) Memory limit is not set
[CRITICAL] http-echo apps/v1/Deployment: (http-echo) CPU request is not set
[CRITICAL] http-echo apps/v1/Deployment: (http-echo) Memory request is not set
[CRITICAL] http-echo apps/v1/Deployment: (http-echo) Image with latest tag
[OK] http-echo apps/v1/Deployment
[CRITICAL] http-echo apps/v1/Deployment: The pod does not have a matching network policy
[CRITICAL] http-echo apps/v1/Deployment: Container is missing a readinessProbe
[CRITICAL] http-echo apps/v1/Deployment: (http-echo) Container has no configured security context
[CRITICAL] http-echo apps/v1/Deployment: No matching PodDisruptionBudget was found
[WARNING] http-echo apps/v1/Deployment: Deployment does not have a host podAntiAffinity set
[OK] http-echo v1/Service
[OK] http-echo v1/Service

Similar to kubeval, kube-score returns a non-zero exit code when a CRITICAL check fails, and you can configure it to fail on WARNINGs as well. Like kubeval, it also has built-in checks for validating resources against different API versions. However, this information is hard-coded in kube-score itself, and you cannot choose a different Kubernetes version. So if you upgrade your cluster, or you have several clusters running different versions, this may limit your use of the tool.
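A pipeline step could also parse the --output-format ci lines itself and decide whether to fail the build. A minimal Python sketch (the sample lines are taken from the run shown above; the parsing is an illustration, not part of kube-score):

```python
# Minimal CI-gate sketch: fail on CRITICAL findings, optionally on WARNINGs
# too, by scanning kube-score's --output-format ci lines.
SAMPLE = """\
[OK] http-echo apps/v1/Deployment
[CRITICAL] http-echo apps/v1/Deployment: (http-echo) CPU limit is not set
[WARNING] http-echo apps/v1/Deployment: Deployment does not have a host podAntiAffinity set
"""

def should_fail(output, fail_on_warning=False):
    # Extract the level between '[' and ']' at the start of each line
    levels = {line[1:line.index("]")] for line in output.splitlines() if line.startswith("[")}
    blocking = {"CRITICAL", "WARNING"} if fail_on_warning else {"CRITICAL"}
    return bool(levels & blocking)

print(should_fail(SAMPLE))                       # True: a CRITICAL is present
print(should_fail("[OK] http-echo v1/Service"))  # False
```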

Note that there is an open issue to add this capability. You can learn more about kube-score on the official website: https://github.com/zegl/kube-score

Kube-score is an excellent tool for enforcing best practices, but what if you want to customize checks or add your own rules? Not with kube-score: it is not designed to be extensible, and you cannot add or adjust policies. If you want to write custom checks that comply with your organization's policies, you can use one of the next four options: config-lint, copper, conftest, or polaris.

Config-lint

Config-lint is a tool for validating configuration files written in YAML and JSON, as well as Terraform, CSV, and Kubernetes manifests. You can install it using the instructions on the project website:

https://stelligent.github.io/config-lint/#/install

At the time of writing, the latest version is 1.5.0.

Config-lint has no built-in checks for Kubernetes manifests. You must write your own rules to perform any validation. The rules are written as YAML files, called rule sets, with the following structure:

version: 1
description: Rules for Kubernetes spec files
type: Kubernetes
files:
  - "*.yaml"
rules:
  # list of rules

Let's take a closer look. The type field indicates what type of configuration config-lint will check; here it is Kubernetes manifests.

The files field accepts a directory as input in addition to individual files.

The rules field is where you define custom checks. Say you want to check whether the images in a Deployment are always pulled from a trusted registry such as my-company.com/myapp:1.0. A config-lint rule that implements this check could look like this:

- id: MY_DEPLOYMENT_IMAGE_TAG
  severity: FAILURE
  message: Deployment must use a valid image tag
  resource: Deployment
  assertions:
    - every:
        key: spec.template.spec.containers
        expressions:
          - key: image
            op: starts-with
            value: "my-company.com/"

Each rule must have the following properties:

id: a unique identifier for the rule.

severity: one of FAILURE, WARNING, and NON_COMPLIANT.

message: the string displayed when the rule is violated.

resource: the type of resource this rule applies to.

assertions: a list of conditions evaluated against the specified resource.

In the rule above, the every assertion checks that every container in the Deployment (key: spec.template.spec.containers) uses a trusted image (that is, an image that starts with "my-company.com/").
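To see what the assertion does, here is a rough Python equivalent of the rule's logic. This is an illustration only, not how config-lint is implemented:

```python
# Rough Python equivalent of the config-lint rule (illustration only): the
# check fails if any container image does not start with the trusted prefix.
TRUSTED_PREFIX = "my-company.com/"

def image_repo_violations(manifest):
    if manifest.get("kind") != "Deployment":
        return []  # 'resource: Deployment' limits the rule's scope
    containers = (manifest.get("spec", {})
                          .get("template", {})
                          .get("spec", {})
                          .get("containers", []))
    return [f"image does not start with {TRUSTED_PREFIX}: {c['image']}"
            for c in containers
            if not c.get("image", "").startswith(TRUSTED_PREFIX)]

deployment = {"kind": "Deployment",
              "spec": {"template": {"spec": {"containers": [
                  {"name": "http-echo", "image": "hashicorp/http-echo"}]}}}}
print(image_repo_violations(deployment))
```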

The complete rule set looks like this:

version: 1
description: Rules for Kubernetes spec files
type: Kubernetes
files:
  - "*.yaml"
rules:
  - id: DEPLOYMENT_IMAGE_REPOSITORY
    severity: FAILURE
    message: Deployment must use a valid image repository
    resource: Deployment
    assertions:
      - every:
          key: spec.template.spec.containers
          expressions:
            - key: image
              op: starts-with
              value: "my-company.com/"

To test the check, save the rule set as check_image_repo.yaml.

Now, let's verify the base-valid.yaml file.

config-lint -rules check_image_repo.yaml base-valid.yaml

[
  {
    "AssertionMessage": "Every expression fails: And expression fails: image does not start with my-company.com/",
    "Category": "",
    "CreatedAt": "2020-06-04T01:29:25Z",
    "Filename": "test-data/base-valid.yaml",
    "LineNumber": 0,
    "ResourceID": "http-echo",
    "ResourceType": "Deployment",
    "RuleID": "DEPLOYMENT_IMAGE_REPOSITORY",
    "RuleMessage": "Deployment must use a valid image repository",
    "Status": "FAILURE"
  }
]

The check failed. Now consider the following manifest, which uses a valid image repository:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-echo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: http-echo
  template:
    metadata:
      labels:
        app: http-echo
    spec:
      containers:
      - name: http-echo
        image: my-company.com/http-echo:1.0
        args: ["-text", "hello-world"]
        ports:
        - containerPort: 5678

Run the same check using the above manifest and no violations will be reported:

config-lint -rules check_image_repo.yaml image-valid-mycompany.yaml
[]

Config-lint is a promising framework that lets you write custom checks for Kubernetes YAML manifests using a YAML DSL. But what if you want to express more complex logic and checks? Is YAML too restrictive for that? What if you could express these checks in a real programming language?

Copper

Home page: https://github.com/cloud66-oss/copper

Copper V2 is a framework that validates manifests using custom checks, just like config-lint. However, Copper does not use YAML to define the checks. Instead, the tests are written in JavaScript, and Copper provides a library with a few basic helpers for reading Kubernetes objects and reporting errors.

You can install Copper according to the official documentation. At the time of writing, the latest version is 2.0.1:

https://github.com/cloud66-oss/copper#installation

Like config-lint, Copper has no built-in checks. Let's write a check to ensure that the deployment can only pull container images from trusted repositories such as my-company.com. Create a new file check_image_repo.js with the following contents:

$$.forEach(function($) {
    if ($.kind === 'Deployment') {
        $.spec.template.spec.containers.forEach(function(container) {
            var image = new DockerImage(container.image);
            if (image.registry.lastIndexOf('my-company.com/') != 0) {
                errors.add_error('no_company_repo', "Image " + $.metadata.name + " is not from my-company.com repo", 1);
            }
        });
    }
});

Now, to run this check against our base-valid.yaml manifest, use the copper validate command:

copper validate --in=base-valid.yaml --validator=check_image_repo.js

Check no_company_repo failed with severity 1 due to Image http-echo is not from my-company.com repo
Validation failed

As you might expect, you can write more complex checks, such as validating domain names in Ingress manifests or rejecting any Pod that runs as privileged. Copper has some built-in helpers:

The DockerImage function reads the specified input and creates an object containing the following properties:

name: the image name

tag: the image tag

registry: the image registry

registry_url: the protocol plus the image registry

fqin: the entire fully qualified image name

The findByName function helps find a resource with a given kind and name in the input file.

The findByLabels function helps find resources with a given kind and labels.

You can see all the available helpers here:

https://github.com/cloud66-oss/copper/tree/master/libjs

By default, Copper loads the entire input YAML file into the $$ variable and makes it available in your scripts (if you have used jQuery in the past, this pattern may look familiar).

Besides not having to learn a custom language, you get the whole JavaScript language for writing your checks: string interpolation, functions, and so on. It is worth noting that the current version of Copper embeds an ES5 JavaScript engine, not ES6. To learn more, visit the project website:

https://github.com/cloud66-oss/copper

If JavaScript is not your preferred language, or you prefer a language designed for querying and describing policies, you should look at conftest.

Conftest

Conftest is a testing framework for configuration data that can be used to check and verify Kubernetes manifests. Tests are written in Rego, a purpose-built policy query language.

You can install conftest by following the instructions on the project website. At the time of this writing, the latest version is 0.18.2:

https://www.conftest.dev/install/

Like config-lint and copper, conftest has no built-in checks. So let's try it out by writing a policy. As in the previous examples, you will check whether container images come from a trusted source.

Create a new directory, conftest-checks, and a file called check_image_registry.rego, as follows:

package main

deny[msg] {
  input.kind == "Deployment"
  image := input.spec.template.spec.containers[_].image
  not startswith(image, "my-company.com/")
  msg := sprintf("image '%v' doesn't come from my-company.com repository", [image])
}

Now let's run conftest to verify manifest base-valid.yaml:

conftest test --policy ./conftest-checks base-valid.yaml

FAIL - base-valid.yaml - image 'hashicorp/http-echo' doesn't come from my-company.com repository

1 tests, 0 passed, 0 warnings, 1 failure

Of course it fails, because the image is not trusted. The Rego file above defines a deny block that evaluates to a violation when it is true. When you have multiple deny blocks, conftest checks them independently, and a violation of any one of them leads to an overall violation.
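The aggregation semantics can be sketched in Python as an analogy (this is not Rego's actual evaluator): each rule runs on its own, and the overall result fails if any rule emits a message.

```python
# Analogy for conftest's deny semantics (not Rego's actual evaluator):
# every deny rule is evaluated independently, and any emitted message
# counts as a violation.
def deny_untrusted_image(doc):
    if doc.get("kind") != "Deployment":
        return []
    return [f"image '{c['image']}' doesn't come from my-company.com repository"
            for c in doc["spec"]["template"]["spec"]["containers"]
            if not c["image"].startswith("my-company.com/")]

def deny_missing_replicas(doc):
    # A second, independent rule (hypothetical example policy)
    if doc.get("kind") == "Deployment" and "replicas" not in doc.get("spec", {}):
        return ["Deployment must set spec.replicas"]
    return []

def evaluate(doc, rules):
    return [msg for rule in rules for msg in rule(doc)]  # any message => failure

doc = {"kind": "Deployment",
       "spec": {"replicas": 2,
                "template": {"spec": {"containers": [
                    {"name": "http-echo", "image": "hashicorp/http-echo"}]}}}}
violations = evaluate(doc, [deny_untrusted_image, deny_missing_replicas])
print(violations)
```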

In addition to its default output format, conftest supports JSON, TAP, and table formats via the --output flag, which is helpful if you want to integrate its reports with an existing continuous integration pipeline. To help debug policies, conftest also has a handy --trace flag that prints a trace of how conftest parses the specified policy files.

Conftest policies can be published and shared as artifacts in OCI (Open Container Initiative) registries. The push and pull commands allow you to publish an artifact and pull an existing artifact from a remote registry.

Let's walk through publishing the above policy to a local Docker registry using conftest push. Start a local Docker registry with the following command:

docker run -it --rm -p 5000:5000 registry

From another terminal, navigate to the conftest-checks directory created above and run the following command:

conftest push 127.0.0.1:5000/amitsaha/opa-bundle-example:latest

The command should complete successfully with the following information:

2020-06-10 14:25:43 pushed bundle with digest: sha256:e9765f201364c1a8a182ca637bc88201db3417bacc091e7ef8211f6c2fd2609c

Now, create a temporary directory and run the conftest pull command to download the bundle into it:

cd $(mktemp -d)
conftest pull 127.0.0.1:5000/amitsaha/opa-bundle-example:latest

You will see a new subdirectory, policy, in the temporary directory, containing the policy file pushed earlier:

tree .
.
└── policy
    └── check_image_registry.rego

You can even run tests directly from the registry:

conftest test --update 127.0.0.1:5000/amitsaha/opa-bundle-example:latest base-valid.yaml
..
FAIL - base-valid.yaml - image 'hashicorp/http-echo' doesn't come from my-company.com repository

2 tests, 1 passed, 0 warnings, 1 failure

Unfortunately, DockerHub is not yet one of the supported registries. However, if you are using Azure Container Registry (ACR) or running your own container registry, this should work.

The artifact format is the same one used by Open Policy Agent (OPA) bundles, which makes it possible for conftest to run tests from existing OPA bundles.

You can learn more about sharing strategies and other features of conftest on the official website:

https://www.conftest.dev/

Polaris

Home page: https://github.com/FairwindsOps/polaris

The last tool explored in this article is polaris. Polaris can be installed inside a cluster or used as a command-line tool to statically analyze Kubernetes manifests. When run as a command-line tool, it includes several built-in checks covering areas such as security and best practices, similar to kube-score. In addition, you can use it to write custom checks, like config-lint, copper, and conftest. In other words, polaris combines the best of both categories: built-in and custom checkers.

You can install polaris command line tools by following the instructions on the project website. At the time of writing, the latest version is 1.0.3:

https://github.com/FairwindsOps/polaris/blob/master/docs/usage.md#cli

After installation, you can run polaris against the base-valid.yaml manifest with the following command:

polaris audit --audit-path base-valid.yaml

The command prints JSON detailing the checks that were run and the result of each. The output has the following structure:

{
  "PolarisOutputVersion": "1.0",
  "AuditTime": "0001-01-01T00:00:00Z",
  "SourceType": "Path",
  "SourceName": "test-data/base-valid.yaml",
  "DisplayName": "test-data/base-valid.yaml",
  "ClusterInfo": {
    "Version": "unknown",
    "Nodes": 0,
    "Pods": 2,
    "Namespaces": 0,
    "Controllers": 2
  },
  "Results": [
    /* long list */
  ]
}

You can get the full output in the link below:

https://github.com/amitsaha/kubernetes-static-checkers-demo/blob/master/base-valid-polaris-result.json

Similar to kube-score, polaris finds several places where the manifest falls short of recommended best practices, including:

Missing health checks for the pods.

Container images without a tag specified.

The container runs as root.

No CPU and memory requests and limits set.

Each check is classified with a severity of warning or danger.

To learn more about the current built-in checks, refer to the documentation:

https://github.com/FairwindsOps/polaris/blob/master/docs/usage.md#checks

If you are not interested in the detailed results, passing the flag --format score prints a number in the range 1-100, which polaris calls the score:

polaris audit --audit-path test-data/base-valid.yaml --format score
68

The closer the score is to 100, the higher the compliance. If you inspect the exit code of the polaris audit command, you will find that it is 0. To make polaris audit exit with a non-zero code, you can use two other flags:

The --set-exit-code-below-score flag accepts a threshold score in the range 1-100 and exits with code 4 when the score is below the threshold. This is useful when your baseline score is 75 and you want to be alerted when the score drops below it.

The --set-exit-code-on-danger flag exits with code 3 when any danger check fails.

Now let's see how to define a custom polaris check that tests whether container images in a Deployment come from a trusted registry. Custom checks are defined in YAML, and the test itself is described with JSON Schema. The following YAML snippet defines a new check, checkImageRepo:

checkImageRepo:
  successMessage: Image registry is valid
  failureMessage: Image registry is not valid
  category: Images
  target: Container
  schema:
    '$schema': http://json-schema.org/draft-07/schema
    type: object
    properties:
      image:
        type: string
        pattern: ^my-company.com/.+$

Let's take a closer look:

successMessage is the string displayed when the check succeeds.

failureMessage is the string displayed when the check fails.

category refers to one of the categories: Images, Health Checks, Security, Networking, and Resources.

target is a string that determines which spec object the check applies to; it should be one of Container, Pod, or Controller.

The test itself is defined in the schema object using JSON Schema. The check here uses the pattern keyword to test whether the image comes from an allowed registry.
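The pattern keyword takes a regular expression. A quick Python check shows what this one accepts (JSON Schema patterns are unanchored unless you add ^ and $, as this one does):

```python
import re

# What the ^my-company.com/.+$ pattern from the check accepts. Note that the
# unescaped dots match any character; a stricter pattern would escape them.
pattern = re.compile(r"^my-company.com/.+$")

print(bool(pattern.match("my-company.com/http-echo:1.0")))  # True
print(bool(pattern.match("hashicorp/http-echo")))           # False
```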

To run the checks defined above, you need to create a Polaris configuration file, as follows:

checks:
  checkImageRepo: danger
customChecks:
  checkImageRepo:
    successMessage: Image registry is valid
    failureMessage: Image registry is not valid
    category: Images
    target: Container
    schema:
      '$schema': http://json-schema.org/draft-07/schema
      type: object
      properties:
        image:
          type: string
          pattern: ^my-company.com/.+$

Let's analyze this file.

The checks field specifies the checks and their severities. Because you want to be alerted when an image is untrusted, checkImageRepo is assigned the danger severity.

The checkImageRepo check itself is then defined in the customChecks object.

Save the above file as custom_check.yaml and run polaris audit with the YAML manifest you want to verify.

You can test it with the base-valid.yaml manifest:

polaris audit --config custom_check.yaml --audit-path base-valid.yaml

You will find that polaris audit runs only the custom check defined above, and that it does not succeed. If you change the container image to my-company.com/http-echo:1.0, polaris will report success. The modified manifest is included in the GitHub repository, so you can test the previous command against the image-valid-mycompany.yaml manifest.

But how do you run both built-in and custom checks? The configuration file above needs to list all the built-in check identifiers as well, like this:

checks:
  cpuRequestsMissing: warning
  cpuLimitsMissing: warning
  # Other inbuilt checks...
  # ...
  # custom checks
  checkImageRepo: danger
customChecks:
  checkImageRepo:
    successMessage: Image registry is valid
    failureMessage: Image registry is not valid
    category: Images
    target: Container
    schema:
      '$schema': http://json-schema.org/draft-07/schema
      type: object
      properties:
        image:
          type: string
          pattern: ^my-company.com/.+$

You can see an example of a complete configuration file here:

https://github.com/amitsaha/kubernetes-static-checkers-demo/blob/master/polaris-configs/config_with_custom_check.yaml

You can test the base-valid.yaml manifest with both the custom and the built-in checks:

polaris audit --config config_with_custom_check.yaml --audit-path base-valid.yaml

Polaris augments its built-in checks with your custom checks, combining the best of both approaches. However, not having access to a more powerful language, such as Rego or JavaScript, may limit you when writing more complex checks.

While there are many tools to validate, score, and lint Kubernetes YAML files, it is important to have a mental model of how you will design and run the checks. For example, if you want Kubernetes manifests to pass through a pipeline, kubeval can be the first step, because it verifies that object definitions conform to the Kubernetes API schema. Once that check succeeds, you can move on to more elaborate tests, such as standard best practices and custom policies. kube-score and polaris are the best choices here.

If you have complex requirements and want to customize checks in detail, you should look at copper, config-lint, and conftest. While conftest and config-lint require you to learn a dedicated rule language (Rego and a YAML DSL, respectively), copper gives you access to a full programming language, which makes it quite attractive. But should you use one of these and write all your checks from scratch, or use polaris and write only the additional custom checks? It depends on your situation.

After reading the above, have you mastered how to validate Kubernetes YAML against best practices and policies? If you want to learn more, feel free to follow the industry information channel. Thanks for reading!
