
Techniques for Optimizing the Security of Docker Images


This article presents techniques for optimizing the security of Docker images. The content is meant to be easy to understand and clearly organized; I hope it resolves your doubts as we study each technique below.

1 Preface

When you are new to Docker, you are likely to build insecure Docker images that make it easy for an attacker to take over the container, or even the entire host, and from there infiltrate the rest of your company's infrastructure.

There are many attack vectors that can be abused to take over your system, such as:

The launched application (specified in your Dockerfile's ENTRYPOINT) runs as the root user. This way, once an attacker exploits a vulnerability and gains shell access, they can take over the host that runs the Docker daemon.

Your image is based on an outdated and/or insecure base image that contains (by now) well-known security vulnerabilities.

Your image contains tools (such as curl, apt, etc.) that an attacker can use to load malware into the container once they gain some level of access.

The following sections explain various ways to optimize your image security. They are ordered by importance/impact, meaning the techniques at the top matter most.

2 Avoid leaking build secrets

A build secret is a credential that is needed only while building the Docker image (not at run time). For example, you might want to include in your image a compiled version of an application whose source code is closed and whose Git repository is access-protected. When building the image, you need to clone the Git repository (which requires a build secret, such as the repository's SSH access key), build the application from source, and then delete the source code (and the secret).

"Leaking" a build secret means accidentally baking it into some layer of your image. This is serious, because anyone who pulls your image can retrieve the secret. The problem stems from the fact that Docker images are built layer by layer in a purely additive way: files you delete in a later layer are only marked as deleted, and people who pull your image can still access them with suitable tools.

You can use one of the following two methods to avoid leaking build secrets.

Multi-stage builds

Docker multi-stage builds (see the official documentation) have many use cases, such as speeding up your image builds or reducing image size. Other articles in this series cover those use cases in detail. In short, a multi-stage build can also be used to avoid leaking build secrets, as follows:

Create a stage A, copy the credentials into it, and use them to retrieve other artifacts (such as the Git repository from the example above) and to perform further steps (such as compiling the application). The layers of stage A do contain the build secret!

Create a stage B, into which you copy only the non-secret artifacts from stage A, such as the compiled application.

Only publish/push the image of stage B (a minimal sketch of this pattern follows below).
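A minimal sketch of this pattern (the repository URL, key file, package set, and build commands are illustrative placeholders, and SSH host-key handling is omitted for brevity):

# Stage A ("builder"): contains the build secret, used only during the build
FROM ubuntu:22.04 AS builder
RUN apt-get update && apt-get -y install git openssh-client build-essential
# The SSH key is baked into this stage's layers, so stage A must never be pushed
COPY id_rsa /root/.ssh/id_rsa
RUN git clone git@example.com:org/app.git /src && make -C /src

# Stage B: copy only the compiled artifact; no secret ends up in these layers
FROM ubuntu:22.04
COPY --from=builder /src/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]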

BuildKit secrets

Background knowledge

If you build with docker build, you can actually choose between more than one build backend. The newer and faster backend is BuildKit; on Linux you have to explicitly enable it by setting the environment variable DOCKER_BUILDKIT=1. Note that BuildKit is enabled by default in Docker Desktop on Windows/macOS.

As explained in the documentation here (read it for more details), the BuildKit build engine supports additional syntax in the Dockerfile. To use a build secret, put something like this into your Dockerfile:

RUN --mount=type=secret,id=mysecret,dst=/foobar cat /foobar

While that RUN statement executes, the secret is available to the build container, but the secret itself (here: the file mounted at /foobar) will not end up in the built image. You specify the path to the secret's source file (located on the host) when you run the docker build command, for example:

docker build --secret id=mysecret,src=mysecret.txt -t sometag .

There is one caveat, though: you cannot build an image that requires a secret with docker-compose up --build, because docker-compose does not support the --secret parameter for builds (see GitHub). If you rely on docker-compose builds, use method 1 (multi-stage builds) instead.

Digression: do not push images built on your development machine

You should always build and push images in a clean environment (e.g. a CI/CD pipeline), where the build agent clones your repository into a fresh directory.

The problem with building on your local development machine is that the working tree of your local Git repository may be dirty. For example, it may contain secret files needed during development, such as access keys to staging or even production servers. If those files are not excluded via .dockerignore, a statement such as COPY . . in the Dockerfile can accidentally leak those secrets into the final image.
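For example, a .dockerignore that keeps typical local secret files and repository metadata out of the build context could look like this (the file names are illustrative):

# .dockerignore: these files never enter the build context,
# so a COPY . . statement cannot leak them into an image layer
.git
.env
*.pem
id_rsa*
secrets/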

3 Run as a non-root user

By default, when someone runs your image via docker run yourImage:yourTag, the container (and your program in ENTRYPOINT/CMD) runs as root (both inside the container and on the host). This gives an attacker who gains shell access in your running container through some vulnerability the following powers:

Unrestricted write access (being root) to all host directories that were explicitly mounted into the container.

The ability to do everything a Linux root user can do inside the container. For example, an attacker can install the additional tools they need to load more malware, say via apt-get install (something a non-root user cannot do).

If the container of your image was started with docker run --privileged, the attacker can even take over the entire host.

To avoid this, run your application as a non-root user, i.e. a user you create during the docker build. Place the following statements somewhere in your Dockerfile (usually toward the end):

# Create a new user (including a home directory, which is optional)
RUN useradd --create-home appuser
# Switch to this user
USER appuser

All commands in the Dockerfile that come after the USER appuser statement (such as RUN, CMD, or ENTRYPOINT) run as this user. Here are a few caveats to watch out for (a fuller sketch follows after this list):

Files that you copy into the image via COPY (or that some RUN command creates) before switching to the non-root user are owned by root, so your application running as the non-root user cannot write to them. To solve this, move the code that creates and switches to the non-root user toward the beginning of the Dockerfile.

Files that your program expects somewhere in the user's home directory (for example, ~/.cache) may suddenly seem to be missing from the application's point of view, because they were created as root at the beginning of the Dockerfile (and thus live under /root/, not /home/appuser/).

If your application listens on a TCP/UDP port, it must use a port above 1024. Ports 1024 and below can only be bound by the root user, or with special Linux capabilities, and you should not grant those to your container just for this purpose.
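Putting these caveats together, here is a minimal sketch of such a Dockerfile (the Python base image, port, and app.py entry point are illustrative assumptions):

FROM python:3.11-slim
# Create the non-root user early, so files copied later can be owned by it
RUN useradd --create-home appuser
WORKDIR /home/appuser
# --chown hands ownership to the non-root user, so the app can write these files
COPY --chown=appuser:appuser . .
USER appuser
# Listen on a port above 1024; lower ports would need root or extra capabilities
EXPOSE 8080
CMD ["python", "app.py"]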

4 Build on an up-to-date base image and update system packages

If you use a base image that contains the full toolset of a real Linux distribution (such as the Debian, Ubuntu, or Alpine images), including a package manager, it is recommended to use that package manager to install all available package updates.

Background knowledge

Base images are maintained by someone who has configured a CI/CD pipeline that builds the base image and pushes it to Docker Hub at regular intervals. You have no control over this interval, and it regularly happens that security patches land in the package registry of the Linux distribution (e.g. via apt) before the pipeline pushes an updated Docker image to Docker Hub. For example, even if a base image is pushed once a week, a security update may appear hours or days after the most recent image was published.

Therefore, it is always best to also run the package manager commands that update the local package database and install the updates, in unattended mode (requiring no user confirmation). The command differs per Linux distribution.

For example, for Ubuntu, Debian, or derived distributions, use:

RUN apt-get update && apt-get -y upgrade
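For Alpine-based images, the unattended equivalent should be (to the best of my knowledge; apk does not prompt for confirmation):

RUN apk update && apk upgrade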

Another important detail: you need to tell Docker (or whatever image build tool you use) to re-pull the base image. Otherwise, if you reference a base image such as python:3 (and Docker already has one in its local image cache), Docker will not even check whether a newer python:3 exists on Docker Hub. To get rid of this behavior, use this command:

docker build --pull

This ensures that Docker pulls updates of the image referenced in your Dockerfile's FROM statement before building your image.

You should also be aware of Docker's layer caching mechanism, which can leave your image outdated, because the layers of RUN commands stay cached until the base image maintainers release a new version of the base image. If you find that base images are released rather infrequently (say, less than once a week), it is a good idea to rebuild your image regularly (e.g. once a week) with layer caching disabled. You can do so by running the following command:

docker build --pull --no-cache

5 Regularly update third-party dependencies

The software you write builds on third-party dependencies, that is, software made by other people. This includes:

The base Docker image underneath your image, or

Third-party software components that you use as part of your application, e.g. components installed via pip/npm/gradle/apt/...

If these dependencies in your image are out of date, you will increase the attack surface, because outdated dependencies often have exploitable security vulnerabilities.

You can address this regularly with SCA (Software Composition Analysis) tools such as Renovate Bot. These tools (semi-)automatically update your declared third-party dependencies to their latest versions, e.g. those declared in your Dockerfile, Python's requirements.txt, NPM's package.json, and so on. You need to design your CI pipeline so that the changes made by the SCA tool automatically trigger a rebuild of your image.

Such automatically triggered image rebuilds are particularly useful for projects that are in maintenance-only mode but whose code is still used by customers in production (who expect it to stay secure). During maintenance you no longer develop new features or build new images, because there are no new commits (made by you) to trigger builds. However, the commits made by the SCA tool do trigger image builds again.

You can find more details about Renovate bot in my related blog posts.
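As an illustration, a minimal renovate.json in the repository root is enough to get started (the config:recommended preset name follows the current Renovate docs; older setups used config:base):

{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"]
}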

6 Scan your image for vulnerabilities

Even if you implement the recommendations above, so that, for example, your image always uses the latest third-party dependencies, it can still be insecure (for example, because a dependency is no longer maintained). In this context, "insecure" means that one or more dependencies have known security vulnerabilities (registered in some CVE database).

For this reason, there are tools that scan all the files contained in your Docker image for such vulnerabilities. They come in two forms:

CLI tools that you invoke explicitly (e.g. in a CI pipeline), such as Trivy (OSS, very easy to use in CI pipelines, see the Trivy documentation), Clair (OSS, but more complex to set up and use than Trivy), or Snyk (integrated into the Docker CLI via docker scan, see the cheat sheet, but only with a limited free plan!). A minimal Trivy invocation is sketched after this list.

Scanners integrated into the image registry that you push your images to, such as Harbor (which internally uses Clair or Trivy). There are also commercial products, such as Anchore.
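For illustration, a minimal Trivy call in a CI step could look like this (the image tag is a placeholder; --exit-code 1 fails the CI job when findings of the listed severities exist):

trivy image --severity HIGH,CRITICAL --exit-code 1 yourImage:yourTag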

Because these scanners are generic and try to cover a large number of package registries, they may not be tailored specifically to the programming language or package registry you use in your project. Sometimes you should investigate which tools your language ecosystem offers; for example, for Python there is the Safety tool, dedicated to Python packages.

7 Scan your Dockerfile for violations of best practices

Sometimes problems come from statements you put into your Dockerfile that are bad practice (without you realizing it). For that, use tools such as checkov, Conftest, trivy, or hadolint, which act as linters for Dockerfiles. To pick the right tool, check its default rules/policies. For example, hadolint offers more rules than checkov or Conftest, because it is specific to Dockerfiles. The tools are also complementary, so it makes sense to run several of them (e.g. hadolint and trivy) on your Dockerfiles (see the sketch below). Be prepared, though, to maintain "ignore files" in which you suppress rules, either because of false positives or because you deliberately break a rule.
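A sketch of running hadolint, including how to ignore a rule (DL3008, which demands pinned apt package versions, is chosen here purely as an example):

# Lint the Dockerfile in the current directory
hadolint Dockerfile
# Deliberately ignore a specific rule
hadolint --ignore DL3008 Dockerfile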

8 Do not use Docker Content Trust for Docker Hub

To verify that the base image you use was really built and pushed by the company behind it, you can use the Docker Content Trust feature (see the official documentation). You enable it simply by setting the environment variable DOCKER_CONTENT_TRUST to "1" while running docker build or docker pull. The Docker daemon will then refuse to pull images that were not signed by the publisher.
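Enabling it for a single pull looks like this (the image name is a placeholder):

DOCKER_CONTENT_TRUST=1 docker pull yourBaseImage:yourTag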

Unfortunately, the community stopped signing images this way about a year ago. Even Docker Inc. stopped signing the official Docker image in December 2020, without an official explanation. The bigger problem: with content trust enabled, a command such as docker pull docker:latest will only download an image that has long been out of date.

You can look into other implementations of image signing, such as cosign (though I have not tried it yet).

9 Scan your own code for security issues

Security problems usually come from other people's code, i.e. popular third-party dependencies, which are "lucrative" targets for hackers precisely because they are so widely used. Sometimes, however, the problem is your own code. For example, you may have accidentally made SQL injection possible, built in a stack overflow bug, and so on.

To find such problems, you can use so-called SAST (Static Application Security Testing) tools. On the one hand, there are language-specific tools (which you have to research individually), such as bandit for Python, or Checkstyle/Spotbugs for Java. On the other hand, there are tool suites that support multiple programming languages and frameworks (some of them non-free/commercial), such as SonarQube (with the SonarLint IDE plugin for it).

In practice, there are two basic methods for security scanning:

Continuous (automated) scanning: you create a CI job that scans your code on every push. This keeps your code's security at a consistently high level, but you must work out how to ignore false positives (an ongoing maintenance effort). If you use GitLab, you may also find GitLab's free SAST feature interesting. A bandit sketch for Python follows after this list.

Occasional (manual) scanning: a security-savvy member of the team runs the security checks locally, e.g. once a month or before each release, and reviews the results manually.
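As a sketch of the continuous approach for a Python project (assuming the code lives in src/; bandit exits with a non-zero code when it finds issues, which fails the CI job):

pip install bandit
bandit -r src/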

10 Use docker-slim to remove unnecessary files

The docker-slim tool takes a large Docker image, runs it temporarily, analyzes which files the temporary container actually uses, and then produces a new, single-layer Docker image from which all unused files have been removed. This has two benefits:

The image gets smaller.

The image becomes more secure, because unneeded tools (such as curl or package managers) are removed.

Please refer to the docker-slim section of my previous article for more details.
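A rough sketch of the invocation (the CLI has evolved over time, and newer releases ship the binary as slim, so check the current docs):

# Analyze and minify the image; the result is typically tagged yourImage.slim
docker-slim build yourImage:yourTag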

11 Use a minimal base image

The more software (CLI tools and the like) an image contains, the larger its attack surface. It is good practice to use a "minimal" image: the smaller the better (a nice advantage in any case), containing as few tools as possible. Minimal images go even beyond the size-optimized images (such as alpine, or the :-slim variants like python:3.8-slim): they do not contain a package manager at all. This makes it hard for an attacker to load additional tools.

The most secure minimal base image is scratch, which contains nothing at all. You can only start your Dockerfile with FROM scratch if you put self-contained binaries into the image, i.e. binaries that have all dependencies (including the C runtime) baked in.
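A minimal sketch for a statically linked Go binary (module contents and paths are placeholders; CGO is disabled so the binary needs no C runtime):

FROM golang:1.21 AS build
WORKDIR /src
COPY . .
# CGO_ENABLED=0 yields a static binary that runs without a libc
RUN CGO_ENABLED=0 go build -o /app .
# scratch contains nothing: no shell, no package manager, only your binary
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]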

If scratch is not for you, Google's distroless images can be a good choice, especially if you build applications for common programming languages such as Python or Node.js, or if you need a minimal Debian base image.

Unfortunately, there are several caveats to be aware of with minimal images:

Caveats for distroless images:

Note: if the only customization you need is "run the code as a non-root user", every distroless base image also comes with a default non-root user, as detailed here.

Using Google's gcr.io distroless images for specific programming languages is not recommended, because only a latest tag and major-version tags (such as "3" for Python or "12" for Node) exist. You have no control over the concrete language runtime version (e.g. whether Python 3.8.3 or 3.8.4 is used), which breaks the reproducibility of your image builds.

Customizing (and building your own) distroless images is quite complex: you need to become familiar with the Bazel build system and build the images yourself.

General caveats for minimal base images:

Debugging containers that are based on a minimal image is tricky, because useful tools (such as /bin/sh) are missing. Two workarounds:

For Docker, you can run a second debugging container that does have a shell and debugging tools (e.g. alpine:latest) and let it share the PID namespace of your minimal container, e.g.:

docker run -it --rm --pid=container:<your-container-name> --cap-add SYS_PTRACE alpine sh

For Kubernetes, you can use ephemeral containers, as shown in the example here.

12 Use a trusted base image

A trusted image is one that has been audited by someone (your own organization or somebody else) against, say, a certain security level. This can be especially important for regulated industries (banking, aerospace, etc.) with high security requirements and regulations.

Although you could do such an audit yourself and thus create a trusted image on your own, this is not advisable: you (the image builder) would have to ensure that all audit-related tasks are completed and properly documented (e.g. recording the list of packages in the image, the CVE checks performed and their results, etc.), which is very arduous. Instead, it is better to outsource this work and use a commercial "trusted registry" that offers a curated set of trusted images, such as RedHat's Universal Base Image (UBI). RedHat's UBI is now also freely available on Docker Hub.

Background knowledge

Images hosted on Docker Hub are not audited. They are provided "as is". They may be insecure (or even contain malware), and nobody will tell you. Consequently, using an insecure base image from Docker Hub makes your own image insecure, too.

Also, do not confuse auditing with the Docker Content Trust feature mentioned above! Content trust only confirms the identity of the source (the image uploader); it says nothing about the security of the image.

13 Test whether your image works with reduced capabilities

Linux capabilities are a Linux kernel feature that lets you control which kernel features an application may use, e.g. whether a process may send signals (such as SIGKILL), configure network interfaces, mount disks, or debug processes. See here for the complete list. In general, the fewer capabilities your application needs, the better.

Anyone who starts a container of your image can grant (or take away) these capabilities, e.g. by calling docker run --cap-drop=ALL. By default, Docker drops all capabilities except the ones defined here. Your application may not need all of them.

As a best practice, try starting a container of your image with all capabilities dropped (using --cap-drop=ALL) and check whether it still works. If it does not, find out which capabilities are missing and whether you really need them. Then document which capabilities your image needs (and why); this gives the users who run your image more confidence.
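Such a test run could look like this (the image name is a placeholder; NET_BIND_SERVICE is only an example of selectively re-adding a single capability):

# Start with no capabilities at all and see whether the application still works
docker run --rm --cap-drop=ALL yourImage:yourTag
# If something breaks, re-add only what is actually needed
docker run --rm --cap-drop=ALL --cap-add=NET_BIND_SERVICE yourImage:yourTag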

Those are all the techniques for optimizing the security of Docker images. Thank you for reading! I hope the content shared here helps you resolve your doubts.
