
What are the 10 layers of Linux container security?


This article introduces the 10 layers of Linux container security. The content is fairly detailed; interested readers are welcome to use it as a reference, and I hope you find it helpful.

10 layers of Linux container security | Daniel Oh | Senior Specialist Solution Architect at Red Hat

Containers provide a simple way to package applications and deploy them seamlessly from development and test environments to production. They help ensure consistency across a variety of environments, including physical servers, virtual machines (VMs), and private or public clouds. Leading organizations are quickly adopting containers because of these benefits, using them to easily develop and manage the applications that add business value.

Enterprise applications need strong security, and anyone running essential services in containers will ask: "Are containers secure?" and "Can we trust containers with our applications?"

Securing containers is much like securing any running process. You need to consider the security of the entire solution stack before you deploy and run a container, and you need to consider security throughout the full lifecycle of the application and the container.

Here are 10 ways to strengthen container security at different layers, across different parts of the technology stack, and at different stages of the lifecycle.

1. Container operating system and multi-tenancy

For developers, containers make it easier to build and upgrade applications; they package an application and its dependencies as a unit and make the most of server resources by enabling multi-tenant application deployments on a shared host. Containers make it easy to deploy multiple applications on a single host, spinning individual containers up and down as needed. To take full advantage of this packaging and deployment technology, the operations team needs the right environment for running containers. Operators need an operating system that protects containers at the boundary, isolating the host kernel from containers and ensuring containers are protected from one another.

Containers are Linux processes with isolation and resource confinement, enabling you to run sandboxed applications on a shared host kernel. You should protect containers in the same way you protect any running process on Linux. Dropping privileges is important and remains a best practice. An even better approach is to create containers with as few privileges as possible: containers should run as a normal user, not as root. Next, secure the container using the multiple levels of security features available in Linux: Linux namespaces, Security-Enhanced Linux (SELinux), cgroups, capabilities, and Secure Computing Mode (seccomp).
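
As an illustration (not part of the original article), here is a minimal Python sketch that can be run inside a container to check some of these points: that the process is not root, that its effective capability set has been reduced, and that a seccomp filter is in effect. It only reads standard fields from /proc/self/status; the thresholds you consider acceptable are up to you.

```python
# Minimal sketch: inspect the container's own process security settings.
import os

def proc_status(pid="self"):
    """Parse /proc/<pid>/status into a dict of field -> value."""
    fields = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

status = proc_status()

# 1. The container process should run as an unprivileged user, not root (UID 0).
print("effective UID:", os.geteuid())

# 2. CapEff is the effective capability bitmask; a fully privileged process
#    shows many bits set, a locked-down one shows 0 or very few.
cap_eff = int(status.get("CapEff", "0"), 16)
print("effective capabilities bitmask:", hex(cap_eff))

# 3. Seccomp: 0 = disabled, 1 = strict, 2 = filter mode (what container
#    runtimes normally apply via their default seccomp profile).
print("seccomp mode:", status.get("Seccomp"))
print("no_new_privs:", status.get("NoNewPrivs"))
```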

2. Container content (using trusted sources)

What do the contents of a container mean for security? Applications and infrastructure have been assembled from off-the-shelf components for some time now, many of them open source software, such as the Linux operating system, the Apache web server, Red Hat JBoss Enterprise Application Platform, PostgreSQL, and Node.js. Containerized versions of these packages are now available, so you do not need to build your own. However, as with any code downloaded from an external source, you need to know where the packages originated, who built them, and whether they contain malicious code.
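
One practical way to pin down "where a package came from" is to record the digest of an image you have reviewed and verify it has not changed before reuse. The sketch below (an illustration, not from the article) assumes the skopeo CLI is installed and that `skopeo inspect` prints JSON containing a "Digest" field; the image name and approved digest are placeholders.

```python
# Hedged sketch: confirm a public base image still matches the digest you vetted.
import json
import subprocess

EXPECTED = {
    # image reference               -> digest you reviewed and approved (placeholder)
    "docker.io/library/postgres:16": "sha256:0000000000000000000000000000000000000000000000000000000000000000",
}

def remote_digest(image: str) -> str:
    out = subprocess.run(
        ["skopeo", "inspect", f"docker://{image}"],
        check=True, capture_output=True, text=True,
    )
    return json.loads(out.stdout)["Digest"]

for image, approved in EXPECTED.items():
    actual = remote_digest(image)
    if actual != approved:
        raise SystemExit(f"{image}: digest changed ({actual}); re-review before use")
    print(f"{image}: digest matches the approved value")
```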

3. Container registries (secure access to container images)

Your team builds containers that layer content on top of downloaded public container images, so it is critical to manage access to, and updates of, downloaded images and internally built images in the same way that other types of binaries are managed. Many private registries support storing container images. Select a private registry that helps you automate policies for the container images stored in it.
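
A minimal sketch of one such policy (illustrative only, with a placeholder registry hostname): allow deployments only of images that come from your private registry. Real registries and platforms offer much richer policy engines; this just shows the idea.

```python
# Sketch: enforce an allow-list of registries for image references.
ALLOWED_REGISTRIES = {"registry.internal.example.com"}  # placeholder hostname

def registry_of(image_ref: str) -> str:
    """Return the registry part of a reference like registry.example.com/team/app:1.2.3.
    Short names such as "ubuntu:latest" default to docker.io."""
    first, sep, _ = image_ref.partition("/")
    if sep and ("." in first or ":" in first or first == "localhost"):
        return first
    return "docker.io"

def check(image_ref: str) -> None:
    reg = registry_of(image_ref)
    if reg not in ALLOWED_REGISTRIES:
        raise ValueError(f"image {image_ref!r} comes from untrusted registry {reg!r}")

check("registry.internal.example.com/payments/api:2.4.1")   # passes
# check("docker.io/someuser/unvetted:latest")                # would raise
```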

4. Build process security

In a containerized environment, the software build is the stage of the lifecycle where application code is integrated with the runtime libraries. Managing this build process is key to securing the software stack. Adhering to the principle of "build once, deploy everywhere" ensures that the product of the build process is exactly what is deployed in production. This is also important for maintaining the ongoing stability of containers; in other words, do not patch running containers, but rebuild and redeploy them instead. Whether you work in a highly regulated industry or simply want to optimize your team's efforts, design your container image management and build process to take advantage of container layers and implement separation of control, so that:

The operations team manages the base images

The architecture team manages middleware, runtimes, databases, and other solutions

The development team focuses only on the application layer and code

Finally, sign your custom-built container images to ensure they are not tampered with between build and deployment.
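
As a concrete (and hedged) illustration of this signing step, the sketch below wraps the cosign CLI, one example of an image-signing tool that is not named in the article. It assumes `cosign sign --key ...` and `cosign verify --key ...` behave as in current cosign releases; the image reference and key file paths are placeholders.

```python
# Hedged sketch: sign an image after the build, verify it before deployment.
import subprocess

IMAGE = "registry.internal.example.com/payments/api:2.4.1"  # placeholder

def sign(image: str, key: str = "cosign.key") -> None:
    # Run in the build pipeline, after pushing the image.
    subprocess.run(["cosign", "sign", "--key", key, image], check=True)

def verify(image: str, pub: str = "cosign.pub") -> None:
    # Run in the deployment pipeline: refuse to deploy if verification fails.
    subprocess.run(["cosign", "verify", "--key", pub, image], check=True)

if __name__ == "__main__":
    sign(IMAGE)
    verify(IMAGE)
```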

5. Control what can be deployed in the cluster

In case anything slips through during the build process, or a vulnerability is discovered after an image has been deployed, you need to add another layer of security in the form of automated, policy-based deployment.

Let's look at the three container image layers that make up an application: the core layer, the middleware layer, and the application layer. If an issue is found in the core image, that image is rebuilt. Once the build is complete, the image is pushed to the container platform registry. The platform can detect that the image has changed, and for builds that depend on this image and have defined triggers, the platform automatically rebuilds the application, integrating the fixed libraries.

Once the build is complete, the image is pushed to the container platform's internal registry. Changes to images in the internal registry are detected immediately, and the updated image is automatically deployed through triggers defined in the application, ensuring that the code running in production always matches the most recently updated image. Together, these capabilities integrate security into your continuous integration and continuous deployment (CI/CD) process.
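
To make the "policy-based deployment" idea concrete, here is a small sketch of a gate that a CI/CD pipeline could run before deploying an image: it blocks the deployment if a vulnerability scan reports findings above a chosen severity. The report format (a JSON list of objects with "id" and "severity") is hypothetical; adapt it to whatever scanner your registry or platform actually produces.

```python
# Sketch: policy gate in CI/CD that blocks deployment on severe scan findings.
import json
import sys

BLOCKING_SEVERITIES = {"CRITICAL", "HIGH"}

def gate(report_path: str) -> int:
    with open(report_path) as f:
        findings = json.load(f)
    blockers = [x for x in findings if x.get("severity", "").upper() in BLOCKING_SEVERITIES]
    for finding in blockers:
        print(f"blocking finding: {finding.get('id')} ({finding.get('severity')})")
    # Non-zero exit code fails the pipeline stage and stops the deployment.
    return 1 if blockers else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "scan-report.json"))
```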

6. Container orchestration: securing the container platform

Of course, applications are rarely delivered in a single container. Even simple applications usually have a front end, a back end, and a database. Deploying modern microservice applications in containers usually means deploying multiple containers, sometimes on the same host and sometimes distributed across multiple hosts or nodes.

When managing container deployments at scale, you need to consider:

Which containers should be deployed to which hosts?

Which hosts have more capacity?

Which containers need access to each other? How will they discover each other?

How do you control access to, and management of, shared resources such as networking and storage?

How do you monitor container health?

How do you automatically scale application capacity to meet demand?

How do you enable developer self-service while still meeting security requirements?

Given the wide range of capabilities granted to developers and operators, strong role-based access control is a critical element of the container platform. For example, the orchestration management servers are a central point of access and should receive the highest level of security scrutiny. APIs are key to automating container management at scale; they validate and configure the data for containers, services, and replication controllers; perform project validation on incoming requests; and invoke triggers on other major system components.
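
The sketch below illustrates the role-based access control idea in a platform-agnostic way: roles map to the verbs a user may call on a resource type exposed by the orchestration API. The role, resource, and verb names are illustrative, not any specific platform's API.

```python
# Minimal sketch of role-based access control for an orchestration API.
ROLE_PERMISSIONS = {
    "developer": {("deployments", "get"), ("deployments", "create"), ("pods", "get")},
    "operator":  {("nodes", "get"), ("nodes", "update"), ("pods", "get"), ("pods", "delete")},
    "auditor":   {("pods", "get"), ("deployments", "get"), ("nodes", "get")},
}

def allowed(role: str, resource: str, verb: str) -> bool:
    """Return True if the role may perform the verb on the resource type."""
    return (resource, verb) in ROLE_PERMISSIONS.get(role, set())

assert allowed("developer", "deployments", "create")
assert not allowed("developer", "nodes", "update")   # only operators touch nodes
assert not allowed("auditor", "pods", "delete")      # auditors are read-only
```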

7. Network isolation

Deploying modern microservice applications in containers often means deploying multiple containers across multiple nodes. With network defense in mind, you need a way to isolate applications within a cluster.

A typical public cloud container service, such as Google Container Engine (GKE), Azure Container Service, or the Amazon Web Services (AWS) container service, is single-tenant: it lets you run containers on the VM cluster that you start. To achieve multi-tenant container security, you want a container platform that allows you to take a single cluster and segment the traffic so that different users, teams, applications, and environments within that cluster are isolated from one another.

Through network namespaces, each collection of containers (known as a "pod") gets its own IP address and range of ports to bind to, thereby isolating pod networks from one another on the node.

By default, pods in different namespaces (projects) cannot send packets to, or receive packets from, the pods and services of another project, except where explicitly allowed. You can use these features to isolate developer, test, and production environments within a cluster; however, this proliferation of IP addresses and ports makes networking more complicated, so it is worth investing in tools that handle the complexity. The preferred tool is a container platform that uses software-defined networking (SDN) to provide a unified cluster network and enable communication between containers across the whole cluster.
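
On Kubernetes-based platforms this kind of isolation is commonly expressed as a NetworkPolicy. The sketch below (an illustration with placeholder names and labels) builds such a manifest as a plain Python dict and prints it as JSON, which kubectl also accepts: only front-end pods may reach the database pods it selects.

```python
# Sketch: a NetworkPolicy allowing only app=frontend pods to reach app=postgres pods.
import json

network_policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "db-allow-frontend", "namespace": "payments"},
    "spec": {
        # The policy applies to the database pods...
        "podSelector": {"matchLabels": {"app": "postgres"}},
        "policyTypes": ["Ingress"],
        # ...and only ingress traffic from pods labelled app=frontend is allowed;
        # any other ingress to the selected pods is denied.
        "ingress": [
            {"from": [{"podSelector": {"matchLabels": {"app": "frontend"}}}]}
        ],
    },
}

print(json.dumps(network_policy, indent=2))
# Apply with: kubectl apply -f <file containing the JSON above>
```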

8. Storage

Containers are useful for both stateless and stateful applications. Protecting storage is a key element of securing stateful services. The container platform should provide a variety of storage plug-ins, including Network File System (NFS), AWS Elastic Block Store (EBS), GCE Persistent Disk, GlusterFS, iSCSI, RADOS (Ceph), Cinder, and so on.

A persistent volume (PV) can be mounted on a host in any way supported by the resource provider. Providers have different capabilities, and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV may be exported read-only on the server. Each PV has its own set of access modes describing that PV's capabilities, such as ReadWriteOnce, ReadOnlyMany, and ReadWriteMany.
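
As an illustration of how a stateful service requests storage with a specific access mode, here is a sketch of a PersistentVolumeClaim built as a plain dict and printed as JSON; the names, namespace, and size are placeholders.

```python
# Sketch: a PersistentVolumeClaim requesting a ReadWriteOnce volume.
import json

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "postgres-data", "namespace": "payments"},
    "spec": {
        # ReadWriteOnce: mountable read-write by a single node; use
        # ReadOnlyMany / ReadWriteMany only when the backing volume supports it.
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

print(json.dumps(pvc, indent=2))
```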

9. API management, endpoint security, and single sign-on (SSO)

Securing applications includes managing application and API authentication and authorization. Web SSO capability is a key part of modern applications, and the container platform can provide a variety of container services that developers can use when building their own applications.

APIs are key to applications composed of microservices. These applications have multiple independent API services, which leads to a proliferation of service endpoints and therefore requires additional governance; using an API management tool is recommended. All API platforms should offer a variety of standard options for API authentication and security, which can be used alone or in combination to issue credentials and control access. These options include standard API keys, application IDs and key pairs, and OAuth 2.0.
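
Here is a hedged sketch of two of the options named above: a plain API key and an OAuth 2.0 client-credentials token. The endpoint URLs, key names, and header names are placeholders for whatever your API management tool actually issues.

```python
# Sketch: calling an API with an API key, and with an OAuth 2.0 bearer token.
import requests

API_BASE = "https://api.example.com"          # placeholder
TOKEN_URL = "https://sso.example.com/oauth/token"  # placeholder

def call_with_api_key(api_key: str):
    # Many API gateways accept a key as a custom header or query parameter.
    return requests.get(f"{API_BASE}/v1/orders", headers={"X-API-Key": api_key})

def call_with_oauth(client_id: str, client_secret: str):
    # Standard OAuth 2.0 client-credentials grant: exchange the id/secret pair
    # for a short-lived bearer token, then use it on the API call.
    token_resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials"},
        auth=(client_id, client_secret),
    )
    token_resp.raise_for_status()
    access_token = token_resp.json()["access_token"]
    return requests.get(
        f"{API_BASE}/v1/orders",
        headers={"Authorization": f"Bearer {access_token}"},
    )
```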

10. Roles and access control management (cluster federation)

In July 2016, Kubernetes 1.3 introduced Kubernetes Federated Clusters. This is an exciting feature that is currently in beta in Kubernetes 1.6.

In public cloud or enterprise data center scenarios, federation is useful for deploying and accessing application services that span clusters. Multiple clusters enable high availability for applications, for example across multiple regions or multiple cloud providers (such as AWS, Google Cloud, and Azure), and allow deployments or migrations to be managed in a common way.

When managing a cluster federation, you must be sure the orchestration tools provide the security you need across the different deployment platform instances. As always, authentication and authorization are key: you need to be able to pass data securely to your applications no matter where they run, and to manage application multi-tenancy across clusters.

Kubernetes extended cluster federation to include support for federated secrets, federated namespaces, and Ingress objects.

This concludes the overview of the 10 layers of Linux container security. I hope the content above is helpful and teaches you something new. If you found the article useful, feel free to share it so more people can see it.
