What are the applications of Kubernetes production environment

2025-02-25 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article introduces the relevant knowledge of "what are the applications of Kubernetes production environment". Many people run into difficulties with these topics in real-world operations; the sections below walk through how to handle those situations. I hope you read carefully and come away with something useful!

Kubernetes in production environment

Kubernetes is an orchestration tool with rich features but a complex, steep learning curve. Production operations should be handled as carefully as possible. If you face an internal talent shortage, you can outsource to a PaaS vendor who provides all the best practices for you. But suppose you manage Kubernetes in production on your own. In that case, it is important to focus on best practices, especially observability, logging, cluster monitoring, and security configuration.

As many of us know, running containers in production is not easy. It requires a great deal of work and computing resources. There are many orchestration platforms on the market, but Kubernetes has gained the greatest traction and the support of most cloud providers.

All in all, Kubernetes, containerization, and microservices are all good infrastructure, but they also pose security challenges. Kubernetes Pods can be rescheduled quickly across the infrastructure, which increases internal traffic between Pods and, with it, security risk. In addition, the attack surface of Kubernetes is usually larger. You must also take into account that Kubernetes is highly dynamic and that this new environment does not integrate perfectly with older security tools.

Gartner predicts that by 2022, more than 75% of global organizations will run containerized applications in production, up from less than 30% today. By 2025, more than 85% of global organizations will run containerized applications in production, up from less than 35% in 2019. Cloud-native applications require a high degree of infrastructure automation, DevOps, and specialized operational skills that are hard to find in an ordinary IT organization.

So you must apply best practices in security, monitoring, networking, governance, storage, container lifecycle management, and platform selection. Let's take a look at some Kubernetes production best practices.

Running Kubernetes in production is not easy; there are several areas to pay attention to.

Are liveness probes and readiness probes used for health checks?

Managing large distributed systems can be complicated, especially when something goes wrong and we are not notified in time. To ensure that application instances work properly, it is important to set up Kubernetes health checks.

By creating custom health checks, we can effectively prevent zombie services from running in the distributed system; the checks can be adjusted to the environment and its needs.

Readiness probe

The purpose of the readiness probe is to let Kubernetes know whether the application is ready to serve traffic. Kubernetes ensures that traffic is sent to a Pod only after its readiness probe passes.
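As a minimal sketch (the Pod name, image, and /healthz endpoint are hypothetical), a readiness probe can be declared in the Pod spec like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web                # hypothetical Pod name
spec:
  containers:
  - name: web
    image: nginx:1.25      # hypothetical image
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /healthz     # assumed health endpoint exposed by the app
        port: 80
      initialDelaySeconds: 5   # wait before the first check
      periodSeconds: 10        # check every 10 seconds
```

Until the probe succeeds, the Pod is excluded from Service endpoints and receives no traffic.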

Liveness probe

How do you know whether your application is alive or dead? The liveness probe lets you find out. If your app dies, Kubernetes removes the old Pod and replaces it with a new one.
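A liveness probe is declared the same way; here is a sketch (endpoint and thresholds are illustrative). If the probe fails repeatedly, the kubelet restarts the container:

```yaml
    livenessProbe:
      httpGet:
        path: /healthz       # assumed health endpoint
        port: 80
      initialDelaySeconds: 15  # give the app time to start
      periodSeconds: 10
      failureThreshold: 3      # restart after 3 consecutive failures
```

The fragment above goes under the same container entry as the readiness probe; the two probes answer different questions (restart me vs. send me traffic) and should usually both be set.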

Resource management

It is a good practice to specify resource requests and limits for each container.
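A sketch of a container spec fragment with requests and limits (container name, image, and values are hypothetical and should be tuned to the workload):

```yaml
# Fragment of a Pod/Deployment container spec
containers:
- name: api                  # hypothetical container name
  image: example/api:1.0     # hypothetical image
  resources:
    requests:                # what the scheduler reserves for the container
      cpu: 250m
      memory: 256Mi
    limits:                  # hard caps enforced at runtime
      cpu: 500m
      memory: 512Mi
```

Requests drive scheduling decisions; limits protect the node from a single runaway container.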

Another good practice is to divide the Kubernetes environment into separate namespaces for different teams, departments, applications, and clients.
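A namespace plus a ResourceQuota is a common way to carve out such a slice; the names and quota values below are hypothetical:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a               # hypothetical team namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"        # total CPU the namespace may request
    requests.memory: 8Gi     # total memory the namespace may request
    pods: "20"               # cap on the number of Pods
```

The quota keeps one team's workloads from starving the rest of the cluster.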

Kubernetes resource usage

Kubernetes resource usage refers to the amount of resources consumed by containers/Pods in production.

Therefore, it is important to pay close attention to the resource usage of Pods. One obvious reason is cost: the higher the resource utilization, the less resources are wasted.

Resource utilization

Ops teams typically want to optimize and maximize the percentage of resources consumed by pods. Resource usage is one of the indicators of the actual optimization of the Kubernetes environment.

You can treat the average CPU and other resource utilization of running containers as a measure of how well the Kubernetes environment is optimized.

Enable RBAC

RBAC stands for role-based access control. It is a method of restricting what users and applications on the system/network are allowed to access.

RBAC has been generally available since Kubernetes version 1.8. RBAC authorization policies are created through the rbac.authorization.k8s.io API group.

In Kubernetes, RBAC is used for authorization; with RBAC, you can grant permissions to users and service accounts, add or remove permissions, set rules, and so on. It essentially adds an extra security layer to the Kubernetes cluster. RBAC restricts who can access your production environment and cluster.
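A minimal sketch of a namespaced Role and its RoleBinding (the `production` namespace and the user `jane` are hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production      # hypothetical namespace
  name: pod-reader
rules:
- apiGroups: [""]            # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]   # read-only access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: production
subjects:
- kind: User
  name: jane                 # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

For cluster-wide permissions the analogous ClusterRole and ClusterRoleBinding objects are used instead.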

Cluster provisioning and load balancing

Production-grade Kubernetes infrastructure usually needs to address some key aspects, such as high availability, multiple control-plane nodes, a multi-node etcd cluster, and so on. Provisioning such clusters usually involves tools such as Terraform or Ansible.

Once the cluster is set up and Pods are created to run the application, those Pods sit behind load balancers that route traffic to the Services. The open source Kubernetes project does not ship a default load balancer, so you need to integrate tools such as the NGINX Ingress controller, HAProxy, a cloud load balancer such as ELB, or any other tool that extends Kubernetes's Ingress mechanism to provide load balancing capabilities.
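With an ingress controller installed, routing is declared through an Ingress object; a sketch (hostname, Service name, and ingress class are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx        # assumes an NGINX ingress controller is installed
  rules:
  - host: app.example.com        # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc        # hypothetical Service in front of the Pods
            port:
              number: 80
```

The controller watches Ingress objects and configures its load balancer accordingly.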

Label Kubernetes objects

Labels are key/value pairs attached to objects such as Pods. Labels identify attributes of an object that are important and meaningful to users. An important issue not to ignore when using Kubernetes in production is labels; they allow Kubernetes objects to be queried and operated on in bulk. What makes labels special is that they can also be used to identify Kubernetes objects and organize them into groups. One of the best use cases is grouping Pods by the application they belong to. A team can build any number of labeling conventions.
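A sketch of a conventional label set on a Pod's metadata (all names are hypothetical; pick a convention and apply it consistently):

```yaml
metadata:
  name: checkout-7d9f        # hypothetical Pod name
  labels:
    app: checkout            # hypothetical application name
    tier: backend
    environment: production
```

Labeled objects can then be queried in bulk, for example with `kubectl get pods -l app=checkout,environment=production`.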

Configure network policy

When using Kubernetes, it is critical to set network policies.

A network policy is simply an object that lets you declare explicitly which traffic is allowed and which is not. That way, Kubernetes can block all other unwanted, non-conforming traffic. Defining and limiting network traffic in our cluster is one of the basic and necessary security measures that is highly recommended.

Each network policy in Kubernetes defines a list of authorized connections as described above. Whenever a network policy is created, all Pods it selects become eligible to establish or accept the listed connections. Simply put, a network policy is essentially a whitelist of authorized, allowed connections: a connection to or from a Pod is allowed only if at least one network policy applied to that Pod allows it.
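A sketch of such a whitelist policy (namespace, labels, and port are hypothetical): only `frontend` Pods may reach `backend` Pods, and only on TCP 8080:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: production      # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: backend           # the Pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend      # only traffic from frontend Pods is allowed
    ports:
    - protocol: TCP
      port: 8080
```

Note that network policies are only enforced if the cluster's CNI plugin (e.g. Calico or Cilium) supports them.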

Cluster monitoring and logging

When using Kubernetes, monitoring your deployments is critical. It is even more important to ensure that configuration, performance, and traffic remain secure. Without logging and monitoring, it is impossible to diagnose problems, and both become very important for ensuring compliance.

When monitoring, it is necessary to set up logging capabilities at each layer of the architecture. The generated logs will help us enable security tools, audit capabilities, and performance analysis.

Start with a stateless application

Running stateless applications is much easier than running stateful ones, though that mindset is changing as Kubernetes operators mature. For teams new to Kubernetes, it is recommended to start with stateless applications.

A stateless backend is recommended so that the development team can ensure there are no long-running connections, which make scaling harder. With stateless backends, developers can also deploy applications more efficiently and with zero downtime.

It is generally believed that stateless applications can easily be migrated and scaled according to business needs.

Enable autoscaling

Kubernetes has three auto-scaling features for deployment: horizontal pod auto-scaling (HPA), vertical pod auto-scaling (VPA), and cluster auto-scaling.

The Horizontal Pod Autoscaler automatically scales the number of Pods in a Deployment, ReplicationController, ReplicaSet, or StatefulSet based on observed CPU utilization.
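A sketch of an HPA targeting a Deployment (names and thresholds are hypothetical):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                  # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add Pods when average CPU exceeds 70%
```

Utilization is computed against each container's CPU request, so the HPA only works for Pods that declare resource requests.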

Vertical pod autoscaling recommends appropriate values for CPU and memory requests and limits, and it can update these values automatically.

The Cluster Autoscaler grows and shrinks the worker node pool, adjusting the size of the Kubernetes cluster based on current utilization.

Control where images are pulled from

Control the image sources for all containers running in the cluster. If you allow your Pods to pull images from public sources, you don't know what's really running in them.

If you pull them from a trusted registry instead, you can apply policies on the registry so that only secure, vetted images are pulled.
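A sketch of a Pod spec that pulls from a private, trusted registry (the registry URL, image, and secret name are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  imagePullSecrets:
  - name: registry-cred      # hypothetical Secret holding registry credentials
  containers:
  - name: app
    image: registry.example.com/team/app:1.4.2   # hypothetical private registry image
```

The credential Secret can be created with `kubectl create secret docker-registry registry-cred --docker-server=registry.example.com --docker-username=<user> --docker-password=<password>`; stricter enforcement (blocking public registries cluster-wide) is typically added with an admission controller.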

Continuous learning

Constantly evaluate the status and settings of the application to learn and improve. For example, reviewing the historical memory usage of containers can lead to the conclusion that we can allocate less memory and save costs in the long run.

Protect critical services

Using Pod priority, you can decide how important different services are to keep running. For example, for better stability, you may want to ensure that RabbitMQ Pods are more important than your application Pods, or that your ingress controller Pods are more important than data-processing Pods, so the service stays available to users.
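A sketch of a PriorityClass and a Pod that uses it (the class name, value, and Pod are hypothetical):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical-messaging   # hypothetical class name
value: 1000000               # higher value = higher priority
globalDefault: false
description: "For messaging Pods such as RabbitMQ that must stay up"
---
apiVersion: v1
kind: Pod
metadata:
  name: rabbitmq-0
spec:
  priorityClassName: critical-messaging   # attach the priority to the Pod
  containers:
  - name: rabbitmq
    image: rabbitmq:3.12     # hypothetical image
```

Under resource pressure, the scheduler may evict lower-priority Pods to make room for higher-priority ones.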

Zero downtime

Support zero-downtime upgrades of your cluster and services by running everything in a highly available (HA) configuration. This also ensures higher availability for your customers.

Use pod anti-affinity to ensure that multiple replicas of a Pod are scheduled on different nodes, preserving service availability through planned and unplanned cluster node downtime.
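A sketch of an anti-affinity rule that goes in a Deployment's Pod template spec (the `app: web` label is hypothetical); it forbids two replicas with the same label from landing on the same node:

```yaml
# Fragment of spec.template.spec in a Deployment
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: web                       # hypothetical replica label
      topologyKey: kubernetes.io/hostname  # spread across distinct nodes
```

Use `preferredDuringSchedulingIgnoredDuringExecution` instead if a best-effort spread is acceptable when nodes are scarce.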

Use a PodDisruptionBudget to make sure a minimum number of Pod replicas stays available at all times!
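A sketch of such a budget (name and selector are hypothetical); voluntary disruptions such as node drains will not take the matching Pods below two available replicas:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2            # never evict below 2 running replicas
  selector:
    matchLabels:
      app: web               # hypothetical replica label
```

`maxUnavailable` can be used instead of `minAvailable` when a percentage-based budget fits better.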

This is the end of "what are the applications of Kubernetes production environment". Thank you for reading. If you want to learn more about the industry, you can follow this website, where the editor will publish more high-quality practical articles for you!
