2025-01-17 Update From: SLTechnology News&Howtos
This article examines the main problems with the Kubernetes architecture. It should be a useful reference for anyone weighing whether the platform fits their needs.
The Kubernetes architecture is ideal for organizations operating at a certain scale, but it may be too complex for everyone else.
The open source container orchestration platform Kubernetes has become the de facto solution for anyone deploying containerized applications in a production environment. There are many reasons for this, including the high degree of reliability, automation, and scalability that Kubernetes provides. However, I sometimes think the Kubernetes architecture is overhyped. Although it is now more than six years old, it still has shortcomings of various kinds. Some are inherent in Kubernetes itself, while others are products of the ecosystem that has grown up around the platform.
Before you adopt Kubernetes, consider the following questions about the open source container orchestration platform.
Kubernetes is designed for web-scale companies.
First, the Kubernetes architecture is built for companies that need to manage extremely large application environments.
If you are Google (whose internal Borg orchestrator laid the foundation for the later open source Kubernetes project), then Kubernetes is a great tool. The same is true if you are Netflix, Facebook, Amazon, or another web-scale company with dozens of data centers and hundreds of applications and services.
However, if you are a smaller organization with one data center and perhaps a dozen applications to deploy, the Kubernetes architecture is arguably overkill. It's like using a bulldozer to turn over a backyard garden. Unless you use it at scale, the effort required to configure and manage it is not worth it.
This is not to say that Kubernetes is never suitable for small-scale deployments; I think it is moving in that direction. But today, whenever I spin up a Kubernetes cluster just to deploy one or two applications on a handful of servers, I come away convinced that a simpler solution would have served better.
A fragmented Kubernetes ecosystem
Another problem with the Kubernetes architecture is that there are so many Kubernetes distributions (and so many different tools, philosophies, and opinions associated with them) that the Kubernetes ecosystem is highly fragmented.
Of course, to some extent, any open source ecosystem fragments.
For example, Red Hat Enterprise Linux and Ubuntu Linux have different package managers, management tools, and so on. However, Red Hat and Ubuntu have more similarities than differences. If you are a Red Hat system administrator who wants to migrate to Ubuntu, you will not need to spend six months teaching yourself new tools.
I don't think the same can be said of Kubernetes. If you are using OpenShift today but want to switch to VMware Tanzu, you will face a very steep learning curve. Although both distributions are built on the same underlying platform, Kubernetes, the methods and tools they layer on top are very different.
There is a similar split among cloud-based Kubernetes services. The user experience and management tooling of Google Kubernetes Engine (GKE) is very different from that of Amazon's Elastic Kubernetes Service (EKS).
Of course, this is not the fault of the Kubernetes architecture itself. It is the result of different vendors trying to differentiate their Kubernetes products. But from the perspective of a Kubernetes user, it remains a real problem.
There are too many components in Kubernetes
We talk about Kubernetes as if it were a single platform, but in fact it comprises more than half a dozen distinct components: the API server, etcd, the scheduler, the controller manager, the kubelet, kube-proxy, and a container runtime, among others. This means that when you install or update Kubernetes, you have to deal with each part separately. And most Kubernetes distributions lack good automation for doing so.
Of course, Kubernetes is genuinely a complex platform that needs multiple parts to work. But compared with other complex platforms, Kubernetes does a particularly poor job of integrating its parts into an easily managed whole. A typical Linux distribution also consists of many different pieces of software, yet you can install and manage them in a centralized, streamlined way. That is not the case with the Kubernetes architecture.
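To make the point concrete, here is a sketch of a kubeadm ClusterConfiguration, one common way to declare the control-plane pieces. Every value shown (version, paths, subnet) is an illustrative placeholder, not a recommendation; the point is that each component is configured as its own separate section:

```yaml
# Sketch of a kubeadm ClusterConfiguration: each control-plane
# component is a separately configured (and separately versioned) piece.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0          # placeholder version
etcd:
  local:
    dataDir: /var/lib/etcd          # the cluster state store
apiServer:
  extraArgs:
    audit-log-path: /var/log/kube-apiserver-audit.log
controllerManager: {}               # runs the reconciliation loops
scheduler: {}                       # places pods onto nodes
networking:
  podSubnet: 10.244.0.0/16          # pod CIDR; value is illustrative
```

Node-side components (the kubelet, kube-proxy, and the container runtime) are configured separately again, outside this file, which is exactly the fragmentation the paragraph above describes.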
Kubernetes does not automatically guarantee high availability
One of the most commonly cited reasons for using Kubernetes is that it magically manages your applications so that they never fail, even when part of your infrastructure goes down.
The Kubernetes architecture can indeed decide intelligently and automatically where to place workloads in a cluster. However, Kubernetes is not a panacea for high availability. For example, it will happily run in a production environment with only a single master node, which makes the entire cluster a single point of failure: if the master server goes down, the whole cluster essentially stops working.
Kubernetes also does not automatically guarantee that resources are allocated sensibly among the different workloads running in a cluster. To achieve that, you need to set up resource quotas manually.
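Those quotas are declared per namespace, by hand. A minimal sketch, with hypothetical namespace names and limits chosen purely for illustration:

```yaml
# A ResourceQuota must be written and applied manually per namespace;
# Kubernetes will not derive anything like this for you.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota       # hypothetical name
  namespace: team-a        # hypothetical namespace
spec:
  hard:
    requests.cpu: "4"      # total CPU the namespace may request
    requests.memory: 8Gi
    limits.cpu: "8"        # total CPU limit across all pods
    limits.memory: 16Gi
    pods: "20"             # cap on pod count
```

Once applied (e.g. with kubectl apply -f quota.yaml), pods in that namespace without resource requests are rejected, which is another manual convention you must enforce yourself.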
It is difficult to control Kubernetes manually
Although Kubernetes requires a lot of manual intervention to deliver high availability, it makes manual control quite difficult when you actually want it.
To be sure, there are ways to do things such as modify the timing of the probes Kubernetes uses to determine whether a container is running correctly, or force a workload to run on a specific server in the cluster. However, the Kubernetes architecture was not designed with manual changes by administrators in mind. It assumes that you will always be happy with the defaults.
This makes sense given that (as noted above) Kubernetes was created for web-scale deployments. If you have thousands of servers and hundreds of workloads, you will not want to configure much by hand. But if you are a small business that wants finer control over how workloads are structured within a cluster, Kubernetes makes that difficult.
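The two tuning knobs mentioned above (probe timing and pinning a workload to a particular server) do exist, but both have to be spelled out field by field in the pod spec. A sketch, with a placeholder image, health path, and node label:

```yaml
# Overriding probe timing and pinning a pod to labeled nodes.
# Image, health endpoint, and label are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  nodeSelector:
    disktype: ssd              # only schedule on nodes labeled disktype=ssd
  containers:
  - name: app
    image: nginx:1.25          # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz         # assumes the app serves a health endpoint here
        port: 80
      initialDelaySeconds: 15  # override the default probe timing
      periodSeconds: 20
      failureThreshold: 3      # restarts after 3 consecutive failures
```

Note that the node must already carry the disktype=ssd label (set separately with kubectl label nodes); nothing in this file creates it, which is typical of how these manual overrides sprawl across multiple places.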
Kubernetes monitoring and performance optimization are challenging
Kubernetes tries to keep your workload running (although, as mentioned above, its ability to do so depends on factors such as how many hosts you set up and how you structure resource allocation).
However, the Kubernetes architecture does little to help you monitor workloads or ensure they perform optimally. It does not alert you when something goes wrong, and it does not make it easy to collect monitoring data from a cluster. Most of the monitoring dashboards that ship with Kubernetes distributions do not provide deep visibility into your environment, either. There are third-party tools that can give you that visibility, but they are yet another thing you must deploy, learn, and manage if you want to run Kubernetes.
Similarly, Kubernetes is not much help with cost optimization. It will not notify you if the servers in a cluster are running at only 20% capacity, which may mean you are wasting money on overprovisioned infrastructure. Again, third-party tools can help you meet challenges like this, but they add yet more complexity.
Kubernetes reduces everything to code.
In Kubernetes, getting anything done requires writing code. Usually that code takes the form of YAML files, which must then be applied on the Kubernetes command line.
Many people regard the everything-is-code requirement of the Kubernetes architecture as a feature rather than a bug. I certainly understand the value of managing an entire platform with a single methodology and a single tool (meaning YAML files). But I do wish Kubernetes offered other options for those who want them.
Sometimes I don't want to write a long YAML file (or pull one from GitHub and manually tweak random parts of it to suit my environment) just to deploy a simple workload. I wish I could press a button or run a single simple command (by which I mean a kubectl command that does not take a dozen arguments, many of them configured with mysterious data strings that must be copied and pasted) to perform simple operations in Kubernetes. But for now, that is not possible.
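For a sense of what "a long YAML file for a simple workload" means in practice, here is roughly the minimum a Deployment requires. The names and image are placeholders; note how much of it is repetitive label plumbing:

```yaml
# The near-minimum YAML for deploying one stateless app.
# Note the app: hello label repeated three times just to wire
# the Deployment to its own pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello                # hypothetical workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello             # must match the pod template labels below
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.25    # placeholder image
        ports:
        - containerPort: 80
```

And this still has to be fed through the command line (kubectl apply -f hello.yaml) before anything runs, which is the friction the paragraph above is complaining about.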
Kubernetes wants to control everything.
My final complaint about Kubernetes is that it is not designed to play well with other types of systems. It wants to be the only platform you use to deploy and manage applications.
That is fine if all your workloads are containerized and can be orchestrated by Kubernetes. But what if you have a legacy application that cannot run as a container? Or what if you want to run part of a workload on a Kubernetes cluster and another part outside it? Kubernetes provides no native functionality for such things. It is designed on the assumption that everyone wants to run everything inside containers, all the time.
© 2024 shulou.com SLNews company. All rights reserved.