2025-01-16 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
This article is a quick-start introduction to K8s. Through a business-development story, it explains why K8s came to exist and how it works. I hope it leaves you with an entry-level understanding of the topic.
It is aimed at all technical readers, especially front-end developers.
0 Preface
In the second half of last year I changed teams and began working with Kubernetes. Although my understanding of K8s is still far from complete, I would very much like to share what I have learned, and I hope this article gives you an entry-level understanding of K8s. If anything in it is wrong, I would appreciate corrections from more experienced readers.
There are already plenty of K8s articles online, and the official Kubernetes documentation is very approachable, so a head-on explanation of K8s would not improve on what is already out there. Instead I want to approach it from a different angle: a business-development story that shows why K8s appeared and how it works.
This should suit anyone working in technology, front-end developers in particular. Front-end engineering has advanced rapidly in recent years, and I believe the problems K8s solves today, and the forms its solutions take, will eventually appear in the front-end world as well. After all, engineering in different fields tends to converge on the same destinations.
1 The story begins
As living standards in China keep rising, almost every household owns a car. Xiao Wang predicted that within five years the vehicle-scrapping business would grow rapidly. In 2019 the state also issued a new policy, the "Administrative Measures for the Recycling of Scrapped Motor Vehicles," which removed the "special industry" status of vehicle scrapping and recycling and opened the sector to market competition.
Xiao Wang saw this as a good opportunity to start a business, so he brought me and several like-minded partners together to build a platform called "Taoche."
2 The story develops
Taoche started as an all-in-one Java application deployed on a physical machine ("Xiao Wang, what year is it? Go learn about Aliyun"). As the business grew, the machine could barely cope, so we kept upgrading the server's specification, going from 64C256G all the way to 160C1920G. The cost was a bit high, but at least the system held up.
After another year of growth, even 160C1920G could no longer keep up, and we had to split the system into services and make it distributed. To solve the various problems that came with the distributed transformation, we introduced a series of middleware such as HSF, TDDL, Tair, Diamond, and MetaQ. After a difficult architectural overhaul, we successfully split the all-in-one Java application into several smaller applications, retracing the path Alibaba took through middleware development and the "de-IOE" movement.
After the distributed transformation we were once again managing more servers, and the servers came in different batches, with different hardware specifications, operating system versions, and so on, which caused all sorts of problems in application operations and maintenance.
Fortunately, virtual machine technology can mask these differences in the underlying hardware and software: even though the hardware varies, applications all see the same environment. But virtualization also introduces significant performance overhead.
So why not use Docker instead? Docker is built on native Linux technologies such as cgroups and namespaces, which mask the underlying differences without a noticeable performance penalty, which is a genuinely good thing. Moreover, delivering the business as Docker images made our CI/CD pipeline very easy to operate.
However, as the number of Docker containers grew, we faced a new problem: how do we schedule large numbers of containers and manage the communication between them? After all, as the business grew, Taoche was no longer a small company. We were running thousands of Docker containers, and at the current growth rate we would soon pass ten thousand.
So we had to build a system that could automatically manage the servers (whether each one is healthy, how much memory and CPU it has available, and so on), pick the most suitable server to create each container on, based on the CPU and memory the container declares it needs, and also control communication between containers (for example, the internal services of one department should not be reachable from another department's containers).
Let's give this system a name: a container orchestration system.
3 The container orchestration system
So here is the question: given a pile of servers, how do we implement a container orchestration system?
Suppose we have implemented it. Then some of our servers will run the orchestration system itself, and the rest will run our business containers. We call a server running the orchestration system a master node, and a server running business containers a worker node.
Since the master node is responsible for managing the server cluster, it must provide management interfaces: one side faces the operations administrators, who use it to operate on the cluster, and the other side interacts with the worker nodes, for things like resource allocation and network management.
We call the component on the master that provides this management interface kube-apiserver. We also need two clients to talk to the apiserver: one for the cluster's operations administrators, which we call kubectl, and one running on each worker node, which we call kubelet.
Now the operations administrators, the master nodes, and the worker nodes can all interact. For example, an administrator sends a command to the master through kubectl: "create 1,000 containers from the Taoche User Center version 2.0 image." After receiving this request, the master must run a scheduling computation over the resource information of the worker nodes in the cluster, figure out which workers those 1,000 containers should be created on, and then send creation instructions to the chosen workers. We call the component responsible for this scheduling kube-scheduler.
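The scheduling step just described can be sketched in a few lines. This is a toy illustration only, not the real kube-scheduler algorithm: filter out workers that cannot fit the container's requested resources, then score the survivors and pick the best. All field and function names here are made up for the example.

```python
# Toy scheduler: filter nodes that fit the requested resources, then score.
# Illustrative only -- not a real K8s API or the real scheduling algorithm.

def schedule(workers, cpu_request, mem_request):
    """Return the name of the best worker for one container, or None."""
    # Filtering: drop workers that cannot satisfy the request.
    feasible = [
        w for w in workers
        if w["free_cpu"] >= cpu_request and w["free_mem"] >= mem_request
    ]
    if not feasible:
        return None
    # Scoring: prefer the worker with the most resources left over after
    # placement, which spreads load across the cluster.
    best = max(
        feasible,
        key=lambda w: (w["free_cpu"] - cpu_request) + (w["free_mem"] - mem_request),
    )
    best["free_cpu"] -= cpu_request
    best["free_mem"] -= mem_request
    return best["name"]

workers = [
    {"name": "worker-a", "free_cpu": 4, "free_mem": 8},
    {"name": "worker-b", "free_cpu": 16, "free_mem": 32},
]
placements = [schedule(workers, cpu_request=2, mem_request=4) for _ in range(3)]
```

The real kube-scheduler generalizes exactly this shape into pluggable filtering and scoring phases.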
But how does the master know the resource usage on each worker and the state of its containers? That is simple: the kubelet on each worker can periodically report the node's resource usage and container status, and the master stores this data for use in scheduling and container management. As for how to store the data, we could write files, use a database, and so on, but there is an open-source storage system called etcd that meets our requirements for data consistency and high availability, is easy to install, and performs well, so let's choose etcd.
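The reporting loop above boils down to: each kubelet pushes a status document, and the master persists it under a per-node key. A minimal sketch, with an in-memory dict standing in for etcd and all names invented for the example:

```python
# Sketch of kubelet status reporting. An in-memory dict stands in for etcd;
# the key layout and field names are illustrative, not real K8s objects.
import json
import time

store = {}  # stand-in for etcd: key -> JSON document

def kubelet_report(node_name, cpu_used, mem_used, containers):
    """What a kubelet would send to the apiserver on each heartbeat."""
    status = {
        "node": node_name,
        "cpu_used": cpu_used,
        "mem_used": mem_used,
        "containers": containers,
        "heartbeat": time.time(),
    }
    # The apiserver persists it under a node-scoped key, much like the
    # /registry/nodes/<name> keys a real cluster keeps in etcd.
    store[f"/nodes/{node_name}"] = json.dumps(status)

kubelet_report("worker-a", cpu_used=2.5, mem_used=6.0,
               containers=["user-center-2.0-abc"])
latest = json.loads(store["/nodes/worker-a"])
```

With this data in one place, both the scheduler and the controllers can read a consistent view of the cluster.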
Now that we have live data about every worker node and container, there is a lot we can do. For example, suppose we have created 1,000 containers from the Taoche User Center version 2.0 image, five of which run on worker node A. If node A suddenly suffers a hardware failure and becomes unavailable, the master should remove A from the set of available workers, reschedule the five User Center 2.0 containers that were running on it onto other available workers so that the total is restored to 1,000, and adjust the network configuration of the affected containers so that communication between containers still works. We call this family of components controllers, for example the node controller, the replica controller, and the endpoint controller, and we provide a unified component to run them, called the Controller Manager (kube-controller-manager).
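The replica controller's job is a reconcile loop: compare the desired replica count with what is actually running on healthy nodes, and create replacements for the difference. A minimal sketch, with all names invented and node selection simplified to round-robin (in a real cluster the scheduler picks the node):

```python
# Toy replica-controller reconcile loop. Illustrative only; in K8s the
# scheduler, not the controller, decides where replacements land.

def reconcile(desired_replicas, running, healthy_nodes, create_container):
    """running: list of (container_id, node); returns the surviving set."""
    # Containers on dead nodes no longer count toward the desired total.
    alive = [(cid, node) for cid, node in running if node in healthy_nodes]
    missing = desired_replicas - len(alive)
    for i in range(missing):
        # Round-robin placement for brevity.
        node = healthy_nodes[i % len(healthy_nodes)]
        alive.append((create_container(node), node))
    return alive

created = []
def create_container(node):
    cid = f"new-{len(created)}"
    created.append((cid, node))
    return cid

running = [("c1", "worker-a"), ("c2", "worker-b"), ("c3", "worker-b")]
# worker-a has failed, so c1 must be replaced on a healthy node.
state = reconcile(3, running, healthy_nodes=["worker-b", "worker-c"],
                  create_container=create_container)
```

Every K8s controller follows this same "observe, diff against desired state, act" pattern; only the resource being reconciled changes.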
So how does the master implement and manage network communication between containers? First, each container needs a unique IP address, through which containers can reach each other. But containers that talk to each other may run on different worker nodes, which brings in network communication between worker nodes, so each worker node also needs a unique IP address. Containers communicate using container IPs, however, and a container knows nothing about worker node IPs, so each worker node needs routing and forwarding information for container IPs, which can be implemented with technologies such as iptables or IPVS. Then, if a container's IP changes or the number of containers changes, the relevant iptables or IPVS configuration must be adjusted accordingly, so we need a component on each worker node that watches for such changes and adjusts the routing and forwarding configuration. We call this component kube-proxy (for ease of understanding, we will not expand into how kube-proxy relates to Services here).
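The watch-and-adjust loop just described can be sketched as: derive the wanted rule set from the current endpoints, and rewrite the local rules only when they differ. Real kube-proxy programs iptables or IPVS; here the rules are plain strings, and all names are illustrative:

```python
# Toy kube-proxy sync loop: regenerate forwarding rules when the set of
# container endpoints changes. Rules are strings here, not real iptables.

def render_rules(endpoints):
    """endpoints: {container_ip: node_ip}; returns forwarding rules."""
    # One rule per container: traffic for the container IP is forwarded
    # via the worker node that actually hosts it.
    return [
        f"forward {container_ip} -> via-node {node_ip}"
        for container_ip, node_ip in sorted(endpoints.items())
    ]

def sync_if_changed(current_rules, endpoints):
    """One iteration of the watch loop: rewrite rules only on change."""
    wanted = render_rules(endpoints)
    if wanted != current_rules:
        return wanted, True   # rules were rewritten
    return current_rules, False

rules, changed = sync_if_changed([], {"10.1.0.5": "192.168.0.11"})
# The container moved to another node: the rule set must be regenerated.
rules, changed2 = sync_if_changed(rules, {"10.1.0.5": "192.168.0.12"})
```

Only rewriting on change matters in practice: reprogramming thousands of iptables rules on every heartbeat would be far too expensive.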
We have now solved network communication between containers, but when writing code we want to call a service through a domain name or a VIP, not through a container IP that can change at any time. So we need to wrap the concept of a Service on top of container IPs: a Service can be a cluster VIP or a cluster domain name. For the latter, we also need a DNS name-resolution service inside the cluster.
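The Service idea reduces to a stable name in front of a changing set of backends. A minimal sketch, assuming a round-robin pick over the current container IPs (class and method names are invented for the example; real cluster DNS and Services are far richer):

```python
# Toy Service registry: a stable name maps to a changing set of container
# IPs, with round-robin selection. Illustrative only.
import itertools

class ServiceRegistry:
    def __init__(self):
        self._backends = {}   # service name -> list of container IPs
        self._cursors = {}    # service name -> round-robin iterator

    def update(self, service, container_ips):
        """Called when containers behind a service come and go."""
        self._backends[service] = list(container_ips)
        self._cursors[service] = itertools.cycle(self._backends[service])

    def resolve(self, service):
        """What 'calling user-center by name' boils down to."""
        return next(self._cursors[service])

registry = ServiceRegistry()
registry.update("user-center", ["10.1.0.5", "10.1.0.6"])
picks = [registry.resolve("user-center") for _ in range(4)]
```

Callers only ever see the name "user-center"; when containers are rescheduled, only `update` runs again and client code is untouched.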
In addition, although we already have kubectl and can interact with the master comfortably, a web management interface would certainly be even better. Beyond that, we may also want to see container resource metrics, the running logs of the cluster's components, and so on.
Components like DNS, the web management interface, container resource monitoring, and cluster logging, which improve the overall experience, are collectively called add-ons.
So far, we have successfully designed a container orchestration system. Let's briefly recap the components mentioned above:
Master components: kube-apiserver, kube-scheduler, etcd, kube-controller-manager
Node components: kubelet, kube-proxy
Add-ons: DNS, Web UI (dashboard), container resource monitoring, cluster-level logging
These are also the key parts of K8s. Of course, K8s is a production-grade container orchestration system, and each component mentioned here deserves a long discussion of its own; this article is only a brief introduction and will not expand further.
4 A Serverless container orchestration system
Although we have successfully implemented a container orchestration system and it is very comfortable to use, President Wang of Taoche (no longer "Xiao Wang") feels that the company's R&D and operations costs for this orchestration system are too high, and he wants to reduce them. He wonders whether there is an orchestration system that lets employees focus on business development without worrying about cluster operations and maintenance at all. After asking around in his technical circle, he found that the concept of Serverless matched his idea exactly, so he began thinking about when to build a Serverless container orchestration system.
That is all for this quick start to K8s. I hope the content above has been of some help and lets you learn something new. If you found the article worthwhile, please share it so more people can see it.