2025-02-24 Update From: SLTechnology News&Howtos > Servers
Shulou (Shulou.com) 05/31 Report --
This article explains the key Kubernetes concepts, pods and services, in terms that are simple, clear, and easy to learn and understand.
As containers gradually attracted enterprise attention, the focus shifted to container orchestration tools. In production, complex workloads need to be scheduled, orchestrated, scaled, and managed. Docker makes it easy to manage containers and their lifecycles on a single host operating system, but because containerized workloads run across multiple hosts, we need tools that manage both the individual containers and the hosts they run on.
This is where Docker Datacenter, Mesosphere DC/OS, and Kubernetes play an important role. They let developers and operators treat a cluster of machines as if it were a single machine. DevOps teams submit work to the container orchestration engine (COE) through an application programming interface (API), a command-line interface (CLI), or specialized tools, and the COE manages the application's lifecycle.
Hosted, cluster-level COEs are delivered as Containers as a Service (CaaS). Examples of CaaS include Google GCE, Rackspace's Carina, Amazon EC2 Container Service, Azure Container Service, and Joyent Triton.
Kubernetes, an open source cluster manager and container orchestration engine, is a streamlined descendant of Borg, Google's internal data center management tool. In 2015, the first KubeCon conference celebrated the release of version 1.1 and its new features.
I wrote an article comparing the COE market to the commercial Hadoop ecosystem. Many startups and platforms are trying to capture the enterprise COE market. Kubernetes stands out thanks to the maturity that comes from Google's experience operating web-scale workloads. Based on my own experience, I want to highlight the features that make Kubernetes a standard choice for containers.
Pods: the new virtual machine
Containers built for microservices have a unique property: they run one, and only one, process at a time. It is common for a virtual machine to run a full-stack LAMP application, but the same application has to be split into at least two containers, one running Apache with PHP and the other running MySQL. If you add Memcached or Redis to the stack, they too need to run in separate containers.
This pattern changes how applications are configured. For example, a cache container should stay closely tied to its web container: when the web tier scales out by running additional containers, the cache containers need to scale with it. When a request reaches a web container, it first checks the corresponding cache container for the data; on a miss, it falls back to a MySQL query. This design, which pairs a web container with its cache container and places them on the same host, is called co-location.
If Kubernetes is the new operating system, then a pod is the new process.
In Kubernetes, a pod makes it easy to group multiple containers into a single deployment unit. The containers are co-scheduled on the same host and share resources such as the network stack, storage, and the node's filesystem. Each pod gets a private IP address shared by all of the containers in it. Beyond that, every container running in the same pod has the same hostname, so they can be treated as a unit.
When a pod is scaled, all of its containers scale as a group. This design bridges the gap between virtualized applications and containerized applications: each container still runs a single process, yet the containers can easily be grouped and treated as a unit. In the world of microservices and Kubernetes, the pod is therefore the new virtual machine. Even when only one container needs to be deployed, it is packaged as a pod.
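As a minimal sketch of this co-location idea, the following pod manifest places a web container and a cache sidecar in one deployment unit; the names, images, and ports are illustrative, not taken from the original article:

```yaml
# Hypothetical pod co-locating a PHP/Apache web container with a
# Memcached cache sidecar. Both containers share the pod's IP address,
# so "web" can reach the cache at localhost:11211.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-cache
  labels:
    tier: frontend
spec:
  containers:
    - name: web
      image: php:apache        # serves the application on port 80
      ports:
        - containerPort: 80
    - name: cache
      image: memcached:alpine  # reachable from "web" over localhost
      ports:
        - containerPort: 11211
```

Scaling this pod replicates both containers together, which is exactly the pairing described above.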
Pods also maintain the separation between development and deployment. While developers focus on their code, operators decide what goes into a pod: they assemble the relevant containers and stitch them together in a pod definition. This gives great portability, because no special packaging of the containers is required. Simply put, a pod is a manifest that composes multiple container images.
If Kubernetes is the new operating system, then a pod is the new process. As pods become more popular, we will see developers and operators share pod manifests that compose multiple container images. Helm, created by Deis, is an example of a marketplace for packaged Kubernetes pods.
Services: easily discoverable endpoints
An important difference between monolithic applications and microservices is the way dependencies are discovered. A monolith can refer to a fixed IP address or a DNS entry, whereas a microservice has to discover its dependencies before calling them. Because containers and pods may be placed on any node, and get a new IP address each time they are restarted, tracking endpoints becomes quite difficult. Developers have to query a discovery backend such as etcd, Consul, ZooKeeper, or SkyDNS, which requires code-level changes for the application to work correctly.
Kubernetes's built-in service discovery removes this burden. A Service in Kubernetes is a well-defined endpoint for a set of pods. The endpoint stays the same even when the pods are moved to other nodes or restarted.
Multiple pods running across multiple nodes of the cluster can be exposed as a single Service. This is an essential building block of microservices. A Service manifest uses labels and selectors to define and group the pods that run as the microservice.
For example, all Apache web server pods that run on any node in the cluster and match the label "frontend" become part of the service. The result is a layer of abstraction: many pods exposed behind one endpoint on the cluster. The Service has an IP address, a port, and, of course, a name; clients can reach it by either the IP address or the name. This flexibility also makes it easier to migrate legacy applications to containers.
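A sketch of such a Service, selecting the pods by label, might look like the following; the "frontend" label comes from the example above, while the Service name and ports are illustrative:

```yaml
# Hypothetical Service grouping all pods labeled tier=frontend
# behind one stable name and IP address.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    tier: frontend      # matches every pod carrying this label
  ports:
    - port: 80          # the Service's stable port
      targetPort: 80    # forwarded to each pod's container port
```

Pods that are later rescheduled or restarted keep matching the selector, so the endpoint never changes from the client's point of view.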
If multiple pods share the same endpoint, how is traffic spread across them evenly? This is where the load balancing built into Services comes in, and it is a key differentiator between Kubernetes and other COEs. Kubernetes has a lightweight internal load balancer that routes traffic to all of the pods participating in a service.
Services can be exposed in three ways: internally, externally, and through a load balancer.
Internal: certain services, such as databases and cache endpoints, do not need to be exposed publicly. They are consumed only by other pods inside the application. These services are exposed through an IP address that is reachable only within the cluster, not from the outside world. By hiding sensitive services behind an endpoint available only to internal dependencies, Kubernetes adds an extra layer of security for private pods.
External: services that front web servers or other publicly accessible pods are exposed through an external endpoint, available on every node through a specific port.
Load balancer: in scenarios where the cloud provider offers an external load balancer, a service can be wired to it. For example, pods may receive traffic through an Elastic Load Balancer (ELB) on AWS or through Google GCE's HTTP load balancer. This feature lets third-party load balancers integrate with Kubernetes services.
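In a Service manifest, these three exposure modes map to the `type` field. A sketch, reusing the illustrative "frontend" selector from above:

```yaml
# The three exposure modes correspond to Service types:
#   ClusterIP    (default) - internal-only, reachable inside the cluster
#   NodePort     - exposed on a fixed port of every node
#   LoadBalancer - provisions an external load balancer from the cloud provider
apiVersion: v1
kind: Service
metadata:
  name: web-public
spec:
  type: LoadBalancer    # change to ClusterIP or NodePort as needed
  selector:
    tier: frontend
  ports:
    - port: 80
      targetPort: 80
```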
Kubernetes takes over discovery and load balancing for microservices, rescuing developers and operators from wrangling the complex plumbing of the underlying infrastructure. Developers can reach services through standard hostnames or environment variables and focus on their code, without writing extra logic such as service registration and discovery.
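To illustrate that no discovery code is needed, the following hypothetical client pod reaches a service simply by its name, relying on cluster DNS rather than hard-coded IPs (the service name "frontend" and the image are assumptions for the example):

```yaml
# A client pod can refer to the "frontend" Service by name alone;
# cluster DNS resolves it to the Service's stable IP address.
apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  containers:
    - name: app
      image: busybox
      # Fetch the frontend page by service name, then stay alive.
      command: ["sh", "-c", "wget -qO- http://frontend && sleep 3600"]
```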
Thank you for reading. That concludes this overview of the Kubernetes concepts of pods and services; a deeper understanding comes from verifying them in practice.
© 2024 shulou.com SLNews company. All rights reserved.