2025-01-17 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This article explains the uses and characteristics of Kubernetes. Readers who are interested may wish to take a look; the material is straightforward and practical.
Contents
1. Uses of Kubernetes
2. Characteristics of Kubernetes
3. Introduction to Container Technology
4. What Can Kubernetes Do?
5. Benefits of Using Kubernetes
6. Understanding the Architecture
1. Uses of Kubernetes
Kubernetes is an open-source platform and container cluster management system that automates the deployment, scaling, and maintenance of container clusters. It enables:
Rapid deployment of applications
Rapid scaling of applications
Seamless rollout of new application features
Resource savings through optimized use of hardware
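As an illustration of this declarative deployment and scaling, here is a minimal sketch of a Deployment manifest (the names and image are placeholders, not taken from this article):

```yaml
# Declares three replicas of a web server; Kubernetes deploys them
# and keeps that many running. Names and image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Applying it with `kubectl apply -f web-deployment.yaml` creates the three replicas; changing `replicas` (or running `kubectl scale deployment web --replicas=5`) scales the application up or down.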
2. Characteristics of Kubernetes
Portable: supports public cloud, private cloud, hybrid cloud, and multi-cloud deployments
Extensible: modular, plug-in, mountable, combinable, supporting various forms of extension
Automation: automatic deployment, automatic restart, automatic replication, automatic scaling / extension, providing powerful self-healing capabilities through declarative syntax
Kubernetes was created by Google in 2014 and is managed as an open-source project; it is the open-source descendant of Borg, the large-scale container management technology Google had used internally for roughly ten years.
3. Introduction to Container Technology
Kubernetes uses Linux container technology, such as Docker or rkt, to provide isolation for applications.
Containers allow you to run multiple services on the same machine, not only providing different environments for each service, but also isolating them from each other. Containers are similar to virtual machines, but with much less overhead.
A container is just a single process isolated on the host; it consumes only the resources the application itself uses, without the overhead of any additional processes.
All containers call into the same kernel, so there are potential security risks.
Container isolation mechanisms
There are two mechanisms available: the first is the Linux namespace, which allows each process to see only its own system view (files, processes, network interfaces, hostnames, etc.). The second is the Linux control group (cgroups), which limits the amount of resources that processes can use (CPU, memory, network bandwidth, etc.)
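The cgroups half of this isolation is what a pod's `resources` section maps onto. A hypothetical pod spec (all names and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limited-app
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:        # used by the scheduler when placing the pod
        cpu: 250m
        memory: 128Mi
      limits:          # enforced on the node via cgroups
        cpu: 500m
        memory: 256Mi
```

The kubelet asks the container runtime to translate these limits into cgroup settings on the node, capping the CPU and memory the container's process can consume.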
Docker container image layers
Container image layers are read-only. When a container runs, a new writable layer is created on top of the image layers. When the process writes to a file in a lower layer, a copy of that file is created in the top layer, and the process writes to the copy.
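This copy-on-write behavior can be sketched in a few lines of Python (a conceptual model, not Docker's actual implementation):

```python
# Conceptual sketch: a container's filesystem as a stack of read-only
# image layers plus one per-container writable layer. Reads search
# top-down; a write copies the file up into the writable layer.

class LayeredFS:
    def __init__(self, image_layers):
        # image_layers: list of dicts, bottom layer first; all read-only
        self.image_layers = image_layers
        self.writable = {}  # this container's writable top layer

    def read(self, path):
        # Search from the top (writable) layer down to the base layer.
        if path in self.writable:
            return self.writable[path]
        for layer in reversed(self.image_layers):
            if path in layer:
                return layer[path]
        raise FileNotFoundError(path)

    def write(self, path, data):
        # Copy-on-write: image layers are never modified; the new version
        # of the file lives only in this container's writable layer.
        self.writable[path] = data

base = {"/etc/os-release": "debian"}
app = {"/app/config": "v1"}
fs = LayeredFS([base, app])
fs.write("/app/config", "v2")  # the copy goes to the writable layer
# fs.read("/app/config") -> "v2"; the shared image layer still holds "v1"
```

This is why many containers can share the same image layers on disk: each container only stores the files it has changed.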
Limits on the portability of container images
A containerized application compiled on top of a specific hardware architecture can only run on machines with the same hardware architecture
Advantages of containers
Agile application creation and deployment: compared with virtual machine images, container images are easier and faster to create, improving the efficiency of hardware use
Continuous development, integration and deployment: provide reliable and frequent container image construction and deployment, which can be easily and quickly rolled back (due to image immutability)
Focus on the separation of development and operations: create an application container image at build / release time to separate the application from the infrastructure
Consistency across development, test, and production environments: the application runs the same on a laptop as in the cloud
Cloud and operating system portability: runs on Ubuntu, RHEL, CoreOS, on-premises, Google Container Engine, and anywhere else
Application-centric management: raises the level of abstraction from running an operating system on virtual hardware to running an application on an operating system using logical resources
Loosely coupled, distributed, flexible, micro-services: applications are divided into smaller, more independent parts that can be dynamically deployed and managed, rather than giant monolithic applications running on dedicated mainframes
Resource isolation: by isolating applications, you can easily predict application performance
Resource utilization: high efficiency and high density
4. What Can Kubernetes Do?
Basically, Kubernetes can schedule and run application containers on clusters of physical or virtual machines. It also allows developers to cut loose from physical and virtual machines and move from host-centric infrastructure to container-centric infrastructure, which offers all the advantages inherent to containers. Kubernetes provides the infrastructure to build a truly container-centric development environment.
Kubernetes satisfies many common needs of applications running in production:
Pods for composing applications from co-located containers while preserving the one-application-per-container model
Mount external storage
Secret management
Application health checks
Replication of application instances
Horizontal autoscaling
Service discovery
Load balancing
Rolling updates
Resource monitoring
Log collection and storage
Support for introspection and debugging
Authentication and authorization
This provides the simplicity of Platform as a Service (PaaS) with the flexibility of Infrastructure as a Service (IaaS), and enables portability across infrastructure providers.
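Two of the capabilities above, health checks and service discovery with load balancing, can be sketched in a single manifest (the names, image, and probe path are placeholders, not from this article):

```yaml
# A pod with a liveness probe (health checking) plus a Service that
# discovers it by label and load-balances across matching pods.
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
  labels:
    app: probed-app
spec:
  containers:
  - name: app
    image: nginx:1.25
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: probed-app
spec:
  selector:
    app: probed-app
  ports:
  - port: 80
    targetPort: 80
```

If the probe fails repeatedly, the container is restarted; clients inside the cluster reach the pod simply by connecting to the `probed-app` service name.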
5. Benefits of Using Kubernetes
Simplify application deployment
Make better use of hardware
Health check and self-repair
Automatic capacity expansion
Simplify application development
6. Understanding the Architecture
The Kubernetes cluster is divided into two parts:
Kubernetes control plane
(work) node
Components of the control plane:
Etcd distributed persistent storage
API server
Scheduler
Controller Manager
These components store and manage the state of the cluster, but they are not what runs the application containers
Components running on the worker node:
Kubelet
The Kubernetes Service Proxy (kube-proxy)
The container runtime (Docker, rkt, or others)
Add-ons:
Kubernetes DNS server
Dashboard
Ingress controller
Heapster (Container Cluster Monitoring)
Container Network Interface (CNI) plugin
Etcd
All created objects (pods, ReplicationControllers, services, secrets, and so on) need to be stored somewhere in a persistent manner so that their manifests aren't lost when the API server restarts or fails. For this, Kubernetes uses etcd.
Etcd is a fast, distributed, and consistent key-value store. Because it's distributed, multiple etcd instances can be run for high availability and better performance.
The only component that talks to etcd directly is the Kubernetes API server. All other components read and write data in etcd indirectly, through the API server. This brings several benefits: among them, a more robust optimistic locking system and validation, and, by hiding the actual storage mechanism from the other components, making it much simpler to replace in the future. It's worth emphasizing that etcd is the only place Kubernetes stores cluster state and metadata.
API server
The Kubernetes API server is the central component used by all other components and by clients. It provides a CRUD (Create, Read, Update, Delete) interface for querying and modifying the cluster state through a RESTful API, and it stores that state in etcd.
In addition to providing a consistent way of storing objects in etcd, the API server also validates those objects, so clients can't store improperly configured objects (which they could if they wrote to the store directly). Along with validation, it also handles optimistic locking, so that in the event of concurrent updates, changes to an object are never overridden by other clients.
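That optimistic-locking scheme can be sketched in a few lines of Python (an illustration of the idea only, not the API server's actual code; all names here are invented): every object carries a resourceVersion that must match the stored one for an update to succeed.

```python
# Sketch of optimistic locking: an update is rejected unless the
# client's resourceVersion matches the one currently in the store.

class ConflictError(Exception):
    pass

class Store:
    def __init__(self):
        self.objects = {}  # key -> (resource_version, spec)

    def create(self, key, spec):
        self.objects[key] = (1, spec)

    def get(self, key):
        return self.objects[key]  # returns (resource_version, spec)

    def update(self, key, spec, resource_version):
        current_version, _ = self.objects[key]
        if resource_version != current_version:
            # Someone else updated the object since this client read it;
            # the client must re-read and retry.
            raise ConflictError(f"stale resourceVersion {resource_version}")
        self.objects[key] = (current_version + 1, spec)

store = Store()
store.create("pods/web", {"image": "nginx:1.25"})
ver, spec = store.get("pods/web")
store.update("pods/web", {"image": "nginx:1.26"}, ver)  # succeeds
# a second update with the old `ver` would now raise ConflictError
```

A client that hits a conflict simply re-reads the object, reapplies its change, and retries; no client's change is silently lost.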
One of the API server's clients is the command-line tool kubectl. The API server also supports watching resources for changes.
Scheduler
You don't usually specify which cluster node a pod should run on; that job is left to the Scheduler. From a high level, the Scheduler's operation is relatively simple: it uses the API server's watch mechanism to wait for newly created pods and assigns a node to each new pod that doesn't already have one.
The Scheduler doesn't instruct the selected node to run the pod. All it does is update the pod definition through the API server. The API server then notifies the Kubelet that the pod has been scheduled. As soon as the Kubelet on the target node sees the pod has been scheduled to its node, it creates and runs the pod's containers.
Although scheduling looks simple from a high level, the task of selecting the best node for a pod is not simple at all. The simplest way to schedule is to pick a node at random, regardless of the pods already running on it. At the other end of the spectrum, the Scheduler could use advanced techniques such as machine learning to anticipate what kinds of pods will be scheduled in the coming minutes or hours and use that to maximize hardware utilization without rescheduling already-running pods. Kubernetes' default Scheduler implementation sits somewhere between these two extremes.
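The watch-and-assign loop described above can be sketched as follows (a toy policy that picks the node with the most free capacity; the names and the capacity model are invented for illustration):

```python
# Toy scheduling pass: find pods with no node assigned, pick the node
# with the most free capacity, and record the assignment. In the real
# system the assignment is written back through the API server.

def schedule(pods, nodes):
    """pods: name -> assigned node or None; nodes: name -> free capacity."""
    for pod, node in pods.items():
        if node is not None:
            continue  # already scheduled
        # Simplest reasonable policy: most free capacity wins.
        best = max(nodes, key=nodes.get)
        pods[pod] = best   # the scheduler only updates the pod definition;
        nodes[best] -= 1   # the Kubelet on `best` actually starts it
    return pods

pods = {"web-1": None, "web-2": None, "db-1": "node-a"}
nodes = {"node-a": 2, "node-b": 3}
schedule(pods, nodes)
```

Note that the function never touches the nodes themselves; like the real Scheduler, it only records an assignment and leaves the actual container startup to the node's agent.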
Controller Manager
The API server only stores resources in etcd and notifies clients about changes, and the Scheduler only assigns nodes to pods, so active components are needed to make sure the actual state of the system converges toward the desired state described through the API server. This work is done by the controllers running inside the Controller Manager.
A single Controller Manager process currently combines a number of controllers performing different, non-conflicting tasks. Eventually these controllers will be split into separate processes, enabling each of them to be replaced with a custom implementation if necessary.
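The converge-to-desired-state pattern that every controller follows can be sketched as follows (illustrative only, loosely modeled on a replica-count controller; the function and names are invented):

```python
# Reconciliation sketch: compare desired state with observed state and
# act to converge them, creating missing replicas or deleting surplus ones.

def reconcile(desired_replicas, running_pods, make_name):
    """Return the pod list adjusted toward the desired replica count."""
    pods = list(running_pods)
    while len(pods) < desired_replicas:
        pods.append(make_name(len(pods)))  # create missing replicas
    while len(pods) > desired_replicas:
        pods.pop()                         # delete surplus replicas
    return pods

pods = reconcile(3, ["web-0"], lambda i: f"web-{i}")
# -> ["web-0", "web-1", "web-2"]
```

In the real system this loop is driven by watch notifications from the API server rather than being called directly, but the core logic is the same: observe, compare, act.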
The controller includes:
Replication Manager (Manager for ReplicationController Resources)
ReplicaSet,DaemonSet and Job controller
Deployment controller
StatefulSet controller
Node controller
Service controller
Endpoints controller
Namespace controller
PersistentVolume controller
Other
Kubelet
The Kubelet is the component responsible for everything running on a worker node. Its first job is to register the node by creating a Node resource in the API server. It then continuously monitors the API server for pods that have been scheduled to its node and starts the pod's containers. It does this by telling the configured container runtime to run a container from a specific container image. The Kubelet then constantly monitors the running containers and reports their status, events, and resource consumption to the API server.
The Kubelet is also the component that runs the container liveness probes and restarts containers when a probe reports an error. Lastly, when a pod is deleted from the API server, the Kubelet terminates its containers and notifies the server that the pod has terminated.
Kube-proxy
Every worker node runs kube-proxy, whose purpose is to make sure clients can connect to the services you define through the Kubernetes API.
Kube-proxy makes sure connections to a service's IP and port end up at one of the pods backing that service. When a service is backed by more than one pod, the proxy performs load balancing across those pods.
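The load-balancing idea can be sketched as a round-robin choice among a service's pod endpoints (a conceptual model; real kube-proxy implements this with iptables or IPVS rules rather than application code, and the addresses here are made up):

```python
# Sketch: a service IP maps to several pod endpoints, and each new
# connection is routed to the next endpoint in turn (round-robin).
import itertools

class ServiceProxy:
    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def route(self):
        # Each new connection to the service IP lands on the next pod.
        return next(self._cycle)

proxy = ServiceProxy(["10.0.0.4:8080", "10.0.0.7:8080"])
targets = [proxy.route() for _ in range(4)]
# -> alternates between the two pod endpoints
```

When the service's Endpoints resource changes (pods come and go), the real proxy rewrites its rules so new connections only reach healthy pods.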
Kubernetes add-ons
DNS server
All pods in the cluster are configured by default to use the cluster's internal DNS server. This allows pods to easily look up services by name, and even the IP addresses of the pods behind a headless service.
The DNS server pod is exposed through the kube-dns service, allowing the pod to be moved around the cluster like any other pod. The service's IP address is specified as the nameserver in the /etc/resolv.conf file of every container deployed in the cluster. The kube-dns pod uses the API server's watch mechanism to observe changes to Services and Endpoints and updates its DNS records with every change, so its clients always get fairly up-to-date DNS information. It's only fairly up-to-date because, in the interval between a Service or Endpoints resource changing and the kube-dns pod receiving the notification, the DNS records may be momentarily invalid.
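For example, a headless service is simply a Service with `clusterIP: None`; a DNS lookup of its name returns the pod IPs directly instead of a single service IP (the name and port here are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-db
spec:
  clusterIP: None   # headless: DNS resolves to the pod IPs directly
  selector:
    app: my-db
  ports:
  - port: 5432
```

A client resolving `my-db.default.svc.cluster.local` then receives one A record per backing pod and can connect to each one individually.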
Ingress controller
The Ingress controller runs a reverse proxy server and keeps it configured according to the Ingress, Service, and Endpoints resources defined in the cluster. The controller therefore needs to observe those resources and update the proxy server's configuration every time one of them changes.
Although an Ingress resource's definition points to a Service, the Ingress controller forwards traffic directly to the service's pods without going through the service IP. Because the client IP is preserved when external clients connect through the Ingress controller, this makes the controller preferable to a Service in certain use cases.
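A minimal Ingress resource of the kind such a controller watches might look like this (the host, service name, and port are placeholders, shown using the `networking.k8s.io/v1` schema):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
```

The controller reads this rule, looks up the Endpoints of the `web` service, and configures its reverse proxy to route HTTP traffic for `example.com` straight to those pod IPs.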
Other add-ons similarly watch the cluster state and perform the appropriate actions when it changes.
At this point, you should have a deeper understanding of the uses and characteristics of Kubernetes. Why not try it out in practice? Follow us to keep learning!