2025-02-27 Update From: SLTechnology News&Howtos (shulou.com)
This article provides a detailed introduction to Kubernetes: its history, its core objects and concepts, and a hands-on walkthrough using Rancher.
Kubernetes: history
Kubernetes, also known as K8s ("K", followed by eight letters, then "s") or kube, comes from the Greek word for governor, helmsman, or captain. In real seafaring, large ships carry great numbers of containers, and the captain or helmsman is the person responsible for the ship. In the context of information technology, then, Kubernetes is the captain, the orchestrator, of Docker containers.
Kubernetes was originally used internally by Google. Drawing on Google's 15 years of experience running containers, it was released to the community as an open source project by Google in 2014. In the years since, Kubernetes has developed rapidly: it has been downloaded and adopted at an astonishing rate, is used by a large number of small and medium-sized enterprises in development and production environments, and has become the industry-recognized standard framework for container orchestration and management.
The development momentum of Kubernetes
Kubernetes is growing at an amazing pace, with more than 40,000 stars on GitHub, more than 60,000 commits in 2018, and more pull requests and issues than any other project on GitHub. Part of the reason behind this growth is its extraordinary scalability and powerful design patterns, which we will interpret further below. You can learn about the Kubernetes adoption stories of some large software companies at this link:
https://kubernetes.io/case-studies/
Services provided by Kubernetes
Let's take a look at the features and capabilities that make Kubernetes attract so much attention from the industry.
The core of Kubernetes is a container-centric management environment. It orchestrates computing, network, and storage infrastructure on behalf of user workloads. This provides the simplicity of Platform as a Service (PaaS) and the flexibility of Infrastructure as a Service (IaaS), along with portability across infrastructure providers. Kubernetes is more than just an orchestration system; in fact, it eliminates the need for "orchestration" in the traditional sense. The technical definition of "orchestration" is "executing a defined workflow": first run A, then B, then C. Kubernetes instead consists of a set of independent, composable control processes that continuously drive the current state toward the desired state; how you get from A to C does not matter. Users no longer need centralized control, and the system as a whole becomes easier to use, more powerful, and more scalable.
The basic concept of Kubernetes
To use Kubernetes, you describe the desired state of the cluster with Kubernetes API objects: the applications or services you want to run, the container images they use, the number of replicas, the network and disk resources you want to provision, and so on. You set the desired state by creating objects through the Kubernetes API, usually via kubectl, the command-line interface. You can also interact with the Kubernetes API directly to set or modify the desired state.
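As a sketch of what such a desired-state description looks like, here is a minimal Pod manifest (the object name and image tag are illustrative), which could be applied with a command like kubectl apply -f nginx-pod.yaml:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx            # illustrative object name
spec:
  containers:
  - name: nginx
    image: nginx:1.25    # illustrative image tag
    ports:
    - containerPort: 80  # port the container listens on
```

Once applied, the control plane works to make the cluster match this description.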
After you set the desired state, the Kubernetes control plane works to make the cluster's current state match it. To do so, Kubernetes performs a variety of tasks automatically, such as starting or restarting containers, scaling the number of replicas of a given application, and so on.
Basic Kubernetes objects include:
Node
Pod
Service
Volume
Namespace
In addition, Kubernetes contains many higher-level abstractions called controllers. Controllers build on the basic objects and provide additional features and conveniences. They include:
ReplicaSet
Deployment
StatefulSet
DaemonSet
Job
Below we will introduce these concepts one by one, and then try some hands-on exercises.
Node
A Node is a worker machine in Kubernetes, formerly known as a minion. A node may be a virtual machine (VM) or a physical machine, depending on the cluster. Each node contains the services needed to run Pods and is managed by the master components. You can think of it this way: nodes are to Pods what a hypervisor is to virtual machines.
Pod
A Pod is the basic building block of Kubernetes, the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which may consist of a single container or a small number of tightly coupled containers that share resources.
Docker is the most commonly used container runtime in Kubernetes Pods, but Pods support other container runtimes as well.
Pods in a Kubernetes cluster are used mainly in two ways. The first is to run a Pod with a single container. This "one-container-per-Pod" pattern is the most common Kubernetes use case; here you can think of the Pod as a wrapper around a single container, with Kubernetes managing the Pod rather than the container directly. The second is to run a Pod with multiple containers that need to work together. A Pod may encapsulate an application composed of multiple tightly coupled containers that need to share resources. These co-located containers can form a single cohesive unit of service: for example, one container serves files from a shared volume, while a separate "sidecar" container refreshes or updates those files. The Pod wraps these containers and storage resources into a single manageable entity.
A Pod provides two kinds of shared resources for its constituent containers: networking and storage.
Networking: each Pod is assigned a unique IP address. All containers in a Pod share a network namespace, including the IP address and network ports. Containers within a Pod can communicate with one another over localhost. When containers in a Pod communicate with entities outside the Pod, they must coordinate how they use shared network resources such as ports.
Storage: a Pod can specify a set of shared storage volumes. All containers in the Pod can access the shared volumes, which allows them to share data. Volumes also let persistent data in a Pod survive the restart of any of its containers.
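To illustrate both kinds of sharing and the sidecar pattern described above, here is a sketch of a two-container Pod (all names, images, and paths are illustrative): the web container serves files from a shared volume that the sidecar keeps refreshing, and the two containers could also reach each other over localhost.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo           # illustrative name
spec:
  volumes:
  - name: shared-data          # volume shared by both containers
    emptyDir: {}
  containers:
  - name: web                  # serves files from the shared volume
    image: nginx:1.25
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: content-refresher    # "sidecar" that updates the shared files
    image: busybox:1.36
    command: ["sh", "-c", "while true; do date > /data/index.html; sleep 10; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
```

Because both containers mount the same volume, whatever the sidecar writes to /data is immediately visible to nginx under its document root.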
Service
Kubernetes Pods are mortal: they are created, they die, and they are not resurrected. Even though each Pod has its own IP address, you cannot rely on it staying the same over time. This raises a question: if one set of Pods (say, the back end) provides functionality to another set of Pods (say, the front end) inside the Kubernetes cluster, how can the front-end Pods maintain reliable communication with the back-end Pods?
This is where services come into play.
A Kubernetes Service defines a logical set of Pods and a policy by which to access them (a pattern sometimes called a microservice), usually determined by a Label Selector.
For example, suppose you have a back-end application with three Pods. Those Pods are interchangeable, and the front end does not care which back end it uses. Although the actual set of Pods making up the back end may change over time, the front-end clients should not need to know that, nor track the list of back ends themselves. The Service abstraction enables this decoupling.
For applications in the same Kubernetes cluster, Kubernetes offers a simple Endpoints API that is updated whenever the set of Pods in a Service changes. For applications outside the cluster, Kubernetes offers a virtual-IP-based bridge to Services that redirects traffic to the back-end Pods.
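A minimal Service manifest might look like the following sketch (names, labels, and ports are illustrative); the selector is what ties the Service to its set of back-end Pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend          # illustrative Service name
spec:
  selector:
    app: backend         # Label Selector: targets Pods labeled app=backend
  ports:
  - protocol: TCP
    port: 80             # port the Service exposes inside the cluster
    targetPort: 8080     # port the back-end containers listen on
```

Front-end Pods can then reach the back end at a stable name and IP, regardless of which individual Pods currently match the selector.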
Volume
The ephemerality of on-disk files in a container causes problems for applications running in containers. First, when a container crashes, Kubernetes restarts it, but the files are lost because the container starts from a clean state. Second, when multiple containers run in one Pod, they often need to share files. The Kubernetes Volume exists to solve both problems.
In essence, a volume is just a directory, possibly containing some data, that is accessible to the containers in a Pod. How that directory comes to be, the medium that backs it, and its contents are determined by the particular volume type used.
A Kubernetes volume has an explicit lifetime, the same as that of the Pod that encloses it. Consequently, a volume outlives any individual container running in the Pod, and its data is preserved across container restarts. Generally, when a Pod ceases to exist, the volume ceases to exist too. Kubernetes supports many types of volumes, and a Pod can use any number of them simultaneously.
Namespace
Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces.
Namespaces provide a scope for names. Resource names must be unique within a namespace, but not across namespaces.
There is no need to use multiple namespaces just to separate slightly different resources, such as different versions of the same software: labels can be used to distinguish resources within the same namespace.
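Creating a namespace and placing an object into it is a one-line affair in each manifest; a sketch (the namespace and object names are illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging          # illustrative namespace name
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: staging     # places this Pod in the staging namespace
spec:
  containers:
  - name: nginx
    image: nginx:1.25
```

A Pod named nginx could then also exist in another namespace, such as production, without any name conflict.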
ReplicaSet
A ReplicaSet ensures that a specified number of Pod replicas are running at any given time. In other words, a ReplicaSet makes sure that a Pod, or a homogeneous set of Pods, is always up and available. However, Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with many other useful features. Therefore, unless you need custom update orchestration or no updates at all, I recommend using Deployments instead of using ReplicaSets directly.
In practice this means you may never need to manipulate ReplicaSet objects directly; use a Deployment instead.
Deployment
The Deployment controller provides declarative updates for Pods and ReplicaSets.
You describe the desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
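A sketch of a Deployment manifest (names, labels, and the image tag are illustrative) shows how the replica count and Pod template come together:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment   # illustrative name
spec:
  replicas: 3              # desired number of Pod replicas
  selector:
    matchLabels:
      app: nginx           # must match the template labels below
  template:                # Pod template the managed ReplicaSet stamps out
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```

Changing the image tag or the replica count in this object and re-applying it is all a declarative update requires; the controller handles the rollout.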
StatefulSet
A StatefulSet is used to manage stateful applications. It manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of those Pods.
A StatefulSet operates in the same pattern as other controllers: you define the desired state in a StatefulSet object, and the StatefulSet controller makes whatever updates are needed to get from the current state to the desired state. Like a Deployment, a StatefulSet manages Pods based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. The Pods are created from the same spec but are not interchangeable: each has a persistent identifier that it keeps across any rescheduling.
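A StatefulSet manifest looks much like a Deployment, with the addition of a serviceName field referring to a headless Service that gives each Pod a stable network identity; a sketch (all names are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web       # headless Service providing stable per-Pod DNS names
  replicas: 3            # Pods get ordinal identities: web-0, web-1, web-2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```

If web-1 is rescheduled to another node, it comes back as web-1, keeping its identity.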
DaemonSet
A DaemonSet ensures that all (or some) nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet cleans up the Pods it created.
Some typical uses of DaemonSet are:
Run a cluster storage daemon, such as glusterd or ceph, on each node.
Run a log-collection daemon, such as fluentd or logstash, on each node.
Run a node-monitoring daemon, such as Prometheus Node Exporter or collectd, on each node.
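For instance, a log-collection daemon could be sketched as a DaemonSet like the following (the image tag and names are illustrative); note there is no replicas field, because the node count determines how many Pods run:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system   # system daemons conventionally live here
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluentd:v1.16   # illustrative image tag
```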
Job
A Job creates one or more Pods and ensures that a specified number of them terminate successfully. As Pods complete successfully, the Job tracks the successful completions. When the specified number is reached, the Job itself is complete. Deleting a Job cleans up the Pods it created.
A simple use case is to create one Job object in order to reliably run one Pod to completion. If the first Pod fails or is deleted (for example, because of a node hardware failure or a node reboot), the Job object starts a new Pod.
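A sketch of such a run-to-completion Job (the name and computation are illustrative; the Pi calculation is a common demonstration task):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi                 # illustrative name
spec:
  completions: 1           # how many Pods must finish successfully
  backoffLimit: 4          # retries before the Job is marked failed
  template:
    spec:
      restartPolicy: Never # Jobs require Never or OnFailure
      containers:
      - name: pi
        image: perl:5.34
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```

If the Pod fails, the Job controller creates a replacement, up to the backoff limit.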
Challenges in practice
Now that you know the key objects and concepts in Kubernetes, it is clear that working with Kubernetes requires mastering a lot of information. When you try to use Kubernetes, you may encounter challenges such as the following:
How do you deploy consistently across different infrastructures?
How do you implement and manage access control across multiple clusters (and namespaces)?
How to integrate with the central authentication system?
How to partition clusters to use resources more efficiently?
How to manage multi-tenancy, multiple dedicated and shared clusters?
How to create a highly available cluster?
How to enforce security policies across clusters / namespaces?
How to monitor well to ensure that there is sufficient visibility to detect and resolve problems?
How to keep up with the rapid development of Kubernetes?
This is where Rancher can help. Rancher is a 100% open source container management platform for running Kubernetes in production. With Rancher, you get:
An easy-to-use interface for Kubernetes configuration and deployment
Infrastructure management across multiple clusters and clouds
Automatic deployment of the latest version of Kubernetes
Workload, RBAC, policy, and project management
24x7 enterprise support
Rancher acts as a single point of control from which you can run multiple Kubernetes clusters on multiple infrastructures:
Get started with Rancher and Kubernetes
Now let's see how easily you can work with the Kubernetes objects described above with the help of Rancher. First, you need a Rancher instance. Follow this guide to start one and use it to create a Kubernetes cluster in a few minutes:
https://rancher.com/docs/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/
After starting the cluster, you should see the resources of the Kubernetes cluster in Rancher:
To start with the first Kubernetes object, the node, click Nodes on the top menu. You should see an overview of the Nodes that make up your Kubernetes cluster:
There you can also see the number of Pods deployed to each node of the Kubernetes cluster. These Pods are used by Kubernetes and Rancher's internal systems; usually you do not have to deal with them.
Let's continue with the Pod example. To do this, go to the Default project of the Kubernetes cluster and open the Workloads tab. Let's deploy a workload: click Deploy, set Name and Docker image to nginx, leave the default values for everything else, and click Launch.
Once created, the Workloads tab displays the nginx workload.
If you click the nginx workload, you will see that under the hood Rancher created a Deployment, which manages a ReplicaSet as Kubernetes recommends, and you will also see the Pod created by that ReplicaSet:
You now have a Deployment that ensures our desired state is correctly reflected in the cluster. Click the + sign near Scale to scale the Workload to 3. Once you do, you should immediately see two more Pods created, and two more ReplicaSet scaling events. Using the menu to the right of one of the Pods, try deleting it, and notice how the ReplicaSet recreates it to match the desired state.
Your application is now up and running and scaled to three instances. The next question is: how do you access it? Here we will use the next Kubernetes object, the Service. To expose our nginx workload, we need to edit it: select Edit from the menu to the right of the Workload. You will see the Deploy Workload page, already filled in with your nginx workload's details:
Notice that Scalable Deployment now shows three Pods, although the default value was 1 when you launched it. This is because the scaling you performed has just completed.
Now click Add Port and populate the values as follows:
Set the Publish the container port value to 80
Protocol is still TCP
Set the As a value to Layer-4 Load Balancer
Set the On listening port value to 80.
Then click Upgrade. This will create an external load balancer with your cloud provider and direct traffic to the nginx Pods inside your Kubernetes cluster. To test it, visit the nginx workload's Overview page again; you should now see an 80/tcp link next to Endpoints:
Clicking 80/tcp takes you to the external IP of the load balancer you just created, where a default nginx page confirms that everything works as expected.
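For reference, the load balancer Rancher configured through this form corresponds to a Kubernetes Service of type LoadBalancer. A rough hand-written equivalent might look like the following sketch (the name and selector labels are illustrative; Rancher generates its own workload labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb         # illustrative name
spec:
  type: LoadBalancer     # asks the cloud provider for an external load balancer
  selector:
    app: nginx           # illustrative; must match the workload's Pod labels
  ports:
  - protocol: TCP
    port: 80             # listening port on the load balancer
    targetPort: 80       # published container port
```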
At this point, you have worked with most of the Kubernetes objects introduced in the first half of the article. You can also experiment with volumes and namespaces in Rancher; I'm sure you will quickly learn how to use them with Rancher's help. As for StatefulSet, DaemonSet, and Job, they are very similar to Deployments: you can create them by selecting the Workload type in the Workloads tab.
Conclusion
Let's review what you did in the hands-on exercise above. You have created most of the Kubernetes objects we describe:
1. You created a Kubernetes cluster in Rancher
2. You browsed the Nodes of the cluster
3. You created a Workload
4. You saw that a Workload actually creates three separate Kubernetes objects: a Deployment that manages a ReplicaSet, which in turn keeps the desired number of Pods running
5. You scaled your Deployment and observed how it changed the ReplicaSet and scaled the number of Pods accordingly
6. Finally, you created a Service of type Load Balancer to balance client requests among the Pods
This concludes our detailed introduction to Kubernetes. Theory works best when paired with practice, so go and try it for yourself!