What are the knowledge points of Kubernetes?

2025-04-05 Update From: SLTechnology News&Howtos

Shulou(Shulou.com)06/02 Report--

This article explains the main knowledge points of Kubernetes. The explanations are simple and clear and easy to follow.

K8s objects and components

Node (worker node)

Runs application workloads (a production environment requires at least three worker nodes)

Several Pods can run on a single Node

Runs three key services:

Kubelet: the cluster agent on each node; manages the node and relays communication with the master

Kube Proxy: maintains the network rules that back the Service layer

Container Runtime: interacts with the container runtime

Master (management node)

Responsible for cluster scheduling

Runs critical services:

Kube API Server: exposes the Kubernetes API

Etcd: consistent and highly available data store for the cluster

Kube Scheduler: schedules Pods to run on the appropriate Node

Kube Controller Manager

Kube DNS

ReplicaSet

Runs on the master node; the next generation of the Replication Controller

Mainly used by Deployments to manage and schedule Pods

Pod (container group)

Created on a Node; not persistent

Contains a set of closely related containers that share storage volumes, the allocated cluster IP, and runtime information, and can communicate with each other through localhost

The basic management unit of a containerized application; can accept Endpoint requests
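The Pod concepts above can be sketched as a minimal manifest; the name, labels, and image here are illustrative placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod         # hypothetical name
  labels:
    app: demo
spec:
  containers:
    - name: web
      image: nginx:1.25  # placeholder image
      ports:
        - containerPort: 80
```

Containers in the same Pod share a network namespace, which is why they can reach each other via localhost.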

Deployment

Deploys Pod container groups and monitors and manages them

Implemented by the Controller Manager on the master node
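As a sketch of how a Deployment wraps a Pod template and delegates replica management to a ReplicaSet (all names and the image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deploy      # hypothetical name
spec:
  replicas: 3            # the underlying ReplicaSet keeps 3 Pods running
  selector:
    matchLabels:
      app: demo
  template:              # the Pod template managed by the Deployment
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: web
          image: nginx:1.25   # placeholder image
```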

Service

Abstracts a set of backend Pods and provides a stable service entry point, implementing service discovery and load balancing inside the cluster; can also expose services externally.

Assigned a cluster IP (a virtual IP; it does not respond to ping)

Service object types

ClusterIP: the default; the service is reachable only on the cluster-internal IP (kube-proxy creates iptables rules to forward the traffic to the Pods; the IP cannot be pinged directly because no actual network device is bound to it)

NodePort: a static port on every worker node in the cluster is NAT-mapped to the ClusterIP service (exposes the service externally; the port range is 30000-32767)

LoadBalancer: on supported cloud vendors, automatically creates an L4 load balancer that routes to a NodePort service (exposes the service externally)

ExternalName: exposes a service via a string, based on the CNAME mechanism

Port categories:

TargetPort: the port the Pod opens

Port: the virtual service port the Service opens; its Endpoints are the TargetPorts of the underlying Pods

NodePort: the externally facing port of the service on each node
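The three port kinds can be seen together in one hypothetical NodePort Service manifest (all names are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc         # hypothetical name
spec:
  type: NodePort
  selector:
    app: demo            # selects the backend Pods
  ports:
    - port: 8080         # Port: the Service's virtual port
      targetPort: 80     # TargetPort: the port the Pod opens
      nodePort: 30080    # NodePort: static node port in 30000-32767
```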

From simple to complex, services fall into three categories:

Stateless services: a ReplicaSet maintains the Pods and a Service exposes the interface

Ordinary stateful services: state is preserved through Volumes and Persistent Volumes

Stateful cluster services:

Stable storage obtained via PV/PVC

Stable network identity obtained via a Headless Service

Ordinal naming rules

Init Container: initialization containers (there can be more than one; they run in sequence, and the main container starts after they finish)

StatefulSet

Used to manage and deploy stateful applications
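A sketch tying the stateful-cluster ideas together: a Headless Service for stable network identity plus a StatefulSet with per-Pod storage claims (all names, the image, and sizes are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-headless
spec:
  clusterIP: None          # headless: gives each Pod a stable DNS identity
  selector:
    app: db
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: demo-headless
  replicas: 3              # Pods get ordinal names: db-0, db-1, db-2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: redis:7   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:    # stable storage: one PVC per Pod
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```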

DaemonSet

Ensures that the specified Pod is always running on each selected node

Ingress

Serves as the public-network entry point to the cluster's backend services; a more advanced exposure model than Service NodePort and the like.

Functions include: L7 load balancing outside the cluster plus service discovery, reverse proxying, SSL termination, and virtual-host header routing

Services are exposed only on the standard ports 80 and 443. HTTP access rules can be configured, including host and path.

Resides on the control-plane node and does not occupy host port resources on worker nodes
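A minimal Ingress with host and path rules might look like this (the hostname and backend service name are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
    - host: app.example.com   # host rule
      http:
        paths:
          - path: /api        # path rule
            pathType: Prefix
            backend:
              service:
                name: demo-svc   # hypothetical backend Service
                port:
                  number: 8080
```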

IngressController

Continuously queries the Kubernetes API to perceive changes to backend services and Pods in real time (Traefik does not need this step; it interacts with K8s directly)

Refreshes the load balancer configuration according to the Ingress rules, implementing service discovery

Volume

Implemented as plug-ins, with strong extensibility

Volume: cannot be created separately; not an independent resource object

Block storage

Distributed file systems

EmptyDir: an empty directory bound to the Pod lifecycle but outliving any single container (disk or memory can be specified and a storage limit can be set; similar to an internal docker volume declaration)

HostPath: mounts an existing directory of the host, which exists independently of the Pod (similar to an external docker volume declaration)

Single-node storage is based on the local directory of the node where the Pod runs, and is often used for temporary data or for sharing data between containers in a Pod

Cross-node storage requires an external storage provider
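The emptyDir and hostPath variants can both be illustrated in one hypothetical Pod (names and paths are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
    - name: app
      image: busybox:1.36     # placeholder image
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
        - name: hostlogs
          mountPath: /var/log/host
  volumes:
    - name: scratch
      emptyDir:
        sizeLimit: 100Mi      # optional storage limit; medium: Memory would use RAM instead of disk
    - name: hostlogs
      hostPath:
        path: /var/log        # existing host directory, independent of the Pod
```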

Persistent Volume: can be created separately; an independent resource object

Static creation: manually create a pool of PVs for PVCs to bind

Dynamic creation: based on a StorageClass, the storage system creates volumes automatically according to PVC requirements

Storage drivers: mainstream distributed storage such as Ceph RBD or GlusterFS, or simple and convenient NFS (Aliyun's NAS storage service can be used directly; it supports the NFS protocol)

Reclaim policies:

Retain: keep everything; K8s does nothing

Delete: K8s deletes the PV and the data in it

Recycle: K8s deletes the data in the PV, and the PV becomes Available again

A PersistentVolumeClaim binds a volume to a Pod, moving the PV from the Available state to the Bound state

After the PV is released it changes to the Released state, and the corresponding reclaim policy is applied
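A PVC that triggers dynamic creation through a StorageClass could look like this (the class name "standard" is an assumption; the available classes depend on the cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc               # hypothetical name
spec:
  storageClassName: standard   # assumed class; triggers dynamic provisioning
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

A Pod then references the claim by name in its volumes section; the matching PV moves from Available to Bound.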

Cluster creation approaches

Minikube: a lightweight K8s setup scheme

Docker prerequisites:

Docker Daemon

Docker Machine

Install minikube and kubectl:

```shell
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
  && chmod +x minikube && sudo mv minikube /usr/local/bin/
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl \
  && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
```

Write the following to /init_minikube.sh and make it executable:

```shell
#!/bin/bash
export MINIKUBE_WANTUPDATENOTIFICATION=false
export MINIKUBE_WANTREPORTERRORPROMPT=false
export MINIKUBE_HOME=$HOME
export CHANGE_MINIKUBE_NONE_USER=true
export KUBECONFIG=$HOME/.kube/config
mkdir -p $HOME/.kube && touch $HOME/.kube/config
minikube start --vm-driver=none   # the none driver mounts the k8s cluster directly onto the host
```

```shell
sudo su -                         # the none driver requires a root account for management operations
/init_minikube.sh
minikube dashboard --url          # opens the k8s dashboard, by default on port 30000
minikube service frontend --url   # show the address of a service
# add-on management
minikube addons list
minikube addons enable <addon-name>
```

Installation troubleshooting

minikube logs shows the error log

Check whether images such as gcr.io/google_containers/pause-amd64 are downloaded successfully.

K8S Cluster Management

```shell
# Management commands (resource types: nodes, pods, deployments, events, service, ing, all)
kubectl version
kubectl cluster-info                              # display cluster information
kubectl get <resource-type> [-l key=value] [-n <namespace> | --all-namespaces]   # list resources, e.g. worker nodes
kubectl config view                               # view configuration
kubectl describe <resource-type>                  # resource details
kubectl logs <resource-name>                      # container logs
kubectl label <resource-type> <resource-name> key=value

# Common commands
kubectl get pod -o wide/yaml                      # check pod nodes
kubectl get services <service-name> -o yaml       # check service details
kubectl delete pod --grace-period=0 --force <pod-name>   # immediately force-delete a pod

# Execute commands in a container
kubectl exec -ti <pod-name> [-c <container-name>] -- <command>
kubectl exec <pod-name> [-c <container-name>] -- <command>

# ConfigMap management
kubectl create configmap <name> --from-file=<config-file-path>
kubectl get configmap <name> -o yaml

# Deploy an application to a Pod
kubectl create secret docker-registry regcred --docker-server= --docker-username= --docker-password= --docker-email=   # install registry credentials
# method 1:
kubectl run <deployment-name> --image=<image> --port=8080 --replicas=<n> --labels="key=value"
# method 2:
kubectl create|apply -f deployment.yaml
kubectl get deployments
kubectl get pods

# Update an application
kubectl set image deployments/<deployment-name> <deployment-name>=<image>
kubectl rollout status deployments/<deployment-name>   # view update status
# Roll back an application
kubectl rollout undo deployments/<deployment-name>
# Scale an application
kubectl scale deployments/<deployment-name> --replicas=<n>

# Expose a pod as a service
kubectl expose deployment/<deployment-name> --name=<service-name> --type=NodePort|LoadBalancer --port=<port>
kubectl get services                              # view the service's open address
# Take a service offline
kubectl delete service -l key=value
kubectl delete deployment -l key=value

# Prevent non-worker nodes from being scheduled to run pods
kubectl taint node <node-name> node-role.kubernetes.io/<node-name>="":NoSchedule
```

Health checks

Process-level check: verify that the Docker Daemon service is active

Application-level checks:

HTTP: a status code from 200 to 399 is healthy

Container Exec: executes a command in the container; exit code 0 means healthy

TCP Socket: attempts a socket connection to the container
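The three application-level checks correspond to probe types in a Pod spec; a sketch with placeholder paths and ports:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
    - name: app
      image: nginx:1.25        # placeholder image
      livenessProbe:
        httpGet:               # HTTP check: status 200-399 is healthy
          path: /healthz       # hypothetical endpoint
          port: 80
      readinessProbe:
        tcpSocket:             # TCP check: a successful socket connection is healthy
          port: 80
      startupProbe:
        exec:                  # exec check: exit code 0 is healthy
          command: ["cat", "/etc/nginx/nginx.conf"]
```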

Rancher

V1 supported K8S, Mesos, and Swarm; V2 instead fully supports only K8S

Catalog: the application market built into Rancher

Cattle: the orchestration and scheduling framework used by Rancher itself

Installation

Open firewall ports:

SSH: 22/tcp

RancherServer: 8443/tcp, 8080/tcp

K8S: 6443/tcp (ApiServer), 10250/tcp (Kubelet API), 10251/tcp (Scheduler), 10252/tcp (Control), 10255/tcp (Control), 10256/tcp (Kube-proxy), 30000-32767/tcp (NodePort)

VXLAN: 4789/udp

IPSec: 500/udp, 4500/udp

Etcd: 2379/tcp, 2380/tcp

Canal: 80/tcp, 443/tcp

Flannel: 8285/udp, 8472/udp, 2375/udp

Enable IPv4 route forwarding (not needed on CentOS 7.4+)

Add a line to /etc/sysctl.conf:

net.ipv4.ip_forward = 1

Docker preparation

Docker v17.03-ce is the highest version supported on RancherServer and the cluster nodes

```shell
sudo yum install -y --setopt=obsoletes=0 docker-ce-17.03.2.ce-1.el7.centos docker-ce-selinux-17.03.2.ce-1.el7.centos
```

HTTPS certificate preparation

```shell
docker run -it --rm -p 443:443 -p 80:80 --name certbot \
  -v "/etc/letsencrypt:/etc/letsencrypt" \
  -v "/var/lib/letsencrypt:/var/lib/letsencrypt" \
  certbot/certbot certonly -n -v --standalone --agree-tos --email=admin@rancher.example.com -d rancher.example.com
cd /etc/letsencrypt
sudo ln -s live/rancher.example.com/fullchain.pem cert.pem
sudo ln -s live/rancher.example.com/privkey.pem key.pem
```

Node machine adjustment

Creating an RKE cluster in custom mode places requirements on node hostnames.

```shell
# hostnames must match the regex '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'
sudo hostnamectl set-hostname k8s-worker-1.cluster-a
sudo hostnamectl status
```

To speed up pulls from the private image registry, add a hosts entry on each node machine mapping the registry hostname to its LAN IP.

Compose orchestration service

```yaml
version: '2'
services:
  rancher:
    image: rancher/server:preview
    container_name: rancher
    hostname: rancher
    restart: always
    ports:
      - '8443:8443'
      - '8080:8080'
    volumes:
      - /srv/rancher:/var/lib/rancher
      - /etc/letsencrypt:/etc/rancher/ssl
    entrypoint: rancher --http-listen-port=8080 --https-listen-port=8443
    command: --acme-domain rancher.example.com
```

Start the service

```shell
docker pull rancher/server:preview
docker-compose up -d rancher
docker logs -f rancher   # follow the rancher initialization status
```

Configuration

Default account password admin:admin

Log in to the system and change the password

Create a cluster

Create the cluster in custom mode with the canal network

Control and etcd nodes require at least 1 core and 2 GB of memory (if cluster nodes go offline, troubleshoot the machine load)

Configure Registries for the private image repository

Debug

```shell
# RancherServer debug
docker logs -f rancher
# K8sNode debug
journalctl -xf -u docker
docker logs kubelet
```

That covers the main Kubernetes knowledge points; specific usage should be verified in practice.
