
Introduction to the basic knowledge of K8s

2025-04-01 Update From: SLTechnology News&Howtos

This article introduces the basic knowledge of k8s (Kubernetes). It is intended as a practical reference.

Automated operation and maintenance

Platforms (ansible, git, gitlab, docker, jenkins, ELK)

Automation (tools / scripts)

Containerization (docker / k8s)

Virtualization

Differences between docker and virtual machines

More efficient use of resources: a virtual machine is a complete system, while a container is just a process.

A virtual machine starts slowly; a container starts in seconds.

A consistent runtime environment.

Well suited to CI/CD.

Container core concept

Image

(1) An image is a special file system. Besides providing the programs, libraries, resources, configuration, and other files a container needs to run, it also contains configuration parameters prepared for the runtime (such as environment variables). An image contains no dynamic data, and its content does not change after it is built.

(2) Images use a union file system (such as AUFS) to implement a layered structure. An image is built layer by layer, with each layer serving as the foundation of the next. Once a layer is built it never changes; any change made in a later layer happens only in that layer. The image itself is read-only: creating a new image leaves the underlying layers unchanged and simply adds a new layer on top.

Container

(1) A container can be thought of as one execution of an image: a read-write instance of it. A container is just a process on the operating system, and the process exits when it finishes. Do not assume a container will exist forever; assume it can crash at any time. Once a container fails, delete it without hesitation and start another one.

(2) Docker best practices require that a container write no data to its own storage layer; the container storage layer should remain stateless. All file writes should go to data volumes (Volume) or bind-mounted host directories. Reads and writes at these locations bypass the container storage layer and go directly to the host (or network storage), giving higher performance and stability. A data volume's life cycle is independent of the container: the container dies, but the data volume does not.
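As a minimal sketch of this practice (the pod name is hypothetical, and `emptyDir` is just the simplest volume type), the definition below mounts a volume so that MySQL's data files land outside the container's storage layer:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql-with-volume        # hypothetical name
spec:
  containers:
  - name: mysql
    image: 192.168.4.254:5000/mysql
    volumeMounts:
    - name: data                 # must match a volume name declared below
      mountPath: /var/lib/mysql  # writes here bypass the container storage layer
  volumes:
  - name: data
    emptyDir: {}                 # simplest option; tied to the pod's lifetime
```

Note that an `emptyDir` volume is deleted with the pod; for data that must outlive the pod, a `hostPath` or network-storage volume would be used instead.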

Repository

A repository is an image repository, responsible for managing Docker images (similar to a warehouse administrator). Each repository can contain multiple tags, and each tag corresponds to one image. You can also build your own private repository.

K8s composition

A K8s cluster mainly consists of two parts:

A Master node (master node): controls and schedules the containers; usually a physical server

A group of Node nodes (compute nodes): carry the workload and run the actual containers; they can be physical servers or cloud virtual machines (CVMs)

Additionally:

Pod is the most basic operating unit of Kubernetes. A Pod represents a process running in the cluster and encapsulates one or more closely related containers. Besides Pod, K8s also has the concept of Service: a Service can be regarded as the external access interface to a group of Pods that provide the same service.
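A hedged sketch of these two concepts (all names here are hypothetical): a Pod wrapping one container, and a Service that forwards traffic to every Pod carrying the label the Service selects:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # hypothetical name
  labels:
    app: demo               # the Service below selects this label
spec:
  containers:
  - name: web
    image: 192.168.4.254:5000/tomcat-app
    ports:
    - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: demo-svc            # hypothetical name
spec:
  selector:
    app: demo               # traffic goes to pods labeled app: demo
  ports:
  - port: 8080
```

The label/selector link is the only connection between the two objects; the Service never names Pods directly.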

K8s architecture diagram

A brief K8s workflow chart

K8s configuration and its simple application

1. Install k8s on node11 (192.168.4.11), which serves as both the master and a docker node

2. Modify the docker configuration file so that it can use the private image repository on the physical machine

[root@node11 k8s_pkgs]# vim /etc/sysconfig/docker
OPTIONS='--selinux-enabled=false'        # disable selinux

[root@node11 k8s_pkgs]# vim /etc/docker/daemon.json
{
    "insecure-registries": ["192.168.4.254:5000"]
}

[root@room9pc01 images]# docker images   # the physical machine (192.168.4.254) serves as the private image registry, holding the following images

REPOSITORY TAG IMAGE ID CREATED SIZE

192.168.8.254:5000/mysql latest 7bb2586065cd 10 weeks ago 476.9 MB

192.168.8.254:5000/pod-infrastructure latest 99965fb98423 19 months ago 208.6 MB

192.168.8.254:5000/guestbook-php-frontend latest 47ee16830e89 2 years ago 510 MB

192.168.8.254:5000/tomcat-app latest a29e200a18e9 2 years ago 358.2 MB

192.168.8.254:5000/redis-master latest 405a0b586f7e 3 years ago 419.1 MB

192.168.8.254:5000/guestbook-redis-slave latest e0c36a1fa372 3 years ago 109.5 MB


3. Start related services

k8s-master:

etcd: the database that holds the state of the entire cluster

kube-apiserver: the sole entry point for resource operations; provides authentication, authorization, access control, API registration and discovery

kube-controller-manager: maintains the state of the cluster, handling fault detection, automatic scaling, rolling updates, etc.

kube-scheduler: responsible for resource scheduling; dispatches Pods to the appropriate machines according to the configured scheduling policy

k8s-node:

kubelet: the node agent (k8s client) that receives instructions; maintains container life cycles and manages Volumes (CVI) and networking (CNI)

kube-proxy: provides service discovery and load balancing inside the cluster for Services

docker: the actual container runtime, responsible for image management and for really running Pods and containers

[root@node11 k8s_pkgs]# rpm -q docker
docker-1.13.1-91.git07f3374.el7.centos.x86_64

# components of master

[root@node11 kubernetes]# systemctl start etcd
[root@node11 kubernetes]# systemctl enable etcd
[root@node11 k8s_pkgs]# systemctl start kube-apiserver.service
[root@node11 k8s_pkgs]# systemctl enable kube-apiserver.service
[root@node11 k8s_pkgs]# systemctl start kube-controller-manager.service
[root@node11 k8s_pkgs]# systemctl enable kube-controller-manager.service
[root@node11 k8s_pkgs]# systemctl start kube-scheduler.service
[root@node11 k8s_pkgs]# systemctl enable kube-scheduler.service

# components of node
[root@node11 k8s_pkgs]# systemctl start docker
[root@node11 k8s_pkgs]# systemctl enable docker
[root@node11 k8s_pkgs]# systemctl start kubelet.service
[root@node11 k8s_pkgs]# systemctl enable kubelet.service
[root@node11 k8s_pkgs]# systemctl start kube-proxy.service
[root@node11 k8s_pkgs]# systemctl enable kube-proxy.service

4. Modify the configuration and restart the service

[root@node11 k8s_pkgs]# ls /etc/kubernetes/
apiserver  config  controller-manager  kubelet  proxy  scheduler

[root@node11 k8s_pkgs]# vim /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"    # allow any host to access this machine
# also delete ServiceAccount from KUBE_ADMISSION_CONTROL (remove the service-account admission check)
[root@node11 k8s_pkgs]# systemctl restart kube-apiserver.service

[root@node11 k8s_pkgs]# vim /etc/kubernetes/config
KUBE_MASTER="--master=http://192.168.4.11:8080"       # the master's address

[root@node11 k8s_pkgs]# vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"                   # listen on all addresses so the master can manage this node remotely
KUBELET_API_SERVER="--api-servers=http://192.168.4.11:8080"   # the api-server's address
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.4.254:5000/pod-infrastructure"   # pull the pod (container group) infrastructure image from the private registry

[root@node11 k8s_pkgs]# systemctl restart kubelet

5. Application test: tomcat + mysql

[root@node11 tomcat_mysql]# vim mysql-rc.yaml    # create an RC declaration file for mysql
apiVersion: v1
kind: ReplicationController          # resource type
metadata:
  name: mysql                        # RC name
spec:
  replicas: 1                        # keep exactly one pod labeled app: mysql
  selector:                          # selects the pods labeled app: mysql
    app: mysql
  template:                          # pod template used when the pod count falls short
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: 192.168.4.254:5000/mysql    # private image registry
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"
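ReplicationController is the legacy replication mechanism; on current Kubernetes versions the same intent is usually written as a Deployment (API group apps/v1). A sketch of the equivalent object, offered as an alternative to, not part of, the setup above:

```yaml
apiVersion: apps/v1
kind: Deployment                 # modern replacement for ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:                 # Deployments require an explicit matchLabels selector
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: 192.168.4.254:5000/mysql
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"
```

Beyond replication, a Deployment also provides rolling updates and rollbacks, which an RC does not.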

# create the resources from the yaml file
[root@node11 tomcat_mysql]# kubectl create -f mysql-rc.yaml    # create the RC named mysql
[root@node11 tomcat_mysql]# kubectl get rc                     # list RCs
NAME      DESIRED   CURRENT   READY   AGE
mysql     1         1         1       1h

[root@node11 tomcat_mysql]# kubectl get pod    # the pod (container group) contains an infrastructure container and a mysql container
NAME          READY   STATUS    RESTARTS   AGE
mysql-v0pdr   1/1     Running   0          1h

# once the pod appears, inspect its details
[root@node11 tomcat_mysql]# kubectl describe pod mysql-v0pdr   # view pod details
[root@node11 tomcat_mysql]# docker ps

# create a mysql service
[root@node11 tomcat_mysql]# vim mysql-svc.yaml    # service declaration for mysql
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql

[root@node11 tomcat_mysql]# kubectl create -f mysql-svc.yaml   # create the mysql service
[root@node11 tomcat_mysql]# kubectl get services               # list the running services
NAME         CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes   10.254.0.1                     443/TCP    4h
mysql        10.254.164.241                 3306/TCP   7m
# the mysql service has no EXTERNAL-IP because users never access mysql directly; they access the web service
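The missing EXTERNAL-IP follows from the Service type: when `type` is omitted, a Service defaults to ClusterIP, a virtual IP reachable only from inside the cluster. Writing that default out explicitly (an equivalent, hypothetical variant of the file above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: ClusterIP      # the default: an internal-only virtual IP
  ports:
  - port: 3306
  selector:
    app: mysql
```

Only types such as NodePort (used for the web service below) expose a Service outside the cluster.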

# create a web service
[root@node11 tomcat_mysql]# vim myweb-rc.yaml    # RC declaration for the tomcat containers
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb
spec:
  replicas: 5                      # number of replicas
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 192.168.4.254:5000/tomcat-app    # private registry image
        ports:
        - containerPort: 8080
        env:
        - name: MYSQL_SERVICE_HOST    # connect to the mysql service
          value: 'mysql'
        - name: MYSQL_SERVICE_PORT
          value: '3306'

[root@node11 tomcat_mysql]# kubectl create -f myweb-rc.yaml

[root@node11 tomcat_mysql]# kubectl get rc
NAME      DESIRED   CURRENT   READY   AGE
mysql     1         1         1       2h
myweb     5         5         5       35s

[root@node11 tomcat_mysql]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
mysql-v0pdr   1/1     Running   0          2h
myweb-jsb1k   1/1     Running   0          47s
myweb-kl2gk   1/1     Running   0          47s
myweb-l1qvq   1/1     Running   0          47s
myweb-p0n7g   1/1     Running   0          47s
myweb-sh7jl   1/1     Running   0          47s

[root@node11 tomcat_mysql]# vim myweb-svc.yaml    # service declaration for myweb
apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort
  ports:
  - port: 8080          # with the next line: port 30001 on the node maps to 8080 in the pod
    nodePort: 30001
  selector:
    app: myweb

[root@node11 tomcat_mysql]# kubectl create -f myweb-svc.yaml   # create the myweb service
[root@node11 tomcat_mysql]# kubectl get service
NAME         CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   10.254.0.1                     443/TCP          4h
mysql        10.254.164.241                 3306/TCP         30m
myweb        10.254.222.24                  8080:30001/TCP   26s

[root@node11 tomcat_mysql]# docker ps | awk '{print $2}'

ID

192.168.4.254:5000/tomcat-app

192.168.4.254:5000/tomcat-app

192.168.4.254:5000/tomcat-app

192.168.4.254:5000/tomcat-app

192.168.4.254:5000/tomcat-app

192.168.4.254:5000/pod-infrastructure    # myweb infrastructure containers

192.168.4.254:5000/pod-infrastructure

192.168.4.254:5000/pod-infrastructure

192.168.4.254:5000/pod-infrastructure

192.168.4.254:5000/pod-infrastructure

192.168.4.254:5000/mysql

192.168.4.254:5000/pod-infrastructure    # mysql infrastructure container

# myweb starts 5 pods and 10 containers, because each pod needs one infrastructure container plus one working container; likewise, mysql starts 1 pod and 2 containers.

Browse to http://192.168.4.11:30001/ to test.

If you delete a container, k8s notices that the container count has fallen below what the RC declares and automatically creates a new one to replace it.

[root@node11 tomcat_mysql]# docker rm -f c1bf3822f088
[root@node11 tomcat_mysql]# docker ps

To adjust the number of pods dynamically, just change the RC's replica count.

[root@node11 tomcat_mysql]# kubectl scale --replicas=3 replicationcontroller myweb   # scale myweb down to 3 pods
[root@node11 tomcat_mysql]# kubectl get rc
NAME      DESIRED   CURRENT   READY   AGE
mysql     1         1         1       1h
myweb     3         3         3       36m

[root@node11 tomcat_mysql]# kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
mysql-hg9z0   1/1     Running   0          58m
myweb-3g6rz   1/1     Running   0          32m
myweb-90jts   1/1     Running   0          32m
myweb-rvrr1   1/1     Running   0          32m

[root@node11 tomcat_mysql]# docker ps -a | awk '{print $2}'   # the number of docker containers is adjusted automatically as well

ID

192.168.4.254:5000/tomcat-app

192.168.4.254:5000/tomcat-app

192.168.4.254:5000/tomcat-app

192.168.4.254:5000/pod-infrastructure

192.168.4.254:5000/pod-infrastructure

192.168.4.254:5000/pod-infrastructure

192.168.4.254:5000/mysql

192.168.4.254:5000/pod-infrastructure

Deleting services and RCs (a pod is also called a microservice):

[root@node11 tomcat_mysql]# kubectl delete service mysql
service "mysql" deleted
[root@node11 tomcat_mysql]# kubectl delete service myweb
service "myweb" deleted
[root@node11 tomcat_mysql]# kubectl delete rc mysql
replicationcontroller "mysql" deleted
[root@node11 tomcat_mysql]# kubectl delete rc myweb
replicationcontroller "myweb" deleted
[root@node11 tomcat_mysql]# docker ps    # no containers left running

php-redis master and slave

# redis-master
[root@node11 php_redis]# ls
frontend-controller.yaml  redis-master-controller.yaml  redis-slave-controller.yaml
frontend-service.yaml     redis-master-service.yaml     redis-slave-service.yaml

[root@node11 php_redis]# vim redis-master-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    name: redis-master
spec:
  replicas: 1
  selector:
    name: redis-master
  template:
    metadata:
      labels:
        name: redis-master
    spec:
      containers:
      - name: master
        image: 192.168.4.254:5000/redis-master
        ports:
        - containerPort: 6379

[root@node11 php_redis]# kubectl create -f redis-master-controller.yaml

[root@node11 php_redis]# vim redis-master-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    name: redis-master
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    name: redis-master

[root@node11 php_redis]# kubectl create -f redis-master-service.yaml

# redis-slave
[root@node11 php_redis]# vim redis-slave-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  replicas: 2
  selector:
    name: redis-slave
  template:
    metadata:
      labels:
        name: redis-slave
    spec:
      containers:
      - name: slave
        image: 192.168.4.254:5000/guestbook-redis-slave
        env:
        - name: GET_HOSTS_FROM
          value: env
        ports:
        - containerPort: 6379

[root@node11 php_redis]# vim redis-slave-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  ports:
  - port: 6379
  selector:
    name: redis-slave

[root@node11 php_redis]# kubectl create -f redis-slave-service.yaml

[root@node11 php_redis]# kubectl get rc
NAME           DESIRED   CURRENT   READY   AGE
redis-master   1         1         1       9m
redis-slave    2         2         2       1m

[root@node11 php_redis]# kubectl get pod
NAME                 READY   STATUS    RESTARTS   AGE
redis-master-cn208   1/1     Running   0          9m
redis-slave-dctvw    1/1     Running   0          1m
redis-slave-gxtfs    1/1     Running   0          1m

[root@node11 php_redis]# kubectl get service
NAME           CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes     10.254.0.1                     443/TCP    9h
redis-master   10.254.174.239                 6379/TCP   8m
redis-slave    10.254.117.213                 6379/TCP   42s

# frontend
[root@node11 php_redis]# vim frontend-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  replicas: 3
  selector:
    name: frontend
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
      - name: frontend
        image: 192.168.4.254:5000/guestbook-php-frontend
        env:
        - name: GET_HOSTS_FROM
          value: env
        ports:
        - containerPort: 80

[root@node11 php_redis]# kubectl create -f frontend-controller.yaml

[root@node11 php_redis]# vim frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30001
  selector:
    name: frontend

[root@node11 php_redis]# kubectl create -f frontend-service.yaml

[root@node11 php_redis]# kubectl get rc
NAME           DESIRED   CURRENT   READY   AGE
frontend       3         3         3       2m
redis-master   1         1         1       14m
redis-slave    2         2         2       6m

[root@node11 php_redis]# kubectl get pod
NAME                 READY   STATUS    RESTARTS   AGE
frontend-3dqhv       1/1     Running   0          2m
frontend-ct7qw       1/1     Running   0          2m
frontend-tz3p4       1/1     Running   0          2m
redis-master-cn208   1/1     Running   0          14m
redis-slave-dctvw    1/1     Running   0          6m
redis-slave-gxtfs    1/1     Running   0          6m

[root@node11 php_redis]# kubectl get service
NAME           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
frontend       10.254.210.184                 80:30001/TCP   42s
kubernetes     10.254.0.1                     443/TCP        9h
redis-master   10.254.174.239                 6379/TCP       13m
redis-slave    10.254.117.213                 6379/TCP       5m

Visit http://192.168.4.11:30001/ in a browser to test the redis master-slave read/write separation.

Those are the details of the basic knowledge of K8s.
