How to understand Docker and Kubernetes in the front-end domain
This article introduces how to understand Docker and Kubernetes from a front-end perspective: installation, basic usage, how Docker works under the hood, container networking, and a Kubernetes deployment example. Hopefully it helps resolve common doubts about the topic.
Docker installation
On Linux (Debian/Ubuntu), install the community edition, Docker CE.
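A minimal sketch of one common way to install Docker CE on Debian/Ubuntu is the official convenience script (exact steps can differ by distribution release):
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# optionally allow the current user to run docker without sudo
sudo usermod -aG docker $USER
# verify that the client and daemon are running
sudo docker version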
One-click installation on Windows
On Windows 7 (and Windows 10 editions without Hyper-V), Docker uses VirtualBox to run Linux as the Docker host.
On Windows 10 Pro, Docker uses Hyper-V to run Linux as the Docker host.
One-click installation on macOS
Docker basic information
The default Docker storage location is /var/lib/docker; all images, containers, and volumes live there. If you use multiple hard drives, or if your SSD is not mounted on /, you need to change the default path (the graph option) to a suitable location. The configuration file is /etc/docker/daemon.json, for example:
{"bip": "192.168.0.1 bip 16", "graph": "/ mnt/ssd/0/docker"}
During installation, Docker automatically creates the docker0 network card and assigns an IP to it.
The bip option above specifies the IP of the docker0 card. If it is not specified, an appropriate IP is chosen automatically based on the host IP when docker0 is created. However, because networks can be complicated, and in particular because of address conflicts inside a server-room network, you may need to set bip manually to a suitable value. Docker's IP selection rules are analyzed well in this article: https://blog.csdn.net/longxin....
After installation and startup, you can view some of Docker's configuration information with docker info.
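For example, after editing /etc/docker/daemon.json as above, the daemon must be restarted for the change to take effect; a small sketch (assuming a systemd-based host):
sudo systemctl restart docker
# confirm the new storage location was picked up
docker info | grep "Docker Root Dir"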
Docker hello world
The first command to test whether a Docker installation works is simple:
docker run hello-world
First, Docker pulls the hello-world image from Docker Hub and then runs it locally. The running instance of an image is called a container. After the container is created, the specified entry program is executed; the program writes some output to the stream and exits, and the container ends when its entry program ends.
View all containers
docker ps -a
The output is as follows:
cf9a6bc212f9   hello-world   "/hello"   28 hours ago   Exited (0) 3 min
The first column is the container id, which is required for many operations against the container, such as some common operations below.
docker rm container_id
docker stop container_id
docker start container_id
docker inspect container_id
Note docker start container_id above: it starts an existing container, which shows that even after a container exits, its resources remain and it can be restarted with docker start. If you want the container to be removed automatically when it exits, pass the --rm flag to docker run.
When we ran the first command, Docker downloaded and cached the hello-world image locally, so the next run of the same command does not need to download it from the source again.
View local images
docker images
Run Nginx
Nginx, as a widely used web server, is also popular in the Docker world and is often used to start a network service to verify the network configuration. Start an Nginx container with: docker run --rm -p 80:80 nginx
Visit localhost:80 to see that the Nginx service has started; the Nginx access log is printed to the console.
Because the network inside Docker is isolated from the outside world, we have to specify port forwarding explicitly: -p 80:80 forwards port 80 of the host (the first number) to port 80 of the container. Exposing a port is the most common way to provide a service, but there are other kinds of services, such as log processing, where data collection needs shared volumes; all of these must be specified explicitly when the container is started.
Some common startup parameters:
-p host_port:container_port   maps a host port to a container port
-P   maps container ports to random host ports
-v host_path_or_volume:container_path   mounts a host path or data volume at the given location in the container
-it   starts the container with an interactive terminal
-d   runs the container in the background
--rm   removes the container after it exits
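Combining several of these flags (the host path, port, and container name here are only examples):
# run Nginx in the background, publish host port 8080 to container port 80,
# and mount a local directory as the web root
docker run -d -p 8080:80 -v $(pwd)/html:/usr/share/nginx/html --name web nginx
# or start a throwaway interactive shell that is removed on exit
docker run -it --rm busybox sh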
How does Docker work
The core underlying principle of Docker is the namespace and cgroup features of the Linux kernel: namespaces isolate resources, and cgroups allocate and limit them. There are six kinds of namespaces in the Linux kernel, listed below.
Namespace   System call flag   Isolates
UTS         CLONE_NEWUTS       hostname and domain name
IPC         CLONE_NEWIPC       semaphores, message queues and shared memory
PID         CLONE_NEWPID       process IDs
Network     CLONE_NEWNET       network devices, network stack, ports, etc.
Mount       CLONE_NEWNS        mount points (file systems)
User        CLONE_NEWUSER      users and user groups
There are three namespace-related system calls:
clone http://man7.org/linux/man-pag...
If you want the child process to have its own network address and TCP/IP stack, it can be specified like this:
clone(cb, *stack, CLONE_NEWNET, 0)
unshare http://man7.org/linux/man-pag...
Moves the current process into a new namespace. For example, a process created with fork or vfork shares its parent's resources by default; with unshare, the child process can be separated from the parent.
setns http://man7.org/linux/man-pag...
Attaches the current process to the namespace of a specified PID; it is typically used to share an existing namespace.
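The util-linux unshare command wraps these system calls, so the effect is easy to see from a shell; a quick sketch (root privileges assumed): a process placed in a new network namespace sees only the loopback device.
# inside a new network namespace, only lo exists
sudo unshare --net sh -c "ip addr"
# compare with the host, which lists all physical and virtual interfaces
ip addr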
The Linux kernel supports namespace isolation at the system-call level. By giving a process its own namespaces, it can be isolated along each resource dimension: every process can have its own hostname, IPC, PID, IP, root file system, users and groups, as if it were running on a dedicated system. However, although the resources are isolated, all processes still share the same kernel, which is one reason containers are lighter than traditional virtual machines.
Resource isolation alone is not enough. To guarantee real fault isolation so that containers do not affect each other, limits on CPU, memory, GPU and so on are also needed; otherwise a program with an infinite loop or a memory leak would starve the others. Resource allocation is handled by the kernel's cgroup feature; readers who want to know more can read up on cgroups.
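Docker exposes these cgroup limits as run flags; a brief sketch of capping memory and CPU for a container:
# limit the container to 256 MB of memory and half a CPU core
docker run -d --name limited-nginx --memory=256m --cpus=0.5 nginx
# observe actual usage against those limits
docker stats limited-nginx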
(In addition, running containers on a Linux kernel above 4.9 is highly recommended; on 3.x kernels there are known cases where kernel instability caused host restarts.)
Docker network
If a container wants to provide a service, it needs to expose its network. Docker is isolated from the host environment, so to expose a service you must tell Docker which ports may be accessed from outside. When you run docker run -p 80:80 nginx, port 80 inside the container is exposed on port 80 of the host; the actual port forwarding is analyzed below. Networking is the most important part of containers and the cornerstone of building large clusters, so when we deploy Docker applications we need a basic understanding of it.
Docker provides four network modes: Host, Container, None, and Bridge, specified with --net.
Host mode:
docker run --net host nginx
In Host mode, no separate network namespace is created for the container; the container uses the host's network card directly. The IP seen inside the container is the host IP, and port bindings go straight onto the host network card. The advantage is that traffic needs no NAT translation, so it is more efficient and faster.
Container mode:
docker run --net container:xxx_containerid nginx
The container shares the network namespace, network configuration, IP address, and ports of the specified container. A container running in Host network mode cannot be shared this way.
None mode:
docker run --net none busybox ifconfig
A container started in None mode gets no network card; only the internal lo interface exists.
Bridge mode:
docker run --net bridge busybox ifconfig
This is the default mode. When a container starts, it is given its own network namespace. During installation/initialization, Docker also creates a bridge named docker0 on the host; this bridge serves as the default gateway for containers, and container networks are allocated IPs within the gateway's address range.
When I execute docker run -p 3000:80 nginx, Docker creates the following iptables forwarding rule on the host.
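The rule is not reproduced in the text, but as an illustration it typically looks like the following when inspected with iptables (the container IP 172.18.0.2 will differ on your host):
sudo iptables -t nat -nL DOCKER
# example output:
# DNAT   tcp  --  0.0.0.0/0   0.0.0.0/0   tcp dpt:3000 to:172.18.0.2:80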
This DNAT rule means that when an external request arrives at port 3000 of the host network card, its destination address is translated (DNAT) to 172.18.0.2 and the port changed to 80. With the destination rewritten, the traffic is forwarded from the default NIC through docker0 to the corresponding container. In this way, a request to port 3000 of the host is forwarded to the service inside the container, exposing the service.
Similarly, when a container accesses the outside world, source address translation (SNAT) is performed: if the container requests google.com, the remote server sees the IP of the host's network card.
Because of the extra layer of NAT, Bridge mode is less efficient than Host mode, but it isolates the container well from the external network environment and gives the container an exclusive IP and a full port space.
The four modes above are the working modes that ship with Docker, but deploying Kubernetes requires all containers to work as if on one local network, so multi-host network plug-in support is needed when deploying a cluster.
Flannel
Multi-host network solutions include the CNI specification published by CNCF and Docker's CNM scheme. At present the CNI specification is the most widely adopted, and Flannel is one of its implementations.
Flannel solves multi-host interconnectivity with packet encapsulation: the original packet is wrapped in an outer packet whose destination IP is the target host; when it arrives there, it is unwrapped and delivered to the corresponding container. Flannel can transmit these encapsulated messages between hosts over the UDP protocol.
At present, there are three kinds of mainstream cross-host communication, each of which has its advantages and disadvantages, depending on the scenario:
Overlay: packet encapsulation (nesting), as described above.
host-gw: forwards traffic by modifying host routing tables, so no packets need to be wrapped or unwrapped and it is more efficient; the limitation is that all hosts must be on the same local network.
BGP (Border Gateway Protocol) implemented in software, which broadcasts routing rules to the routers in the network. Like host-gw it needs no encapsulation, but it is more expensive to implement.
With a CNI network in place, we can build a Kubernetes cluster on top of it.
Kubernetes introduction
Using Docker to deploy applications with one click is genuinely convenient in small-scale scenarios. But when you need to deploy multiple replicas across hundreds of hosts, manage the running state of all those hosts, and restart services on other hosts when one fails, a manual approach is clearly not workable. At this point you need a higher-level orchestration tool: Kubernetes, abbreviated K8S. Simply put, K8S abstracts the hardware, turning N physical machines or cloud hosts into one resource pool, and takes over container scheduling: if a container needs CPU or memory, K8S finds a machine with enough of it and creates the container there; if a service fails for some reason, K8S automatically migrates and restarts it. As developers we only care about our own code; the health of the application is guaranteed by K8S.
The specific installation method is not described here. If you use Windows or macOS, you can enable the Kubernetes option in Docker Desktop to install a single-host cluster, or use the kind tool to simulate a multi-node K8S cluster locally.
The basic scheduling unit of K8S is the pod; a pod is one or more containers. To quote from a book:
The reason a container is not used as the scheduling unit is that a single container does not represent a complete service. For example, a web application may need a NodeJS process and a Tomcat process together to form a complete service, so two containers have to be deployed. They could all be packed into one container, but that clearly violates the core idea of one container, one process. -- "Service Mesh in Practice: implementing a service mesh with Istio soft load"
The difference between K8S and traditional IaaS system:
IaaS is Infrastructure as a Service. A developer who wants to launch a new application has to apply for hosts, IPs, domain names and other resources, then log into the hosts to set up the required environment and deploy the application. This does not scale well and increases the chance of mistakes. Operators and developers often end up writing their own automation scripts and then hand-patching them whenever something differs, which is painful.
K8S makes the infrastructure programmable: instead of applying for resources by hand, a manifest file describes what is needed, the developer simply submits the file, and K8S automatically allocates and creates the resources. The CRUD of these facilities can then be driven entirely by programs.
To understand the basic concepts of K8S, let's deploy a Node SSR application:
Initialize the application template
npm install create-next-app
npx create-next-app next-app
cd next-app
After the project is created, add a Dockerfile to build the service image.
Dockerfile
FROM node:8.16.1-slim as build
COPY ./ /app
WORKDIR /app
RUN npm install
RUN npm run build
RUN rm -rf .git

FROM node:8.16.1-slim
COPY --from=build /app /
EXPOSE 3000
WORKDIR /app
CMD ["npm", "start"]
This Dockerfile makes two optimizations:
It uses the slim variant of the node base image, which greatly reduces the image size.
It uses a multi-stage build, which reduces the number of image layers and drops temporary files, further reducing the image size.
Build an image
docker build . --tag next-app
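Before handing the image to the cluster, it can be smoke-tested locally (3000 is the port exposed in the Dockerfile):
docker run --rm -p 3000:3000 next-app
# in another terminal
curl http://localhost:3000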
Then we can describe to Kubernetes the application we want. To ensure high availability, the service should run at least two replicas, and we also need an application domain name: when that domain name is requested on our cluster, traffic is forwarded to our service. The corresponding configuration file can be written as follows.
Deployment.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - host: next-app-server
    http:
      paths:
      - backend:
          serviceName: app-service
          servicePort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: app-service
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: next-app
        name: next-app
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
The above list tells K8S:
First, I need a Deployment controller whose image is next-app and whose service port is 3000; create two replicas for me.
Also create a Service that points to the next-app pods created by that replica controller.
And apply for an Ingress entry with the domain name next-app-server, pointing at that Service.
Submit this application to K8S.
kubectl apply -f ./Deployment.yaml
Then you can see the pods that have been deployed.
sh-4.4$ kubectl get pod
NAME                              READY   STATUS    RESTARTS   AGE
app-deployment-594c48dbdb-4f4cg   1/1     Running   0          1m
app-deployment-594c48dbdb-snj54   1/1     Running   0          1m
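Besides the pods, the Service and Ingress defined in the same manifest can also be checked (the output depends on your cluster):
kubectl get service app-service
kubectl get ingress app-ingress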
Then open the domain name configured in the Ingress in a browser to reach the application (as long as that domain name resolves to a node of your K8S cluster).
The manifest above creates the three resources most commonly used to keep a service running, which are also the three main resource types in Kubernetes.
Ingress
An L7 (application layer) load-balancing configuration that routes to different Services based on domain name or path. Ingress is very similar to Nginx; in fact, one implementation of Ingress is Nginx, so you can think of Ingress as Nginx, except that we no longer need to modify nginx.conf by hand or restart the Nginx service manually.
Service
An abstraction over a group of pods, used to select the pods that provide the same service. Because pods are unstable, they are frequently destroyed and recreated and their IPs change often, so an abstract resource, the Service, is needed to represent where the pods are. The Service is also K8S's internal service-discovery mechanism: the Service name is automatically written into the internal DNS records.
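This DNS-based discovery can be observed from inside the cluster, for example with a throwaway busybox pod (image tag and default namespace assumed):
# resolve the Service name through the cluster's internal DNS
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup app-service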
Deployment
A replica controller, the mechanism for managing and maintaining pods. Through a Deployment you can specify the number of replicas and the release strategy, keep a record of releases, and roll back to earlier versions.
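A few of these Deployment capabilities expressed as kubectl commands, using the deployment name from the manifest above:
# scale to four replicas
kubectl scale deployment/app-deployment --replicas=4
# watch a rolling update and inspect the release history
kubectl rollout status deployment/app-deployment
kubectl rollout history deployment/app-deployment
# roll back to the previous revision if the new one misbehaves
kubectl rollout undo deployment/app-deployment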
Application publishing system
K8S is only responsible for container orchestration; to actually deploy applications you also need an external Pipeline to do the code building, static checking, and image packaging.
At present, more domestic publishing systems are often composed of the following services: GitLab/GitHub, Jenkins, Sonar, Harbor.
The advantages of K8S in the front end
First of all, front-end applications are different from Java ones: a small NodeJS service occupies only about 40 MB of memory, which means that if we have many NodeJS applications, K8S saves a lot of hardware resources.
Containers make non-intrusive collection of logs and performance metrics possible.
Since a container is a process, monitoring the container amounts to monitoring our NodeJS process. The K8S ecosystem has many mature container monitoring solutions, such as Prometheus + Grafana, with which non-intrusive performance metrics can be collected, including network IO, disk IO, CPU, and memory.
Similarly, for log collection we can simply write to the console in our code and let a log collection service gather logs at the container level. This is also non-intrusive, the code is unaware of it, it is friendlier to developers, and it decouples logs from services.
An infrastructure layer for front-end micro-service architecture.
Micro-service architecture has become an increasingly popular way to organize front-end applications over the past two years, and it needs more flexible deployment. Docker lets us abstract the smallest unit of a service within a complex architecture, and K8S makes automatic maintenance of large-scale clusters possible. It is fair to say that micro-service architectures are a natural fit for K8S.
A new capability of K8S: traffic distribution
In K8S, a Service abstracts a set of pods, and a Service's selector can be changed dynamically, which opens up many possibilities, such as a blue-green release system.
Blue-green release means that during a release, once the new version has passed release testing, the application is upgraded in one step by switching gateway traffic to it. In K8S, one-click switching between versions is achieved by dynamically updating the Service's selector.
Below we use the Next.js application above to demonstrate a blue-green release; the repository address is shown in the commands below.
git clone https://github.com/Qquanwei/test-ab-deploy
cd test-ab-deploy
docker build . --tag next-app:stable
kubectl apply -f ./Deployment.yaml
This deploys the next-app:stable image to the cluster and labels the pods with version: stable.
After deployment, opening the page shows the stable version of the application.
Next, we deploy the test branch, building it as the next-app:test image and labeling its pods with version: test.
git checkout test
docker build . --tag next-app:test
kubectl apply -f ./Deployment.yaml
At this point we have deployed two versions of the application, and both are ready.
However, because our Service selects version=stable, no requests reach the test version; traffic still goes to the stable service.
Once we have verified by other means that the test version works, for example by pointing a separate Service at it for testing, we can switch the current Service over to the test application with the following command.
kubectl apply -f ./switch-to-test.yaml
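The contents of switch-to-test.yaml are not reproduced in this article; a hedged sketch of what such a manifest would typically contain, assuming the pods carry the app: web and version labels described above, is a Service whose selector targets the test version:
kind: Service
apiVersion: v1
metadata:
  name: app-service
spec:
  selector:
    app: web
    version: test   # switching this value back to "stable" reverts the release
  ports:
  - port: 80
    targetPort: 3000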
After executing this command, refreshing the page shows the test version.
Switching the Service easily achieves a blue-green release, and it takes effect almost instantly, because a Service is a relatively lightweight resource in K8S; changing its configuration does not restart the whole online service the way reloading the Nginx next door would. Of course, a real production environment is stricter than this demonstration: there may be a dedicated platform and reviewers performing a second check on every operation.
Blue-green and grayscale releases are relatively easy to implement with K8S, giving us more ways to validate ideas. However, if you want a more advanced traffic allocation scheme (such as A/B releases), you need complex traffic management strategies (authentication, authorization), and for that you need a service mesh.
Istio is currently a popular service mesh framework. Compared with K8S, Istio focuses more on the traffic flowing through the service mesh formed by the containers.
As an example, Istio can capture the topology and data metrics of the services in the official bookinfo microservice demo.
There are two obvious benefits to using Istio:
Istio can capture the call links between services without intruding on user code.
Istio can manage each connection individually.
For example, we can easily assign dynamic traffic weights to the v1, v2, and v3 versions of the reviews application.
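For illustration, weighted routing in Istio is typically expressed with a VirtualService like the sketch below; it assumes a DestinationRule already defines the v1/v2/v3 subsets for the reviews service, as in the official bookinfo example:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v2
      weight: 30
    - destination:
        host: reviews
        subset: v3
      weight: 20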
Beyond traffic weights, A/B schemes can also be implemented: requests can be routed to different versions of the application depending on whether a Header matches, or users can be distinguished by a Cookie in the Header so that they reach different applications. For different industry scenarios, Istio enables many other interesting patterns.
There are drawbacks, too: Istio is a genuinely complex system that affects performance and consumes a significant amount of system resources.
This concludes the study of "how to understand Docker and Kubernetes in the front-end domain". I hope it has resolved your doubts; theory works best when paired with practice, so go and try it out!