
Deploying a Kubernetes v1.6.4 Distributed Cluster and Load-Balancing Services


Many newcomers are unclear about how to deploy a Kubernetes v1.6.4 distributed cluster and how to load-balance services. To help, this article walks through the process in detail. I hope you find it useful.

1 Basic concepts and terms of Kubernetes

1.1 Kubernetes and microservices

In recent years, the term "microservice" has been mentioned frequently in IT circles. In short, a microservice architecture splits large software that used to be deployed and run as a single unit into independent, interrelated microservices. The rise of the containerization technology Docker has pushed microservices to the forefront. Moving Docker from a single machine to a cluster is an inevitable step in the evolution of computing, but governing distributed containers is the hard part. Kubernetes was born for exactly this reason: it is a solution for running Docker as a distributed system, it fully embraces microservices, it supports running multiple replicas of each microservice, and its built-in Service load balancing provides highly reliable services to other resources.

1.2 The two types of nodes in Kubernetes

Kubernetes consists of two types of nodes:

1 The Master node runs the core Kubernetes components and acts as the management node.

2 The Node node is where containers run (the compute node); it also runs components such as the Kubernetes registration agents, and acts as the worker (slave) node.

In hardware terms, this is usually one Master host managing several Node hosts.

1.3 A brief introduction to several Kubernetes resource objects

Kubernetes contains many kinds of resources, such as Pod, RC, Service, Ingress, and the Node mentioned in the previous section. These resources are the small units inside Kubernetes; they can be created, deleted, queried, and modified with the kubectl tool, and together they support the microservices that ultimately run on the cluster. Several of them are briefly introduced below.

1.3.1 Pod, the smallest resource unit

Pod is the most important and most basic concept in Kubernetes, and it is also the smallest unit that Kubernetes can create, schedule, and manage. The name Pod, the English word for a pea pod, vividly reflects the structure and function of this resource.

As shown in the following figure:

A Pod is like a pea pod with containers as the peas inside it. Among them, the pause container is a container that every Pod must contain: it serves as the root container of the Pod and is created automatically when the Pod is created, without the user having to declare it explicitly. It is used to determine whether the Pod is alive and to hold the Pod's IP, which is shared with the other containers in the Pod.

The following is an example of a yaml configuration file for a Pod:

apiVersion: v1
kind: Pod
metadata:
  name: php-test
  labels:
    name: php-test
spec:
  containers:
  - name: php-test
    image: 192.168.174.131:5000/php-base:1.0
    ports:
    - containerPort: 80
      hostPort: 80

1.3.2 Replica controller (RC)

The full name of RC is Replication Controller. It defines a desired state: a certain number of replicas of a given Pod should be kept running at all times. Accordingly, an RC configuration file usually contains the desired number of Pod replicas, the Label Selector used to select the Pods, and the Pod template used to create them. Here is a simple example of a yaml configuration file for an RC:

apiVersion: v1
kind: ReplicationController
metadata:
  name: loadbalancetest1
spec:
  replicas: 3
  selector:
    app: loadbalancetest
  template:
    metadata:
      labels:
        app: loadbalancetest
    spec:
      containers:
      - name: loadbalancetest
        image: docker.io/webin/loadbalancetest:1.0
        ports:
        - containerPort: 8080
          hostPort: 8087

Because an RC automatically creates the Pods it expects, you can even skip writing a Pod yaml file and create the Pods directly by writing the RC.

1.3.3 Service, the microservice

Service is one of the core resources in Kubernetes. The Service here is essentially what we call a microservice in a microservice architecture; in fact, the Pod and RC described above exist to serve the Service.

Here is an example of a simple yaml configuration file for Service:

apiVersion: v1
kind: Service
metadata:
  name: loadbalancetest
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30001
    protocol: TCP
    name: http
  selector:
    app: loadbalancetest

Since Pod and RC serve the Service, they must be related to it. Indeed, they are linked through built-in labels, as shown in the following figure:

You can see that Pod, RC, and Service are associated through a tag selector called the Label Selector. The Service here actually has three Pods providing the real service behind it. The Label Selector and Labels can also be seen in the earlier yaml files:

selector:
  app: loadbalancetest

and:

labels:
  app: loadbalancetest

and:

labels:
  name: php-test

And so on. A Pod defines its own Labels, which can be any key:value pairs, and a Pod can carry more than one Label. These Labels allow an RC or a Service to find the matching Pods through its Label Selector. This is how an RC controls the Pod replicas, which in turn provide the real service behind a Service. The Service can then use its own internal load balancer to call one of the Pod replicas and return the result to the caller at a higher level.

Note: the Service's built-in load balancer distributes requests across the multiple Pods matched by its Label Selector. The default policy is round-robin; the other option prefers routing a client's requests to the Pod that has already served that client (session affinity).

In either case, these are load balancers built into the Service. When a Service is created, Kubernetes assigns it a virtual ClusterIP, which can only be reached by resources inside Kubernetes; for example, Pods can access the Service through it, but external systems or clients cannot. Therefore, if an external system needs to bypass the Service and reach the Pods directly, it can set up its own hardware or software load balancer; in general, a Pod's IP plus port maps to one container behind it.
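As a reference, session affinity can be switched on through the Service's sessionAffinity field. The snippet below is a minimal sketch built on the loadbalancetest Service from this article; sessionAffinity: ClientIP is a standard Kubernetes setting, and the other names are taken from the earlier example:

apiVersion: v1
kind: Service
metadata:
  name: loadbalancetest
spec:
  type: NodePort
  sessionAffinity: ClientIP   # keep routing a given client IP to the same Pod
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30001
  selector:
    app: loadbalancetest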

1.3.4 Ingress, the entry resource

Since a Service can only be accessed by resources inside Kubernetes, and its built-in load balancer is likewise internal, how can the outside world reach the Service? Apart from going through a Pod, is there another Kubernetes resource that can reach the Service and expose it externally? The answer is yes; one such resource is the Nginx-based Ingress.

Ingress was added in Kubernetes v1.1. It can forward requests for different URLs to different backend Services, implementing HTTP-layer service routing. Here is a simple example of a yaml configuration file for an Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
spec:
  rules:
  - host: myingress.com
    http:
      paths:
      - path: /LoadBalanceTest
        backend:
          serviceName: loadbalancetest
          servicePort: 8080

This configures a simple routing rule: a request to the URL http://myingress.com/LoadBalanceTest is forwarded by the Ingress to the loadbalancetest Service on port 8080.

1.4 The core components needed to build a Kubernetes distributed system

A kubernetes distributed system consists of a set of executable programs or components, including:

(1) kube-apiserver: the API server component, which provides RESTful interfaces for creating, deleting, querying, and modifying all objects in the Kubernetes system.

(2) kube-controller-manager: the resource controller, mainly responsible for managing the resources in the cluster (such as Pods and Nodes).

(3) kube-scheduler: schedules Pods onto the appropriate Nodes.

(4) kubelet: manages and maintains all containers on a Node host and registers the host as a Node.

(5) kube-proxy: the network service proxy component.

2. Building and deploying a Kubernetes distributed cluster

This chapter explains how to build and deploy a Kubernetes distributed cluster by creating an example that runs across multiple hosts.

2.1 introduction to distributed host clusters

In this example, four hosts (virtual machines) are prepared: one as the Master host and the other three as Node hosts. First, let's look at the components and tools that need to be installed on the Master host and the Node hosts respectively:

(1) Master machine: kube-apiserver, kube-scheduler, kube-controller-manager; auxiliary component: etcd

(2) Node machines: kubelet, kube-proxy, docker; auxiliary component: flannel

Because docker and the Kubernetes components need to run on Linux, this example uses CentOS7 as the operating system for both the Master and Node hosts.

The approximate distributed architecture is shown in the following figure:

(figure: schematic diagram of component installation and clustering of Master hosts and Node hosts)

If all the virtual machines are created on a single physical machine, interconnection between the hosts and virtual machines is usually not a problem.

However, if the cluster is simulated with multiple hosts on a LAN, or with virtual machines spread across multiple hosts, some network setup may be needed.

In general, when building a cluster across a LAN, the following problems may arise:

(1) Client isolation is enabled on the wireless LAN, so the hosts on it cannot reach each other.

(2) The hosts on the LAN have firewalls enabled.

(3) The virtual machines' network settings are misconfigured, so virtual machines on different hosts cannot reach each other.

There are good answers to these problems online, such as connecting the hosts directly with a network cable, or setting the virtual machines' network to bridged mode, and so on.

For the tests in this example, a mobile phone WLAN hotspot can be used to network the hosts, which avoids many of these interconnection problems.

Referring to the schematic diagram of component installation on the Master and Node hosts above, the network is set up in roughly the following steps:

(1) Put the Master host and the 3 Node hosts on the same network segment, for example 192.168.43.0/24.

(2) Set the gateway of the Master host and the 3 Node hosts to the same gateway: 192.168.43.1.

(3) Turn off the firewall on both the Master and Node hosts.
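A quick way to confirm the hosts can actually reach each other is to ping across them; a minimal check, assuming the Node IPs 192.168.43.101 and 192.168.43.102 used later in this article:

# run from the Master host
ping -c 3 192.168.43.101
ping -c 3 192.168.43.102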

2.2 Setting up the Master host environment

We can first go to GitHub, the open-source code hosting site, and download the related software:

(1) kubernetes: https://github.com/kubernetes/kubernetes

(2) etcd: https://github.com/coreos/etcd

(3) flannel: https://github.com/coreos/flannel

Click the "releases" section of each project to pick a version (a recent release is recommended) and download it.

Use the tar command to extract the packages. Reference commands:

tar -zxvf kubernetes.tar.gz
tar -zxvf etcd-v3.2.0-rc.1-linux-amd64.tar.gz
tar -zxvf flannel-v0.7.1-linux-amd64.tar.gz

Because the newer kubernetes release package contains only the basic components, deploying a distributed cluster requires a second download step.

Go to the cluster folder inside the kubernetes directory and execute the get-kube-binaries.sh script. Reference commands:

cd kubernetes/cluster/
./get-kube-binaries.sh

Select "Y" for further download, where VPN assistance may be needed.

When the download is complete, two new files appear: "kubernetes-server-linux-amd64.tar.gz" and "kubernetes-client-linux-amd64.tar.gz". Among them, we focus on the package of server.

The package will be download to the server folder under kubernetes. We decompress it and refer to the command:

Tar-zxvf kubernetes-server-linux-amd64.tar.gz

In this way, you can get a new batch of kubernetes cluster components.

Let's go into the extracted folder and list its contents. Reference commands:

cd kubernetes/server/bin/
ls

In this way, some core components of kubernetes can be found here.

Copy these components into /usr/local/bin/ so they can be executed directly. Reference commands:

cp -rp hyperkube /usr/local/bin/
cp -rp kube-apiserver /usr/local/bin/
cp -rp kube-controller-manager /usr/local/bin/
cp -rp kubectl /usr/local/bin/
cp -rp kubelet /usr/local/bin/
cp -rp kube-proxy /usr/local/bin/
cp -rp kubernetes /usr/local/bin/
cp -rp kube-scheduler /usr/local/bin/

Similarly for etcd, put its executables on the PATH. Reference command:

cp -rp etcd* /usr/local/bin/

Next, we do the following for the master host.

-- add a default route, refer to the command:

route add default gw 192.168.43.1

-- disable the firewall. Reference commands:

systemctl stop firewalld.service
systemctl disable firewalld.service

-- launch etcd, specifying the client listen URLs and the client URLs advertised to the outside. Reference command:

etcd --listen-client-urls 'http://192.168.43.199:2379,http://127.0.0.1:2379' --advertise-client-urls 'http://192.168.43.199:2379,http://127.0.0.1:2379'

-- set the subnet configuration key in etcd, to be read later by flannel on the Node hosts. Reference command:

etcdctl set /coreos.com/network/config '{"Network": "10.1.0.0/16"}'

(screenshot of master host running etcd and setting successfully)
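To confirm the key was written, it can be read back with the same etcdctl tool; a quick check:

etcdctl get /coreos.com/network/config
# expected to print {"Network": "10.1.0.0/16"}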

-- start the kubernetes API server, pointing it at etcd, binding the insecure address and port, and setting the virtual IP range for Service cluster IPs. Reference command:

kube-apiserver --etcd_servers=http://localhost:2379 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range=10.254.0.0/24

-- start controller manager, specify the address of apiserver, and refer to the command:

kube-controller-manager --logtostderr=true --master=http://192.168.43.199:8080

-- start kubernetes scheduler, specify the address of apiserver, and refer to the command:

kube-scheduler --logtostderr=true --v=0 --master=http://192.168.43.199:8080

(kube-apiserver,kube-controller-manager,kube-scheduler successfully started screenshot)

If everything was set up correctly, you can now run kubectl on the Master host to operate on Kubernetes resources. Reference command:

kubectl get nodes

As you can see, since no Node host has been started yet, the Master host's query for node resources finds no Nodes in the Ready state available to it.
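Besides nodes, the health of the control-plane components themselves can be checked from the Master host; a quick check using the standard kubectl command:

kubectl get componentstatuses   # short form: kubectl get cs
# scheduler, controller-manager and etcd-0 should all report Healthy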

2.3 Setting up the Node host environment

To set up a Node host, first install docker. In addition, copy kubelet, kube-proxy, and flannel from the Master machine to each Node machine (if you are sure the versions match, you can also re-download these components instead of copying them from the Master host).

After this preparation, we can configure the Node hosts.

Copy the relevant files into /usr/local/bin/ to simplify later operations. Reference commands:

cp -rp kubelet /usr/local/bin/
cp -rp kube-proxy /usr/local/bin/
cp -rp flanneld /usr/local/bin/
cp -rp mk-docker-opts.sh /usr/local/bin/

-- set the gateway. Refer to the command:

route add default gw 192.168.43.1

-- disable the firewall. Reference commands:

systemctl stop firewalld.service
systemctl disable firewalld.service

-- start docker and then immediately stop it. This makes docker create its network interface; docker is then stopped so that flannel can later reconfigure the docker network. Reference commands:

service docker start
service docker stop

-- start flannel, specifying the etcd address and the etcd prefix to use. Reference command:

flanneld --etcd-endpoints=http://192.168.43.199:2379 --etcd-prefix=/coreos.com/network

(screenshot of flannel successfully set)

-- reconfigure docker's network address. Reference commands:

mk-docker-opts.sh -i
source /run/flannel/subnet.env
ifconfig docker0 ${FLANNEL_SUBNET}

-- for docker installed from the default repositories (as on CentOS7), the following commands are needed for the new docker network interface settings to take effect. Reference commands:

echo DOCKER_NETWORK_OPTIONS=--bip=${FLANNEL_SUBNET} > /etc/sysconfig/docker-network
systemctl daemon-reload

-- restart docker:

service docker restart

(successfully set the network screenshot of docker)

In this way, docker is placed into the subnet assigned by flannel.
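To see which subnet flannel assigned on this Node, the environment file it writes can be inspected; a quick check (the exact subnet value varies per Node):

cat /run/flannel/subnet.env
# e.g. FLANNEL_NETWORK=10.1.0.0/16 and FLANNEL_SUBNET=10.1.xx.1/24
ip addr show docker0
# docker0 should now sit inside the FLANNEL_SUBNET range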

-- start kubelet, specifying the apiserver address, overriding the hostname used to register this Node, and setting the docker cgroup driver to systemd; this registers the Node. Reference command:

kubelet --api_servers=http://192.168.43.199:8080 --hostname-override=192.168.43.101 --cgroup-driver=systemd

-- start kube-proxy, specify the apiserver address, and refer to the command:

kube-proxy --logtostderr=true --v=0 --master=http://192.168.43.199:8080

Once all three Node hosts have been set up this way, you can query their status from the Master host. Reference command:

kubectl get nodes

(master gets screenshots of nodes status)

As you can see, the three nodes we set up can be queried on the Master host, and the status is "Ready".

3. Creating the container cloud and the Service

With the Kubernetes distributed cluster environment in place, we can start creating the related microservice Service.

The Service in this chapter is a web application built on Tomcat. When you access the Service, it returns the IP address and MAC address of the server that handled the request.

In this example, the RC is configured to provide three Pod replicas that do the real work behind the Service, so each access to the Service is actually load-balanced to one of the three Pods.

3.1 Creating the RC

Here, we will create an RC with three Pods. The reference configuration is as follows:

apiVersion: v1
kind: ReplicationController
metadata:
  name: loadbalancetest1
spec:
  replicas: 3
  selector:
    app: loadbalancetest
  template:
    metadata:
      labels:
        app: loadbalancetest
    spec:
      containers:
      - name: loadbalancetest
        image: docker.io/webin/loadbalancetest:1.0
        ports:
        - containerPort: 8080
          hostPort: 8087

Roughly, this configuration tells the RC to maintain 3 Pods, created from the image docker.io/webin/loadbalancetest:1.0 and labeled app: loadbalancetest. Through the selector, the RC also claims management of any Pod in the cluster carrying the label app: loadbalancetest.

In addition, containerPort and hostPort together are roughly equivalent to the docker run -p 8087:8080 option, that is, they map a port inside the container to a port on the host. In general, however, especially when many replicas are needed, setting hostPort is not recommended, because it can cause port conflicts when Pods are scheduled or moved onto the same Node.

After saving the above configuration to a file, you can refer to the following command to run it:

kubectl create -f loadbalancetest-rc.yaml

Once creation succeeds, we can check the RC and the Pod replicas it created with the following commands:

kubectl get rc
kubectl get pods

The effect picture is as follows:

You can see that the loadbalancetest RC has created three Pod replicas.

3.2 Creating the Service

Since the Pods were created as soon as the RC was established, we can skip creating Pods manually and create the Service directly.

The reference configuration of Service is as follows:

apiVersion: v1
kind: Service
metadata:
  name: loadbalancetest
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30001
    protocol: TCP
    name: http
  selector:
    app: loadbalancetest

The Service uses the selector to specify a label; any Pod carrying a matching label is considered one of the Pods that should serve this Service.

Several port-related settings also appear in the configuration: targetPort is the port on which the Pods' containers serve, nodePort is the port through which the container can be reached via a Node host's IP, and port is the port of the Service itself.

Save the configuration to a file and run the reference command below:

kubectl create -f loadbalancetest-svc.yaml

After the Service has been successfully established, we can check the information of Service with the following command:

kubectl get svc

The effect picture is as follows:

You can see that the loadbalancetest microservice Service has been created: Kubernetes has assigned it the virtual IP 10.254.0.242 with port 8080. Resources inside the cluster, including Pods, can call the Service through this virtual IP and port. The other port, 30001, is the node port exposed when the Service was created so that the outside world can reach it: a client can access the service provided by one of the Pod replicas via nodeIP:nodePort on any Node.
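For reference, both access paths can be exercised from the command line; a quick check, assuming the ClusterIP 10.254.0.242 shown above and a Node host at 192.168.43.101:

# from inside the cluster (a Node host or a Pod): ClusterIP + Service port
curl http://10.254.0.242:8080/
# from outside the cluster: any Node host IP + nodePort
curl http://192.168.43.101:30001/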

3.3 Checking the container cloud and the microservice Service

The Service in the previous section is actually a micro-service composed of a distributed container cloud.

Because the earlier RC created three replicas, and all three Pods carry the label app: loadbalancetest selected by the Service's label selector, all three Pods serve that Service. Multiple replicas make the Service more robust, and with multiple replicas the Kubernetes Service also provides built-in load balancing, which is part of what makes it a microservice.

We can inspect the details of the microservice Service with the following command:

kubectl describe svc loadbalancetest

The effect picture is as follows:

As you can see, there are three Endpoints: 10.1.67.2:8080, 10.1.70.2:8080, and 10.1.88.2:8080.

These are simply the IP addresses of the three Pod replicas created by the RC, combined with port 8080 of the serving container.

In other words, the loadbalancetest microservice Service is ultimately provided by these three Pods, behind the built-in load balancing.
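The same endpoint list can also be fetched directly; a quick check:

kubectl get endpoints loadbalancetest
# lists the Pod IP:port pairs currently backing the Service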

Now we can try to access the containers serving this Service in two ways:

(1) through a Node host's IP + nodePort;

(2) through an Ingress. So far, the loadbalancetest microservice Service has been created, but because the Service's ClusterIP is internal to Kubernetes, it can only be reached from inside the cluster and not directly from outside. So, to test whether the Service really load-balances, the next section uses an Nginx Ingress to proxy external access to the Kubernetes resources.

4. Using Nginx-Ingress to test the Service's load balancing

This section verifies the Service's load balancing by creating an Ingress so that outside clients can reach the Kubernetes microservice Service through a URL.

4.1 create default-http-backend as the default backend for Ingress

default-http-backend is the default backend of the nginx-ingress-controller. It is used for Nginx health detection and returns a 404 when the URL requested by the user matches no rule. The reference configuration is as follows:

apiVersion: v1
kind: ReplicationController
metadata:
  name: default-http-backend
spec:
  replicas: 1
  selector:
    app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: webin/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 10
        ports:
        - containerPort: 8080
          hostPort: 8085
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi

Save the configuration to a file and run it using the following reference command:

kubectl create -f default-http-backend-rc.yaml

Once the default backend RC is established, use the following command to expose it as a Service:

kubectl expose rc default-http-backend --port=80 --target-port=8080 --name=default-http-backend

This command is equivalent to creating the Service from a yaml configuration file.
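For reference, a rough sketch of the Service definition this expose command amounts to (reconstructed here from the RC above, not taken verbatim from the original article):

apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080   # the backend container port from the RC
  selector:
    app: default-http-backend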

The effect picture is as follows:

After creation, we can test it either through the backend Pod's IP at /healthz, or directly through the Service's IP + port, from the Node host running default-http-backend, as shown below:

If you can see the ok returned as shown in the figure above, the deployment is successful.
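As a concrete reference for that check (the Pod IP and Service ClusterIP are placeholders; read the real values from kubectl):

kubectl get pods -o wide               # shows the default-http-backend Pod IP
kubectl get svc default-http-backend   # shows the Service ClusterIP
curl http://<pod-ip>:8080/healthz      # should return ok
curl http://<service-cluster-ip>/healthz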

4.2 create nginx-ingress-controller

nginx-ingress-controller is essentially an Nginx that also has Ingress management capabilities. The reference configuration is as follows:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-lb
  labels:
    name: nginx-ingress-lb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      hostNetwork: true
      containers:
      - image: docker.io/webin/nginx-ingress-controller:0.8
        name: nginx-ingress-lb
        readinessProbe:
          httpGet:
            path: /healthz   # the url and port for the ingress-controller self-check
            port: 80
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 80
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 3
        ports:
        - containerPort: 80
          hostPort: 80
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: KUBERNETES_MASTER
          value: http://192.168.43.199:8080   # read Service information from the master
        args:
        - /nginx-ingress-controller
        - --default-backend-service=default/default-http-backend

Save it to a file, and then run the RC using the following reference command:

kubectl create -f nginx-ingress-rc.yaml

At this point, we can check the details of the Pod on which the nginx-ingress-controller is running, and then verify through that Node machine that the nginx-ingress-controller is running correctly.

Open a browser, enter the IP of that Node, and append the path /healthz, as shown in the following figure:

If ok is returned, the configuration is successful.
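For reference, the same check can be done from the command line (the --namespace flag matches the kube-system namespace declared in the RC above; the Node IP is whichever host the Pod landed on):

kubectl get pods --namespace=kube-system -o wide   # shows which Node runs nginx-ingress-lb
curl http://<node-ip>/healthz                      # should return ok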

4.3 create an Ingress to route access to a micro-service Service

Using the following Ingress configuration, create a URL for outside access and route it to the Service:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
spec:
  rules:
  - host: myingress.com
    http:
      paths:
      - path: /LoadBalanceTest
        backend:
          serviceName: loadbalancetest
          servicePort: 8080

On the machine that will initiate the request, add an entry to /etc/hosts that resolves the Ingress domain name to the node IP of the nginx-ingress-controller:

192.168.43.102 myingress.com

Here 192.168.43.102 is the IP of the Node machine running the nginx-ingress-controller.
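As an alternative to editing /etc/hosts, the Host header can be supplied directly on the request; a quick check:

curl -H 'Host: myingress.com' http://192.168.43.102/LoadBalanceTest
# repeat a few times; the returned IP/hostname should cycle through the three Pods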

Open the browser and initiate the visit. Refer to the following image:

As you can see, for repeated requests to the same URL, the IP and hostname returned differ from one visit to the next, cycling through three values. This is because the microservice Service behind this URL is served by three Pod replicas, which again demonstrates the Service's internal load balancing.

5 Summary of common kubectl commands

In the previous sections, we built the Kubernetes distributed cluster, ran the microservice Service, and verified the Service's built-in load balancing through an Ingress.

In the process, we also used several kubectl commands, all of which are very useful. Here I will make a summary:

5.1 The kubectl create command

The kubectl create command can create resources such as Pods, Services, and Ingresses. Its general format is:

kubectl create -f <resource yaml configuration file>

5.2 The kubectl delete command

If a Kubernetes resource is no longer needed, use the delete command. Its general format is:

kubectl delete <resource type> <resource name>

5.3 The kubectl replace command

To update the configuration of a resource, use the replace command. Its general format is:

kubectl replace -f <resource yaml configuration file>

5.4 The kubectl get command: retrieving resources

To get information about a resource, you can use get. The general format is as follows:

kubectl get <resource type> <resource name>

5.5 The kubectl describe command: resource details

kubectl describe <resource type> <resource name>

In addition, if a namespace is specified in a configuration file, you need to add the --namespace parameter when querying; otherwise kubectl queries and creates resources in the default namespace.
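Applied to the resources used in this article, the commands look like this (the file names are the ones from the earlier sections):

kubectl create -f loadbalancetest-rc.yaml
kubectl get rc
kubectl get pods
kubectl describe svc loadbalancetest
kubectl replace -f loadbalancetest-svc.yaml
kubectl delete svc loadbalancetest
kubectl get pods --namespace=kube-system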

Appendix: basic docker operations

1 Installing docker on CentOS7

For CentOS7, if you have a good network connection, you can install it with one command. The reference command is as follows:

yum -y install docker-io

After the installation is complete, switch to the root user, and then start docker. The reference command is as follows:

service docker start

2 A brief introduction to common docker commands

There are more than a dozen docker commands, but if they are properly classified, they will be very easy to remember.

2.1 Commands for pulling and pushing images

A freshly installed docker usually has no local images to run, so images must be obtained from the docker hub registry (or another public or self-built registry).

First, we can pull an image. Reference command:

docker pull hello-world

With a working network connection, the hello-world image is downloaded. In general, docker pull is followed by an image address in the format "server/library/image name". The command above gives only the image name hello-world; when the server and library are omitted, docker automatically looks in docker's official docker hub registry and fills in its own server and library names, as shown in the following figure:

If your image is not in the official docker hub registry, you need to provide the full path.
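For example (both image addresses below are the ones used elsewhere in this article):

docker pull docker.io/webin/loadbalancetest:1.0   # a full path on the official docker hub
docker pull 192.168.174.131:5000/php-base:1.0     # a private registry: the full path is required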

Similarly, since there is a pull command there is a corresponding push command, docker push; it is covered in section 3 on the docker hub registry rather than here.

2.2 Commands for local images

The common image-related commands in docker are docker images, docker rmi, and the docker pull from the previous section. To list local images, reference command:

docker images

The effect is shown in the following figure:

This lists the images currently stored locally by docker.

To delete an image, refer to the command:

docker rmi 1815c82652c0

Or

docker rmi hello-world:latest

Effect picture:

2.3 Commands for local containers

A container is an image in its running state.

The common container-related commands in docker are docker ps, docker run, docker rm, docker stop, docker kill, and so on. Reference command:

docker run hello-world

In addition, software or processes running inside a container can be exposed to the outside through port mapping; for example, -p 8080:80 maps the container's port 80 to the host's port 8080, so the outside world can reach the process inside the container through the exposed port. The following example maps the container's internal port 8080 to the external port 8081 when the container is run. Reference command:

docker run -p 8081:8080 docker.io/tomcat:V8

The effect is shown in the following figure:

You can see that after running the docker command, you can access Tomcat on this machine at 127.0.0.1:8081, and this Tomcat is actually served by a docker container. Inside the container (which includes a Linux image), Tomcat serves on port 8080, and the -p 8081:8080 parameter maps that internal port to the external one.

In addition, if the local docker images include an image such as centos or ubuntu, you can run it and enter the centos/ubuntu container with this command:

docker run -i -t centos /bin/bash

This drops you into the centos system inside the container.

To look at the containers currently known to docker, for example to list all of them, reference command:

docker ps -a

To delete a container, refer to the command:

docker rm 81346941533c

You can also use the following command to delete all containers:

docker rm $(docker ps -a -q)

Sometimes, when a container is running, or is even holding open TCP connections, docker rm cannot be used directly. In that case, issue docker stop or docker kill first, and then docker rm will succeed.

Refer to the command:

docker stop 81346941533c
docker kill 81346941533c

docker stop stops a container gracefully: it sends the container a signal telling it to prepare and shut down within a certain time, while docker kill kills the process immediately.

3 The docker hub registry

The previous chapters mentioned the docker hub registry in the context of pulling and pushing images. It stores docker images and records information such as version numbers and short descriptions, which makes later image migration, upgrades, and reuse easier.

Docker hub's official website address: https://hub.docker.com/

Besides docker's own official docker hub, many other companies run their own registries, some of them open to the public. Examples on the domestic market include:

Alauda (https://hub.alauda.cn/)

TenxCloud (https://hub.tenxcloud.com/)

Alibaba's Aliyun hub (https://dev.aliyun.com/search.html)

and so on.

In general, if we create an image ourselves and expect to use it often, we can upload it to a docker hub. The steps are:

(1) Commit the container, or build from a dockerfile, to produce a local image.

(2) Log in to the docker hub website or another registry (you must already have an account with that registry).

(3) Tag the local image to be uploaded; the key parts are the image name and version number.

(4) Use the docker push command to upload the image to the docker hub.

Refer to the command:

docker commit 21d2f661627f mytest/hello-world       # commit a container into an image snapshot
docker login                                        # log in to the official docker hub; for another registry, append its server address
docker tag 94000117f92e username/hello-world:v1.0   # tag the image to be uploaded
docker push username/hello-world:v1.0               # upload the image

4 Creating an image with a dockerfile

Besides pulling a remote image and committing a container into an image, a docker image can also be created from a dockerfile. This approach works well for automated operations. Here is a brief introduction to creating an image from a dockerfile.

First, create a file named "dockerfile" (no suffix needed) under a test path of your own. Reference command:

touch dockerfile

Write the following code to the file:

FROM jenkins
USER root
RUN apt-get update \
    && apt-get install -y sudo \
    && rm -rf /var/lib/apt/lists/*
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers

Enter the following command in the current directory:

docker build -t myjenkins .

At this point, docker builds the image step by step according to the dockerfile. The effect is shown below:

When you are done, you can see the image through the docker images command.
