Blog outline:
I. Resource creation
II. Solving the problem that clients cannot access services run by pods inside the K8s cluster
III. Building a private registry and customizing images
IV. Scaling the deployment up and down
V. Upgrading and rolling back services
I. Resource creation
This post mainly introduces how to create resources from the command line.
[root@master ~]# kubectl run test --image=nginx:latest --replicas=5
// create a Deployment-type controller named test from the nginx:latest image, with 5 replicas
[root@master ~]# kubectl get deployments
// view the Deployment controller
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
test   5/5     5            5           6m26s
// the Deployment is named test, exactly as specified
[root@master ~]# kubectl get replicasets
// view the ReplicaSet controller
NAME              DESIRED   CURRENT   READY   AGE
test-66cbf74d74   5         5         5       7m50s
// the ReplicaSet NAME is the Deployment NAME with an ID string appended
[root@master ~]# kubectl get pod -o wide
// view pod details
NAME                    READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
test-66cbf74d74-5tcqz   1/1     Running   0          9m33s   10.244.1.7   node01   <none>           <none>
test-66cbf74d74-6975b   1/1     Running   0          9m33s   10.244.2.3   node02   <none>           <none>
test-66cbf74d74-d7wcg   1/1     Running   0          9m33s   10.244.1.6   node01   <none>           <none>
test-66cbf74d74-d9lj6   1/1     Running   0          9m33s   10.244.1.5   node01   <none>           <none>
test-66cbf74d74-r4fmp   1/1     Running   0          9m33s   10.244.2.2   node02   <none>           <none>
// each pod NAME appends yet another ID to the ReplicaSet NAME above
You can also view the details of the controllers as follows:
[root@master ~]# kubectl describe deployments test
// view details of the Deployment named test
The returned information is shown in the figure:
[root@master ~]# kubectl describe replicasets test
// view details of the ReplicaSet controller
The returned result is shown in the figure:
From the pod-creation process above, we can see that when the command is executed, the Deployment controller creates and manages the required pods through a ReplicaSet controller.
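As a quick sanity check (not part of the original post), this ownership chain can be read straight from the objects' metadata; the pod and ReplicaSet names below are taken from the listing above and would differ in your cluster:

[root@master ~]# kubectl get pod test-66cbf74d74-5tcqz -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'
ReplicaSet/test-66cbf74d74
[root@master ~]# kubectl get replicasets test-66cbf74d74 -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'
Deployment/test
// each pod is owned by the ReplicaSet, and the ReplicaSet in turn is owned by the Deployment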
II. Solving the problem that clients cannot access services run by pods inside the K8s cluster
Once the k8s cluster has created the pods, the service they provide can be accessed from within the cluster. The pods look like this:
[root@master ~]# kubectl get pod -o wide
// view more information about the pods
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
test-66cbf74d74-5tcqz   1/1     Running   0          28m   10.244.1.7   node01   <none>           <none>
test-66cbf74d74-6975b   1/1     Running   0          28m   10.244.2.3   node02   <none>           <none>
test-66cbf74d74-d7wcg   1/1     Running   0          28m   10.244.1.6   node01   <none>           <none>
test-66cbf74d74-d9lj6   1/1     Running   0          28m   10.244.1.5   node01   <none>           <none>
test-66cbf74d74-r4fmp   1/1     Running   0          28m   10.244.2.2   node02   <none>           <none>
Test access within the cluster:
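The original post shows this test as a screenshot; a minimal version of the same check, run from any cluster node against one of the pod IPs listed above, would look roughly like this (the returned page is whatever the nginx image serves by default):

[root@master ~]# curl 10.244.1.7
// expected: the default nginx welcome page, confirming the pod is reachable from inside the cluster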
Access from inside the cluster works fine, but at this point nothing outside the cluster can reach the service, which is a real problem. Fortunately, k8s provides a clean solution; the implementation is as follows:
[root@master ~]# kubectl run web --image=nginx:latest --port=80 --replicas=2
// create a resource object named web and expose port 80 of the container
[root@master ~]# kubectl expose deployment web --name=service --port=80 --type=NodePort
// create a service (the name can be customized) that maps port 80 of the web Deployment to a port on the host
[root@master ~]# kubectl get svc service
// view the information of the created service
NAME      TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service   NodePort   10.110.139.176   <none>        80:31070/TCP   35s
// port 80 of the service has been mapped to port 31070 on the host
Client access test:
Note: the service provided by the pods can now be reached through port 31070 on any node in the K8s cluster.
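The screenshot of the client test is not reproduced here; from a machine outside the cluster, the check would look roughly like the following (192.168.1.1 is the master's address used elsewhere in this post, and any other node address works just as well):

[root@client ~]# curl 192.168.1.1:31070
// expected: the default nginx page served by the web pods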
III. Building a private registry and customizing images
For how to build a private registry, refer to the earlier post: Building a private Docker repository (registry and Harbor).
Either registry or Harbor can be used; for simplicity, a registry-based private repository is built here:
[root@master ~]# docker run -tid --name registry -p 5000:5000 --restart always registry:latest
[root@master ~]# vim /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --insecure-registry 192.168.1.1:5000
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker
// restart the docker service so the configuration takes effect; node01 and node02 need the same change
[root@node01 ~]# scp root@master:/usr/lib/systemd/system/docker.service /usr/lib/systemd/system/docker.service
[root@node01 ~]# systemctl daemon-reload
[root@node01 ~]# systemctl restart docker
// node01
[root@node02 ~]# scp root@master:/usr/lib/systemd/system/docker.service /usr/lib/systemd/system/docker.service
[root@node02 ~]# systemctl daemon-reload
[root@node02 ~]# systemctl restart docker
// node02
############### the private registry setup is complete ###############
[root@master ~]# mkdir v{1,2,3}
[root@master ~]# cd v1
[root@master v1]# echo -e "FROM nginx:latest\nADD index.html /usr/share/nginx/html/" > Dockerfile
[root@master v1]# echo -e "hello lvzhenjiang:v1" > index.html
[root@master v1]# docker build -t 192.168.1.1:5000/nginx:version1 .
[root@master v1]# cp Dockerfile ../v2/
[root@master v1]# cp Dockerfile ../v3/
[root@master v1]# echo -e "hello lvzhenjiang:v2" > ../v2/index.html
[root@master v1]# echo -e "hello lvzhenjiang:v3" > ../v3/index.html
[root@master v1]# cd ../v2
[root@master v2]# docker build -t 192.168.1.1:5000/nginx:version2 .
[root@master v2]# cd ../v3
[root@master v3]# docker build -t 192.168.1.1:5000/nginx:version3 .
// build three image versions that differ only in their home page
[root@master v3]# docker push 192.168.1.1:5000/nginx:version1
[root@master v3]# docker push 192.168.1.1:5000/nginx:version2
[root@master v3]# docker push 192.168.1.1:5000/nginx:version3
// push the images to the private repository
############### create a pod for testing ###############
[root@master v3]# kubectl run nginx --image=192.168.1.1:5000/nginx:version1 --port=80 --replicas=4
// create pods from the custom image (192.168.1.1:5000/nginx:version1), with 4 replicas and port 80 exposed
[root@master v3]# kubectl get pod -o wide | grep nginx | awk '{print $6}'
10.244.2.11
10.244.2.10
10.244.1.16
10.244.1.15
// IP addresses of the four replicas
[root@master v3]# curl 10.244.2.11
hello lvzhenjiang:v1
[root@master v3]# curl 10.244.2.10
hello lvzhenjiang:v1
// accessing the IP address of any replica returns the same page

IV. Scaling the deployment up and down
The first method:
[root@master v3]# kubectl scale deployment nginx --replicas=8
// scale out from the command line (scaling in works the same way)
[root@master v3]# kubectl get pod -o wide | grep nginx | wc -l
8
[root@master v3]# kubectl get deployments nginx -o yaml
[root@master v3]# kubectl get deployments nginx -o json
// the nginx resource can also be output in yaml or json format (the replica count is visible there as well)
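If only the replica count itself is of interest, a jsonpath query avoids paging through the full yaml/json output (a small convenience not used in the original post):

[root@master v3]# kubectl get deployments nginx -o jsonpath='{.spec.replicas}{"\n"}'
8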
The second method:
[root@master v3]# kubectl edit deployments nginx
// edit the resource named nginx
 19 spec:                              // find the spec field
 20   progressDeadlineSeconds: 600
 21   replicas: 6                      // change the number of replicas
// the change takes effect as soon as the file is saved and closed
[root@master v3]# kubectl get pod -o wide | grep nginx | wc -l
6
// check the number of replicas

V. Upgrading and rolling back services
Service upgrade operation:
[root@master v3]# kubectl set image deployment nginx nginx=192.168.1.1:5000/nginx:version2
// upgrade the image of the nginx resource to 192.168.1.1:5000/nginx:version2
[root@master v3]# kubectl get pod -o wide | grep nginx | awk '{print $6}'
10.244.1.19
10.244.1.21
10.244.1.20
10.244.2.14
10.244.2.15
10.244.2.16
[root@master v3]# curl 10.244.1.19
hello lvzhenjiang:v2
// the upgrade can be verified by accessing a pod
[root@master v3]# kubectl get deployments nginx -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                            SELECTOR
nginx   6/6     6            6           18m   nginx        192.168.1.1:5000/nginx:version2   run=nginx
// it can also be confirmed from the resource information
[root@master v3]# kubectl describe deployments nginx
// or from the detailed description of the nginx resource
The query results are as follows:
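The describe output appears as a screenshot in the original; the part that matters for the upgrade can be pulled out with a simple filter (a convenience added here, not from the original post):

[root@master v3]# kubectl describe deployments nginx | grep Image
// the Image line should now show 192.168.1.1:5000/nginx:version2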
[root@master v3]# kubectl set image deployment nginx nginx=192.168.1.1:5000/nginx:version3
// upgrade again
[root@master v3]# kubectl get deployments nginx -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                            SELECTOR
nginx   6/6     6            6           20m   nginx        192.168.1.1:5000/nginx:version3   run=nginx
// the output shows that the upgrade succeeded
Service rollback operation:
[root@master v3]# kubectl rollout undo deployment nginx
// roll back the nginx resource
[root@master v3]# kubectl get deployments nginx -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                            SELECTOR
nginx   6/6     6            6           22m   nginx        192.168.1.1:5000/nginx:version2   run=nginx
// the query shows it has been rolled back to the previous version
[root@master v3]# kubectl rollout undo deployment nginx
[root@master v3]# kubectl get deployments nginx -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                            SELECTOR
nginx   5/6     5            5           53m   nginx        192.168.1.1:5000/nginx:version3   run=nginx
// running the rollback a second time brings it back to version 3
From this we can see that version upgrades and rollbacks in a K8s cluster work much the same way as in Docker Swarm.
However, a rollback in k8s can also target a specific version; that will be described in a later blog post.
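As a brief preview (these are standard kubectl commands, though the full walkthrough is deferred to that later post): kubectl rollout undo without extra flags simply switches back to the immediately previous revision, which is why the two consecutive rollbacks above bounce between version2 and version3. Targeting a specific revision looks roughly like this:

[root@master v3]# kubectl rollout history deployment nginx
// list the recorded revisions of the nginx Deployment
[root@master v3]# kubectl rollout undo deployment nginx --to-revision=1
// roll back to a specific revision number taken from the history (1 is just an example)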