2025-04-06 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
I have been studying K8S for a while now. It is a very useful tool, but it also has many pitfalls, and quite a few areas that I think need improvement. The static Pod is one of K8S's sharper weapons, but you must understand it thoroughly before operating one.
Today I will walk you through a small project I studied from beginning to end. It assumes a certain foundation; treat it as a starting point, and once you have learned it, experiment with it on your own.
K8S's biggest value lies in cluster-scale business workloads and R&D-oriented CI/CD; this example leans toward the CI/CD side.
Let's get to the point.
Environment:
1. First of all, you must have at least a single-node K8S platform already set up.
2. You are familiar with writing Dockerfiles for deployment.
3. You know the basic K8S command operations.
4. You have a clear understanding of how K8S works; otherwise troubleshooting will be a major obstacle.
Project description:
Before K8S, all of my scheduled tasks, such as scheduled backups, scheduled restarts, and scheduled log collection with email reports, ran from the crontab of a single CentOS virtual machine, which executed various shell and Python scripts.
But a virtual machine can be lost to a power outage or hardware failure, so containerizing it and migrating it to K8S makes the setup much more robust.
As long as the image and the K8S orchestration file exist, everything can be rebuilt, it runs healthily, and it is very reliable.
Most importantly, I want to keep all the management scripts on external NAS storage and mount them into the K8S container, achieving both data persistence and centralized code management.
Also, I should never need to log in to the container: I just modify the crontab file externally and delete the current Pod; K8S restarts the container, which loads the latest crontab configuration. This cleverly exploits K8S's own behavior to simplify management.
No more preamble, just the practical part. First, everything we need must be packaged into a CentOS image:
1. I need to run Python scripts inside the container, and I need version 3 or above.
2. I need shell scripts to run remote SSH commands, so sshpass is installed (you can of course choose another approach).
3. I need the expect tool to automate interactive scripts, so expect is installed.
4. The container image does not ship with crontab by default, and how could that be missing if we want to schedule tasks?
5. Most importantly, the supervisor process-daemon management tool is installed to keep K8S from restarting the container in a loop (I will not explain the principle here).
Some of these packages may be unnecessary; optimize them yourself. The experiment's goal can be reached without optimizing the image size.
Here is the Dockerfile source:
```dockerfile
FROM centos:centos7.6.1810
MAINTAINER gavin.guo
ENV TZ "Asia/Shanghai"
ENV TERM xterm
ENV LANG en_US.utf8
ADD aliyun-mirror.repo /etc/yum.repos.d/CentOS-Base.repo
ADD aliyun-epel.repo /etc/yum.repos.d/epel.repo
RUN yum install -y zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel db4-devel libpcap-devel xz-devel && \
    yum install -y curl wget tar bzip2 unzip vim-enhanced passwd yum-utils hostname net-tools rsync man && \
    yum install -y gcc gcc-c++ git make automake cmake patch logrotate python-devel libpng-devel libjpeg-devel && \
    yum clean all
ADD Python-3.6.2.tgz /root
RUN cd /root/Python-3.6.2/ && \
    mkdir /usr/local/python3 && \
    ./configure --prefix=/usr/local/python3 && \
    make && make install && \
    ln -s /usr/local/python3/bin/python3 /usr/bin/python3 && \
    ln -s /usr/local/python3/bin/pip3 /usr/bin/pip3 && \
    cd /root && rm -rf Python-3.6.2 && \
    yum -y install net-snmp-utils crontabs sshpass expect && \
    sed -i 's/required pam_loginuid.so/sufficient pam_loginuid.so/' /etc/pam.d/crond && \
    pip3 install supervisor && \
    mkdir -p /etc/supervisor.conf.d && \
    mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisord.conf
COPY supervisor_crontab.conf /etc/supervisor.conf.d/crontab.conf
EXPOSE 22
COPY start.sh /root/start.sh
RUN chmod +x /root/start.sh
ENTRYPOINT ["/root/start.sh"]
```
In the same directory as the Dockerfile, place the Python 3.6 source tarball (to pin the version and speed up the build) and the startup script start.sh:
```shell
#!/bin/bash
cp /root/script/root -f /var/spool/cron/
/usr/local/python3/bin/supervisord -n -c /etc/supervisord.conf
```
Not a single line of this small script can be dropped. Without the first line (the shebang), K8S will report an error when starting the Pod because the shell interpreter path cannot be found.
The second line loads the latest crontab file every time a new Pod starts, so you never have to manipulate the pod container itself.
The third line starts supervisord in the foreground, ensuring the Pod runs healthily and never exits.
Here is the supervisord program configuration that starts crontab:
```ini
[program:crond]
directory=/
command=/usr/sbin/crond -n
user=root
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
```
Build the image with:
```shell
docker build -t gavin/mycron:v1 .
```
After the image is built, start writing the K8S orchestration file. Create a mycron.yaml file:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mycron
spec:
  containers:
  - name: mycron
    image: gavin/mycron:v1
    ports:
    - containerPort: 22
    volumeMounts:
    - name: script
      mountPath: /root/script
  volumes:
  - name: script
    hostPath:
      path: /root/K8S_PV/script
      type: DirectoryOrCreate
```
The manifest is very simple; the part to focus on is the volume mount. Although hostPath is used here, /root/K8S_PV/script is not an ordinary local path: it is where I mount a shared folder from the NAS onto the host.
Note:
The trick is to mount the configured NAS share onto the K8S host with an ordinary mount command, and then expose it to the Pod on the master node via hostPath. This is not mentioned in the official tutorials or the authoritative guides, but it is feasible both in theory and in practice.
Now for the operation itself.
Mount the NAS shared folder:
```shell
# //IP/K8S_PV is the share exported by the NAS server
mount -o username=XXX,password=XXX //IP/K8S_PV /root/K8S_PV
```
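One caveat worth noting: a mount made this way does not survive a host reboot. A common way to make it persistent is an /etc/fstab entry. This is only a sketch, assuming the NAS share is CIFS/SMB; the IP, share name, and credentials file below are placeholders, not values from the original setup:

```
# /etc/fstab — hypothetical persistent mount for the NAS share
//IP/K8S_PV  /root/K8S_PV  cifs  credentials=/root/.nas-credentials,_netdev  0  0
```

Keeping the username and password in a root-only credentials file avoids exposing them in the world-readable fstab.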
K8S does allow an NFS-backed PV to be mounted, but when it comes to a NAS, there is no official (or even unofficial) documentation on how to bring it in as a PV. This is one of the areas I said needs improvement: users should not be forced to set up a separate NFS server.
Besides, for small storage needs the whole PV/PVC machinery is cumbersome, and there is no built-in way to mount local storage directly; a plain mount plus hostPath is simple and effective.
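For comparison, if you do have an NFS server available, the "official" PV/PVC route alluded to above looks roughly like the following. This is a sketch only; the server address, export path, and capacity are placeholder values, not part of the original setup:

```yaml
# Hypothetical NFS-backed PV and matching PVC
apiVersion: v1
kind: PersistentVolume
metadata:
  name: script-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.100
    path: /export/script
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: script-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```

The Pod would then reference `persistentVolumeClaim: claimName: script-pvc` in its volumes section instead of hostPath. For a single-node lab like this one, hostPath is clearly the lighter option.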
After mounting, create a new directory named script under K8S_PV and drop all the scripts into it.
Copy the mycron.yaml file into /etc/kubernetes/manifests/, and K8S automatically starts the static Pod.
Now let's go into the container and see whether everything is in order (note that a static Pod's name gets the node name appended, hence mycron-master):
```shell
kubectl exec -it mycron-master /bin/bash
crontab -e
```
The scheduled-task configuration has been synchronized successfully.
And the script path is successfully mounted into the Pod.
From now on, whenever I need to modify a scheduled task or a script, I just edit the file on the NAS server and delete the Pod. It is fast and convenient, and it delivers high availability, high reliability, and centralized management.
◆★◆★◆ debug ◆★◆★◆
Later, I modified the crontab configuration file as root and deleted the Pod in K8S, expecting everything to work as before. But a problem appeared: the crontab was not updated inside the container, no matter how many times I tried.
Analysis:
A static Pod is a special kind of Pod, managed directly by the kubelet without intervention from the API server's controllers. There are three ways to delete a static Pod instance in K8S (our goal being to get a freshly updated instance):
1. Remove (or move away) the YAML manifest in /etc/kubernetes/manifests/. K8S destroys the static Pod it created; copy the YAML file back, and it rebuilds itself.
2. Delete the Pod from the K8S dashboard web page.
3. Delete it with the kubectl command.
The first method seemed too cumbersome, so I used the second and third, and found they had no effect. Only the first method got the update into the container. Why?
OK, let's use docker commands to inspect what happens to the container after each kind of deletion, to find out where the problem lies.
With the second and third methods, nothing changes for the container: its creation time and name stay exactly the same. The "new" Pod is the same old container.
With the first method, something different happens: although the container's registered name is unchanged, its creation time has changed. It is no longer what it used to be; it really is a new container.
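The check itself is a one-liner. The name filter below assumes Docker's usual `k8s_<container>_<pod>_...` naming convention for Kubernetes-managed containers; adjust it if your setup names things differently:

```shell
# List the container K8S created for the pod, with its creation time
docker ps --filter "name=k8s_mycron" --format "{{.Names}}\t{{.CreatedAt}}"

# Or inspect a specific container by name/ID
docker inspect -f '{{.Name}} created {{.Created}}' <container-id>
```

Run this before and after each deletion method: if the `Created` timestamp is unchanged, you are still looking at the same container.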
◆★◆★◆ reflection ◆★◆★◆
Why must the YAML manifest itself be removed before the container is truly destroyed? Personally, I think this comes down to K8S's design. On top of the container, K8S abstracts a sandbox concept: the Pod. The Pod is the smallest unit we can operate on in the K8S system, but in reality the container instance is the smallest unit of granularity.

Deleting a Pod therefore does not mean the container is deleted. Because of K8S's various inspection, recovery, and destruction mechanisms, as long as the manifest still exists and K8S detects that the container is alive and healthy, it will not destroy it and hand you a new one. It simply wraps the existing container in a new Pod "box"; the container is not even stopped and restarted. That is why deleting a Pod and starting a new one is so fast: K8S only replaces the Pod, not the container. Don't feel cheated; K8S simply doesn't consider a rebuild necessary, and that is exactly where it thinks it is being smart.
Knowing this, we know how to deal with it. There are several ways to get the container rebuilt:
1. The container crashes or becomes unhealthy, so K8S destroys and rebuilds it (this happens naturally and is not controllable).
2. Remove or change the YAML manifest; the container is then fully destroyed and rebuilt from the new definition.
3. Force-delete the container with docker rm -f <container-name>, and you get a new container.
4. If you can dig into the K8S source code, it may be possible to trick the health-check mechanism into destroying and rebuilding the container, but that is beyond most of us.
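Method 2 is easy to script. This is a sketch under two assumptions: the default static-Pod manifest directory is in use, and the 20-second pause is a guess at how long the kubelet needs to notice the removal and tear the Pod down, not a documented constant:

```shell
#!/bin/bash
# Force a static Pod's container to be rebuilt by temporarily removing its manifest
MANIFEST=/etc/kubernetes/manifests/mycron.yaml
TMP=/tmp/mycron.yaml

mv "$MANIFEST" "$TMP"    # kubelet sees the manifest disappear and destroys pod + container
sleep 20                 # give the kubelet time to finish the teardown
mv "$TMP" "$MANIFEST"    # kubelet recreates the static Pod with a brand-new container
```

Verify with the docker inspection commands above that the creation timestamp has actually changed.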
This experiment, which I consider a classic, exercised the following:
1. Creating a static Pod.
2. Mounting NAS storage via hostPath, a clever way of referencing external storage.
3. Probing the characteristics of static Pods.
4. Encapsulating a Docker image, the key point being the entry script.
5. Understanding the relationship and behavior between Pods and containers.
Generally speaking, the last instruction of a Dockerfile like this would simply be:
```dockerfile
ENTRYPOINT ["/usr/local/python3/bin/supervisord", "-n", "-c", "/etc/supervisord.conf"]
```
That starts the supervisord process directly, but very often things are not so simple: a container may need a lot of complex initialization at startup, so it is better to put all of it into a start script and make sure supervisord is executed last.
In this example, I need to copy the configuration file from the mounted storage path into the right directory before starting the daemon; that is what lets a container restart pick up the updated configuration.
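As a general pattern, an entry script does its one-time initialization first, then hands control to the long-running process. A sketch of that pattern is below; note that the `set -e` and `exec` are my additions, not part of the original start.sh. `exec` replaces the shell with supervisord, so signals sent by K8S (such as SIGTERM on Pod deletion) reach the daemon directly:

```shell
#!/bin/bash
set -e

# One-time initialization: refresh the crontab from the mounted storage
cp -f /root/script/root /var/spool/cron/

# Hand over to the long-running process; exec replaces the shell,
# so supervisord becomes the container's main process
exec /usr/local/python3/bin/supervisord -n -c /etc/supervisord.conf
```

Whatever initialization you add, the rule stays the same: the foreground daemon must be the last thing the script runs, or the container exits.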
Have you got it? Give it a try: building your own private image and using a startup script to refresh the container's configuration may prove even more useful than you expect.