This article explains how to back up Kubernetes and Docker. The content is meant to be easy to follow and should help clear up common doubts as we work through it.
Your container infrastructure needs some kind of backup. Kubernetes and Docker will not rebuild themselves after a disaster. You do not need to back up the running state of every container, but you do need to back up the configuration used to run and manage those containers.
Here is what needs to be backed up.
Configuration and desired-state information:
The Dockerfiles used to build your Docker images, and every version of those files
The images built from those Dockerfiles and used to run each container
The Kubernetes etcd database (and any other K8s databases) that holds cluster state
The YAML files that describe each deployment
Persistent data created or changed by the containers:
Persistent volumes
Databases
Dockerfiles
Docker containers run from images, and those images are built from Dockerfiles. A proper Docker configuration starts by using some kind of repository (such as GitHub) as the version-control system for all Dockerfiles; do not build one-off containers from one-off images made from throwaway Dockerfiles. Storing every Dockerfile in a repository lets you pull an earlier version if something goes wrong with the current one.
You should also have some kind of repository for the YAML files associated with each K8s deployment; they are text files and benefit from version control in the same way.
These repositories then need to be backed up themselves. GitHub is one of the more popular choices, and it offers several ways to back up a repository: there are scripts that use its API to download a current copy of each repo, and third-party commercial tools can back up GitHub or whatever repository system you are using.
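As a rough illustration, the sketch below mirrors every repository in a GitHub organization with git clone --mirror and archives the result for a backup system to pick up. The organization name "myorg", the backup directory, and the use of jq are assumptions, and private repositories would also need an authentication token.
  # List the organization's repositories via the GitHub API and mirror each one.
  # "myorg" and /var/backups/git are placeholders; add an auth token for private repos.
  mkdir -p /var/backups/git && cd /var/backups/git
  curl -s "https://api.github.com/orgs/myorg/repos?per_page=100" \
    | jq -r '.[].clone_url' \
    | while read -r url; do
        git clone --mirror "$url"        # a bare mirror keeps all branches and tags
      done
  tar czf github-$(date +%F).tar.gz */   # archive the mirrors for the backup system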
If you have not followed this advice and are running containers from an image that no longer has a Dockerfile, you can use the docker image history command, or a tool such as dfimage, to generate a Dockerfile from the current image. Put those Dockerfiles into a repository and start backing them up. But try not to end up in that situation: always store and back up the Dockerfiles and YAML files used to create the environment.
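For example, you could recover an approximate Dockerfile from an existing image along these lines. The image name "myapp:latest" is a placeholder, and alpine/dfimage is one published packaging of the dfimage tool, so check the project's documentation before relying on it.
  # Show the commands that built each layer of the image
  docker image history --no-trunc myapp:latest

  # Reconstruct an approximate Dockerfile with dfimage (needs access to the Docker socket)
  docker run --rm -v /var/run/docker.sock:/var/run/docker.sock alpine/dfimage myapp:latest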
Docker images
The current images used to run your containers should also be stored in a registry (if you are running Docker images in Kubernetes, you are already doing this). You can use a private registry (such as Docker Registry) or a public one (such as Docker Hub), and cloud providers also offer private registries for storing images. The contents of that registry should then be backed up; a quick search such as "Docker Hub backup" turns up a surprising number of options.
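Independent of the registry, you can also dump the images themselves to portable archives that your regular backup system can sweep up. A minimal sketch, in which the image name and backup directory are assumptions:
  # Export an image (all layers and metadata) to a compressed tar archive
  docker pull registry.example.com/myapp:1.4.2
  docker save registry.example.com/myapp:1.4.2 | gzip > /var/backups/images/myapp-1.4.2.tar.gz

  # Restore it later on any Docker host
  gunzip -c /var/backups/images/myapp-1.4.2.tar.gz | docker load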
If you do not have a current image for a running container, you can create one with the docker commit command. Then use docker image history or a tool such as dfimage to generate a Dockerfile from that image.
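A minimal sketch of that recovery path, with hypothetical container and image names:
  # Capture the running container's filesystem as a new image
  docker commit web-1 myorg/web:recovered

  # Push it to a registry so it is stored (and backed up) outside this one host
  docker push myorg/web:recovered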
Kubernetes etcd
The Kubernetes etcd database is critical and should be backed up with the etcdctl snapshot save command. For example, etcdctl snapshot save snapshot.db creates the file snapshot.db in the current directory; that file should then be backed up to external storage.
If you use commercial backup software, you can easily run etcdctl snapshot save as a pre-backup script and then back up the directory containing snapshot.db. This is one way to integrate the etcd backup into a commercial backup environment.
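On a kubeadm-built control-plane node the snapshot command typically looks something like the sketch below; the endpoint, certificate paths, and backup directory are assumptions that vary by distribution.
  # Take an etcd snapshot (v3 API) and write it where the backup job will pick it up
  ETCDCTL_API=3 etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    snapshot save /var/backups/etcd/snapshot.db

  # Sanity-check the snapshot before trusting it
  ETCDCTL_API=3 etcdctl snapshot status /var/backups/etcd/snapshot.db --write-out=table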
Persistent volumes
Containers can access persistent storage, where data is stored or created, in a variety of ways. Traditional Docker volumes live in a subdirectory of Docker's own data area on the host. A bind mount is simply any directory on the Docker host that is mounted into the container (using the bind-mount option). The Docker community prefers traditional volumes for a number of reasons, but for backup purposes, traditional volumes and bind mounts are essentially the same. You can also mount network file system (NFS) directories, or objects from an object-storage system, into a container as volumes.
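To make the distinction concrete, here is roughly how each kind of persistent storage is attached; the volume name, host directory, NFS server, and image name are all placeholders.
  # Traditional (named) volume, managed by Docker under its own data directory
  docker volume create app_data
  docker run -d -v app_data:/var/lib/app myapp:latest

  # Bind mount: an arbitrary host directory mapped into the container
  docker run -d -v /srv/app-data:/var/lib/app myapp:latest

  # NFS-backed volume (one common pattern; options depend on your NFS server)
  docker volume create --driver local \
    --opt type=nfs --opt o=addr=nfs.example.com,rw --opt device=:/exports/app nfs_app_data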
The method used to back up persistent volumes depends on which of these options a container uses, but they all share the same problem: if the data is changing, you have to deal with that in order to get a consistent backup.
One approach is to shut down any containers that use the volume in question. As antiquated as that sounds, it is one of the challenges of the container world, because the typical approach of putting a backup agent inside each container is not a good option. Once the containers are stopped, the volume can be backed up. If it is a traditional Docker volume, you can do this by mounting it in another container that will not change the data during the backup, creating a tar archive of the volume in a bind-mounted directory, and then backing up that archive with whatever method your backup system uses, as in the sketch below.
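That tar-based approach might look something like this; the volume name "app_data" and the host backup path are assumptions, and the containers using the volume should already be stopped.
  # Mount the volume read-only in a throwaway container and tar it into a bind-mounted host directory
  docker run --rm \
    -v app_data:/data:ro \
    -v /var/backups/volumes:/backup \
    busybox tar czf /backup/app_data-$(date +%F).tar.gz -C /data .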
However, this approach is much harder to pull off in Kubernetes, which is one reason stateful information is better kept in a database than in a file system. It is a problem worth considering when designing your K8s infrastructure.
In addition, if you use a bind-mounted directory, an NFS-mounted file system, or an object-storage system as your persistent storage, you may have a better way to back up that storage system directly. That could be a snapshot followed by replication, or simply running commercial backup software against the system. These methods may give you a more consistent backup than a typical file-level backup of the same volume.
Databases
The next backup challenge is containers that use a database to store their data. Those databases need to be backed up in a way that preserves their integrity. Depending on the database, the approach above may work: shut down the containers that access the database, then back up the directory holding its files. However, the downtime this requires may not be acceptable.
Another way is to connect directly to the database engine itself and ask it to create a backup file, which can then be backed up. If the database runs inside a container, you first need to attach a volume via a bind mount so the backup can exist outside the container. Then run whatever dump command the database provides, such as mysqldump, to create the backup, and make sure your backup system picks up the resulting files.
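For MySQL, for instance, the dump can be captured roughly as follows; the container name, credentials variable, and output path are assumptions, and this variant redirects the dump straight to the host instead of using a bind mount, which accomplishes the same goal.
  # Ask the database engine for a logical backup and write it outside the container
  docker exec mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' \
    > /var/backups/mysql/all-databases-$(date +%F).sql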
What if you do not know which containers are using which kind of storage, or which databases? One solution is to use the docker ps command to list running containers, and then the docker inspect command to display each container's configuration. The output includes a Mounts section that tells you which volumes are mounted and where. Any bind mounts and volumes will also be specified in the YAML files you submit to Kubernetes.
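A quick way to survey this on a single Docker host (jq is optional here and only pretty-prints the JSON):
  # For every running container, print its name and the volumes/bind mounts it uses
  docker ps --format '{{.Names}}' | while read -r name; do
    echo "== $name"
    docker inspect --format '{{json .Mounts}}' "$name" | jq .
  done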
Commercial backup solutions
There are a variety of commercial backup solutions that can protect some or all of the above data. The following is a very short summary:
Commvault's virtual server agent can be used to back up containers and their images.
Cohesity provides data protection for K8s namespaces.
Heptio (now part of VMware) provides Velero, a backup tool designed for K8s.
Contino, Datacore, and Portworx provide storage designed specifically for K8s and containers, and also support backing it up.
Given the variety of K8s and Docker configurations out there, it is hard to cover everything, but I hope this gives you some food for thought, or helps you back up something that should have been backed up but was not.