2025-02-24 Update From: SLTechnology News & Howtos > Servers
Shulou(Shulou.com)06/01 Report--
This article describes how to run MongoDB as a microservice on Docker and Kubernetes. The content is quite detailed; I hope interested readers will find it helpful.
Want to try running MongoDB on your laptop? Would you like a lightweight, self-contained sandbox that starts with a single command, and whose every trace can be removed with one more command?
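For example, with Docker installed, a throwaway MongoDB instance can be started and torn down like this (a sketch; the container name throwaway-mongo is an arbitrary choice):

```shell
# Start a lightweight, self-contained MongoDB sandbox in one command
docker run --name throwaway-mongo -d -p 27017:27017 mongo

# ... experiment with it via mongo localhost:27017 ...

# Remove all traces with one more command (stops and deletes the container)
docker rm -f throwaway-mongo
```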
Need to run the same application stack in multiple environments? Create your own container image so that the development, test, operations, and support teams all start from an identical environment.
Containers are changing the entire software lifecycle, from initial technical experiments and proofs of concept through development, testing, deployment, and support.
For more background, read the Microservices: Containers and Orchestration white paper (https://www.mongodb.com/collateral/microservices-containers-and-orchestration-explained).
Orchestration tools manage how multiple containers are created, upgraded, and kept highly available. Orchestration also manages how containers are connected, building stable application services out of multiple microservice containers.
Rich features, simple tooling, and powerful APIs make containers and orchestration popular with DevOps teams, who integrate them into continuous integration (CI) and continuous delivery (CD) workflows.
This article explores the problems encountered when trying to run and orchestrate MongoDB in containers, and describes how to overcome them.
Considerations for MongoDB
Running MongoDB with containers and orchestration introduces some new considerations:
MongoDB database nodes are stateful. If a container fails and is rescheduled, losing its data is undesirable (the data could be recovered from other members of the replica set, but that takes time). To solve this problem, Kubernetes' volume abstraction can be used to map MongoDB's data directory to a persistent location, so the data survives container failure or rescheduling.
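As a sketch, the underlying persistent disk can be created ahead of time on Google Cloud (the disk name mongodb-disk1 follows the configuration described below; the 30 GB size is an assumption):

```shell
# Create the persistent disk that will back MongoDB's data directory
gcloud compute disks create --size 30GB mongodb-disk1
```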
Members of the same MongoDB replica set need to communicate with one another, even after rescheduling. Each member must know the addresses of all the others, but when a container is rescheduled its IP address changes. For example, all containers in a Kubernetes pod share the pod's IP address, which changes when the pod is rescheduled. In Kubernetes, this problem is solved by pairing each MongoDB node with a Kubernetes service, using Kubernetes' DNS to provide a stable hostname for the service even as pods are rescheduled.
Once the individual MongoDB nodes are started (each in its own container), the replica set must be initialized and each node added to it. This requires extra logic from the orchestration tooling: in particular, while only a single MongoDB node is up, the rs.initiate and rs.add commands must be executed.
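A sketch of those commands, run in the mongo shell against the first node (the member hostnames are assumptions, matching the service names used later in this article):

```javascript
// Initialize the replica set on the first member
rs.initiate();

// Add the remaining members by their stable Kubernetes service hostnames
rs.add("mongo-svc-b:27017");
rs.add("mongo-svc-c:27017");

// Confirm that all members are visible and healthy
rs.status();
```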
If the orchestration framework can automatically reschedule containers (as Kubernetes can), this improves MongoDB's fault tolerance: failed nodes are recreated automatically, restoring the replica set to full redundancy with no human intervention.
While the orchestration framework controls the state of the containers, it does not manage the applications or back up the data inside them. It is therefore important to adopt an effective management and backup solution, such as MongoDB Cloud Manager, which is included with both MongoDB Enterprise Advanced and MongoDB Professional. When building your container image, you can bake in your preferred versions of MongoDB and the MongoDB Automation Agent.
Implementing a MongoDB Replica Set with Docker and Kubernetes
As noted in the previous section, distributed databases such as MongoDB require extra care when deployed with an orchestration framework like Kubernetes. This section works through the details and describes how to implement such a deployment.
First, we create the entire MongoDB replica set in a single Kubernetes cluster (within one data center, so with no geographic redundancy). Creating it across multiple data centers requires only a few additional steps, described later.
Each replica set member runs in its own pod, exposing only its IP address and port. A fixed, externally visible address matters both for client applications and for the other replica set members, and it must stay the same no matter where a pod is redeployed.
The following figure shows the relationship between one of these pods and its associated replication controller and service.
Drilling into the resources described in this configuration:
The core node, mongo-node1, runs an image named mongo pulled from Docker Hub (https://hub.docker.com/_/mongo/) and exposes port 27017.
Kubernetes' volume feature maps the /data/db directory to the persistent volume mongo-persistent-storage1, which in turn maps to the disk mongodb-disk1 created on Google Cloud to persist MongoDB's data.
The container is managed by a pod labeled mongo-node and assigned the instance name rod.
A replication controller named mongo-rc1 ensures that one instance of mongo-node1 is running at all times.
A LoadBalancer service named mongo-svc-a exposes port 27017. The service matches the correct pod via the pod's label, and the exposed IP address and port are used both by client applications and for communication between replica set members. Although each container has an internal IP address, those addresses change when containers are restarted or moved, so they cannot be used for communication within the replica set.
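A sketch of these resources in Kubernetes YAML (the exact labels, selector, and filesystem type are assumptions; the resource names follow the article):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mongo-rc1
spec:
  replicas: 1                  # keep exactly one mongo-node1 pod running
  template:
    metadata:
      labels:
        name: mongo-node1
    spec:
      containers:
        - name: mongo-node1
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            # Map MongoDB's data directory onto the persistent volume
            - name: mongo-persistent-storage1
              mountPath: /data/db
      volumes:
        - name: mongo-persistent-storage1
          gcePersistentDisk:
            pdName: mongodb-disk1
            fsType: ext4
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-svc-a
spec:
  type: LoadBalancer           # stable external IP for clients and other members
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    name: mongo-node1          # route traffic to the matching pod
```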
The following figure shows the configuration for a second member of the replica set:
90% of the configuration is the same, with only a few differences:
The disk and volume names must be unique, so mongodb-disk2 and mongo-persistent-storage2 are used
The pod is assigned the instance name jane and the node is named mongo-node2, to distinguish this service and pod from those in figure 1
The replication controller is named mongo-rc2
The service is named mongo-svc-b and gets a different external IP address (in this case, Kubernetes assigned 104.1.4.5)
The third replica set member is configured along the same lines, and the following figure shows the complete replica set configuration:
Note that even with the configuration shown in figure 3, on a Kubernetes cluster of three or more nodes, Kubernetes may still schedule two or more MongoDB replica set members onto the same host. This is because Kubernetes treats the three pods as three independent services.
To increase redundancy, an additional headless service can be created. It offers no external capability (it does not even get an external IP address), but it informs Kubernetes that the three MongoDB pods belong to the same service, so Kubernetes will schedule them onto different nodes.
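A sketch of such a headless service (the service name and selector label are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo-db
spec:
  clusterIP: None        # headless: no virtual IP, no external address
  ports:
    - port: 27017
  selector:
    role: mongo-db       # groups the three MongoDB pods as one service
```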
The specific configuration files and the related commands can be found in the Microservices: Containers and Orchestration white paper referenced earlier. It includes the three special steps needed to combine the three MongoDB instances into the single functioning replica set described in this article.
A MongoDB Replica Set Spanning Multiple Availability Zones
There is risk in running every replica set member on the same GCE cluster, all within the same availability zone: if a major incident takes the zone offline, the entire MongoDB replica set becomes unavailable. If geographic redundancy is required, the three pods should run in different zones.
Only a few changes are needed to create such a replica set. Each cluster needs its own Kubernetes YAML file defining the pod, replication controller, and service; then, for each zone, you create a cluster, its persistent storage, and the MongoDB node.
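As a sketch, the per-zone steps might look like the following, repeated once per zone (the cluster name, zone, disk size, and YAML filename are all assumptions):

```shell
# Create a cluster and its persistent disk in one availability zone
gcloud container clusters create mongo-cluster-a --zone europe-west1-b
gcloud compute disks create --size 30GB --zone europe-west1-b mongodb-disk1

# Point kubectl at this cluster, then create the member's resources
gcloud container clusters get-credentials mongo-cluster-a --zone europe-west1-b
kubectl create -f mongo-node1.yaml
```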
The following figure shows the replica set running across different zones:
That concludes this overview of running MongoDB as a microservice on Docker and Kubernetes. I hope it has been helpful; if you found the article useful, feel free to share it with others.