This article explains how to configure MongoDB services when running them in containers. It is shared here as a practical reference.
MongoDB is a document database built on distributed file storage. It is written in C++ and designed to provide scalable, high-performance data storage for web applications.
Introduction
Want to try MongoDB on your laptop? Execute a single command and you have a lightweight, self-contained sandbox; another command removes all traces of it when you are done.
Want to use the same copy of your application stack in multiple environments? Build your own container image, and your development, test, operations, and support teams can all work on identical clones of the environment.
Containers are revolutionizing the entire software lifecycle: from the earliest technical experiments and proofs of concept through development, testing, deployment, and support.
Orchestration tools manage how multiple containers are created, upgraded, and kept highly available. Orchestration also controls how containers are connected in order to build sophisticated applications from multiple microservice containers.
Rich functionality, simple tooling, and powerful APIs make containers and orchestration a natural choice for DevOps teams, who integrate them into continuous integration (CI) and continuous delivery (CD) workflows.
This article explores the additional challenges encountered when running and orchestrating MongoDB in a container and shows how to overcome these challenges.
Considerations for MongoDB
There are some additional considerations when running MongoDB with containers and orchestration:

- MongoDB database nodes are stateful. If a container fails and is rescheduled, it is undesirable for its data to be lost (it could be recovered from the other members of the replica set, but that takes time). To solve this, a feature such as the data volume abstraction in Kubernetes can be used to map what would otherwise be an ephemeral MongoDB data directory in the container to a persistent location, so the data survives container failure and rescheduling (a minimal sketch follows this list).
- MongoDB database nodes within a replica set must be able to communicate with each other, including after rescheduling. All nodes in a replica set must know the addresses of all of their peers, but when a container is rescheduled it is likely to restart with a different IP address. For example, all containers within a Kubernetes pod share a single IP address, which changes when the pod is rescheduled. With Kubernetes, this is handled by associating a Kubernetes service with each MongoDB node, which uses the Kubernetes DNS service to provide a hostname that remains constant across rescheduling.
- Once each individual MongoDB node is running (each in its own container), the replica set must be initialized and each node added to it. This is likely to require some extra processing beyond what the orchestration tool provides. Specifically, one MongoDB node in the intended replica set must be used to execute the rs.initiate and rs.add commands.
- If the orchestration framework provides automated rescheduling of containers (as Kubernetes does), this increases MongoDB's resiliency, since a failed replica set member is recreated automatically, restoring full redundancy without human intervention.
- Note that while the orchestration framework may monitor the state of the containers, it is unlikely to monitor the applications running inside them or to back up their data. That means it is important to use a strong monitoring and backup solution such as MongoDB Cloud Manager, which is included with MongoDB Enterprise Advanced and MongoDB Professional. Consider creating your own image containing your preferred version of MongoDB and the MongoDB Automation Agent.
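As a concrete illustration of the first point, the minimal pod sketch below maps the container's /data/db directory onto a pre-created persistent disk. The names (mongo-sandbox, mongodb-disk0) and the use of a Google Cloud gcePersistentDisk are illustrative assumptions, not taken from the article.

```yaml
# Minimal sketch (assumed names): a pod whose /data/db lives on a
# pre-created persistent disk, so data survives container rescheduling.
apiVersion: v1
kind: Pod
metadata:
  name: mongo-sandbox           # illustrative name, not from the article
  labels:
    name: mongo-sandbox
spec:
  containers:
    - name: mongo-sandbox
      image: mongo              # public MongoDB image on Docker Hub
      ports:
        - containerPort: 27017  # default MongoDB port
      volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db   # where mongod keeps its data files
  volumes:
    - name: mongo-persistent-storage
      gcePersistentDisk:
        pdName: mongodb-disk0   # disk must already exist in Google Cloud
        fsType: ext4
```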
Using Docker and Kubernetes to implement a MongoDB replica set
As mentioned in the previous section, distributed databases such as MongoDB require a little extra care when deployed with orchestration frameworks such as Kubernetes. This section goes through how to do it in detail.
We first create the entire MongoDB replica set in a single Kubernetes cluster (normally within a single data center, which obviously does not provide geographic redundancy). In reality, little needs to change to run across multiple clusters, and those steps are described later.
Each member of the replica set runs as its own pod, with a service that exposes an external IP address and port. This "fixed" IP address is important because both external applications and the other replica set members can rely on it remaining constant if the pod is rescheduled.
The following figure illustrates one of these pods together with its associated replication controller and service.
Figure 1: A MongoDB replica set member configured as a Kubernetes pod and exposed as a service
Stepping through the resources described in this configuration:

- Starting at the core, there is a container named mongo-node1. mongo-node1 uses an image called mongo, the publicly available MongoDB container image hosted on Docker Hub. The container exposes port 27017 within the cluster.
- The Kubernetes data volumes feature is used to map the /data/db directory in the container to a persistent storage element named mongo-persistent-storage1, which in turn is mapped to a disk named mongodb-disk1 created in Google Cloud. This is where MongoDB stores its data so that it is preserved when the container is rescheduled.
- The container is held in a pod whose labels name it mongo-node1 and give it an (arbitrary) instance name of rod.
- A replication controller is configured to ensure that a single instance of the mongo-node1 pod is always running.
- A load-balancing service named mongo-svc-a exposes an IP address and port 27017 to the outside world, mapped to the same port on the container. The service uses a selector that matches the pod's labels to identify the correct pod. The external IP address and port are used for communication between applications and the replica set members. Each container also has a local IP address, but that address changes when the container is moved or restarted, so it is not used for the replica set.

A configuration sketch follows this list; after that, the next figure shows the configuration of the second member of the replica set.
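The sketch below is one way the Figure 1 resources might be written as Kubernetes manifests. The field values are inferred from the description above: the controller name mongo-rc1 (mirroring mongo-rc2 for the second member), the replica set name rs0 passed via --replSet, and the ext4 filesystem type are assumptions, not taken from the article or the white paper.

```yaml
# ReplicationController: keeps exactly one mongo-node1 pod running.
apiVersion: v1
kind: ReplicationController
metadata:
  name: mongo-rc1                  # assumed name, mirroring mongo-rc2 below
spec:
  replicas: 1
  selector:
    name: mongo-node1
  template:
    metadata:
      labels:
        name: mongo-node1
        instance: rod              # arbitrary instance label from the text
    spec:
      containers:
        - name: mongo-node1
          image: mongo             # public MongoDB image on Docker Hub
          command:
            - mongod
            - "--replSet"
            - rs0                  # assumed replica set name; needed so the
                                   # nodes can later be joined with rs.initiate
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage1
              mountPath: /data/db  # MongoDB data directory
      volumes:
        - name: mongo-persistent-storage1
          gcePersistentDisk:
            pdName: mongodb-disk1  # pre-created Google Cloud disk
            fsType: ext4
---
# Service: a stable, externally reachable address for this one member.
apiVersion: v1
kind: Service
metadata:
  name: mongo-svc-a
spec:
  type: LoadBalancer               # allocates an external IP address
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    instance: rod                  # matches only the "rod" pod
```

Applying a file like this with kubectl create -f would bring up one replica set member; the second and third members follow the same pattern with their own names, disks, and services.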
Figure 2: The second MongoDB replica set member configured as a Kubernetes pod
90% of the configuration is the same, with only these changes:

- The disk and volume names must be unique, so mongodb-disk2 and mongo-persistent-storage2 are used.
- The pod is assigned the labels instance: jane and name: mongo-node2 so that the new service can distinguish it (using a selector) from the rod pod shown in Figure 1.
- The replication controller is named mongo-rc2.
- The service is named mongo-svc-b and gets its own unique external IP address (in this case, Kubernetes assigned 104.1.4.5).

The configuration of the third replica set member follows the same pattern, and the following figure shows the complete replica set:
Figure 3: The full replica set configured as Kubernetes services
Note that even when the configuration shown in Figure 3 runs on a Kubernetes cluster with three or more nodes, Kubernetes may (and often will) schedule two or more MongoDB replica set members on the same host. This is because Kubernetes treats the three pods as belonging to three independent services.
To increase redundancy within the zone, an additional headless service can be created. This new service provides no capability to the outside world (and does not even have an IP address), but it informs Kubernetes that the three MongoDB pods form a single service, so Kubernetes attempts to schedule them on different nodes (a sketch follows below).
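A minimal sketch of such a headless service is shown below. It assumes the three pods share a common label, written here as role: mongo-node, which is a hypothetical label not mentioned in the article; the per-member manifests would need to carry it as well.

```yaml
# Headless service (assumed name and label): groups the three MongoDB pods
# so the Kubernetes scheduler tries to spread them across different nodes.
apiVersion: v1
kind: Service
metadata:
  name: mongo-nodes              # illustrative name, not from the article
spec:
  clusterIP: None                # "headless": no virtual IP is allocated
  ports:
    - port: 27017
  selector:
    role: mongo-node             # hypothetical label shared by all three pods
```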
Figure 4: Headless service to avoid co-locating MongoDB replica set members
The actual configuration files and the commands needed to configure and start the MongoDB replica set can be found in the white paper "Enabling Microservices: Containers and Orchestration Explained." In particular, some special steps, described in that paper, are required to combine the three MongoDB instances into a functioning, robust replica set.
Multiple availability zone MongoDB replica sets
The replica set created above is risky because everything runs in the same GCE cluster and therefore in the same availability zone. If a major incident takes that availability zone offline, the MongoDB replica set becomes unavailable. If geographic redundancy is required, the three pods should run in three different availability zones or regions.
Surprisingly little needs to change in order to create a similar replica set that is split across three zones (which requires three clusters). Each cluster needs its own Kubernetes YAML file that defines the pod, replication controller, and service for just one member of the replica set (a brief sketch follows). It is then straightforward to create a cluster, persistent storage, and a MongoDB node for each zone.
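For illustration, a per-zone file might contain only that zone's service, plus a replication controller following the same pattern as the earlier sketch with zone-specific disk and label names. The europe suffix below is a hypothetical naming convention, not from the article.

```yaml
# Service for the single replica set member running in this zone's cluster.
# The matching replication controller mirrors the earlier sketch, using a
# disk created in this zone (e.g. mongodb-disk-europe).
apiVersion: v1
kind: Service
metadata:
  name: mongo-svc-europe         # hypothetical zone-specific name
spec:
  type: LoadBalancer             # external IP lets members in the other
                                 # clusters (and applications) reach this node
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    name: mongo-node-europe      # matches only this zone's pod
```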
Figure 5: The replica set running across multiple availability zones
As a next step, to learn more about containers and orchestration, the technologies involved and the business benefits they provide, read the white paper "Enabling Microservices: Containers and Orchestration Explained." It provides complete instructions for getting the replica set described in this article up and running on Docker and Kubernetes in Google Container Engine.
Thank you for reading! This is the end of the article on "how to configure MongoDB services." I hope the content above has been helpful and that you have learned something from it; if you found the article useful, please share it so that more people can see it.