I. Overview
(1) MongoDB replication is the process of synchronizing data across multiple servers.
(2) Replication provides redundant backups of data: copies are stored on multiple servers, which improves data availability and keeps the data safe.
(3) Replication also allows you to recover from hardware failures and service interruptions.
Note: all databases hosted on MongoDB Atlas are configured as replica sets. Atlas makes it easy to add and remove replica set members in any region of your preferred cloud provider. Replica sets provide redundancy and high availability and are the basis for all production deployments.
1. Redundancy and data availability
Replication provides redundancy and improves data availability. By keeping multiple copies of the data on different database servers, replication provides a degree of fault tolerance against the loss of a single database server.
In some cases, replication can also increase read capacity, because clients can send read operations to different servers. Maintaining copies of the data in different data centers improves data locality and availability for distributed applications. You can also maintain additional copies for dedicated purposes such as disaster recovery, reporting, or backup.
2. The principle of MongoDB replication
(1) MongoDB replication requires at least two nodes. One of them is the primary (master) node, which handles client requests; the rest are secondary (slave) nodes, which replicate the data from the primary.
(2) Common topologies are: one primary with one secondary, or one primary with multiple secondaries.
(3) The primary records all of its operations in the oplog; the secondaries periodically poll the primary for these operations and then apply them to their own copy of the data, so that the secondaries stay consistent with the primary (a quick way to inspect the oplog is sketched below the diagram).
The MongoDB replication structure diagram is as follows:
In the structure diagram above, clients read data from the primary node; when a client writes data to the primary node, the primary synchronizes with the secondary nodes to keep the data consistent.
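Point (3) above can be observed directly: the oplog is an ordinary capped collection in the local database. A minimal sketch in the mongo shell, run on the primary (shown only for orientation):
> use local
> db.oplog.rs.find().sort({$natural: -1}).limit(1)    // the most recent operation that the secondaries will replicate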
Replica set features:
(1) Cluster with N nodes
(2) any node can be used as the primary node
(3) all write operations are on the primary node
(4) automatic failover
(5) automatic recovery
3. Replica set members
(1) Primary: the primary is the only member in the replica set that receives write operations. MongoDB applies write operations on the primary and then records them in the primary's oplog. Secondary members replicate this log and apply the operations to their own data sets.
(2) Secondary: a secondary replicates operations from the primary to maintain an identical data set. Secondaries may carry additional configuration for special purposes; for example, a secondary can be non-voting or have priority 0.
1) Priority 0 replica set members
A priority 0 member cannot become primary and cannot trigger an election. Otherwise it behaves like a normal secondary: it maintains a copy of the data set, accepts read operations, and votes in elections.
2) Hidden replica set members
Hidden members maintain a copy of the primary's data set but are not visible to client applications. Hidden members are suitable for workloads with usage patterns different from the other members in the replica set. A hidden member must always have priority 0 and therefore cannot become primary. The db.isMaster() method does not show hidden members. Hidden members may, however, vote in elections.
Note: in a sharded cluster, mongos does not interact with hidden members.
3) Delayed replica set members
A delayed member contains a copy of the replica set's data set, but its data set reflects an earlier, delayed state of the set. (A configuration sketch for priority 0, hidden, and delayed members appears at the end of this section.)
Note:
Requirements:
Must be a priority 0 member. Setting the priority to 0 prevents a delayed member from becoming primary.
Should be a hidden member, so that applications cannot see or query the delayed member.
Does vote in elections for primary if members[n].votes is set to 1.
Behavior:
Delayed members copy and apply operations from the source oplog with a delay. When choosing the amount of delay, consider that the delay:
Must be equal to or greater than your expected maintenance window duration.
Must be less than the capacity of the oplog.
(3) Arbiter: an arbiter does not keep a copy of the data set and cannot become primary. A replica set may include an arbiter to add a vote in elections for primary. An arbiter always has exactly 1 election vote, so a replica set can have an odd number of voting members without the overhead of an additional member that replicates data.
Additional notes:
The minimum recommended configuration for a replica set is a three-member replica set with three data-bearing members: one primary and two secondaries. You can also deploy a three-member replica set with only two data-bearing members (a primary, a secondary, and an arbiter), but a replica set with at least three data-bearing members provides better redundancy.
A replica set can contain up to 50 members, but only 7 of them can be voting members.
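As referenced above, priority 0, hidden, and delayed members are all configured through the replica set configuration document. A minimal sketch in the mongo shell, run on the primary; the member index 1 and the one-hour delay are illustrative values, and in MongoDB 4.0 the delay field is slaveDelay:
> cfg = rs.conf()                      // fetch the current replica set configuration
> cfg.members[1].priority = 0          // priority 0: this member can never become primary
> cfg.members[1].hidden = true         // hidden: invisible to client applications
> cfg.members[1].slaveDelay = 3600     // delayed: lag one hour behind the primary (in seconds)
> rs.reconfig(cfg)                     // apply the new configuration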
II. MongoDB cluster (replica set mode)
1. Database environment
Hostname        IP address         Database version   Port    Role                      System
SQL_jiangjj     192.168.56.147     MongoDB 4.0.3      27017   Primary (master)          CentOS 7.4
Node01          192.168.56.242     MongoDB 4.0.3      27017   Secondary (standby)       CentOS 7.4
Node02          192.168.56.245     MongoDB 4.0.3      27017   Arbiter (arbitration)     CentOS 7.5
2. Temporarily disable the firewall and SELinux, and re-enable the security rules after the test.
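A minimal sketch of these commands, assuming CentOS 7 with firewalld and run on every node (this only relaxes enforcement until it is re-enabled or the system reboots):
# systemctl stop firewalld     # temporarily stop the firewall service
# setenforce 0                 # put SELinux into permissive mode for the duration of the test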
3. Download the mongodb package
Official address: https://www.mongodb.com/download-center/v2/community
4. Upload and extract the package, and create the directories
(1) Extraction
# tar -zxvf mongodb-linux-x86_64-rhel70-4.0.3.tgz
# mv mongodb-linux-x86_64-rhel70-4.0.3 mongodb
Similarly, extract and rename on node01 and node02 (abbreviated)
(2) create a directory
[root@SQL_jiangjj]# mkdir -p /home/mongodb/primary
[root@SQL_jiangjj]# mkdir -p /etc/mongodb/
[root@node01]# mkdir -p /home/mongodb/secondary/
[root@node01]# mkdir -p /etc/mongodb/
[root@node02]# mkdir -p /home/mongodb/arbiter/
[root@node02]# mkdir -p /etc/mongodb/
5. Create a new configuration file
Parameter details: https://my.oschina.net/pwd/blog/399374
(1) Primary node configuration file
# vim /etc/mongodb/primary.conf
# primary.conf
dbpath=/home/mongodb/primary
logpath=/home/mongodb/primary.log
pidfilepath=/home/mongodb/primary.pid
#keyFile=/home/mongodb/mongodb.key    // key file for authentication between nodes; content must be identical on every node, permission 600; only valid in replica set mode
directoryperdb=true                   // store each database in its own directory
logappend=true                        // append to the log file
replSet=google                        // name of the replica set
bind_ip=192.168.56.147
port=27017
#auth=true
oplogSize=100                         // size of the oplog in MB
fork=true                             // run as a daemon (start in the background)
noprealloc=true
#maxConns=4000
(2) Secondary (standby) node configuration file
[root@node01 ~]# cat /etc/mongodb/secondary.conf
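The contents of secondary.conf are not reproduced here; presumably it mirrors primary.conf with only the data/log paths and the bind address changed, while replSet stays the same. A sketch under that assumption:
dbpath=/home/mongodb/secondary
logpath=/home/mongodb/secondary.log
pidfilepath=/home/mongodb/secondary.pid
directoryperdb=true
logappend=true
replSet=google          // must match the replica set name used on the primary
bind_ip=192.168.56.242
port=27017
oplogSize=100
fork=true
noprealloc=true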
(3) Arbiter (arbitration) node configuration file
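The arbiter's configuration file is also not reproduced; it is started below as /etc/mongodb/arbiter.conf, so a likely sketch (the arbiter stores no user data, but mongod still requires a dbpath):
dbpath=/home/mongodb/arbiter
logpath=/home/mongodb/arbiter.log
pidfilepath=/home/mongodb/arbiter.pid
directoryperdb=true
logappend=true
replSet=google
bind_ip=192.168.56.245
port=27017
oplogSize=100
fork=true
noprealloc=true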
6. Start the service
# ./mongodb/bin/mongod -f /etc/mongodb/primary.conf
[root@node01]# ./mongodb/bin/mongod -f /etc/mongodb/secondary.conf
[root@node02]# ./mongodb/bin/mongod -f /etc/mongodb/arbiter.conf
7. Configure the nodes to form a cluster
The configuration can be initiated from any node; here the SQL_jiangjj node is used.
Log into the database
[root@SQL_jiangjj]# ./mongodb/bin/mongo 192.168.56.147:27017
> use admin
> cfg = {_id: "google", members: [{_id: 0, host: '192.168.56.147:27017', priority: 2}, {_id: 1, host: '192.168.56.242:27017', priority: 1}, {_id: 2, host: '192.168.56.245:27017', arbiterOnly: true}]}
# command that makes the configuration take effect
> rs.initiate(cfg)
Description:
(1) The variable name cfg is arbitrary, as long as it does not conflict with a mongodb keyword. _id is the replica set name, and the member with the highest priority value in members becomes the primary.
(2) arbiterOnly: true must be set on the arbiter member, otherwise the primary/standby roles will not take effect.
# check whether the configuration took effect: > rs.status()
The output should contain: "ok" : 1
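Beyond the "ok" field, it is worth checking that each member reports the expected state. A small sketch in the mongo shell:
> rs.status().members.forEach(function (m) { print(m.name + "  " + m.stateStr) })    // expect PRIMARY, SECONDARY and ARBITER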
8. Testing
(1) Create a new database
> use jiangjj
(2) Insert documents
> db.jiangjj.insert([{"name": "jiangjj"}, {"title": "title line"}, {"content": "replica set test"}])
Then view the data on both data-bearing nodes, as sketched below.
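A minimal sketch of checking the data on the secondary node (by default a secondary refuses reads until the connection is marked as allowed; rs.slaveOk() is the helper for that in the 4.0 shell):
[root@node01]# ./mongodb/bin/mongo 192.168.56.242:27017
> rs.slaveOk()           // allow reads on this secondary connection
> use jiangjj
> db.jiangjj.find()      // should return the documents inserted on the primary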
Note: the mongod service does not start automatically after the system is rebooted; configure it to start at boot if you need that.
III. Replica set user authentication settings
1. Create an authentication key file
The keyFile provides security authentication between the members of the cluster. Note that enabling keyFile authentication also enables auth authentication by default, so the users are created first (step 2 below) before auth is enabled in the configuration (step 3), to make sure login is still possible afterwards.
# touch .keyFile
# chmod 600 .keyFile
# openssl rand -base64 102 > .keyFile
102 is the number of random bytes generated (openssl writes them to the file base64-encoded).
Note: before creating the keyFile, stop the mongod service on all primary and secondary nodes of the replica set, and only then create the file; otherwise the service may fail to start.
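As noted in the primary's configuration file above, the key file content must be identical on every node, with permission 600. A sketch of distributing it with scp (the /home/data path matches the keyFile path used in step 3 below and is assumed to be where the file was created):
# scp /home/data/.keyFile root@192.168.56.242:/home/data/.keyFile
# scp /home/data/.keyFile root@192.168.56.245:/home/data/.keyFile
# chmod 600 /home/data/.keyFile      # repeat on each node so the permission stays 600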
2. Add two users on the primary (a sketch of the createUser commands follows below)
User01:PRIMARY > use admin
User01:PRIMARY > use jiangjj
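The createUser commands themselves are not shown here. A minimal sketch of what they might look like, with placeholder passwords and assumed roles (db.createUser is the standard shell helper):
User01:PRIMARY> db.createUser({user: "root", pwd: "RootPassword", roles: [{role: "root", db: "admin"}]})               // run after "use admin"
User01:PRIMARY> db.createUser({user: "jiangjj", pwd: "JiangjjPassword", roles: [{role: "readWrite", db: "jiangjj"}]})  // run after "use jiangjj"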
3. Update all node configuration files
keyFile=/home/data/.keyFile
auth=true
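For the new settings to take effect, every mongod in the replica set has to be restarted with its updated configuration file. A sketch for the primary, to be repeated on each node with its own file (--shutdown is the Linux clean-stop option of mongod):
# ./mongodb/bin/mongod -f /etc/mongodb/primary.conf --shutdown     # stop the running instance cleanly
# ./mongodb/bin/mongod -f /etc/mongodb/primary.conf                # start again with keyFile and auth enabled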
4. Start the replica set and test
Log in and authenticate: first as the root user to view the data, then as the jiangjj user, as sketched below.
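A minimal sketch of the authenticated logins, assuming the users created in step 2 (passwords are placeholders):
[root@SQL_jiangjj]# ./mongodb/bin/mongo 192.168.56.147:27017/admin -u root -p     # root user; the shell prompts for the password
> use jiangjj
> db.jiangjj.find()                          // view the test data as root
> db.auth("jiangjj", "JiangjjPassword")      // authenticate as the jiangjj user against the jiangjj database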
For details of permission configuration, please refer to the official documentation.