The principle of replication
Replication is based on the oplog (operations log), which is equivalent to the binary log in MySQL and records only the operations that change data. Replication is the process of synchronizing the primary node's oplog to the other, secondary nodes and applying those operations there.
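A quick way to look at the oplog on a running member is rs.printReplicationInfo(), which reports the configured oplog size and the time window it currently covers; a minimal sketch (run in the mongo shell on any data-bearing member, once the replica set built below is up):
abc:PRIMARY > rs.printReplicationInfo()   # prints the oplog size and the first/last event times it holds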
The principle of election
Nodes are divided into three types: standard nodes, passive nodes and arbiter nodes.
(1) Only a standard node can be elected as the primary node, and it has voting rights. A passive node holds a complete copy of the data and has voting rights, but it cannot become the primary. An arbiter node does not replicate data and cannot become the primary; it only has voting rights.
(2) The difference between a standard node and a passive node is the priority value: a node with a high priority is a standard node, and a node with a low priority (0 in this experiment) is a passive node.
(3) The election rule is that the node with the most votes wins. The priority can be set to a value from 0 to 1000, which is equivalent to adding 0 to 1000 votes. Election result: the node with the most votes wins; if the votes are tied, the node with the newest data wins.
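To illustrate how priority feeds into the election, the priority of an existing member can be changed at any time with rs.reconfig(); a minimal sketch (the member index 1 and the value 50 are only examples, run on the primary of an already-initialized set):
abc:PRIMARY > cfg = rs.conf()                 # fetch the current replica set configuration
abc:PRIMARY > cfg.members[1].priority = 50    # change one member's priority (0-1000)
abc:PRIMARY > rs.reconfig(cfg)                # apply the updated configuration; this may trigger a new election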
Experimental deployment
Set up and configure four MongoDB instances: mongodb, mongodb2, mongodb3 and mongodb4, all on the host 192.168.213.184.
Since the default instance (mongodb) already exists after MongoDB is installed, only instances 2, 3 and 4 need to be created.
1. Configure replication set
(1) create data file and log file storage path
[root@localhost ~]# mkdir -p /data/mongodb/mongodb{2,3,4}   # create the data directories for instances 2, 3 and 4
[root@localhost ~]# cd /data/mongodb/
[root@localhost mongodb]# ls
mongodb2  mongodb3  mongodb4
[root@localhost mongodb]# mkdir logs
[root@localhost mongodb]# ls
logs  mongodb2  mongodb3  mongodb4
[root@localhost mongodb]# cd logs/
[root@localhost logs]# touch mongodb{2,3,4}.log   # create the log files under /data/mongodb/logs/
[root@localhost logs]# chmod -R 777 /data/mongodb/logs/*.log   # modify the file permissions
[root@localhost logs]# ls -l
total 0
-rwxrwxrwx. 1 root root 0 Sep 13 18:43 mongodb2.log
-rwxrwxrwx. 1 root root 0 Sep 13 18:43 mongodb3.log
-rwxrwxrwx. 1 root root 0 Sep 13 18:43 mongodb4.log
(2) Edit the configuration file of instance 1 and enable the replication set name
[root@localhost logs]# vim /etc/mongod.conf
replication:
  replSetName: abc        # uncomment and specify the replication set name (the name itself is arbitrary)
Create the configuration files for the additional instances
[root@localhost logs]# cp -p /etc/mongod.conf /etc/mongod2.conf   # copy to create the configuration file for instance 2
[root@localhost logs]# vim /etc/mongod2.conf
systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/logs/mongodb2.log   # log file location
storage:
  dbPath: /data/mongodb/mongodb2          # data file location
  journal:
    enabled: true
net:
  port: 27018   # the default port is 27017, so instance 2 must use a different one; instances 3 and 4 follow on with 27019 and 27020
Configure the other instances in the same way as instance 2, paying attention to the port numbers (the edits can also be scripted, as shown below)
[root@localhost logs]# cp -p /etc/mongod2.conf /etc/mongod3.conf
[root@localhost logs]# cp -p /etc/mongod2.conf /etc/mongod4.conf
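Only the log path, data path and port differ for instances 3 and 4, so a minimal sketch with sed can make the edits instead of opening vim (the substitutions assume the paths and ports laid out above):
[root@localhost logs]# sed -i 's/mongodb2/mongodb3/g; s/27018/27019/' /etc/mongod3.conf   # instance 3: its own paths and port 27019
[root@localhost logs]# sed -i 's/mongodb2/mongodb4/g; s/27018/27020/' /etc/mongod4.conf   # instance 4: its own paths and port 27020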
(3) Start the services and check whether the ports are listening
[root@localhost logs]# mongod -f /etc/mongod.conf --shutdown   # stop the service of instance 1 first, then start it again
killing process with pid: 3997   # process ID
[root@localhost logs]# mongod -f /etc/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 4867
child process started successfully, parent exiting
[root@localhost logs]# mongod -f /etc/mongod2.conf   # start the service of instance 2
about to fork child process, waiting until server is ready for connections.
forked process: 4899
child process started successfully, parent exiting
[root@localhost logs]# mongod -f /etc/mongod3.conf   # start the service of instance 3
about to fork child process, waiting until server is ready for connections.
forked process: 4927
child process started successfully, parent exiting
[root@localhost logs]# mongod -f /etc/mongod4.conf   # start the service of instance 4
about to fork child process, waiting until server is ready for connections.
forked process: 4955
child process started successfully, parent exiting
Check whether the node ports are listening
[root@localhost logs]# netstat -ntap
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:27019           0.0.0.0:*               LISTEN      4927/mongod
tcp        0      0 0.0.0.0:27020           0.0.0.0:*               LISTEN      4955/mongod
tcp        0      0 0.0.0.0:27017           0.0.0.0:*               LISTEN      4867/mongod
tcp        0      0 0.0.0.0:27018           0.0.0.0:*               LISTEN      4899/mongod
(4) Check whether each instance can be reached with the mongo shell
[root@localhost logs]# mongo
MongoDB shell version v3.6.7
Connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.7
[root@localhost logs]# mongo --port 27018
MongoDB shell version v3.6.7
Connecting to: mongodb://127.0.0.1:27018/
MongoDB server version: 3.6.7
Server has startup warnings:
[root@localhost logs]# mongo --port 27019
MongoDB shell version v3.6.7
Connecting to: mongodb://127.0.0.1:27019/
MongoDB server version: 3.6.7
Server has startup warnings:
[root@localhost logs]# mongo --port 27020
MongoDB shell version v3.6.7
Connecting to: mongodb://127.0.0.1:27020/
MongoDB server version: 3.6.7
2. Configure the priority of the replication set
(1) Log in to the default instance with mongo and configure the 4-node MongoDB replication set with two standard nodes, one passive node and one arbiter node. The command is as follows:
The node types are determined by the priority: the members with priority 100 are the standard nodes (ports 27017 and 27018), the member with priority 0 is the passive node (port 27019), and the arbiter node is on port 27020.
> cfg={"_id":"abc","members":[{"_id":0,"host":"192.168.213.184:27017","priority":100},{"_id":1,"host":"192.168.213.184:27018","priority":100},{"_id":2,"host":"192.168.213.184:27019","priority":0},{"_id":3,"host":"192.168.213.184:27020","arbiterOnly":true}]}
{
        "_id" : "abc",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.213.184:27017",   # standard node, priority 100
                        "priority" : 100
                },
                {
                        "_id" : 1,
                        "host" : "192.168.213.184:27018",   # standard node
                        "priority" : 100
                },
                {
                        "_id" : 2,
                        "host" : "192.168.213.184:27019",   # passive node, priority 0
                        "priority" : 0
                },
                {
                        "_id" : 3,
                        "host" : "192.168.213.184:27020",   # arbiter node
                        "arbiterOnly" : true
                }
        ]
}
> rs.initiate(cfg)   # initialize the replication set
{
        "ok" : 1,
        "operationTime" : Timestamp(1536819122, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1536819122, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
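If needed, the configuration that was actually stored can be verified with rs.conf(), which echoes each member together with its priority (the prompt changes to the replication set name once initialization completes):
abc:PRIMARY > rs.conf()   # print the stored replica set configuration, including member priorities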
(2) Use the command rs.isMaster() to view the role of each node
abc:PRIMARY > rs.isMaster()
{
        "hosts" : [                        # standard nodes
                "192.168.213.184:27017",
                "192.168.213.184:27018"
        ],
        "passives" : [                     # passive node
                "192.168.213.184:27019"
        ],
        "arbiters" : [                     # arbiter node
                "192.168.213.184:27020"
        ],
        "setName" : "abc",
        "setVersion" : 1,
        "ismaster" : true,
        "secondary" : false,
        "primary" : "192.168.213.184:27017"
(3) At this point the default instance has been elected as the primary node, and data can be inserted into the database.
abc:PRIMARY > use school                                   # on the primary node
switched to db school
abc:PRIMARY > db.info.insert({"id": 1, "name": "lili"})    # insert two documents
WriteResult({ "nInserted" : 1 })
abc:PRIMARY > db.info.insert({"id": 2, "name": "tom"})
WriteResult({ "nInserted" : 1 })
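To confirm that the two documents are actually replicated, they can be read back from one of the secondaries; a minimal sketch (on MongoDB 3.6 a secondary only serves reads after rs.slaveOk() is issued in that session):
[root@localhost logs]# mongo --port 27019    # connect to the passive node
abc:SECONDARY > rs.slaveOk()                 # allow reads on this secondary
abc:SECONDARY > use school
abc:SECONDARY > db.info.find()               # the documents inserted on the primary should appear here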
(4) View the log of all operations, which is kept in the oplog.rs collection of the built-in local database.
abc:PRIMARY > show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
school  0.000GB
abc:PRIMARY > use local          # switch to the local database
switched to db local
abc:PRIMARY > show collections   # list the collections
me
oplog.rs
replset.election
replset.minvalid
startup_log
system.replset
system.rollback.id
abc:PRIMARY > db.oplog.rs.find()   # view the log of all operations
The two documents just inserted can be found in the log:
... "ns" : "school.info", "ui" : UUID("2b9c93b6-a58a-4021-a2b4-33f9b19925d8"), "wall" : ISODate("2018-09-13T06:24:44.537Z"), "o" : { "_id" : ObjectId("5b9a02aca106c3eab9c639e5"), "id" : 1, "name" : "lili" } }
{ "ts" : Timestamp(1536819899, 1), "t" : NumberLong(1), "h" : NumberLong("-1447313909384631008"), "v" : 2, "op" : "i", "ns" : "school.info", "ui" : UUID("2b9c93b6-a58a-4021-a2b4-33f9b19925d8"), "wall" : ISODate("2018-09-13T06:24:59.186Z"), "o" : { "_id" : ObjectId("5b9a02bba106c3eab9c639e6"), "id" : 2, "name" : "tom" } }
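Because the full output of db.oplog.rs.find() is long, the query can be narrowed to just the insert operations on school.info; a small sketch of such a filter:
abc:PRIMARY > db.oplog.rs.find({"op": "i", "ns": "school.info"})   # show only inserts into school.info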
3. Simulate a node failure: if the primary node fails, the other standard node is elected as the new primary node.
(1) First check the current status with the command rs.status():
abc:PRIMARY > rs.status()
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.213.184:27017",   # standard node, port 27017, currently the primary node
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 2481,
                        "optime" : {
                                "ts" : Timestamp(1536820334, 1),
                                "t" : NumberLong(1)
                        },
                {
                        "_id" : 1,
                        "name" : "192.168.213.184:27018",   # standard node, port 27018, currently a secondary
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 1213,
                        "optime" : {
                                "ts" : Timestamp(1536820324, 1),
                                "t" : NumberLong(1)
                        },
                {
                        "_id" : 2,
                        "name" : "192.168.213.184:27019",   # passive node, port 27019, a secondary
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 1213,
                        "optime" : {
                                "ts" : Timestamp(1536820324, 1),
                                "t" : NumberLong(1)
                        },
                {
                        "_id" : 3,
                        "name" : "192.168.213.184:27020",   # arbiter node
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",
                        "uptime" : 1213,
                        "lastHeartbeat" : ISODate("2018-09-13T06:32:14.152Z")
(2) Simulate a failure of the primary node: shut down the primary node's service, log in to the other standard node on port 27018, and check whether that standard node has been elected as the new primary.
abc:PRIMARY > exit
bye
[root@localhost logs]# mongod -f /etc/mongod.conf --shutdown   # shut down the primary node's service
killing process with pid: 4821
[root@localhost logs]# mongo --port 27018   # connect to the instance on port 27018
abc:PRIMARY > rs.status()   # view the status
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.213.184:27017",
                        "health" : 0,                       # health is 0, meaning the instance on port 27017 is down
                        "state" : 8,
                        "stateStr" : "(not reachable/healthy)",
                        "uptime" : 0,
                        "optime" : {
                                "ts" : Timestamp(0, 0),
                                "t" : NumberLong(-1)
                        },
                {
                        "_id" : 1,
                        "name" : "192.168.213.184:27018",   # the other standard node, port 27018, has been elected as the new primary
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 2812,
                        "optime" : {
                                "ts" : Timestamp(1536820668, 1),
                                "t" : NumberLong(2)
                        },
                {
                        "_id" : 2,
                        "name" : "192.168.213.184:27019",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 1552,
                        "optime" : {
                                "ts" : Timestamp(1536820668, 1),
                                "t" : NumberLong(2)
                        },
                {
                        "_id" : 3,
                        "name" : "192.168.213.184:27020",
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",
                        "uptime" : 1552,
A health value of 1 means the node is healthy and 0 means it is down; a state of 1 means the node is the primary and 2 means it is a secondary.
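Rather than reading the full rs.status() output, the role and health of every member can be summarized with a short loop in the mongo shell; a minimal sketch:
abc:PRIMARY > rs.status().members.forEach(function (m) { print(m.name + "  state=" + m.stateStr + "  health=" + m.health) })   # one line per member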
(3) Shut down all standard node services to see whether the passive node will be elected as the primary node.
abc:PRIMARY > exit
bye
[root@localhost logs]# mongod -f /etc/mongod2.conf --shutdown   # shut down the current primary node's service
killing process with pid: 4853
[root@localhost logs]# mongo --port 27019   # connect to the passive node
abc:SECONDARY > rs.status()
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.213.184:27017",   # both standard nodes are down, so their health value is 0
                        "health" : 0,
                        "state" : 8,
                        "stateStr" : "(not reachable/healthy)",
                        "uptime" : 0,
                        "optime" : {
                                "ts" : Timestamp(0, 0),
                                "t" : NumberLong(-1)
                        },
                {
                        "_id" : 1,
                        "name" : "192.168.213.184:27018",
                        "health" : 0,
                        "state" : 8,
                        "stateStr" : "(not reachable/healthy)",
                        "uptime" : 0,
                        "optime" : {
                                "ts" : Timestamp(0, 0),
                                "t" : NumberLong(-1)
                        },
                {
                        "_id" : 2,
                        "name" : "192.168.213.184:27019",   # the passive node has not been elected, showing that a passive node cannot become the primary
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 3102,
                        "optime" : {
                                "ts" : Timestamp(1536820928, 1),
                                "t" : NumberLong(2)
                        },
                {
                        "_id" : 3,
                        "name" : "192.168.213.184:27020",
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",
                        "uptime" : 1849,
(4) Start the standard node services again to see whether the primary role is restored.
abc:SECONDARY > exit
bye
[root@localhost logs]# mongod -f /etc/mongod.conf    # start the service of instance 1
about to fork child process, waiting until server is ready for connections.
forked process: 39839
child process started successfully, parent exiting
[root@localhost logs]# mongod -f /etc/mongod2.conf   # start the service of instance 2
about to fork child process, waiting until server is ready for connections.
forked process: 39929
child process started successfully, parent exiting
[root@localhost logs]# mongo
                {
                        "_id" : 0,
                        "name" : "192.168.213.184:27017",   # port 27017 has become the primary node again (this depends on the startup order of the standard nodes: the one started first is elected primary)
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 25,
                        "optime" : {
                                "ts" : Timestamp(1536821324, 1),
                                "t" : NumberLong(3)
                        },
                {
                        "_id" : 1,
                        "name" : "192.168.213.184:27018",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 14,
                        "optime" : {
                                "ts" : Timestamp(1536821324, 1),
                                "t" : NumberLong(3)
                        },
                {
                        "_id" : 2,
                        "name" : "192.168.213.184:27019",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 23,
                        "optime" : {
                {
                        "_id" : 3,
                        "name" : "192.168.213.184:27020",
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",
                        "uptime" : 23,
It can be seen that only a standard node can be elected as the primary node; a passive node cannot become the primary even though it has voting rights, and an arbiter node can never become the primary.
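As a quick way to verify this behavior from the shell (not shown in the output above), a write sent to any secondary is rejected; a minimal sketch against the passive node on port 27019:
[root@localhost logs]# mongo --port 27019
abc:SECONDARY > use school
abc:SECONDARY > db.info.insert({"id": 3, "name": "test"})   # expected to fail with a "not master" write error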