Preface
The principle of replication:
Replication is based on the oplog, which works much like the binlog in MySQL: it records only the operations that changed data.
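For reference, a minimal sketch of what an oplog record looks like (field names follow the standard oplog document layout; this query is not part of the experiment below):
// On any replica set member, inspect the most recent oplog entry
use local
db.oplog.rs.find().sort({ $natural: -1 }).limit(1).pretty()
// A typical entry carries fields such as:
//   ts : Timestamp(...)    when the operation happened
//   op : "i" / "u" / "d"   insert, update, or delete
//   ns : "school.info"     the namespace (database.collection) that changed
//   o  : { ... }           the document or modification that was applied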
The principle of the election:
Nodes are divided into standard nodes, passive nodes, and arbitration nodes.
Standard node (high priority value): only a standard node can become the primary.
Passive node (low priority value): a passive node can only be a secondary.
Arbitration node: does not replicate data and cannot become the primary; it only has the right to vote.
Election result: the node with the most votes wins; if the votes are tied, the node with the newer data wins.
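As a quick illustration of how these roles show up in practice (a sketch only, assuming a replica set configured as in the experiment below), the priority and arbiterOnly fields can be read back from the configuration:
// Print each member's host, priority, and arbiter flag
rs.conf().members.forEach(function (m) {
    print(m.host, "priority:", m.priority, "arbiter:", m.arbiterOnly || false)
})
// Standard nodes carry a high priority, passive nodes a priority of 0,
// and the arbiter is marked arbiterOnly: true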
I. Overview of the replica set election experiment
Experimental procedure
View the oplog log
Configure the priorities of the replica set
Simulate a failure of the primary node
Simulate failure of all standard nodes
II. Replica set election experiment
-------- View the oplog --------
> use school
switched to db school
> db.info.insert({"id": 1, "name": "tom"})
WriteResult({ "nInserted" : 1 })
> db.info.find()
{ "_id" : ObjectId("5b9a0873692de658bd931c64"), "id" : 1, "name" : "tom" }
> use local
switched to db local
> show collections
me
oplog.rs
...
> db.oplog.rs.find()
{ "ts" : Timestamp(1536723445, 3), … : { "create" : "transactions", "idIndex" : { "v" : 2, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "config.transactions" } } }
{ "ts" : Timestamp(1536723445, 5), … : { "create" : "system.keys", "idIndex" : { "v" : 2, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "admin.system.keys" } } }
-------- Configure replica set priorities --------
Cfg= {"_ id": "yandada", "members": [{"_ id": 0, "host": "192.168.218.149 id 27017", "priority": 100}, {"_ id": 1, "host": "192.168.218.149 id 27018", "priority": 100}, {"_ id": 2, "host": "192.168.218.149 members" 27019 "," priority ": 0}, {" _ id ": 3 "host": "192.168.218.149 true 27020", "arbiterOnly": true}]}
Rs.reconfig (cfg)
{"ok": 1} / / when OK:1 is displayed, the node is configured successfully
Rs.status () / / View status information
Rs.isMaster () / / View node information
{                                   // the output includes the following
    "hosts" : [
        "192.168.218.149:27017",
        "192.168.218.149:27018"
    ],
    "passives" : [
        "192.168.218.149:27019"
    ],
    "arbiters" : [
        "192.168.218.149:27020"
    ],
    ...
}
-------- Simulate a failure of the primary node --------
Shut down the primary node server:
yandada:PRIMARY> use admin              # switch to the admin database before the next step
switched to db admin
yandada:PRIMARY> db.shutdownServer()    # shut down the server
server should be down...
The above operation is equivalent to: [root@yandada3 ~]# mongod -f /etc/mongod.conf --shutdown
[root@yandada3 ~]# mongo --port 27018
yandada:PRIMARY> rs.status()
Looking at the status information, you will find that the replica set has elected the second standard node as the new primary.
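One way to confirm which member took over without reading the full output (a small sketch, not part of the original procedure):
// Print the current state of every member
rs.status().members.forEach(function (m) {
    print(m.name, m.stateStr)
})
// Here 192.168.218.149:27018 should now report PRIMARY, while the stopped
// 27017 instance is shown as unreachable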
-------- Simulate failure of all standard nodes --------
[root@yandada3 ~]# mongod -f /etc/mongod.conf --shutdown
killing process with pid: 4238
[root@yandada3 ~]# mongo --port 27019
yandada:SECONDARY> rs.status()
Looking at the status information now, you will find that there is no primary node: a passive node cannot become the primary.
III. Brief introduction to replica set management
1. Allow data to be read from secondary nodes
yandada:SECONDARY> rs.slaveOk()
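Once rs.slaveOk() has been issued, this shell session can read from the secondary; a minimal check using the document inserted earlier (a sketch):
use school
db.info.find()      // returns the { "id" : 1, "name" : "tom" } document replicated from the primary
// Without rs.slaveOk(), the same query on a secondary fails with
// "not master and slaveOk=false"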
2. View replica set status information
rs.help()
yandada:PRIMARY> rs.printReplicationInfo()
configured oplog size:   990MB                      # the oplog storage size is 990MB
log length start to end: 101403secs (28.17hrs)
oplog first event time:  Wed Sep 12 2018 11:37:13 GMT+0800 (CST)
oplog last event time:   Thu Sep 13 2018 15:47:16 GMT+0800 (CST)
now:                     Thu Sep 13 2018 15:47:17 GMT+0800 (CST)
yandada:PRIMARY> rs.printSlaveReplicationInfo()
source: 192.168.218.149:27018
    syncedTo: Thu Sep 13 2018 15:47:26 GMT+0800 (CST)
    0 secs (0 hrs) behind the primary
source: 192.168.218.149:27019
    syncedTo: Thu Sep 13 2018 15:47:26 GMT+0800 (CST)
    0 secs (0 hrs) behind the primary
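The lag figures above are derived from each member's last applied operation time; the same information can be computed directly from rs.status() (a sketch, not part of the original steps):
// Compute each secondary's replication lag, in seconds
var members = rs.status().members;
var primary = members.filter(function (m) { return m.stateStr === "PRIMARY"; })[0];
members.forEach(function (m) {
    if (m.stateStr === "SECONDARY") {
        print(m.name, "is", (primary.optimeDate - m.optimeDate) / 1000, "seconds behind the primary");
    }
});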
3. Change the oplog size
1. Step 1: shut down the instance to take it out of the replica set
yandada:PRIMARY> use admin
switched to db admin
yandada:PRIMARY> db.shutdownServer()
server should be down...
2. Step 2: change the port (the original port is still registered in the replica set), comment out the replica set name in the configuration file, and start mongod as a standalone instance
vim /etc/mongod.conf
net:
  port: 27028
#replication:
#  replSetName: yandada
mongod -f /etc/mongod.conf
3. Step 3: change the oplog size
[root@yandada3 ~]# mongo --port 27028
> use local
switched to db local
> db.oplog.rs.drop()
true
> db.runCommand({ create: "oplog.rs", capped: true, size: (2 * 1024 * 1024 * 1024) })    # size in bytes
{ "ok" : 1 }
[root@yandada3 ~]# mongod -f /etc/mongod.conf --shutdown
killing process with pid: 8296
[root@yandada3 ~]# vim /etc/mongod.conf
net:
  port: 27017
replication:
  replSetName: yandada
  oplogSizeMB: 16384
[root@yandada3 ~]# mongod -f /etc/mongod.conf
[root@yandada3 ~]# mongo
yandada:SECONDARY> rs.printReplicationInfo()
configured oplog size:   16384MB        # the oplog size is now 16GB
4. Deploying authentication
1. Step 1: create a user for authentication
yandada:PRIMARY> use admin
switched to db admin
yandada:PRIMARY> db.createUser({"user": "root", "pwd": "123", "roles": ["root"]})
Successfully added user: { "user" : "root", "roles" : [ "root" ] }
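The root role grants full access; for an application that only needs to read the school database, a more limited user could be created in the same way (a hypothetical example, not part of the original deployment; the user name and password are illustrative):
yandada:PRIMARY> db.createUser({
    "user": "reader",                                  // hypothetical application user
    "pwd": "readerpwd",                                // illustrative password
    "roles": [ { "role": "read", "db": "school" } ]    // read-only access to the school database
})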
2. Step 2: edit the authentication configuration
vim /etc/mongod.conf
security:
  keyFile: /usr/bin/kgcrskey1
  clusterAuthMode: keyFile        # note: must be indented to align with the keyFile line above
Modify the configuration files of the mongod2, mongod3, and mongod4 instances in the same way, then create the same key file (kgcrskey1 to kgcrskey4) for each instance:
[root@yandada3 bin]# echo "kgcrskey" > /usr/bin/kgcrskey1
[root@yandada3 bin]# echo "kgcrskey" > /usr/bin/kgcrskey2
[root@yandada3 bin]# echo "kgcrskey" > /usr/bin/kgcrskey3
[root@yandada3 bin]# echo "kgcrskey" > /usr/bin/kgcrskey4
[root@yandada3 bin]# chmod 600 /usr/bin/kgcrskey{1,2,3,4}
3. Step 3: restart the service
[root@yandada3 bin]# mongod -f /etc/mongod.conf --shutdown
[root@yandada3 bin]# mongod -f /etc/mongod.conf
Restart mongod2, mongod3, and mongod4 in the same way.
4. Step 4: check the configuration status
yandada:PRIMARY> show dbs
"ok" : 0            # not authorized to view the databases
yandada:PRIMARY> use admin
switched to db admin
yandada:PRIMARY> db.auth("root", "123")
1                   # a return value of 1 indicates that authentication succeeded
yandada:PRIMARY> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
school  0.000GB