When the data volume or request load is large, building an index directly on a live system has a significant impact on performance. In that case you can take advantage of a replica set (with large data volumes, which generally means a production environment, a replica set or sharding is practically mandatory): because a replica set keeps serving while some of its members are down, the index can be built on the members one at a time.
Note: the collection being indexed uses the WiredTiger (WT) engine and holds roughly 150 million documents.
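Before starting, it helps to confirm the engine and the document count from the mongo shell; a minimal check (the chicago database and users collection are the ones used later in this article):
> use chicago
> db.users.count()                // document count, roughly 150 million here
> db.users.stats().wiredTiger     // present only when the collection uses the WiredTiger engine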
1. Replica set configuration parameters
Node 1:
$ more shard1.conf
dbpath=/data/users/mgousr01/mongodb/dbdata/shard1_1
logpath=/data/users/mgousr01/mongodb/logs/shard1_1.log
pidfilepath=/data/users/mgousr01/mongodb/dbdata/shard1_1/shard1-1.pid
directoryperdb=true
logappend=true
replSet=shard1
shardsvr=true
bind_ip=127.0.0.1,x.x.x.x
port=37017
oplogSize=9024
fork=true
# noprealloc=true
# auth=true
journal=true
profile=1
slowms=10
maxConns=12000
storageEngine=wiredTiger
wiredTigerCacheSizeGB=96
# clusterAuthMode=keyFile
keyFile=/data/users/mgousr01/mongodb/etc/keyFilers0.key
wiredTigerDirectoryForIndexes=on
wiredTigerCollectionBlockCompressor=zlib
wiredTigerJournalCompressor=zlib
Node 2:
$ more shard2.conf
dbpath=/data/users/mgousr01/mongodb/dbdata/shard2_1
logpath=/data/users/mgousr01/mongodb/logs/shard2_1.log
pidfilepath=/data/users/mgousr01/mongodb/dbdata/shard2_1/shard2-1.pid
directoryperdb=true
logappend=true
replSet=shard1
shardsvr=true
bind_ip=127.0.0.1,x.x.x.x
port=37017
oplogSize=9024
fork=true
# noprealloc=true
# auth=true
journal=true
profile=1
slowms=10
maxConns=12000
storageEngine=wiredTiger
wiredTigerCacheSizeGB=96
# clusterAuthMode=keyFile
keyFile=/data/users/mgousr01/mongodb/etc/keyFilers0.key
wiredTigerDirectoryForIndexes=on
wiredTigerCollectionBlockCompressor=zlib
wiredTigerJournalCompressor=zlib
Node 3:
[mgousr01@pre-mongo-main-01 etc]$ more shard3.conf
dbpath=/data/users/mgousr01/mongodb/dbdata/shard3_1
logpath=/data/users/mgousr01/mongodb/logs/shard3_1.log
pidfilepath=/data/users/mgousr01/mongodb/dbdata/shard3_1/shard3-1.pid
directoryperdb=true
logappend=true
replSet=shard1
shardsvr=true
bind_ip=127.0.0.1,x.x.x.x
port=37017
oplogSize=9024
fork=true
# noprealloc=true
# auth=true
journal=true
profile=1
slowms=10
maxConns=12000
storageEngine=wiredTiger
wiredTigerCacheSizeGB=96
# clusterAuthMode=keyFile
keyFile=/data/users/mgousr01/mongodb/etc/keyFilers0.key
wiredTigerDirectoryForIndexes=on
wiredTigerCollectionBlockCompressor=zlib
wiredTigerJournalCompressor=zlib
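These files use the legacy INI-style options. For reference, the same shard1 settings expressed in the YAML format that mongod 2.6+ also accepts would look roughly like this (a sketch, not taken from the original article):
storage:
  dbPath: /data/users/mgousr01/mongodb/dbdata/shard1_1
  directoryPerDB: true
  engine: wiredTiger
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 96
      directoryForIndexes: true
      journalCompressor: zlib
    collectionConfig:
      blockCompressor: zlib
systemLog:
  destination: file
  path: /data/users/mgousr01/mongodb/logs/shard1_1.log
  logAppend: true
processManagement:
  fork: true
  pidFilePath: /data/users/mgousr01/mongodb/dbdata/shard1_1/shard1-1.pid
net:
  bindIp: 127.0.0.1,x.x.x.x
  port: 37017
  maxIncomingConnections: 12000
replication:
  replSetName: shard1
  oplogSizeMB: 9024
sharding:
  clusterRole: shardsvr
operationProfiling:
  mode: slowOp
  slowOpThresholdMs: 10
security:
  keyFile: /data/users/mgousr01/mongodb/etc/keyFilers0.key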
2. Start mongod
Start each node with its own configuration file:
$ mongod -f shard1.conf
(use shard2.conf / shard3.conf on the other two nodes)
3. Configure the replica set (log in to any node)
config = {_id: 'shard1', members: [
    {_id: 0, host: 'x.x.x.x:37017', priority: 1, tags: {'use': 'xxx'}},
    {_id: 1, host: 'x.x.x.x:37017', priority: 1, tags: {'use': 'xxx'}},
    {_id: 2, host: 'x.x.x.x:37017', priority: 1, tags: {'use': 'xxx'}}
]}
rs.initiate(config)
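Once rs.initiate() returns, it is worth confirming that one member has become PRIMARY and the other two SECONDARY before going further; a quick check from the shell (a minimal sketch):
> rs.status().members.forEach(function(m) { print(m.name + "  " + m.stateStr) })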
4. Simulate online write operations on the primary
> use chicago
> for (var i = 0; i < 100000; i++) { db.users.insert({username: "user" + i, created: new Date()}) }
6. Build the index on the first secondary (Node 3)
(1) Stop Node 3's mongod process.
(2) Comment out the replSet=shard1 line in shard3.conf.
(3) Restart mongod so the node runs as a standalone: mongod -f shard3.conf
(4) Build the index
> use chicago
> db.users.createIndex({username: 1, created: 1}, {unique: true})
[It is recommended to adopt a naming convention for indexes going forward, e.g. by passing a name in the options document.]
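Following that recommendation, the same index could be created with an explicit name (a sketch; the getIndexes() output below shows the default name username_1_created_1, so the original build did not set one):
> db.users.createIndex({username: 1, created: 1}, {unique: true, name: "username_created_unique"})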
(5) View the index information
> db.users.getIndexes()
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_",
        "ns" : "chicago.users"
    },
    {
        "v" : 1,
        "unique" : true,
        "key" : {
            "username" : 1,
            "created" : 1
        },
        "name" : "username_1_created_1",
        "ns" : "chicago.users"
    }
]
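On a collection of roughly 150 million documents the build runs for a long time; from a second shell the progress message of the in-progress operation can be printed, for example (a sketch, assuming the build's message starts with "Index Build"):
> db.currentOp(true).inprog.forEach(function(op) { if (op.msg && /Index Build/.test(op.msg)) print(op.msg) })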
(6) Stop the mongod process again
$ pwd
/data/users/mgousr01/mongodb/etc
$ mongod -f shard3.conf --shutdown
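Equivalently, the standalone can be shut down from the mongo shell with the standard admin command:
> use admin
> db.shutdownServer()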
(7) Start the mongod process as a replica set member again
Uncomment the replSet=shard1 line in shard3.conf, then start and log in:
$ mongod -f shard3.conf
$ mongo ip:37017/admin
After startup the node rejoins the replica set and syncs the data it missed from the primary; building the index on the secondary this way neither affects the primary nor introduces any primary/secondary inconsistency.
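Before moving on to the next member, it is worth waiting until this node has caught up with the primary; the replication lag of each secondary can be checked from the shell:
> rs.printSlaveReplicationInfo()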
7. Operate on the second secondary (Node 2)
Repeat step 6 on Node 2.
8. General steps for building the index on all secondaries
For each secondary in the set, build the index with the following steps (a consolidated sketch follows the list):
(1) Stop one secondary
(2) Build the index
(3) Restart the mongod
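Put together, one pass of the rolling build looks like this as a shell sketch (shardN.conf stands for the member's configuration file; the replSet line is toggled with sed here, matching the hand edits described above):
# take the member out of the replica set and restart it standalone
$ mongod -f shardN.conf --shutdown
$ sed -i 's/^replSet=/#replSet=/' shardN.conf
$ mongod -f shardN.conf
# build the index on the standalone
$ mongo localhost:37017/chicago --eval 'db.users.createIndex({username: 1, created: 1}, {unique: true})'
# put the member back into the replica set
$ mongod -f shardN.conf --shutdown
$ sed -i 's/^#replSet=/replSet=/' shardN.conf
$ mongod -f shardN.conf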
9. Build the index on the primary node
(1) Log in to the primary node
$ mongo ip:37017/admin
(2) Step down the primary
shard1:PRIMARY> rs.stepDown(30)
2016-04-19T12:49:44.423+0800 I NETWORK  DBClientCursor::init call() failed
2016-04-19T12:49:44.426+0800 E QUERY    Error: error doing query: failed
    at DBQuery._exec (src/mongo/shell/query.js:83:36)
    at DBQuery.hasNext (src/mongo/shell/query.js:240:10)
    at DBCollection.findOne (src/mongo/shell/collection.js:187:19)
    at DB.runCommand (src/mongo/shell/db.js:58:41)
    at DB.adminCommand (src/mongo/shell/db.js:66:41)
    at Function.rs.stepDown (src/mongo/shell/utils.js:1006:15)
    at (shell):1:4 at src/mongo/shell/query.js:83
2016-04-19T12:49:44.427+0800 I NETWORK  trying reconnect to xxxx failed
2016-04-19T12:49:44.428+0800 I NETWORK  reconnect xxxx ok
shard1:SECONDARY>
After the step-down command executes, this node voluntarily becomes a secondary and one of the two secondaries above is elected primary; the shell error shown here is expected, because stepDown drops the current connections.
(3) From here on, build the index the same way as in step 6.
Description:
Stepping down the primary: rs.stepDown(downSeconds = 60). During the step-down window the former primary will not stand for election; if the replica set still has no primary once the window expires, it may take part in an election again.
Preventing elections
To keep a secondary in its current state, run rs.freeze(seconds); within that time you can work on the primary without worrying about an election.
To unfreeze: rs.freeze(0).
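For example, to keep a particular secondary from being elected while the former primary is taken down for its index build, freeze it before stepping down (a sketch with an assumed 120-second window):
// on the secondary that must stay secondary:
shard1:SECONDARY> rs.freeze(120)
// on the primary:
shard1:PRIMARY> rs.stepDown(120)
// to release the freeze ahead of time:
shard1:SECONDARY> rs.freeze(0)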