
Practice of MongoDB replica sets + sharding in a production environment


The operating system environments of the three machines are as follows:

[mongodb@node1 ~]$ cat /etc/issue
Red Hat Enterprise Linux Server release 6.6 (Santiago)
Kernel \r on an \m

[mongodb@node1 ~]$ uname -r
2.6.32-504.el6.x86_64
[mongodb@node1 ~]$ uname -m
x86_64

The architecture is shown below:

node1 (192.168.42.41): shard1:10001, shard2:10002, shard3:10003, configsvr:10004, mongos:10005. Roles: shard1 primary, shard2 arbiter, shard3 secondary.
node2 (192.168.42.42): shard1:10001, shard2:10002, shard3:10003, configsvr:10004, mongos:10005. Roles: shard1 secondary, shard2 primary, shard3 arbiter.
node3 (192.168.42.43): shard1:10001, shard2:10002, shard3:10003, configsvr:10004, mongos:10005. Roles: shard1 arbiter, shard2 secondary, shard3 primary.

Create a mongodb user

[root@node1 ~]# groupadd mongodb
[root@node1 ~]# useradd -g mongodb mongodb
[root@node1 ~]# mkdir /data
[root@node1 ~]# chown mongodb.mongodb /data -R
[root@node1 ~]# su - mongodb

Create directories and files

[mongodb@node1 ~]$ mkdir /data/{config,shard1,shard2,shard3,mongos,logs,configsvr,keyfile} -pv
[mongodb@node1 ~]$ touch /data/keyfile/zxl
[mongodb@node1 ~]$ touch /data/logs/shard{1..3}.log
[mongodb@node1 ~]$ touch /data/logs/{configsvr,mongos}.log
[mongodb@node1 ~]$ touch /data/config/shard{1..3}.conf
[mongodb@node1 ~]$ touch /data/config/{configsvr,mongos}.conf
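The keyfile created above is still empty, and the configuration files below keep their security sections commented out. If you later enable internal authentication, the keyfile must hold a shared secret and be readable only by its owner. A minimal sketch of how it could be populated, keeping the article's /data/keyfile/zxl path:

[mongodb@node1 ~]$ openssl rand -base64 741 > /data/keyfile/zxl
[mongodb@node1 ~]$ chmod 600 /data/keyfile/zxl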

Download mongodb

[mongodb@node1 ~]$ wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel62-3.2.3.tgz
[mongodb@node1 ~]$ tar fxz mongodb-linux-x86_64-rhel62-3.2.3.tgz -C /data
[mongodb@node1 ~]$ ln -s /data/mongodb-linux-x86_64-rhel62-3.2.3 /data/mongodb

Configure mongodb environment variables

[mongodb@node1 ~]$ echo "export PATH=$PATH:/data/mongodb/bin" >> ~/.bash_profile
[mongodb@node1 ~]$ source ~/.bash_profile

The shard1.conf configuration file is as follows:

[mongodb@node1 ~]$ cat /data/config/shard1.conf
systemLog:
  destination: file
  path: /data/logs/shard1.log
  logAppend: true
processManagement:
  fork: true
  pidFilePath: "/data/shard1/shard1.pid"
net:
  port: 10001
storage:
  dbPath: "/data/shard1"
  engine: wiredTiger
  journal:
    enabled: true
  directoryPerDB: true
operationProfiling:
  slowOpThresholdMs: 10
  mode: "slowOp"
#security:
#  keyFile: "/data/keyfile/zxl"
#  clusterAuthMode: "keyFile"
replication:
  oplogSizeMB: 50
  replSetName: "shard1_zxl"
  secondaryIndexPrefetch: "all"

The shard2.conf configuration file is as follows:

[mongodb@node1 ~]$ cat /data/config/shard2.conf
systemLog:
  destination: file
  path: /data/logs/shard2.log
  logAppend: true
processManagement:
  fork: true
  pidFilePath: "/data/shard2/shard2.pid"
net:
  port: 10002
storage:
  dbPath: "/data/shard2"
  engine: wiredTiger
  journal:
    enabled: true
  directoryPerDB: true
operationProfiling:
  slowOpThresholdMs: 10
  mode: "slowOp"
#security:
#  keyFile: "/data/keyfile/zxl"
#  clusterAuthMode: "keyFile"
replication:
  oplogSizeMB: 50
  replSetName: "shard2_zxl"
  secondaryIndexPrefetch: "all"

The shard3.conf configuration file is as follows:

[mongodb@node1 ~]$ cat /data/config/shard3.conf
systemLog:
  destination: file
  path: /data/logs/shard3.log
  logAppend: true
processManagement:
  fork: true
  pidFilePath: "/data/shard3/shard3.pid"
net:
  port: 10003
storage:
  dbPath: "/data/shard3"
  engine: wiredTiger
  journal:
    enabled: true
  directoryPerDB: true
operationProfiling:
  slowOpThresholdMs: 10
  mode: "slowOp"
#security:
#  keyFile: "/data/keyfile/zxl"
#  clusterAuthMode: "keyFile"
replication:
  oplogSizeMB: 50
  replSetName: "shard3_zxl"
  secondaryIndexPrefetch: "all"

The configsvr.conf configuration file is as follows:

[mongodb@node1 ~]$ cat /data/config/configsvr.conf
systemLog:
  destination: file
  path: /data/logs/configsvr.log
  logAppend: true
processManagement:
  fork: true
  pidFilePath: "/data/configsvr/configsvr.pid"
net:
  port: 10004
storage:
  dbPath: "/data/configsvr"
  engine: wiredTiger
  journal:
    enabled: true
#security:
#  keyFile: "/data/keyfile/zxl"
#  clusterAuthMode: "keyFile"
sharding:
  clusterRole: configsvr

The mongos.conf configuration file is as follows:

[mongodb@node3 ~]$ cat /data/config/mongos.conf
systemLog:
  destination: file
  path: /data/logs/mongos.log
  logAppend: true
processManagement:
  fork: true
  pidFilePath: /data/mongos/mongos.pid
net:
  port: 10005
sharding:
  configDB: 192.168.42.41:10004,192.168.42.42:10004,192.168.42.43:10004
#security:
#  keyFile: "/data/keyfile/zxl"
#  clusterAuthMode: "keyFile"

Note: the steps above were performed only on node1. Repeat them on the other two machines: create the mongodb user and the directory tree, install MongoDB, and copy the configuration files to the corresponding directories on node2 and node3. After copying, verify that the owner and group of the files are mongodb.
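One way to do the copy and the ownership check from node1, assuming the node2/node3 host names resolve and ssh between the nodes is set up (both assumptions, not from the original steps):

[mongodb@node1 ~]$ scp /data/config/*.conf node2:/data/config/
[mongodb@node1 ~]$ scp /data/config/*.conf node3:/data/config/
[mongodb@node1 ~]$ ssh node2 'ls -l /data/config'   # owner and group should both be mongodb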

Start the shard1, shard2, and shard3 mongod instances on each node

[mongodb@node1 ~]$ mongod -f /data/config/shard1.conf
mongod: /usr/lib64/libcrypto.so.10: no version information available (required by mongod)
mongod: /usr/lib64/libcrypto.so.10: no version information available (required by mongod)
mongod: /usr/lib64/libssl.so.10: no version information available (required by mongod)
mongod: relocation error: mongod: symbol TLSv1_1_client_method, version libssl.so.10 not defined in file libssl.so.10 with link time reference

Note: mongod failed to start with the errors shown above.

Solution: install openssl-devel on all three machines. Red Hat 6.6 itself rarely hits this problem, but you may encounter it on CentOS.

[mongodb@node1 ~]$ su - root
Password:
[root@node1 ~]# yum install openssl-devel -y

Switch back to the mongodb user and start the shard1, shard2, and shard3 instances on all three machines.

[mongodb@node1 ~]$ mongod -f /data/config/shard1.conf
about to fork child process, waiting until server is ready for connections.
forked process: 1737
child process started successfully, parent exiting
[mongodb@node1 ~]$ mongod -f /data/config/shard2.conf
about to fork child process, waiting until server is ready for connections.
forked process: 1760
child process started successfully, parent exiting
[mongodb@node1 ~]$ mongod -f /data/config/shard3.conf
about to fork child process, waiting until server is ready for connections.
forked process: 1783
child process started successfully, parent exiting

Note: if this step reports an error, check the message it prints or the logs under /data/logs/shard{1..3}.log.
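For example, to follow a shard's log during startup:

[mongodb@node1 ~]$ tail -f /data/logs/shard1.log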

Log in to the mongod listening on port 10001 on node1

[mongodb@node1 ~]$ mongo --port 10001
MongoDB shell version: 3.2.3
connecting to: 127.0.0.1:10001/test
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see http://docs.mongodb.org/
Questions? Try the support group http://groups.google.com/group/mongodb-user
Server has startup warnings:
2016-03-08T13:28:18.508+0800 I CONTROL  [initandlisten]
2016-03-08T13:28:18.508+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-03-08T13:28:18.508+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-03-08T13:28:18.508+0800 I CONTROL  [initandlisten]
2016-03-08T13:28:18.508+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-03-08T13:28:18.508+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-03-08T13:28:18.508+0800 I CONTROL  [initandlisten]

Note: the shell reports startup warnings about transparent huge pages (THP).

Solution: run the following on all three machines.

[mongodb@node2 config]$ su - root
Password:
[root@node2 ~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled
[root@node2 ~]# echo never > /sys/kernel/mm/transparent_hugepage/defrag

Be sure to shut down the mongod instances on all three machines and start them again; the change does not take effect for already-running processes.
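Note also that these sysfs writes are lost on reboot. On RHEL 6 a common way to persist them is rc.local; a sketch, not part of the original steps:

[root@node1 ~]# cat >> /etc/rc.local <<'EOF'
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
EOF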

Kill the running instances and start them again:

[mongodb@node1 ~]$ netstat -ntpl | grep mongo | awk '{print $NF}' | awk -F/ '{print $1}' | xargs kill
[mongodb@node1 ~]$ mongod -f /data/config/shard1.conf
[mongodb@node1 ~]$ mongod -f /data/config/shard2.conf
[mongodb@node1 ~]$ mongod -f /data/config/shard3.conf

Check whether the ports are listening:

[mongodb@elk_node1 ~]$ ss -atunlp | grep mong
tcp LISTEN 0 128 *:10001 *:* users:(("mongod",2630,6))
tcp LISTEN 0 128 *:10002 *:* users:(("mongod",2654,6))
tcp LISTEN 0 128 *:10003 *:* users:(("mongod",2678,6))

Configure the replica sets

Configure the shard1_zxl replica set on node1

[mongodb@node1 config]$ mongo --port 10001
MongoDB shell version: 3.2.3
connecting to: 127.0.0.1:10001/test
> use admin
switched to db admin
> config = {_id:"shard1_zxl", members:[
... {_id:0, host:"192.168.42.41:10001"},
... {_id:1, host:"192.168.42.42:10001"},
... {_id:2, host:"192.168.42.43:10001", arbiterOnly:true}
... ]}
{
        "_id" : "shard1_zxl",
        "members" : [
                { "_id" : 0, "host" : "192.168.42.41:10001" },
                { "_id" : 1, "host" : "192.168.42.42:10001" },
                { "_id" : 2, "host" : "192.168.42.43:10001", "arbiterOnly" : true }
        ]
}
> rs.initiate(config)
{ "ok" : 1 }

Configure the shard2_zxl replica set on node2

[mongodb@node2 config]$ mongo --port 10002
MongoDB shell version: 3.2.3
connecting to: 127.0.0.1:10002/test
> use admin
switched to db admin
> config = {_id:"shard2_zxl", members:[
... {_id:0, host:"192.168.42.42:10002"},
... {_id:1, host:"192.168.42.43:10002"},
... {_id:2, host:"192.168.42.41:10002", arbiterOnly:true}
... ]}
{
        "_id" : "shard2_zxl",
        "members" : [
                { "_id" : 0, "host" : "192.168.42.42:10002" },
                { "_id" : 1, "host" : "192.168.42.43:10002" },
                { "_id" : 2, "host" : "192.168.42.41:10002", "arbiterOnly" : true }
        ]
}
> rs.initiate(config)
{ "ok" : 1 }

Configure the shard3_zxl replica set on node3

[mongodb@node3 config]$ mongo --port 10003
MongoDB shell version: 3.2.3
connecting to: 127.0.0.1:10003/test
> use admin
switched to db admin
> config = {_id:"shard3_zxl", members:[
... {_id:0, host:"192.168.42.43:10003"},
... {_id:1, host:"192.168.42.41:10003"},
... {_id:2, host:"192.168.42.42:10003", arbiterOnly:true}
... ]}
{
        "_id" : "shard3_zxl",
        "members" : [
                { "_id" : 0, "host" : "192.168.42.43:10003" },
                { "_id" : 1, "host" : "192.168.42.41:10003" },
                { "_id" : 2, "host" : "192.168.42.42:10003", "arbiterOnly" : true }
        ]
}
> rs.initiate(config)
{ "ok" : 1 }

Note: this completes the replica set configuration. Related commands such as rs.status() show the state of each replica set.
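For a quick scripted check of member states, the shell can also be driven non-interactively; a sketch using the article's ports:

[mongodb@node1 ~]$ mongo --port 10001 --eval 'rs.status().members.forEach(function(m){ print(m.name + " " + m.stateStr) })'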

Start the configsvr and mongos nodes on the three machines

[mongodb@node1 logs]$ mongod -f /data/config/configsvr.conf
about to fork child process, waiting until server is ready for connections.
forked process: 6317
child process started successfully, parent exiting
[mongodb@node1 logs]$ mongos -f /data/config/mongos.conf
about to fork child process, waiting until server is ready for connections.
forked process: 6345
child process started successfully, parent exiting

Check that the ports are listening:

[mongodb@elk_node1 ~]$ ss -atunlp | grep mong
tcp LISTEN 0 128 *:10001 *:* users:(("mongod",2630,6))
tcp LISTEN 0 128 *:10002 *:* users:(("mongod",2654,6))
tcp LISTEN 0 128 *:10003 *:* users:(("mongod",2678,6))
tcp LISTEN 0 128 *:10004 *:* users:(("mongod",3053,6))
tcp LISTEN 0 128 *:10005 *:* users:(("mongos",3157,21))

Configure sharding

Add the three replica sets as shards via mongos on node1

[mongodb@node1 config]$ mongo --port 10005
MongoDB shell version: 3.2.3
connecting to: 127.0.0.1:10005/test
mongos> use admin
switched to db admin
mongos> db.runCommand({addshard: "shard1_zxl/192.168.42.41:10001,192.168.42.42:10001,192.168.42.43:10001"})
{ "shardAdded" : "shard1_zxl", "ok" : 1 }
mongos> db.runCommand({addshard: "shard2_zxl/192.168.42.41:10002,192.168.42.42:10002,192.168.42.43:10002"})
{ "shardAdded" : "shard2_zxl", "ok" : 1 }
mongos> db.runCommand({addshard: "shard3_zxl/192.168.42.41:10003,192.168.42.42:10003,192.168.42.43:10003"})
{ "shardAdded" : "shard3_zxl", "ok" : 1 }
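The sh.addShard() helper wraps the same addshard command; the first shard, for example, could equally be added with:

mongos> sh.addShard("shard1_zxl/192.168.42.41:10001,192.168.42.42:10001,192.168.42.43:10001")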

View shard information

mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("56de6f4176b47beaa9c75e9d")
  }
  shards:
        {  "_id" : "shard1_zxl",  "host" : "shard1_zxl/192.168.42.41:10001,192.168.42.42:10001" }
        {  "_id" : "shard2_zxl",  "host" : "shard2_zxl/192.168.42.42:10002,192.168.42.43:10002" }
        {  "_id" : "shard3_zxl",  "host" : "shard3_zxl/192.168.42.41:10003,192.168.42.43:10003" }
  active mongoses:
        "3.2.3" : 3
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:

View shard status

mongos> db.runCommand({listshards: 1})
{
        "shards" : [
                { "_id" : "shard1_zxl", "host" : "shard1_zxl/192.168.42.41:10001,192.168.42.42:10001" },
                { "_id" : "shard2_zxl", "host" : "shard2_zxl/192.168.42.42:10002,192.168.42.43:10002" },
                { "_id" : "shard3_zxl", "host" : "shard3_zxl/192.168.42.41:10003,192.168.42.43:10003" }
        ],
        "ok" : 1
}

Enable sharding for the database named 'zxl':

mongos> sh.enableSharding("zxl")
{ "ok" : 1 }

Shard a collection in the zxl database on a compound shard key of age and name; the required index is created automatically if the collection is empty. Replace <collection> with your collection's name:

mongos> sh.shardCollection("zxl.<collection>", { age: 1, name: 1 })
{ "collectionsharded" : "zxl.<collection>", "ok" : 1 }

Simulate inserting 10,000 documents into the collection

Insert the test data from the mongos shell, for example:

mongos> use zxl
switched to db zxl
mongos> for (var i = 1; i <= 10000; i++) db.<collection>.insert({ age: i, name: "user" + i })

Then run sh.status() to check the chunk distribution on each shard. With this, the replica set and sharding build is complete.

The results are as follows:

mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("56de6f4176b47beaa9c75e9d")
  }
  shards:
        {  "_id" : "shard1_zxl",  "host" : "shard1_zxl/192.168.42.41:10001,192.168.42.42:10001" }
        {  "_id" : "shard2_zxl",  "host" : "shard2_zxl/192.168.42.42:10002,192.168.42.43:10002" }
        {  "_id" : "shard3_zxl",  "host" : "shard3_zxl/192.168.42.41:10003,192.168.42.43:10003" }
  active mongoses:
        "3.2.3" : 3
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                2 : Success
  databases:
        {  "_id" : "zxl",  "primary" : "shard3_zxl",  "partitioned" : true }
                zxl.<collection>
                        shard key: { "age" : 1, "name" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1_zxl  1
                                shard2_zxl  1
                                shard3_zxl  1
                        { "age" : { "$minKey" : 1 }, "name" : { "$minKey" : 1 } } -->> { "age" : 2, "name" : "user2" } on : shard1_zxl Timestamp(2, 0)
                        { "age" : 2, "name" : "user2" } -->> { "age" : 22, "name" : "user22" } on : shard2_zxl Timestamp(3, 0)
                        { "age" : 22, "name" : "user22" } -->> { "age" : { "$maxKey" : 1 }, "name" : { "$maxKey" : 1 } } on : shard3_zxl Timestamp(3, 1)
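To see how the inserted documents are spread across the shards, the shell's getShardDistribution() helper is also useful (again with the <collection> placeholder standing in for your collection name):

mongos> use zxl
mongos> db.getCollection("<collection>").getShardDistribution()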

This completes the MongoDB 3.2 replica set and sharding setup. If you have any questions, leave me a message and I will address them as soon as possible.
