2025-04-05 Update From: SLTechnology News&Howtos
This article shares how to build a MongoDB cluster. Most readers are probably not very familiar with the topic, so it is shared here for reference; I hope you learn something useful from it.
Why use the cluster architecture?
Master-slave: failover cannot be achieved. If the master goes down, you must shut down the slave and restart it in master mode. Master-slave mode cannot resolve a single point of failure, cannot fail over automatically, and cannot switch roles automatically.
To solve these master-slave problems, MongoDB introduced the replica set. A replica set solves failover, but every member of a replica set holds the same data, so a single replica set cannot store massive data volumes. That problem requires another architecture: the sharded cluster.
A robust yet simple sharded MongoDB cluster takes ten service processes (ideally ten separate servers); here everything is built on virtual machines.
About MongoDB
There are three main ways to build a MongoDB cluster: master-slave mode, replica set mode, and sharding mode. Each has its own advantages and disadvantages and suits different scenarios. Replica sets are the most widely used; master-slave mode is rarely used; sharding is the most complete but also the most complex to configure and maintain.
The project I took over uses a replica set, so that is the mode covered here. The official MongoDB website has an introduction.
It is necessary to know the following three types of roles in Replica Set mode:
Primary node [Primary]
Receives all write requests and synchronizes changes to all secondaries. A replica set can have only one primary; when the primary goes down, the remaining secondaries and arbiters elect a new primary. By default, read requests also go to the primary; if reads should be served by secondaries, the client must change its connection configuration.
Replica node [Secondary]
Maintains the same data set as the primary and participates in electing a new primary when the primary goes down.
Arbitrator [Arbiter]
Stores no data and cannot be elected primary; it only votes in elections. Using an arbiter reduces the hardware requirements for data storage, and an arbiter itself needs few hardware resources, but in a production environment it must not be deployed on the same machine as a data-bearing node.
Note that the number of voting members in a replica set with automatic failover must be odd, so that a majority can always be reached when electing a primary.
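The majority rule is simple arithmetic, and a minimal sketch makes the "odd number" advice concrete (the `majority` and `tolerated` helpers below are mine, not MongoDB tools): among n voting members, electing a primary needs floor(n/2)+1 votes, so only n minus that majority members may fail.

```shell
#!/bin/sh
# majority: minimum votes needed to elect a primary among N voting members
majority() { echo $(( $1 / 2 + 1 )); }
# tolerated: how many members can fail while an election can still succeed
tolerated() { echo $(( $1 - ($1 / 2 + 1) )); }

for n in 3 4 5; do
    echo "$n members: majority=$(majority $n), tolerates $(tolerated $n) failure(s)"
done
```

With 3 members the majority is 2 and one failure is tolerated; with 4 members the majority is 3 and still only one failure is tolerated, so the fourth member buys nothing. That is why odd member counts are recommended.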
Set up a cluster
After understanding the basic concepts, I started building a cluster. To understand it better, I set aside three test machines for the deployment.
Preparation in advance
First, prepare three test machines:
10.100.1.101 Master node (master)
10.100.1.102 standby node (slave)
10.100.1.103 Arbitration Point (arbiter)
Then download the MongoDB installation package (version 3.4.2 is used in production, so we keep the versions consistent):
curl -O https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-3.4.2.tgz
Install mongo
Here it is installed under /usr/local/mongodb.
First extract and rename:
tar -zxvf mongodb-linux-x86_64-3.4.2.tgz
mv mongodb-linux-x86_64-3.4.2/ /usr/local/mongodb
Then create several new directories under /usr/local/mongodb:
mkdir -p conf   # mongo configuration files
mkdir -p logs   # log files
mkdir -p data   # data files
Note that the file paths referenced in the configuration file must already exist; otherwise mongod will report an error at startup, since it does not create them automatically.
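The directory preparation can be scripted so the paths always exist before mongod starts. A minimal sketch; BASE defaults to ./mongodb here for a local dry run, and would be set to /usr/local/mongodb on the real servers:

```shell
#!/bin/sh
# Prepare the conf/logs/data layout mongod expects; mkdir -p is idempotent,
# so the script can be re-run safely.
# Set BASE=/usr/local/mongodb on the real servers; defaults to ./mongodb here.
BASE="${BASE:-./mongodb}"
for d in conf logs data; do
    mkdir -p "$BASE/$d"   # -p: create parents, succeed if already present
done
```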
Then create the configuration file for each node:
Master node: mongodb_master.conf
# master.conf
dbpath=/usr/local/mongodb/data
logpath=/usr/local/mongodb/logs/mongodb.log
pidfilepath=/usr/local/mongodb/master.pid
directoryperdb=true
logappend=true
replSet=testdb
port=27017
oplogSize=100
fork=true
noprealloc=true
Backup node: vi mongodb_slave.conf
# slave.conf
dbpath=/usr/local/mongodb/data
logpath=/usr/local/mongodb/logs/mongodb.log
pidfilepath=/usr/local/mongodb/slave.pid
directoryperdb=true
logappend=true
replSet=testdb
port=27017
oplogSize=100
fork=true
noprealloc=true
Arbitration point: vi mongodb_arbiter.conf
# arbiter.conf
dbpath=/usr/local/mongodb/data
logpath=/usr/local/mongodb/logs/mongodb.log
pidfilepath=/usr/local/mongodb/arbiter.pid
directoryperdb=true
logappend=true
replSet=testdb
port=27017
oplogSize=100
fork=true
noprealloc=true
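The three configuration files differ only in their pid file, so generating them from one template avoids copy-paste mistakes (each role must get its own pid file). A minimal sketch; paths, port, and replSet name follow this article, and OUT defaults to the current directory for a dry run:

```shell
#!/bin/sh
# Generate the three near-identical node configs from one template so each
# role reliably gets its own pidfilepath.
OUT="${OUT:-.}"
for role in master slave arbiter; do
    cat > "$OUT/mongodb_${role}.conf" <<EOF
# ${role}.conf
dbpath=/usr/local/mongodb/data
logpath=/usr/local/mongodb/logs/mongodb.log
pidfilepath=/usr/local/mongodb/${role}.pid
directoryperdb=true
logappend=true
replSet=testdb
port=27017
oplogSize=100
fork=true
noprealloc=true
EOF
done
```

The generated files would then be copied to each node (or the script run there with OUT pointing at the config directory).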
These are only the most basic settings; in real scenarios, configure according to your business needs. Other parameters for reference:
--quiet # quiet output
--port arg # service port number, default 27017
--bind_ip arg # bind to this service IP; if bound to 127.0.0.1 it is only accessible locally; by default all local IPs are bound
--logpath arg # MongoDB log file; note that this must be a file, not a directory
--logappend # write logs by appending instead of overwriting
--pidfilepath arg # full path of the PID file; if not set, no PID file is written
--keyFile arg # full path of the cluster key file, valid only for the replica set architecture
--unixSocketPrefix arg # alternative directory for the UNIX domain socket (default /tmp)
--fork # run MongoDB as a daemon, creating a server process
--auth # enable authentication
--cpu # periodically display CPU utilization and iowait
--dbpath arg # specify the database path
--diaglog arg # diaglog option: 0=off, 1=W, 2=R, 3=both, 7=W+some reads
--directoryperdb # save each database in its own directory
--journal # enable journaling; MongoDB data operations are written to files in the journal directory
--journalOptions arg # journal diagnostic options
--ipv6 # enable IPv6
--jsonp # allow JSONP access via HTTP (has security implications)
--maxConns arg # maximum number of simultaneous connections, default 2000
--noauth # disable authentication
--nohttpinterface # disable the HTTP interface; access on port 28017 is disabled by default
--noprealloc # disable data file preallocation (can hurt performance)
--noscripting # disable the scripting engine
--notablescan # disallow table scans
--nounixsocket # disable Unix socket listening
--nssize arg (=16) # size of the database .ns file (MB)
--objcheck # validate client data on receipt
--profile arg # profiling level: 0=off, 1=slow operations, 2=all
--quota # limit the number of files per database, default 8
--quotaFiles arg # number of files allowed per db; requires --quota
--rest # enable the simple REST API
--repair # run repair on all databases
--repairpath arg # directory for files generated by repair; defaults to the dbpath directory
--slowms arg (=100) # threshold (ms) defining "slow" for profiling and the console log
--smallfiles # use smaller default data files
--syncdelay arg (=60) # seconds between flushes of data to disk (0=never, not recommended)
--sysinfo # print some diagnostic system information
--upgrade # upgrade the database if needed
--fastsync # start slave replication from a dbpath that is a snapshot of the master; used to enable synchronization quickly
--autoresync # automatically resynchronize if the slave falls too far behind the master
--oplogSize arg # size of the oplog (MB)
--master # master mode
--slave # slave mode
--source arg # the master (host:port) to replicate from, used with --slave
--only arg # replicate only the specified database
--slavedelay arg # delay (seconds) applied when replicating from the master
--replSet arg # set the replica set name
--configsvr # declare this a cluster config server; default port 27019, default directory /data/configdb
--shardsvr # declare this a cluster shard; default port 27018
--noMoveParanoia # turn off paranoid saving of moveChunk data
Once the nodes are configured, you can start mongod. cd into the bin directory:
./mongod -f /etc/mongodb_master.conf
./mongod -f /etc/mongodb_slave.conf
./mongod -f /etc/mongodb_arbiter.conf
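Since mongod will fail to start when a configured path is missing (as noted earlier), a small pre-flight check can save a failed start. A sketch, assuming the key=value config format used above; `check_conf` is my helper, not a MongoDB tool:

```shell
#!/bin/sh
# check_conf: verify that dbpath and the parent directories of logpath and
# pidfilepath named in a mongod config file actually exist, since mongod
# does not create them and will fail to start otherwise.
check_conf() {
    conf="$1"; missing=0
    dbpath=$(sed -n 's/^dbpath=//p' "$conf")
    logdir=$(dirname "$(sed -n 's/^logpath=//p' "$conf")")
    piddir=$(dirname "$(sed -n 's/^pidfilepath=//p' "$conf")")
    for d in "$dbpath" "$logdir" "$piddir"; do
        [ -d "$d" ] || { echo "missing directory: $d"; missing=1; }
    done
    return $missing
}
```

Usage would look like: `check_conf /etc/mongodb_master.conf && ./mongod -f /etc/mongodb_master.conf`.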
Configure the nodes
Finally, you need to configure the primary, standby, and arbiter nodes. First, pick one server and connect to it:
./mongo 10.100.1.101:27017
> use admin
Then configure:
cfg = { _id: "testdb", members: [
    { _id: 0, host: '10.100.1.101:27017', priority: 2 },
    { _id: 1, host: '10.100.1.102:27017', priority: 1 },
    { _id: 2, host: '10.100.1.103:27017', arbiterOnly: true }
] };
rs.initiate(cfg)  // make the configuration take effect
If the configuration takes effect without errors, the setup is almost complete. You can view the details with the rs.status() command.
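The initiation step can also be kept in a file rather than typed interactively. A sketch that writes the configuration to rs_init.js, which you could then pipe into the mongo shell with `./mongo 10.100.1.101:27017 < rs_init.js`; hosts and the replica set name testdb follow this article:

```shell
#!/bin/sh
# Write the replica-set configuration to a JS file; feeding it to the mongo
# shell is equivalent to typing it at the > prompt.
cat > rs_init.js <<'EOF'
cfg = { _id: "testdb", members: [
    { _id: 0, host: "10.100.1.101:27017", priority: 2 },
    { _id: 1, host: "10.100.1.102:27017", priority: 1 },
    { _id: 2, host: "10.100.1.103:27017", arbiterOnly: true }
] };
rs.initiate(cfg);
rs.status();
EOF
```

Keeping the configuration in a file makes it repeatable and easy to review before applying.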
At this point, you can log in to the database and test whether normal database operations on the primary are synchronized to the secondary. The tests themselves are not covered here.
Data backup and restore
With the cluster built, we need to migrate the data from the original test environment, which brings in MongoDB backup and restore.
This is relatively easy with mongodump and mongorestore:
./bin/mongodump -h 10.100.1.101 -d testdb -o .
# mongodump -h dbhost -d dbname -o dbdirectory
# -h: MongoDB address, e.g. 127.0.0.1; a port can also be specified, e.g. 127.0.0.1:27017
# -d: database instance to back up, e.g. test
# -o: location to store the backup

./bin/mongorestore -h 10.100.1.102 -d testdb testdb
# mongorestore -h dbhost -d dbname <path>
# --host, -h: address of the MongoDB server, default localhost:27017
# --db, -d: database instance to restore
# --drop: delete the current data first, then restore from the backup
# <path>: the last parameter of mongorestore, the location of the backup data
# --dir: specify the backup directory; <path> and --dir cannot both be given

That is all of "How to build a cluster in MongoDB". Thank you for reading! I hope the content shared here helps you.