2025-02-24 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
1. DRBD Introduction
Distributed Replicated Block Device (DRBD) is block-device-level software that synchronizes and mirrors data between pairs of highly available servers. It performs real-time (synchronous) or asynchronous replication between two servers over the network at the block-device level, which makes it comparable in purpose to architectures such as rsync+inotify. The difference is that DRBD synchronizes below the file system, at the block level, while rsync+inotify synchronizes actual physical files above the file system, so DRBD is more efficient.
Block devices can be disk partitions, LVM logical volumes, or whole disks.
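To make the contrast concrete, here is a minimal, safe-to-run sketch of the file-level approach the text compares DRBD against. The directories are temporary stand-ins, and a real rsync+inotify deployment would wrap the copy in an inotifywait loop, as the comment notes:

```shell
#!/bin/sh
# File-level sync sketch: what rsync+inotify does above the file system,
# as opposed to DRBD's block-level replication below it.
SRC=$(mktemp -d)   # stand-in for the watched source directory
DST=$(mktemp -d)   # stand-in for the remote mirror
echo "hello" > "$SRC/f1"
# One-shot copy. A real deployment would loop on change events, e.g.:
#   inotifywait -mrq -e modify,create,delete "$SRC" | \
#       while read _; do rsync -a "$SRC/" "$DST/"; done
if command -v rsync >/dev/null 2>&1; then
    rsync -a "$SRC/" "$DST/"
else
    cp -a "$SRC/." "$DST/"   # fallback if rsync is not installed
fi
cat "$DST/f1"   # prints: hello
```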
2. How DRBD Works
DRBD is a distributed storage system in the Linux kernel's storage layer that can be used to share block devices, file systems, and data between two Linux servers. It works much like a network RAID-1: on two DRBD-based highly available (HA) hosts, when we write data to the local disk, the data is sent in real time to the other host over the network and written to its disk in the same form, so that the local host (primary node) and the remote host (standby node) stay in sync. If the local system (primary node) then fails, the remote host (standby node) retains an identical copy of the data and can continue serving it, so no data is lost and users' access experience is preserved. For more details, visit DRBD's official website, http://www.drbd.org/.
The schematic diagram of drbd operation is shown in the following figure:
3. DRBD Replication Modes
Protocol A:
Asynchronous replication protocol. Once the local disk write has been completed and the packet is in the send queue, the write is considered complete. When a node fails, data loss may occur because the data written to the remote node may still be in the sending queue. Although the data on the failover node is consistent, it is not updated in a timely manner.
Protocol B:
Memory synchronous (semi-synchronous) replication protocol. Once the local disk write has been completed and the replication packet reaches the peer node, the write on the primary node is considered complete. Data loss may occur when both participating nodes fail at the same time, because the data in transit may not be committed to disk.
Protocol C:
Synchronous replication protocol. A write is considered complete only if the disk of the local and remote node has confirmed that the write operation is complete. There is no data loss, so this is a popular mode for cluster nodes, but IO throughput depends on network bandwidth.
Protocol C is generally used, but choosing protocol C ties write latency to the network, so it affects traffic and is affected by network delay. For the sake of data reliability, choose the protocol for a production environment carefully.
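As an illustration, the protocol is selected per resource in drbd.conf. The fragment below is a hypothetical sketch only; the resource name, hostnames, devices, and addresses are placeholders, not this article's actual configuration:

```
resource r0 {
    protocol C;   # A = asynchronous, B = memory-synchronous, C = fully synchronous
    on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
```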
4. DRBD Enterprise Application Scenarios
DRBD is commonly used in production as the data synchronization layer between highly available servers.
For example: heartbeat+drbd+nfs/mfs/gfs, heartbeat+drbd+mysql/oracle, and so on. In fact, DRBD can work with virtually any service that requires data synchronization.
5. Common data synchronization tools
(1) rsync (sersync,inotify,lsyncd)
(2) scp
(3) nc
(4) nfs (Network File system)
(5) unison synchronization
(6) csync2 multi-computer synchronization
(7) the software's own replication mechanism (mysql, oracle, mongodb, ttserver, redis, ...): files are written into the database, replicated to the slave library, and then read back out.
(8) Drbd
6. DRBD Service Deployment Requirements
6.1 Business requirements
The business requirement is to build the DRBD service on top of the previously configured heartbeat; I have already written about installing and deploying heartbeat. The master server is heartbeat-1-114 and the slave server is heartbeat-1-115.
6.2 DRBD deployment structure diagram
(1) Drbd services synchronize data with each other in real time through direct connection or Ethernet.
(2) the two storage servers back up each other. Normally, each end provides a primary partition for NFS.
(3) there are double gigabit network cards binding between storage servers, storage services and switches.
(4) the application server accesses storage through NFS.
7. DRBD Software Installation
7.1 Lab operating system:
CentOS-6.8-x86_64
7.2 DRBD service host resource preparation
Primary server A:
Hostname: heartbeat-1-114
Eth0 Network Card address: 192.168.136.114 (Management IP)
Eth2 network card address: 10.0.10.4/255.255.255.0 (heartbeat IP)
Slave server B:
Hostname: heartbeat-1-115
Eth0 Network Card address: 192.168.136.115 (Management IP)
Eth2 network card address: 10.0.10.5/255.255.255.0 (heartbeat IP)
Virtual VIP:
The virtual VIP sits on the primary server heartbeat-1-114: VIP 192.168.136.116
The preparations for changing the hostname and shutting down the firewall and selinux are the same as for heartbeat, which I covered in my previous article on installing heartbeat, so I won't repeat them here. We use the two machines on which heartbeat is installed: the master server heartbeat-1-114 and the slave server heartbeat-1-115.
7.3 create available partitions
DRBD is partition-based and cannot work without available partitions. We first shut down the two virtual machines heartbeat-1-114 and heartbeat-1-115, then add a 1 GB disk to the master node heartbeat-1-114 and a 2 GB disk to the slave node heartbeat-1-115. Adding the disks themselves is not demonstrated here; afterwards, start both machines again.
7.4 Partition /dev/sdb
(1) Partition the master node / dev/sdb
[root@heartbeat-1-114 html]# fdisk -l | grep "/dev/sdb"
Disk /dev/sdb: 1073 MB, 1073741824 bytes
[root@heartbeat-1-114 html]# fdisk /dev/sdb
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u').
Command (m for help): p
Disk /dev/sdb: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000bb201
   Device Boot      Start         End      Blocks   Id  System
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-130, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G}: +768M
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (100-130, default 100):
Using default value 100
Last cylinder, +cylinders or +size{K,M,G} (100-130, default 130):
Using default value 130
Command (m for help): p
Disk /dev/sdb: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000bb201
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          99      795186   83  Linux
/dev/sdb2             100         130      249007+  83  Linux
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[root@heartbeat-1-114 html]# partprobe
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sdb (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.
Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.
Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.
Error: Invalid partition table - recursive partition on /dev/sr0.
[root@heartbeat-1-114 html]# fdisk -l | grep "/dev/sdb"
Disk /dev/sdb: 1073 MB, 1073741824 bytes
/dev/sdb1               1          99      795186   83  Linux
/dev/sdb2             100         130      249007+  83  Linux
(2) Partition the slave node / dev/sdb
[root@heartbeat-1-115 etc]# fdisk -l | grep "/dev/sdb"
Disk /dev/sdb: 2147 MB, 2147483648 bytes
[root@heartbeat-1-115 etc]# fdisk /dev/sdb
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u').
Command (m for help): p
Disk /dev/sdb: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0003d93c
   Device Boot      Start         End      Blocks   Id  System
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-261, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-261, default 261): +1536M
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (198-261, default 198):
Using default value 198
Last cylinder, +cylinders or +size{K,M,G} (198-261, default 261):
Using default value 261
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[root@heartbeat-1-115 etc]# partprobe
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sdb (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.
Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.
Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.
Error: Invalid partition table - recursive partition on /dev/sr0.
[root@heartbeat-1-115 etc]# fdisk -l | grep "/dev/sdb"
Disk /dev/sdb: 2147 MB, 2147483648 bytes
/dev/sdb1               1         197     1582371   83  Linux
/dev/sdb2             198         261      514080   83  Linux
Therefore, all we need to do is partition /dev/sdb. The details of the partitioning are shown in the following figure:
Tip:
1. The meta data partition here must not be formatted with a file system.
2. The newly created partitions cannot be mounted directly at this point.
3. In production the DRBD meta data partition is generally sized at 1-2 GB. Here the /dev/sdb2 partitions of the primary and standby nodes add up to roughly 1 GB (305M + 611M = 916M), which is the size used in this demonstration.
8. Install the DRBD Software
The DRBD software can either be compiled from a downloaded source tarball or installed from a yum repository. In this walkthrough it is compiled and installed.
8.1 Compile and install the DRBD software (note: both machines need to perform the following steps)
(1) Download the drbd software (both machines)
You can download it from the official site at http://oss.linbit.com/drbd/ and upload it to the server with the rz command.
(2) install gcc and gcc-c++
[root@heartbeat-1-114 tools]# yum install gcc gcc-c++ -y
(3) compile drbd
[root@heartbeat-1-114 tools]# pwd
/home/linzhongniao/tools
[root@heartbeat-1-114 tools]# export LC_ALL=C
[root@heartbeat-1-114 tools]# ls
drbd-8.4.4.tar.gz
[root@heartbeat-1-114 tools]# tar -xf drbd-8.4.4.tar.gz
[root@heartbeat-1-114 tools]# cd drbd-8.4.4
[root@heartbeat-1-114 drbd-8.4.4]# ./configure --prefix=/usr/local/drbd8.4.4 --with-km --with-heartbeat --sysconfdir=/etc/
If dpkg, dpkg-dev, and dpkg-devel are missing, the following warning appears during compilation:
Checking for udevinfo... false
configure: WARNING: No dpkg-buildpackage found, building Debian packages is disabled.
If flex is missing, recompiling produces:
configure: error: Cannot build utils without flex, either install flex or pass the --without-utils option.
To prevent compilation errors it is best to install these packages in advance. Here are the packages I needed when compiling:
yum install dpkg dpkg-dev dpkg-devel gcc gcc-c++ git rpm-build kernel-devel kernel-headers flex -y
(4) Build the kernel module
1. First find the kernel source code
[root@heartbeat-1-114 drbd-8.4.4]# ls -ld /usr/src/kernels/$(uname -r)/
ls: cannot access /usr/src/kernels/2.6.32-642.el6.x86_64/: No such file or directory
The kernel source path does not exist. After installing kernel-devel and kernel-headers with yum, it can be found:
[root@heartbeat-1-114 drbd-8.4.4]# ls -ld /usr/src/kernels/$(uname -r)/
drwxr-xr-x 22 root root 4096 Mar  5 05:55 /usr/src/kernels/2.6.32-696.20.1.el6.x86_64/
What if the kernel version shown by uname -r differs from the source tree found under /usr/src/kernels/? The fix is simple: upgrade the system kernel, reboot, and check the kernel again.
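The check described above can also be scripted; this is a small sketch (the function name is mine and the message strings are illustrative):

```shell
#!/bin/sh
# Report whether the running kernel has a matching source tree under the
# given kernels directory (normally /usr/src/kernels).
kdir_ok() {
    if [ -d "$1/$(uname -r)" ]; then
        echo "kernel-devel matches running kernel"
    else
        echo "mismatch: install kernel-devel or upgrade the kernel and reboot"
    fi
}
kdir_ok /usr/src/kernels
```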
[root@heartbeat-1-114drbd-8.4.4] # ls-ld / usr/src/kernels/2.6.32-696.20.1.el6.x86_64/drwxr-xr-x 22 root root 4096 Mar 6 19:24 / usr/src/kernels/2.6.32-696.20.1.el6.x86_64/ [root @ heartbeat-1-114drbd-8.4.4] # uname-r 2.6.32-642.el6.x86_64 [root@ Heartbeat-1-114drbd-8.4.4] # yum-y install Kernel [root @ heartbeat-1-114i] # uname-r 2.6.32-696.20.1.el6.x86_64
2. Build against the system kernel
[root@heartbeat-1-114 drbd-8.4.4]# make KDIR=/usr/src/kernels/$(uname -r)/
[root@heartbeat-1-114 drbd-8.4.4]# echo $?
0
(5) install drbd
[root@heartbeat-1-114 drbd-8.4.4]# make install
echo $? returns 0, confirming a successful installation.
9. Mirror Data /data Configuration Parameters
The important part is the highlighted section below. We reuse the environment where heartbeat is deployed; heartbeat's deployment was covered in the previous article, so it is not demonstrated again here.
10. Configure DRBD Parameters (both machines)
10.1 Load the DRBD module into the kernel
The DRBD module is not automatically reloaded into the kernel after a reboot. To load it at boot, you can add the modprobe line to /etc/rc.local. In production, however, it is often deliberately left out of /etc/rc.local so that DRBD does not start automatically, since automatic starts can cause unnecessary problems. Use lsmod | grep drbd to check; if the module appears in the output, it is loaded into the kernel.
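A small sketch of the /etc/rc.local approach, written so that repeated runs don't duplicate the line. A temporary file stands in for /etc/rc.local here so the sketch is safe to run anywhere:

```shell
#!/bin/sh
RC=$(mktemp)   # stand-in for /etc/rc.local
# Append 'modprobe drbd' only if it is not already present.
grep -qx 'modprobe drbd' "$RC" || echo 'modprobe drbd' >> "$RC"
grep -qx 'modprobe drbd' "$RC" || echo 'modprobe drbd' >> "$RC"   # no-op on second run
grep -cx 'modprobe drbd' "$RC"   # prints: 1
```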
[root@heartbeat-1-114 drbd-8.4.4]# lsmod | grep drbd
[root@heartbeat-1-114 drbd-8.4.4]# modprobe drbd
[root@heartbeat-1-114 drbd-8.4.4]# lsmod | grep drbd
drbd                  327370  0
libcrc32c               1246  1 drbd
[root@heartbeat-1-114 drbd-8.4.4]# echo 'modprobe drbd' >> /etc/rc.local
[root@heartbeat-1-114 drbd-8.4.4]# tail -1 /etc/rc.local
modprobe drbd
10.2 Edit DRBD's configuration file drbd.conf
(1) configure the configuration file of DRBD
The DRBD configuration file lives under /etc/, the path we specified with --sysconfdir when compiling.
[root@heartbeat-1-114 etc]# pwd
/etc
[root@heartbeat-1-114 etc]# cp drbd.conf{,.bak}
[root@heartbeat-1-114 etc]# rm -f drbd.conf
[root@heartbeat-1-114 etc]# cat drbd.conf
global {
    usage-count no;
}
common {
    syncer {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        rate 1000M;
        verify-alg crc32c;
    }
}
resource data {
    protocol C;
    disk {
        on-io-error detach;
    }
    on heartbeat-1-114 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.10.4:7788;
        meta-disk /dev/sdb2[0];
    }
    on heartbeat-1-115 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.10.5:7788;
        meta-disk /dev/sdb2[0];
    }
}
(2) description of configuration file parameters
global {
    usage-count no;
}
The first three lines are the global configuration. usage-count controls whether this installation is reported to the DRBD project's online usage counter; setting it to no opts out of those official statistics.
common {
    syncer {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        rate 1000M;
        verify-alg crc32c;
    }
}
The common section sets the synchronization speed (1000M here); crc32c is the checksum algorithm used for verification.
resource data {
    protocol C;
    disk {
        on-io-error detach;
    }
    on heartbeat-1-114 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.10.4:7788;
        meta-disk /dev/sdb2[0];
    }
    on heartbeat-1-115 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.10.5:7788;
        meta-disk /dev/sdb2[0];
    }
}
The resource section above defines a DRBD resource. protocol C selects real-time synchronous replication; using A or B gives asynchronous or semi-synchronous replication, which can lose data, so they only suit businesses without strict consistency requirements. The disk block specifies how an I/O error on the disk is handled (detach). The name after resource (data) is the resource name used when starting DRBD. A configuration can contain multiple resources: to add another, copy one resource section, then change the resource name, the disk and meta-disk devices, and the port number of the synchronization address (7788 here). In on heartbeat-1-114, the name after on is the machine name, and it must match exactly what uname -n returns. device is the DRBD device; disk is the first local partition of /dev/sdb backing it; address is the synchronization address; meta-disk points the meta device at the second local partition of /dev/sdb, and [0] is the index into that meta device.
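The multi-resource point above can be illustrated with a hypothetical second resource. Everything in this fragment is an assumption for illustration only: the resource name data2, the /dev/sdc devices, the /dev/drbd1 device, and port 7789 are not part of this article's setup:

```
resource data2 {
    protocol C;
    disk {
        on-io-error detach;
    }
    on heartbeat-1-114 {
        device    /dev/drbd1;        # each resource needs its own drbd device
        disk      /dev/sdc1;         # ... and its own backing partition
        address   10.0.10.4:7789;    # ... and a distinct port
        meta-disk /dev/sdc2[0];
    }
    on heartbeat-1-115 {
        device    /dev/drbd1;
        disk      /dev/sdc1;
        address   10.0.10.5:7789;
        meta-disk /dev/sdc2[0];
    }
}
```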
11. Enable DRBD resources (both machines need to operate)
Both machines need to perform this; heartbeat-1-114 is used as the example.
11.1 initialize metadata (Create device metadata) for DRBD
Initialize the resource; note that the resource we initialize is data, the name given after resource in drbd.conf.
[root@heartbeat-1-114 etc]# drbdadm create-md data
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
11.2 Start the DRBD service
Command to start the drbd service: drbdadm up data
Command to stop the drbd service: drbdadm down data
drbdadm up is followed by the resource name set by resource (data), or you can bring up all resources with drbdadm up all. The standby node needs to be started as well. Taking the primary node as an example:
[root@heartbeat-1-114 etc]# drbdadm up data
/usr/local/drbd8.4.4/var/run/drbd: No such file or directory
/usr/local/drbd8.4.4/var/run/drbd: No such file or directory
We get the error message /usr/local/drbd8.4.4/var/run/drbd: No such file or directory. On CentOS 6 this directory does not exist, so create it and then start drbd again.
[root@heartbeat-1-114 etc]# mkdir -p /usr/local/drbd8.4.4/var/run/drbd
[root@heartbeat-1-114 etc]# drbdadm create-md data
Device '0' is configured!
Command 'drbdmeta 0 v08 /dev/sdb2 0 create-md' terminated with exit code 20
11.3 View the status of drbd in /proc/drbd
[root@heartbeat-1-114 etc]# cat /proc/drbd
version: 8.4.4 (api:1/proto:86-101)
GIT-hash: 74402fecf24da8e5438171ee8c19e28627e1c98a build by root@heartbeat-1-114, 2018-11-10 23:08:21
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:749984
The line ro:Secondary/Secondary ds:Inconsistent/Inconsistent is expected at this point: both nodes are still Secondary and the data is not yet consistent, because no node has been promoted to primary.
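The status fields can also be pulled out programmatically, which is handy for monitoring. A minimal sketch: the function names are mine, and the sample line is hard-coded rather than read from /proc/drbd so it runs anywhere:

```shell
#!/bin/sh
# Extract the cs: (connection state) and ro: (roles) fields from a
# /proc/drbd-style status line.
drbd_cs() { echo "$1" | sed -n 's/.*cs:\([A-Za-z]*\).*/\1/p'; }
drbd_ro() { echo "$1" | sed -n 's/.*ro:\([A-Za-z]*\/[A-Za-z]*\).*/\1/p'; }

line=" 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----"
drbd_cs "$line"   # prints: Connected
drbd_ro "$line"   # prints: Secondary/Secondary
```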
11.4 specify the primary node for data synchronization
Command to promote a node to primary and start the initial data synchronization: drbdadm -- --overwrite-data-of-peer primary data
Command to demote a master node to a slave node (if the server is currently the primary): /usr/share/heartbeat/hb_standby
Description:
1. If the disks are empty, you can run this freely without worrying about the data.
2. If the data on the two sides differs, pay special attention to the direction of synchronization, or data may be lost: if there is data on the peer's disk, back it up first, because it will be overwritten.
Note: we operate on our primary server heartbeat-1-114 and synchronize the DRBD data to the peer server so the data becomes consistent.
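Since the first full sync overwrites the peer, it is worth backing up the peer's partition beforehand, as the tip above says. A safe-to-run sketch with dd; a scratch file stands in for the real /dev/sdb1:

```shell
#!/bin/sh
DISK=$(mktemp)     # stand-in for the peer's /dev/sdb1
BACKUP=$(mktemp)   # stand-in for a backup image file
printf 'precious data' > "$DISK"
# Image the partition before letting DRBD overwrite it:
dd if="$DISK" of="$BACKUP" bs=1M 2>/dev/null
cmp -s "$DISK" "$BACKUP" && echo "backup verified"   # prints: backup verified
```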
(1) specify the master node to synchronize data
[root@heartbeat-1-114 etc]# drbdadm -- --overwrite-data-of-peer primary data
View the synchronization information of the primary node
[root@heartbeat-1-114 etc]# cat /proc/drbd
version: 8.4.4 (api:1/proto:86-101)
GIT-hash: 74402fecf24da8e5438171ee8c19e28627e1c98a build by root@heartbeat-1-114, 2018-11-10 23:08:21
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
    ns:40960 nr:0 dw:0 dr:41631 al:0 bm:2 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:709024
        [>...................] sync'ed:  6.0% (709024/749984)K
        finish: 0:00:17 speed: 40,960 (40,960) K/sec
[root@heartbeat-1-114 etc]# cat /proc/drbd
version: 8.4.4 (api:1/proto:86-101)
GIT-hash: 74402fecf24da8e5438171ee8c19e28627e1c98a build by root@heartbeat-1-114, 2018-11-10 23:08:21
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
    ns:164556 nr:0 dw:0 dr:168607 al:0 bm:10 lo:0 pe:1 ua:4 ap:0 ep:1 wo:f oos:586144
        [===>................] sync'ed: 22.3% (586144/749984)K
        finish: 0:00:14 speed: 40,960 (40,960) K/sec
[root@heartbeat-1-114 etc]# cat /proc/drbd
version: 8.4.4 (api:1/proto:86-101)
GIT-hash: 74402fecf24da8e5438171ee8c19e28627e1c98a build by root@heartbeat-1-114, 2018-11-10 23:08:21
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
    ns:230100 nr:0 dw:0 dr:234143 al:0 bm:14 lo:0 pe:1 ua:4 ap:0 ep:1 wo:f oos:520608
        [=====>..............] sync'ed: 31.0% (520608/749984)K
        finish: 0:00:11 speed: 45,872 (45,872) K/sec
(2) Information synchronized from the node
[root@heartbeat-1-115 etc]# cat /proc/drbd
version: 8.4.4 (api:1/proto:86-101)
GIT-hash: 74402fecf24da8e5438171ee8c19e28627e1c98a build by root@heartbeat-1-115, 2018-11-10 23:08:42
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:749983 dw:749983 dr:0 al:0 bm:46 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
(3) Parameter description
Take the primary node as an example
cs:Connected: the connection state; zabbix monitoring mainly watches for Connected.
ro:Primary/Secondary: the roles, given as local/peer; Primary is the master and Secondary the slave, so here the local node is master and the peer is slave.
ds:UpToDate/UpToDate: both sides are up to date.
ns (network send): data sent over the network.
nr (network receive): data received over the network; on the standby node this should equal the primary's ns value.
dw (disk write): data written to the hard disk, i.e. network data committed to disk. If the standby node's dw and nr equal the primary's ns, the data is fully synchronized.
12. Possible Problems and Solutions
cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
Or
cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:8444
Solution:
1. Check that the physical network connection between the two machines, the IPs, and the host routes are correct.
2. Stop the iptables firewall.
3. It may also be the result of split brain.
The above problems can be solved by the following operations:
(1) Operate on the slave node:
drbdadm secondary data
drbdadm disconnect data
drbdadm -- --discard-my-data connect data    # discard local data and reconnect
(2) operate on the primary node:
Check the status with cat /proc/drbd; if it is not in the WFConnection state, you need to reconnect the resource manually with the following command:
drbdadm connect data
Then start the drbd resource on the standby node.
13. Mount, Test Data Synchronization, and Check the Standby Node
(1) create a DRBD file system
Format the drbd0 of the primary node, and the standby node does not need to be formatted.
[root@heartbeat-1-114 etc]# mkfs.ext4 -b 4096 /dev/drbd0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
46944 inodes, 187495 blocks
9374 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=192937984
6 block groups
32768 blocks per group, 32768 fragments per group
7824 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 37 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.
(2) mounting equipment
[root@heartbeat-1-114 etc]# mount /dev/drbd0 /data
Tip: formatting on the slave node will report an error. Only a node that has been promoted to primary with drbdadm -- --overwrite-data-of-peer primary data can format /dev/drbd0; the slave node cannot run this command and does not need to format drbd0.
# mkfs.ext4 -b 4096 /dev/drbd0
mke2fs 1.41.12 (17-May-2010)
mkfs.ext4: wrong media type while trying to determine filesystem size
(3) data synchronization of test backup nodes
First create data in the /data file system on the primary node; we create 10 files:
[root@heartbeat-1-114 data]# touch `seq 1 10`
[root@heartbeat-1-114 data]# ls
1  10  2  3  4  5  6  7  8  9  lost+found
To check the data synchronization of the slave node, you need to mount the DRBD storage device first. We see that the data has been synchronized.
[root@heartbeat-1-115 etc]# mount /dev/sdb1 /mnt
mount: you must specify the filesystem type
[root@heartbeat-1-115 etc]# drbdadm down data
Why does mounting sdb1 report an error while the slave node's drbd resource is running? The device cannot be mounted while drbd holds it; once you stop the slave node's drbd resource you can mount it and check whether the data is synchronized. In practice the slave node only needs its drbd resource started, and there is no need to mount the file system at all. If you do want to check whether the files are synchronized, stop the slave node's drbd resource, mount, and look:
[root@heartbeat-1-115 etc]# mount /dev/sdb1 /mnt
[root@heartbeat-1-115 etc]# ll /mnt/
total 16
-rw-r--r-- 1 root root     0 Nov 11 15:26 1
-rw-r--r-- 1 root root     0 Nov 11 15:26 10
-rw-r--r-- 1 root root     0 Nov 11 15:26 2
-rw-r--r-- 1 root root     0 Nov 11 15:26 3
-rw-r--r-- 1 root root     0 Nov 11 15:26 4
-rw-r--r-- 1 root root     0 Nov 11 15:26 5
-rw-r--r-- 1 root root     0 Nov 11 15:26 6
-rw-r--r-- 1 root root     0 Nov 11 15:26 7
-rw-r--r-- 1 root root     0 Nov 11 15:26 8
-rw-r--r-- 1 root root     0 Nov 11 15:26 9
drwx------ 2 root root 16384 Nov 11 15:20 lost+found
The data shown above has synchronized, which confirms the drbd setup is complete. Afterwards, restart the drbd service on the slave node.
# umount /mnt/
[root@heartbeat-1-115 ~]# drbdadm up data
[root@heartbeat-1-115 ~]# cat /proc/drbd
version: 8.4.4 (api:1/proto:86-101)
GIT-hash: 74402fecf24da8e5438171ee8c19e28627e1c98a build by root@heartbeat-1-129, 2018-03-05 07:38:05
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:56 dw:56 dr:0 al:0 bm:2 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
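The by-hand check above (create files on the primary, list them on the standby) amounts to comparing the two directory listings. A safe-to-run sketch, with temp directories in place of /data and /mnt and cp standing in for DRBD's block-level replication:

```shell
#!/bin/sh
PRIMARY=$(mktemp -d)   # stand-in for the primary's /data mount
STANDBY=$(mktemp -d)   # stand-in for the standby's /mnt mount
( cd "$PRIMARY" && touch $(seq 1 10) )
cp -a "$PRIMARY/." "$STANDBY/"    # DRBD would replicate these blocks itself
# The two sides are in sync when their listings match:
[ "$(ls "$PRIMARY")" = "$(ls "$STANDBY")" ] && echo "in sync"   # prints: in sync
```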