Heartbeat+DRBD+NFS High availability case
9.4 Description of requirements for deploying DRBD
9.4.1 Business requirements description
Suppose there are two servers, Rserver-1 and Lserver-1, whose real IPs are 192.168.236.143 (Rserver-1) and 192.168.236.192 (Lserver-1) respectively.
Configuration goal: after DRBD is configured on both servers, data written to the /dev/sdb partition on Rserver-1 is synchronized to Lserver-1 in real time. If Rserver-1 goes down or its hard disk is damaged, the data on Lserver-1 is a full backup of Rserver-1. More than that, Lserver-1 can immediately take over from the failed Rserver-1 and keep serving, so the data remains highly available and the business is not affected.
9.4.2 DRBD deployment structure diagram
1. The DRBD services synchronize data with each other in real time over a direct connection or Ethernet.
2. The two storage servers back each other up; under normal conditions each end provides one primary partition for NFS use.
3. Dual Gigabit network cards are bonded (bonding) between the storage servers and between the storage servers and the switches (see the sketch after this list).
4. The application servers access the storage through NFS.
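For reference, a minimal sketch of the dual-NIC bonding mentioned in point 3, assuming CentOS 6 style network-scripts and two hypothetical member NICs eth0 and eth1 (the interface names, bond IP and bonding options are placeholders; the lab below uses plain, unbonded NICs):
# /etc/sysconfig/network-scripts/ifcfg-bond0 (hypothetical example)
DEVICE=bond0
TYPE=Bond
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.16.1.1
NETMASK=255.255.255.0
BONDING_OPTS="mode=1 miimon=100"    # mode=1 is active-backup; pick a mode your switch supports
# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1, the second member NIC)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes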
9.4.3 Service Host Resource Planning
Name                 Interface   IP                Use
Master (Rserver-1)   eth0        192.168.236.143   Public-network management IP, used for WAN data forwarding
                     eth2        172.16.1.1        Private-network management IP, used for LAN data forwarding
                     eth3        192.168.1.1       Heartbeat line connection (directly cabled)
                     VIP         192.168.236.10    Provides the mount service for application A
Backup (Lserver-1)   eth0        192.168.236.192   Public-network management IP, used for WAN data forwarding
                     eth2        172.16.1.2        Private-network management IP, used for LAN data forwarding
                     eth3        192.168.1.2       Heartbeat line connection between the servers
                     VIP         192.168.236.20    Provides the mount service for application A
9.4.5 Environment configuration for DRBD
Set the /etc/hosts file on both servers. Note that the host name must also be changed to match the names used here; for example, run hostname Rserver-1 on the master. If this step is skipped, the drbd service will report an error when it starts (see the sketch after the commands below for making the change persistent).
echo '172.16.1.1 Rserver-1' >> /etc/hosts
echo '172.16.1.2 Lserver-1' >> /etc/hosts
[root@Lserver-1 ~]# tail -2 /etc/hosts
172.16.1.1 Rserver-1
172.16.1.2 Lserver-1
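To make the host name change survive a reboot, a minimal sketch assuming CentOS 6, where the permanent hostname lives in /etc/sysconfig/network (adjust the name per node):
hostname Rserver-1                                                    # takes effect immediately on the master
sed -i 's/^HOSTNAME=.*/HOSTNAME=Rserver-1/' /etc/sysconfig/network    # persists across reboots
# on the backup node, use Lserver-1 instead of Rserver-1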
8.3.3 Configure the heartbeat connection between the servers:
The two network cards holding 192.168.1.1 and 192.168.1.2 are connected directly with an ordinary network cable, i.e. without going through a switch, and are used only for heartbeat detection.
Master:
ifconfig eth3 192.168.1.1 netmask 255.255.255.0
Backup:
ifconfig eth3 192.168.1.2 netmask 255.255.255.0
Add the following host route on the Rserver-1 server:
route add -host 192.168.1.2 dev eth3
# this route makes Rserver-1 reach 192.168.1.2 through eth3, i.e. over the heartbeat cable.
echo 'route add -host 192.168.1.2 dev eth3' >> /etc/rc.local
# appended to the boot configuration so the route is restored automatically after the next reboot.
route -n
Add the following host route on the Lserver-1 server:
route add -host 192.168.1.1 dev eth3
# this route makes Lserver-1 reach 192.168.1.1 through eth3, i.e. over the heartbeat cable.
echo 'route add -host 192.168.1.1 dev eth3' >> /etc/rc.local
# appended to the boot configuration so the route is restored automatically after the next reboot.
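The ifconfig commands above do not survive a reboot. A minimal sketch of a persistent configuration for the heartbeat NIC, assuming CentOS 6 network-scripts (shown for Rserver-1; use 192.168.1.2 on Lserver-1):
cat > /etc/sysconfig/network-scripts/ifcfg-eth3 <<'EOF'
DEVICE=eth3
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.1
NETMASK=255.255.255.0
EOF
ifup eth3    # bring the interface up with the new settings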
9.5 Start deployment
9.5.1 Partitioning the hard disk
First, partition the hard disk with commands such as fdisk, mkfs.ext4 and tune2fs; the partition details are given below.
Tip: in a production environment, fdisk cannot handle a single disk or RAID volume larger than 2 TB; a GPT-capable tool such as parted has to be used instead (see the sketch below).
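For disks larger than 2 TB, a GPT label has to be used instead of MBR. A minimal sketch with parted (not needed for the 20 GB /dev/sdb used in this lab; the device name is only an example):
parted /dev/sdb mklabel gpt              # replace the MBR label with GPT
parted /dev/sdb mkpart primary 0% 100%   # create one partition spanning the whole disk
parted /dev/sdb print                    # verify the result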
Add a second hard disk to each virtual machine and then check it.
View on Rserver-1:
[root@Rserver-1 ~]# fdisk -l
Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000486f5
Device Boot Start End Blocks Id System
/dev/sda1 * 1 64 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 64 2611 20458496 8e Linux LVM
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
View on Lserver-1:
[root@Lserver-1 ~]# fdisk -l
Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00087dae
Device Boot Start End Blocks Id System
/dev/sda1 * 1 64 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 64 2611 20458496 8e Linux LVM
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
9.5.2 Partition operations on master and backup (Note: identical on both)
All we need to do is partition /dev/sdb. The partition details are shown in the table below.
Device      Mount point               Size   Purpose
/dev/sdb1   /data                     500M   Stores pictures
/dev/sdb2   (meta-data partition)     300M   Stores DRBD synchronization status information
[root@Lserver-1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x95767900.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p          # create a new primary partition
Partition number (1-4): 1
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-2610, default 2610): +500M    # size is 500M
Command (m for help): p
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x95767900
Device Boot Start End Blocks Id System
/dev/sdb1 1 65 522081 83 Linux
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (66-2610, default 66):
Using default value 66
Last cylinder, +cylinders or +size{K,M,G} (66-2610, default 2610): +200M    # create a 200M partition
Command (m for help): p
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x95767900
Device Boot Start End Blocks Id System
/dev/sdb1 1 65 522081 83 Linux
/dev/sdb2 66 91 208845 83 Linux
Command (m for help): w    # save the partition table
If you are prompted with:
The kernel still uses the old table.
The new table will be used at the next reboot.
it means the kernel does not yet know that the disk has been re-partitioned and would normally need a reboot. You can make the kernel re-read the partition table without rebooting by running:
partprobe
Now check the partitioning result:
[root@Lserver-1 ~]# fdisk -l
Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00087dae
Device Boot Start End Blocks Id System
/dev/sda1 * 1 64 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 64 2611 20458496 8e Linux LVM
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x95767900
Device Boot Start End Blocks Id System
/dev/sdb1 1 65 522081 83 Linux
/dev/sdb2 66 91 208845 83 Linux
Now format the data partition
[root@Rserver-1 ~]# mkfs.ext4 /dev/sdb1
[root@Lserver-1 ~]# mkfs.ext4 /dev/sdb1
[root@Rserver-1 ~]# tune2fs -c -1 /dev/sdb1
tune2fs 1.41.12 (17-May-2010)
Setting maximal mount count to -1    # sets the maximum mount count to -1
[root@Lserver-1 ~]# tune2fs -c -1 /dev/sdb1
tune2fs 1.41.12 (17-May-2010)
Setting maximal mount count to -1    # sets the maximum mount count to -1
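As an optional check, you can confirm that the forced filesystem check based on mount count has really been disabled:
tune2fs -l /dev/sdb1 | grep -i 'mount count'
# "Maximum mount count: -1" means the filesystem will not be forcibly checked after N mounts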
9.6 Preparation before installation: (Rserver-1, Lserver-1)
1. Stop iptables and disable SELinux to avoid errors during installation.
# service iptables stop
# chkconfig iptables off
# setenforce 0
# vi /etc/selinux/config
-------------------
SELINUX=disabled
-------------------
9.6.1 Time synchronization:
ntpdate -u asia.pool.ntp.org
9.6.2 Installation prerequisites for DRBD:
# yum install gcc gcc-c++ make glibc flex kernel-devel kernel-headers
The kernel-devel and kernel-headers packages must match the running kernel version (uname -r); otherwise the drbd module cannot be loaded into the kernel later. A local yum repository can also be used for the installation.
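A quick way to confirm that the kernel headers match the running kernel before building (a sketch; if they differ, install the exact matching versions):
uname -r
rpm -q kernel-devel kernel-headers
# if the versions above do not match uname -r, install the matching ones, e.g.:
# yum install "kernel-devel-$(uname -r)" "kernel-headers-$(uname -r)"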
9.6.3 Install DRBD: (Rserver-1 master, Lserver-1 backup)
# wget http://oss.linbit.com/drbd/8.4/drbd-8.4.3.tar.gz
# tar zxvf drbd-8.4.3.tar.gz
# cd drbd-8.4.3
# ./configure --prefix=/usr/local/drbd --with-km --with-heartbeat --sysconfdir=/etc/
# make KDIR=/usr/src/kernels/2.6.32-504.16.2.el6.x86_64/
# make install
# mkdir -p /usr/local/drbd/var/run/drbd
# chkconfig --add drbd
# chkconfig drbd on
2. Load the DRBD module: (Rserver-1 master, Lserver-1 slave)
# modprobe drbd
Check whether the DRBD module has been loaded into the kernel:
# lsmod | grep drbd
drbd 310172 4
libcrc32c 1246 1 drbd
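The drbd init script normally loads the module itself, but if you prefer to be explicit you can load it at boot the same way the routes were made persistent earlier (a sketch):
modinfo drbd | head -3                 # confirm the freshly built module is the one being used
echo 'modprobe drbd' >> /etc/rc.local  # optional: load the module automatically at boot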
3. Parameter configuration: (Rserver-1 master, Lserver-1 slave)
vi /etc/drbd.conf
Clear the contents of the file and add the following configuration:
resource r0 {
  protocol C;
  startup { wfc-timeout 0; degr-wfc-timeout 120; }
  disk { on-io-error detach; }
  net {
    timeout 60;
    connect-int 10;
    ping-int 10;
    max-buffers 2048;
    max-epoch-size 2048;
  }
  syncer { rate 200M; }
  on Rserver-1 {                 # "on" is followed by the hostname
    device /dev/drbd0;           # the drbd device
    disk /dev/sdb1;              # the local disk, i.e. the partition created above
    address 172.16.1.1:7788;     # private network IP
    meta-disk internal;
  }
  on Lserver-1 {
    device /dev/drbd0;
    disk /dev/sdb1;
    address 172.16.1.2:7788;
    meta-disk internal;
  }
}
Note: adjust the hostnames, IPs and disks in the above configuration to match your own environment.
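Before creating the metadata you can let drbdadm parse the file; syntax errors in /etc/drbd.conf show up here (a quick sanity check):
drbdadm dump r0    # prints the resource as drbdadm understands it; errors indicate a bad config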
4. Create the DRBD device and activate the r0 resource: (Rserver-1 master, Lserver-1 slave)
# mknod /dev/drbd0 b 147 0
# drbdadm create-md r0
Wait a moment; output like the following indicates that the drbd metadata block was created successfully.
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
--== Creating metadata ==--
As with nodes, we count the total number of devices mirrored by DRBD
at http://usage.drbd.org.
The counter works anonymously. It creates a random number to identify
the device and sends that random number, along with the kernel and
DRBD version, to usage.drbd.org.
http://usage.drbd.org/cgi-bin/insert_usage.pl?
nu=716310175600466686&ru=15741444353112217792&rs=1085704704
* If you wish to opt out entirely, simply enter 'no'.
* To continue, just press [RETURN]
success
Run the command again:
# drbdadm create-md r0
r0 is activated successfully
[you need to type 'yes' to confirm] yes
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
5. Start the DRBD service: (Rserver-1 master, Lserver-1 slave)
service drbd start
Note: the service must be started on both the master and the slave for it to take effect.
6. View the status: (Rserver-1 master, Lserver-1 slave)
# service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@drbd1.corp.com, 2015-05-12 21:05:41
m:res cs ro ds p mounted fstype
0:r0 Connected Secondary/Secondary Inconsistent/Inconsistent C
Here ro:Secondary/Secondary means that both hosts are in the standby role, and ds is the disk state; it shows "Inconsistent" because DRBD cannot yet determine which side is the primary, i.e. whose disk data should be taken as the reference.
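The same connection, role and disk states can also be watched in /proc/drbd, which is handy for following the synchronization progress later:
cat /proc/drbd    # cs: connection state, ro: roles, ds: disk states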
7. Configure Rserver-1 as the primary node:
# drbdsetup /dev/drbd0 primary --force
8. Check the DRBD status on master and slave: (Rserver-1) master
# service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@Rserver-1, 2017-05-18 13:40:26
m:res cs ro ds p mounted fstype
0:r0 Connected Primary/Secondary UpToDate/UpToDate C
(Lserver-1) backup
# service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@Lserver-1, 2017-05-18 13
m:res cs ro ds p mounted fstype
0:r0 Connected Secondary/Primary UpToDate/UpToDate C
ro shows Primary/Secondary on the master and Secondary/Primary on the slave,
and ds shows UpToDate/UpToDate,
which indicates that the master-slave configuration succeeded.
9. Mount DRBD: (Rserver-1) master
From the status above we can see that the mounted and fstype fields are empty, so in this step we mount DRBD onto the system directory /data.
# mkfs.ext4 /dev/drbd0
# mkdir /data
# mount /dev/drbd0 /data
Note: no operation on the DRBD device is allowed on the Secondary node, not even mounting it; all reads and writes can only be performed on the Primary node. Only when the Primary node fails can the Secondary node be promoted to Primary and the DRBD device be mounted there so that the service keeps working (a manual sketch of that promotion follows).
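Heartbeat will automate this promotion below; purely for understanding, a manual switchover would look roughly like this (a sketch, to be run only while the resources are healthy):
# on the current Primary (Rserver-1):
umount /data
drbdadm secondary r0
# on the other node (Lserver-1):
drbdadm primary r0
mkdir -p /data          # if the mount point does not exist yet
mount /dev/drbd0 /data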
DRBD status after a successful mount: (Rserver-1 master)
[root@Rserver-1 ~]# service drbd status
drbd driver loaded OK; device status:
version: 8.4.2 (api:1/proto:86-101)
GIT-hash: 7ad5f850d711223713d6dcadc3dd48860321070c build by root@Rserver-1, 2017-05-18 13:40:26
m:res cs ro ds p mounted fstype
0:r0 Connected Primary/Secondary UpToDate/UpToDate C /data ext4
9.7 Configure the heartbeat service
yum install heartbeat -y
9.7.1 Configuring ha.cf
cd /usr/share/doc/heartbeat-3.0.4
ll | egrep 'ha.cf|authkeys|haresources'
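Heartbeat only ships sample copies of these three files under the doc directory; they are typically copied into /etc/ha.d/ before editing (a sketch; the heartbeat version in the path may differ on your system):
cp /usr/share/doc/heartbeat-3.0.4/{ha.cf,authkeys,haresources} /etc/ha.d/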
Configure the ha.cf file (/etc/ha.d/ha.cf) as follows:
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
# --> the three lines above configure logging and normally do not need to be changed
keepalive 2
deadtime 30
warntime 10
initdead 120
# --> the four lines above are basic timing parameters and normally do not need to be changed
# serial serialportname...
mcast eth3 225.0.0.219 694 1 0
# --> this line uses multicast for the heartbeat; eth3 is the only thing you normally need to change, to whichever NIC carries your heartbeat cable
auto_failback on
node Rserver-1    # --> the two hostnames of the storage servers
node Lserver-1    # --> the two hostnames of the storage servers
crm no
9.7.2 Configuring authkeys
auth 3
#1 crc
#2 sha1 HI!
3 md5 Hello!
The authkeys file must have mode 600; this requirement is stated in the file itself:
# Authentication file. Must be mode 600
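Set the permissions accordingly on both nodes:
chmod 600 /etc/ha.d/authkeys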
9.7.3 Configuring haresources
Add the following single line to the file:
Rserver-1 IPaddr::172.16.1.10/24/eth2 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext4 killnfsd
Note: the IPaddr, Filesystem and other scripts referenced in this file live under /etc/ha.d/resource.d/. You can also place service start scripts (for example mysql, www) in that directory, add the script name to /etc/ha.d/haresources, and the script will then be started along with heartbeat.
IPaddr::172.16.1.10/24/eth2: configures the floating virtual IP used for external service with the IPaddr script
drbddisk::r0: switches the DRBD resource r0 between the primary and secondary roles with the drbddisk script
Filesystem::/dev/drbd0::/data::ext4: mounts and unmounts the disk with the Filesystem script
killnfsd: this script restarts NFS (created in the next step; see the check after this list)
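A quick check that the resource scripts referenced in haresources are actually present (a sketch; killnfsd will only show up after it is created in the next step):
ls /etc/ha.d/resource.d/ | egrep 'IPaddr|drbddisk|Filesystem|killnfsd'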
9.7.4 Edit the killnfsd script, which restarts the NFS service: (Rserver-1, Lserver-1)
# vi /etc/ha.d/resource.d/killnfsd
killall -9 nfsd; /etc/init.d/nfs restart; exit 0
Give it 755 execute permissions:
# chmod 755 /etc/ha.d/resource.d/killnfsd
9.7.5 Start the heartbeat service
Start the heartbeat service on both nodes, starting with Rserver-1: (Rserver-1, Lserver-1)
# service heartbeat start
# chkconfig heartbeat on
If you can now ping the virtual IP 172.16.1.10 from another machine, the configuration is successful.
9.7.6 Configure NFS: (Rserver-1, Lserver-1)
Edit the exports configuration file and add the following line:
# vi /etc/exports
/data *(rw,no_root_squash)
9.7.7 Restart the NFS service:
# service rpcbind restart
# service nfs restart
# chkconfig rpcbind on
# chkconfig nfs off
Note: NFS is deliberately not set to start automatically at boot, because the /etc/ha.d/resource.d/killnfsd script controls the startup of NFS.
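Once rpcbind and nfs are running, you can verify the export from any machine (a sketch; running it against the VIP also exercises the heartbeat address):
showmount -e 172.16.1.10            # should list /data *
rpcinfo -p 172.16.1.10 | grep nfs   # the nfs program should be registered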
9.8 Test high availability
9.8.1 Normal hot-standby switchover
Mount the NFS shared directory on a client:
# mount -t nfs 172.16.1.10:/data /tmp
Simulate a failure by stopping the heartbeat service on the primary node Rserver-1.
The standby node Lserver-1 takes over immediately and seamlessly.
The NFS share mounted on the test client continues to read and write normally.
Check the DRBD status on the slave (Lserver-1) at this point:
If its role has become Primary, the switchover succeeded (see the checks below).
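Since the status screenshot is not reproduced here, these commands on Lserver-1 show the same thing (a sketch):
cat /proc/drbd                        # ro: should now start with Primary on Lserver-1
df -h /data                           # /dev/drbd0 should be mounted on /data
ip addr show eth2 | grep 172.16.1.10  # the VIP should have moved to this node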
9.8.2 Abnormal (power-off) switchover
First switch all services and the VIP back to the master; afterwards the master's power will be cut directly.
[root@Rserver-1 ha.d]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO: Resource is stopped
Done.
[root@Rserver-1 ha.d]# ip addr list
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:20:dc:da brd ff:ff:ff:ff:ff:ff
inet 192.168.236.143/24 brd 192.168.236.255 scope global eth0
inet 192.168.236.10/24 brd 192.168.236.255 scope global secondary eth0
inet6 fe80::20c:29ff:fe20:dcda/64 scope link
valid_lft forever preferred_lft forever
3: eth2: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:20:dc:e4 brd ff:ff:ff:ff:ff:ff
inet 172.16.1.1/24 brd 172.16.1.255 scope global eth2
inet 172.16.1.10/24 brd 172.16.1.255 scope global secondary eth2
inet6 fe80::20c:29ff:fe20:dce4/64 scope link
valid_lft forever preferred_lft forever
4: eth3: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:20:dc:ee brd ff:ff:ff:ff:ff:ff
inet 192.168.1.1/24 brd 192.168.1.255 scope global eth3
inet6 fe80::20c:29ff:fe20:dcee/64 scope link
valid_lft forever preferred_lft forever
5: pan0: mtu 1500 qdisc noop state DOWN
link/ether 6e:5d:75:f7:48:77 brd ff:ff:ff:ff:ff:ff
[root@Rserver-1 ha.d]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_rserver1-lv_root 18650424 4093320 13609700 24% /
tmpfs 372156 76 372080 /dev/shm
/dev/sda1 495844 34853 435391 8% /boot
/dev/sr0 4363088 4363088 0 /media/CentOS_6.5_Final
/dev/drbd0 505552 10521 468930 3% /data
[root@Rserver-1 ha.d]#
The switch back to the master succeeded. Now test a hard failure: power the master off directly and see whether the standby takes over.
The power has been cut; let's look at the standby node.
[root@Lserver-1 ha.d]# ip addr list
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:4d:f6:92 brd ff:ff:ff:ff:ff:ff
inet 192.168.236.192/24 brd 192.168.236.255 scope global eth0
inet6 fe80::20c:29ff:fe4d:f692/64 scope link
valid_lft forever preferred_lft forever
3: eth2: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:4d:f6:9c brd ff:ff:ff:ff:ff:ff
inet 172.16.1.2/24 brd 172.16.1.255 scope global eth2
inet 172.16.1.10/24 brd 172.16.1.255 scope global secondary eth2
inet6 fe80::20c:29ff:fe4d:f69c/64 scope link
valid_lft forever preferred_lft forever
4: eth3: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:4d:f6:a6 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.2/24 brd 192.168.1.255 scope global eth3
inet6 fe80::20c:29ff:fe4d:f6a6/64 scope link
valid_lft forever preferred_lft forever
5: pan0: mtu 1500 qdisc noop state DOWN
link/ether 92:be:67:20:6e:b6 brd ff:ff:ff:ff:ff:ff
[root@Lserver-1 ha.d]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_lserver1-lv_root 18650424 3966516 13736504 23% /
tmpfs 372156 224 371932 /dev/shm
/dev/sda1 495844 34856 435388 8% /boot
/dev/sr0 4363088 4363088 0 /media/CentOS_6.5_Final
/dev/drbd0 505552 10521 468930 3% /data
[root@Lserver-1 ha.d]#
Check again from the client: the NFS mount is still readable and writable.
The VIP, the DRBD Primary role and the /data mount have all moved to Lserver-1, so the heartbeat+DRBD+NFS setup has been built successfully.