Combination of high availability and storage
We set up fencing in the previous experiment; today we combine high availability with shared storage.
Test hosts: 172.25.0.2, 172.25.0.3, and 172.25.0.251 (used as the storage export host)
Part 1:
# # before the experiment, start luci and check the cluster status:
# # disable the cluster's apache service:
# # check the IP addresses and confirm there is no vip:
# # check apache on server2 (that is, verify that it starts and stops normally):
# # make sure that the apache of the cluster is disabled:
# # the purpose of the step above is not obvious yet; we will see later.
# # add storage; it can live anywhere. Here the /dev/vol0/demo logical volume on the instructor host is used as the storage space. Because it still holds data from the previous experiment, we wipe it first:
# # here SCSI is used as the storage type over the iSCSI protocol, so first look up the required packages with rpm -qa | grep or yum list:
# # install the SCSI target on the storage export host (its service is tgtd), and install the iSCSI initiator on the cluster nodes (its service is iscsi):
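A minimal sketch of the installation, assuming the RHEL 6 package names scsi-target-utils (target side) and iscsi-initiator-utils (initiator side):
# on 172.25.0.251 (the storage export host)
yum install -y scsi-target-utils
# on both cluster nodes (172.25.0.2 and 172.25.0.3)
yum install -y iscsi-initiator-utils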
# # Editing configuration files:
# # enable the service:
# # check the processes; exactly two tgtd processes means it is running correctly. tgtd must be stopped before editing the configuration file each time, which is a common source of problems.
# # View configuration information:
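As a rough illustration (the target name and allowed initiators are assumptions based on the setup described above), /etc/tgt/targets.conf could contain something like the following, after which the service is restarted and the target listed:
<target iqn.2024-01.com.example:storage.demo>
    backing-store /dev/vol0/demo
    initiator-address 172.25.0.2
    initiator-address 172.25.0.3
</target>
/etc/init.d/tgtd restart
ps ax | grep tgtd    # expect exactly two tgtd processes
tgt-admin -s         # show the exported target and its LUNs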
# # install iscsi on two nodes:
# # by default the iscsi service starts at boot:
# # discover the target; the IP below is the storage host's IP:
# # log in to the target from the node:
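A sketch of the discovery and login commands on a cluster node (the actual target IQN printed by discovery is used by the login step automatically):
iscsiadm -m discovery -t st -p 172.25.0.251
iscsiadm -m node -l    # log in to all discovered targets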
# # if you check with fdisk -l at this point, you will find an extra disk:
# # log out of the iSCSI target on the node:
# # running fdisk -l again, the shared disk is no longer visible:
# # after logging out, delete the cached node records, otherwise the node will log back in to this disk automatically every time the iscsi service starts:
# # rediscover and log in again:
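A sketch of the logout, cache cleanup, and re-login steps:
iscsiadm -m node -u            # log out of the target
iscsiadm -m node -o delete     # delete the cached node records
iscsiadm -m discovery -t st -p 172.25.0.251
iscsiadm -m node -l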
# # create a partition on the new disk with partition type 8e (Linux LVM):
# # query it; the type of sda1 is 8e:
# # check the clvmd service, which manages logical volumes across the cluster, and make sure it is enabled:
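A quick check of clvmd, assuming the standard RHEL 6 init scripts (lvmconf --enable-cluster switches LVM to cluster-aware locking if it is not already set):
lvmconf --enable-cluster
/etc/init.d/clvmd status
chkconfig clvmd on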
# # create the physical volume on node 2:
# # you will find that node 3 sees it immediately, which means clvmd is working; its job is to keep logical volume metadata in sync across the cluster:
# # of course, Node 2 will also be synchronized:
# # View how to create a cluster vg:
# # use the pv you just created as a cluster vg:
# # check the vg we just created:
# # create lv with clustervg, whose name is demo:
# # of course, everything you do will be reflected in node 3.
# # then format the cluster logical volume:
# # then mount it on /mnt on node 3 and write the home page into /mnt with the content www.westos.org. Then unmount it; the home page is of course saved on the disk, and if the volume is later mounted on the http document root, the page will be accessible over the web.
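A sketch of the whole sequence, assuming the shared disk appears as /dev/sda, the partition created above is /dev/sda1, and the names clustervg and demo used in this write-up; -c y marks the volume group as clustered (the write-up later refers to the file system as ext3, hence mkfs.ext3):
pvcreate /dev/sda1
vgcreate -c y clustervg /dev/sda1
lvcreate -L 2G -n demo clustervg
mkfs.ext3 /dev/clustervg/demo
mount /dev/clustervg/demo /mnt
echo www.westos.org > /mnt/index.html
umount /mnt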
# # then add the logical volume (storage) as a resource:
# # sometimes luci logs you out after saving because of a timeout; just log in again and re-enter the relevant fields.
# # now that the resource exists, it needs to be associated with the http service. To do this, add a resource under http in the service group options and select webdata:
# # an additional file system resource appears at this point:
# # we are integrating storage with the http service, so the http service must exist. If it does not, add it as a resource first and then add it to apache.
# # check the current cluster status and start the cluster's apache service:
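A sketch using the RHCS command-line tools, with the service name apache as used above:
clustat                # show cluster and service status
clusvcadm -e apache    # enable (start) the apache service group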
# # of course, you can also start it from the graphical interface (luci):
# # apache is now running, and the vip, storage, and http resources are all grouped under it. Their start order matters: first the vip, then the storage, and finally the http service. This is why, when adding resources to apache, we first removed the old http resource and only added it back after adding the storage:
# # check whether the cluster manages resources:
# # the vip has been added:
# # http has also started, and the disk is mounted:
# # the web page can also be accessed:
# # now relocate the apache service to the other node manually. If you forget the options, clusvcadm --help lists them:
# # use -r and -m to relocate the service to another node:
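A sketch of the relocation command; the member name server2 is an assumption for whatever node name clustat reports:
clusvcadm -r apache -m server2    # relocate the apache service group to server2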
# # once the relocation finishes, node 2 acquires the vip, mounts the storage, and starts the http service, while node 3 shuts the same services down. In other words, only one node of the high-availability cluster serves at a time, which removes the single point of failure.
Part 2:
# # display the clustervg volume group (vgdisplay clustervg):
# # extending the cluster logical volume by 511 extents adds roughly 2G (counting in extents is more precise); the logical volume is now about 4G in size:
# # after the extension, the mounted file system does not yet show the new size, so it has to be resized as well:
# # df shows sizes in 1K blocks by default, while df -h shows them in human-readable units:
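A sketch of the extend-and-resize sequence, assuming the LV path /dev/clustervg/demo used above:
lvextend -l +511 /dev/clustervg/demo    # grow the LV by 511 extents (~2G)
resize2fs /dev/clustervg/demo           # grow the mounted ext file system to match
df -h                                   # confirm the new size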
# # deleting Apache from the cluster:
# # mount the logical volume on /mnt on node 2 and copy the passwd file into it:
# # mount it on node 3 and append westos to it. However, node 3 only sees its own append, not the file written earlier:
# # then unmount on both sides. On remounting, node 2 only sees the westos appended on node 3, not the file it wrote earlier:
# # this shows that with an ordinary file system only one node in the cluster can safely use the storage at a time; the file system is the limiting factor. So we use the gfs2 file system, the Global File System 2, a scalable cluster file system: multiple nodes can mount the same file system at the same time while the data stays consistent. As for the synchronization itself, in the RHCS suite it is the distributed lock manager (dlm) that coordinates concurrent access to the file system:
# # check the consistency of the logical volume's file system.
fsck (file system check) is used to check and repair inconsistent file systems. If the system loses power or the disk has problems, the fsck command can be used to check the file system.
# # shrink the logical volume (this step is optional):
The resize2fs command grows or shrinks an unmounted ext2/ext3 file system. If the file system is mounted it can only be grown, provided the kernel supports online resizing; Linux kernel 2.6 supports growing a mounted file system, but only for ext3.
# # after viewing the size, it has indeed shrunk:
# # of course, this only shrinks the file system, not the logical volume itself. Let's do it another way:
# # delete the logical volume and then recreate it:
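A sketch of both options, assuming the same LV path; shrinking an ext file system requires unmounting and checking it first, while the simpler route is to remove and recreate the LV:
# option 1: shrink in place
umount /mnt
e2fsck -f /dev/clustervg/demo
resize2fs /dev/clustervg/demo 2G
lvreduce -L 2G /dev/clustervg/demo
# option 2: start over
lvremove /dev/clustervg/demo
lvcreate -L 2G -n demo clustervg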
# # of course, all of this just demonstrates extending, shrinking, and creating logical volumes in the cluster; the step itself can be skipped.
# # now we have a new 2G logical volume:
# # a note on the difference between a raw device and a file system:
Once a raw device is formatted, what it holds is called a file system.
My own understanding of raw devices: a raw device is an unformatted disk or partition that the operating system does not manage; the application manages it directly, so I/O is more efficient. Precisely for this reason, applications that read and write heavily, such as databases, can use raw devices to greatly improve performance.
A file system is a mechanism for organizing data and metadata on a storage device. Its job is to manage how files are stored on disk and to record which sectors each file occupies; the sector usage itself is also recorded on disk. To read a file, the file system first looks up the sectors the file occupies and then reads the contents from them; to write a file, it finds free sectors, appends the data, and updates the sector-usage information at the same time. After mounting, the mount point can be treated as a new file system.
How fsck works: typically, when a Linux system boots, fsck runs first and scans all the local file systems listed in /etc/fstab; its job is to make sure the metadata of the file systems to be mounted is consistent. At shutdown, buffered data is flushed to disk and the file systems are cleanly unmounted, so that the system can be used normally at the next boot.
# # get the most appropriate block size:
# # View help:
# # then reformat the logical volume with gfs2:
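A sketch of the gfs2 format; the lock table name here is an assumption: the part before the colon must match the cluster name defined in luci and the part after it is an arbitrary file system name, while -j 2 creates one journal per node:
mkfs.gfs2 -p lock_dlm -t mycluster:mygfs2 -j 2 /dev/clustervg/demo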
# # then mount it on both nodes:
# # then write a home page in node 2:
# # this will be synchronized to Node 3:
# # of course, adding a row to the home page of node 3 will also synchronize in node 2:
# # to mount the file system permanently, look up its UUID and add the mount entry to /etc/fstab:
# # of course, the same mount should be done in node 3.
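A sketch of the permanent mount on each node; the UUID is whatever blkid reports, and the mount point here assumes the http document root /var/www/html:
blkid /dev/clustervg/demo
# /etc/fstab entry (replace the UUID with the value blkid printed):
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /var/www/html  gfs2  _netdev,defaults  0 0
mount -a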
# # delete the previous ext3 file system resource from the service group and add the new gfs2 file system as a resource:
# # could the step above be simplified to just deleting the file system from the service group, without adding another one, so that it is added back automatically when apache is restarted?
# # check the status of iscsi, which is enabled.
# # View the mountable devices and pv in the system:
# # of course, once done, it is best to unmount it first and check whether it is mounted again automatically:
# # now enable Apache:
# # you can access:
Part 3: replace Apache with a database:
# # first delete apache:
# # of course, you can also delete it from the command line; then install MySQL:
# # enter the MySQL data directory and start MySQL:
# # of course, it should also be installed on node 3:
# # check the storage; since we want to hand the resources over to cluster management, we first need to stop the storage resources:
# # and mount the logical volume on MySQL's data directory:
# # then mount the logical volume on /mnt, move all the data from MySQL's data directory into /mnt, and unmount /mnt:
# # this leaves MySQL's directory empty, with the files stored on the logical volume; then mount -a mounts the logical volume on the database's default directory:
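A sketch of the data migration, assuming the default data directory /var/lib/mysql and an /etc/fstab entry updated to mount the volume there:
/etc/init.d/mysqld stop
mount /dev/clustervg/demo /mnt
mv /var/lib/mysql/* /mnt/
chown -R mysql.mysql /mnt
umount /mnt
mount -a    # remounts the LV on /var/lib/mysql via fstab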
# # this way, the database's files are back in their usual place, now backed by the logical volume.
# # delete the apache home page file that is still on the volume, then start the database:
# # of course, in the end we stop it again and hand it over to the cluster; what we did here was simply verify that the storage and MySQL work without errors:
# # the same work should be done in node 3:
# # add the database as a resource, or rather add a script resource; the script only needs a name and the path to the startup script so that the cluster can manage starting and stopping it:
# # check the cluster and find that the database has not been added yet:
# # this requires adding resources to the service:
# # Run exclusive means this service would insist on running alone on its node, which is of course not what we want. Then add our vip and database resources:
# # when you check the cluster, you find that the database service is added:
# # now you can log in to the database:
# # transfer the database service to node 3:
# # now the logical volume is mounted on node 3, and the VIP has also been transferred to node 3:
# # Log in to the database on node 3:
# # deleting the cluster database service:
# # unmount the storage on both nodes:
# # in the previous setup the storage was not added as a cluster resource:
We had been mounting the logical volume manually. Now add a gfs2 resource to the MySQL service so that it is mounted automatically:
# # then remove the previously added database from the DB service and add the storage followed by the database, because the cluster starts a service's resources in the order they are listed, and the storage must come before the database.
# # now, although the DB service is currently disabled, simply enabling it will bring up all of its resources automatically; and when one node has a problem, the service automatically moves to the other machine.
# # automatic storage mount has been implemented:
# # transfer the DB service to node 2:
# # both storage and vip are transferred to node 2:
# # at this point, we have completed the high availability of the database service and solved the single point of failure of the database.
Part 4: the corresponding steps in the graphical interface:
# # disable and delete the DB service and delete the gfs2 resource:
Part 5: high-availability clusters without a graphical interface (without luci): pacemaker.
# # from RHEL 7 on there is no luci, so clusters can no longer be managed graphically. Here is an introduction to managing a high-availability cluster without a graphical interface.
# # first remove everything previously set up under RHEL 6, including detaching the nodes from the cluster and deleting it:
# # as well as cman and rgmanager:
# # and modclusterd:
# # and iscsi, which is stopped and also disabled at startup:
# # on the machine where the SCSI target is installed, check its status:
# # disable the SCSI target:
# # install pacemaker on both nodes:
# # if you use a fresh virtual machine, you also need to install heartbeat, drbd, mysql-server, and httpd
# # after installing pacemaker, there is a configuration directory that controls the heartbeat (corosync):
# # Editing configuration files:
# original example content:
# # in fact, only two changes are made to the configuration file: the bound network address is changed to the cluster's network address, and a service section is added at the end:
....
# # purpose of the above changes:
# # when ver is set to 1, the corosync plugin does not start the pacemaker daemons itself. Here we keep it at 0, which means the pacemaker daemons are started by the plugin rather than by a separate script.
Plugin: plug-in
Daemons: a daemon is a process that runs in the background without a controlling terminal or login shell.
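As a rough illustration (the network values are assumptions for the 172.25.0.0/24 test network used here), the two edits to /etc/corosync/corosync.conf would look something like this:
totem {
        # ... other totem options unchanged ...
        interface {
                ringnumber: 0
                bindnetaddr: 172.25.0.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
        }
}
service {
        name: pacemaker
        ver: 0
}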
# # the official pacemaker documentation can be downloaded from the project website:
# # copy the modified configuration file to node 3.
# # start the heartbeat (corosync) service:
# # the error reported here is caused by stonith. By default stonith is enabled, so the verify step reports an error and the configuration cannot be committed; stonith therefore needs to be disabled before configuring:
# # then another package is needed. It is best to install pssh before crmsh, because crmsh depends on pssh. Of course you can install them together; otherwise you have to run the installation twice:
# # then copy the two packages to node 3 and install:
# # then the crm shell can be used:
Because of the corosync setup done earlier, the nodes shown here are added automatically. If a node is not shown, it can be added with edit, or corosync can be restarted so that it joins.
# # then switch to the resource management module:
# # disable stonith:
# # this way, the syntax check no longer reports errors:
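A sketch of disabling stonith and re-checking from the crm shell (property and commands as in pacemaker's standard tooling):
crm configure property stonith-enabled=false
crm_verify -LV       # should now report no errors
crm configure show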
# # no matter which node the setting is made on, both nodes stay in sync:
# # define resources:
Example:
crm(live)configure# primitive webvip ocf:heartbeat:IPaddr params ip=172.16.12.100 op monitor interval=30s timeout=20s on-fail=restart
// defines a primitive resource named webvip with resource agent class ocf:heartbeat and resource agent IPaddr. params specifies the agent's parameters; op defines an operation; monitor sets up monitoring, checking every 30s with a 20s timeout and restarting the resource on failure.
# add a vip resource:
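As a purely hypothetical illustration (the actual VIP used by this cluster is not shown above; 172.25.0.100 is assumed), the command would follow the same pattern as the example:
crm configure primitive vip ocf:heartbeat:IPaddr2 params ip=172.25.0.100 cidr_netmask=24 op monitor interval=30s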
# # you can also use edit to edit configuration files:
# # from the 172.25.0.251 machine, the newly added VIP can be pinged:
# # explanation of parameters:
crm_verify -LV checks the configuration
crm_mon monitors the cluster status