2025-01-19 Update From: SLTechnology News&Howtos > Servers
Shulou (Shulou.com) 06/01 Report --
The editor shares here how KVM virtual machines achieve online hot migration. Most people do not know much about it, so this article is offered for your reference; I hope you learn a lot from reading it. Let's get into it!
I. KVM virtual machine migration modes and points to note
There are two ways to migrate KVM virtual machines:
1. Static migration (cold migration): with the virtual machine shut down, copy its disk file and .xml configuration file (together these two files make up the virtual machine) to the target host, then redefine the virtual machine on the target host with the "virsh define *.xml" command.
2. Dynamic migration (hot migration): this is the more common case. Usually the server is running services that must not be interrupted, so hot migration is required. This post describes the steps of hot migration in detail.
1. Cold migration
Usually the directory where we store the virtual machine disks sits on an NFS share (the backing storage on the NFS server is often LVM). So for a cold migration, you only need to mount the NFS share on the target host to see the disk file of the virtual machine to be migrated, usually ending in .qcow2 or .raw. Then simply copy the virtual machine's .xml configuration file to the target server, redefine the virtual machine there, and check the migrated virtual machine with the "virsh list --all" command.
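The cold-migration steps above can be sketched as follows. This is a minimal sketch, assuming both hosts already mount the same NFS share; the domain name "centos7.0" and the target IP 192.168.20.3 are placeholders taken from the example environment later in this post:

```shell
# --- On the source host ---
virsh shutdown centos7.0                         # shut the guest down cleanly
virsh dumpxml centos7.0 > /tmp/centos7.0.xml     # export its definition
scp /tmp/centos7.0.xml root@192.168.20.3:/tmp/   # copy the .xml to the target

# --- On the target host (which sees the same .qcow2 via NFS) ---
virsh define /tmp/centos7.0.xml                  # redefine the virtual machine
virsh start centos7.0                            # start it
virsh list --all                                 # confirm the migrated guest
```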
2. Hot migration
If the source host and the destination host share a storage system, only the guest's vCPU execution state, memory contents, and virtual-device state need to be sent over the network to the destination host. Otherwise, the guest's disk storage must also be sent. A shared storage system means the image-file directories of the source and destination virtual machines reside on shared storage.
When a shared storage system is used, the specific process of KVM dynamic migration is:
1. At the start of the migration, the guest is still running on the source host while its memory pages are transferred to the destination host.
2. QEMU/KVM monitors and records any modification to memory pages already transferred during the migration, and once all pages have been transferred, it begins transferring the pages that were changed during the previous step.
3. QEMU/KVM estimates the transfer speed during the migration. When the remaining memory data can be transferred within a configurable downtime window (30 milliseconds by default), QEMU/KVM pauses the guest on the source host, transfers the remaining data to the destination host, and finally resumes the guest on the destination host.
4. At this point, the dynamic migration is complete. The migrated guest is consistent with its state before the migration, unless some configuration, such as a bridge, is missing on the destination host. Note that when the guest's memory usage is very large and frequently modified, so that memory is dirtied faster than KVM can transfer it, the live migration cannot converge and only static migration is possible.
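The downtime window mentioned in step 3 can be tuned at run time with virsh. A minimal sketch, assuming a running domain named "centos7.0" (a placeholder):

```shell
# Raise the allowed pause window to 100 ms for this guest's migration.
# A larger window helps a busy guest converge, at the cost of a longer pause.
virsh migrate-setmaxdowntime centos7.0 100
```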
3. Points to note for migration
Whether it is a cold migration or a hot migration, the precautions are largely the same.
The requirements for the target server before migration are as follows:
It is best if the two servers use CPUs of the same brand.
A 64-bit guest can only be migrated between 64-bit hosts, while a 32-bit guest can be migrated between 32-bit and 64-bit hosts.
Virtual machine names on the two hosts must not conflict.
The software configuration of the destination host should match the source host as closely as possible, for example the same bridge NIC, storage pool, and so on.
The output of "cat /proc/cpuinfo | grep nx" should be the same on both migration hosts. NX, short for "No eXecute", is a CPU technique that marks memory regions as holding either processor instructions only or data only. Any memory marked with NX holds data only, so the processor's instructions cannot be executed from those regions. This prevents most buffer-overflow attacks, in which a malicious program places its own instructions in another program's data area and runs them, thereby taking control of the whole computer.
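A quick way to compare this flag is to run the same check on each KVM server and compare the output. A sketch:

```shell
# Print whether the CPU advertises the NX flag; the result should be
# identical on the source and destination hosts before migrating.
grep -m1 -o '\bnx\b' /proc/cpuinfo || echo "nx not present"
```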
Summary:
1. Static migration
Copy the image file and the virtual machine configuration file
Redefine the virtual machine
2. Dynamic migration
Create shared storage
Mount the shared storage on both machines (mount manually, or use a storage pool)
Start the dynamic migration
Create the configuration file for the migrated virtual machine
Redefine the virtual machine
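The dynamic-migration steps listed above can also be driven entirely from the command line. A sketch, assuming shared storage is already mounted on both hosts; the domain name "centos7.0" and the target IP 192.168.20.3 are placeholders from the example environment below:

```shell
# Live-migrate the running guest to the target host over SSH.
# --persistent defines the guest on the target;
# --undefinesource removes the stale definition from the source.
virsh migrate --live --persistent --undefinesource \
    centos7.0 qemu+ssh://192.168.20.3/system
```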
II. KVM virtual machine hot-migration configuration example
1. Environment preparation:
My environment here is as follows:
Three Linux servers: two are KVM servers, with IPs 192.168.20.2 and 192.168.20.3; one is an NFS server, with IP 192.168.20.4, used for shared storage (all three servers must be able to ping each other).
Both KVM servers must have a working KVM environment.
My KVM environment is ready-made, so it is not shown here. If you do not have a KVM environment, refer to the blog post "KVM Virtualization Basic Management" (it is very simple: yum-install a few packages, start the "libvirtd" service, and possibly restart the server).
2. Configure NFS shared storage
The nfs server 192.168.20.4 is configured as follows:
[root@nfs ~]# yum -y install nfs-utils rpcbind    # install the packages
[root@nfs ~]# systemctl enable nfs        # start NFS at boot
[root@nfs ~]# systemctl enable rpcbind    # start rpcbind at boot
[root@nfs ~]# mkdir -p /nfsshare          # create the shared directory
[root@nfs ~]# vim /etc/exports            # edit the NFS configuration file (empty by default)
/nfsshare *(rw,sync,no_root_squash)
# The first column is the shared directory.
# The asterisk in the second column allows access from any network.
# rw grants read-write permission; sync means writes go to disk synchronously.
# no_root_squash grants a client connecting as root local root privileges
# (the default, root_squash, maps root to the nfsnobody user).
# Without no_root_squash, clients may be squashed and unable to read and write.
[root@nfs ~]# systemctl restart rpcbind   # start the service
[root@nfs ~]# systemctl restart nfs       # start the service
[root@nfs ~]# netstat -anpt | grep rpc    # confirm the service is running
[root@nfs ~]# showmount -e                # view the local shared directory
Export list for nfs:
/nfsshare *
[root@nfs ~]# firewall-cmd --add-service=rpc-bind --permanent
[root@nfs ~]# firewall-cmd --add-service=nfs --permanent
[root@nfs ~]# firewall-cmd --add-service=mountd --permanent
[root@nfs ~]# systemctl restart firewalld # restart the firewall so the configuration takes effect
NFS server configuration is complete at this point!
The migration procedure in this post relies on the desktop graphical environment (virt-manager). If you need to migrate from the command line, consult the relevant documentation; command-line migration is not covered in detail here.
The two KVM servers are configured as follows (both KVM hosts need this configuration):
1. Install the rpcbind package and start the rpcbind service. In order to use the showmount query tool, install nfs-utils as well:
[root@kvm ~]# yum -y install nfs-utils rpcbind
[root@kvm ~]# systemctl enable rpcbind
[root@kvm ~]# systemctl start rpcbind
[root@kvm ~]# showmount -e 192.168.20.4   # query the directories shared by the NFS server
Export list for 192.168.20.4:
/nfsshare *
[root@kvm ~]# mount -t nfs 192.168.20.4:/nfsshare /kvm/disk/    # mount the share
[root@kvm ~]# df -hT /kvm/disk/
Filesystem              Type  Size  Used  Avail  Use%  Mounted on
192.168.20.4:/nfsshare  nfs4  50G   33M   50G    1%    /kvm/disk
# Write a test file on one of the servers and check that the other can see it:
[root@kvm1 ~]# touch /kvm/disk/test    # create a test file on the first KVM server
[root@kvm2 ~]# ls /kvm/disk            # make sure "test" is also visible on the second KVM server
test
At this point, the directories used by the two KVM servers are backed by the same storage. (Note: both KVM servers must mount the NFS share at the same directory path. Here both are mounted at /kvm/disk/; otherwise errors will occur in later operations.)
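To make the mount survive a reboot, the share can also be recorded in /etc/fstab on both KVM servers. A sketch, using the IP and mount point of this example environment:

```shell
# Append a persistent NFS mount entry; _netdev delays the mount
# until the network is up at boot time.
echo '192.168.20.4:/nfsshare /kvm/disk nfs defaults,_netdev 0 0' >> /etc/fstab
mount -a    # mount everything in fstab now, to verify the entry is valid
```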
3. Create storage pools on the two KVM servers:
[root@kvm1 ~] # virt-manager # Open the virtual machine console
In the following dialog, the destination path is the local KVM directory "/kvm/disk", the hostname is the IP address of the NFS server, and the source path is the directory shared by the NFS server.
The above operations must also be performed on the second KVM server, and it is best to use the same storage pool name, to avoid unnecessary trouble.
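The same storage pool can alternatively be created from the command line instead of the virt-manager dialog. A sketch; the pool name "nfspool" is a placeholder:

```shell
# Define an NFS-backed (netfs) storage pool equivalent to the dialog above,
# then start it and have it start automatically at boot.
virsh pool-define-as nfspool netfs \
    --source-host 192.168.20.4 \
    --source-path /nfsshare \
    --target /kvm/disk
virsh pool-start nfspool
virsh pool-autostart nfspool
```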
4. Create a new virtual machine on kvm1 for migration testing:
Upload a CentOS ISO image yourself; you need to specify the ISO file from which to install:
From here, complete the virtual machine installation as usual.
5. Configure the newly created virtual machine's network in bridge mode so that it can ping the external network.
The following operations simulate hot-migrating a virtual machine while it is serving public-network users.
1) Operations on kvm1 are as follows:
[root@kvm1 ~]# systemctl stop NetworkManager    # stop this service
[root@kvm1 ~]# virsh iface-bridge ens33 br0     # if this command prints the message below, don't worry: the bridge already exists
Using the additional device br0 to generate bridge ens33 failed
Failed to start bridge interface br0
[root@kvm1 ~]# ls /etc/sysconfig/network-scripts/ | grep br0    # confirm this file exists
ifcfg-br0
[root@kvm1 ~]# virsh destroy centos7.0    # forcibly stop the newly created virtual machine
Domain centos7.0 destroyed
[root@kvm1 ~]# virsh edit centos7.0       # edit the virtual machine's configuration file and locate the interface section:
# change the interface type to "bridge", delete the MAC address line,
# and change the source to the bridge, i.e. source bridge='br0'; save and exit
[root@kvm1 ~]# virsh start centos7.0
Domain centos7.0 started
After starting the virtual machine, configure its NIC configuration file; the default NIC file is ifcfg-eth0:
Restart the network service and confirm the IP address:
Now run the "ping www.baidu.com" command inside the virtual machine and leave it pinging the public network.
2) Operations on kvm2 are as follows:
[root@kvm2 ~]# systemctl stop NetworkManager    # stop this service
[root@kvm2 ~]# virsh iface-bridge ens33 br0     # if this command prints the message below, don't worry: the bridge already exists
Using the additional device br0 to generate bridge ens33 failed
Failed to start bridge interface br0
[root@kvm2 ~]# ls /etc/sysconfig/network-scripts/ | grep br0    # make sure this file exists
ifcfg-br0
# kvm2 has no virtual machine yet, so only the network needs to be switched to bridge mode.
# This configuration prevents the virtual machine from losing public-network access after it is migrated to this server.
6. Prepare to hot-migrate the newly built CentOS 7 virtual machine
1) Do the following on the kvm1 server:
[root@kvm1 ~] # virt-manager # Open the virtual machine console
Fill in as follows, and when you are finished, click Connect:
You will be prompted to install the following software packages:
To install:
[root@kvm1 ~]# yum -y install openssh-askpass
According to the pop-up dialog box prompt, enter "yes":
Enter the root password of the target host:
7. Start the hot migration
Wait for the migration to complete, which is quick:
Migration completed:
Now go to the target kvm server and open the newly migrated virtual machine (you will find that the ping command continues without interruption at all):
You can run "virsh list --all" on both KVM servers to confirm that the virtual machine has actually migrated to the second KVM server.
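As a command-line sketch of that final check (the domain name "centos7.0" follows this example environment; run on each host and compare):

```shell
# On the source host the guest should no longer be listed as running;
# on the target host it should appear as running.
virsh list --all
virsh domstate centos7.0    # expected to report "running" on the target host
```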
That is the whole of "How to achieve online hot migration of KVM virtual machines". Thank you for reading! I hope the content shared here helps you; if you want to learn more, follow the industry information channel.