ProxmoxVE 5.3 Cluster installation and use of ceph


ProxmoxVE series of articles:

Another option for proxmox- Private Cloud

Comparison between proxmox and openstack

Cluster installation of ProxmoxVE (V5.2)

Use of external ceph storage (luminous) for ProxmoxVE (V5.2)

ProxmoxVE 5.3 Cluster installation and use of ceph

Create centos7 basic image template for ProxmoxVE

Create win10 basic image template for ProxmoxVE

V2V migration of ProxmoxVE (vmware -> PVE)

Installation of ProxmoxVE oracle12C rac cluster

Installation of oracle12C databases for ProxmoxVE (CDB and PDB)

Oracle12C of ProxmoxVE, multiple CDB and PDB

Disk management in PVE 5.3, including support for the distributed storage ceph, is said to be friendlier than before. In my earlier tests of version 5.2 a few things were left unfinished, such as deploying ceph storage directly through pve's own tool pveceph instead of using an external ceph cluster. Since the corresponding experimental resources happen to be at hand now, I took some time to build a 5.3 cluster and deploy ceph with pve's built-in deployment tool.

The specific installation steps are not written out in detail here; refer to the steps in "Cluster installation of ProxmoxVE (V5.2)".
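For completeness, forming the cluster itself boils down to a few pvecm commands, roughly as follows (a sketch; the cluster name and pve1's management IP are assumptions for this environment):

# on the first node (pve1), create the cluster
pvecm create pve-cluster
# on pve2 and pve3, join the cluster using pve1's management IP
pvecm add 192.168.1.51
# verify quorum and membership on any node
pvecm status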

Hardware configuration: on a physical PVE host, create 3 virtual machines, each with 3 vCPUs / 12 GB RAM / 4 hard disks / 2 network cards. Of the 4 disks, one 32 GB disk is the system disk, two 32 GB disks are for ceph, and one 32 GB disk is for LVM. One network card carries both the cluster and the virtual machine traffic (192.168.1.0/24), and the other network card is used as the ceph storage network (192.168.170.0/24).
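For reference, the per-node network configuration in /etc/network/interfaces would look roughly like this (a sketch only; the interface names, bridge name and host addresses are assumptions, while the two subnets come from the layout above):

# /etc/network/interfaces (sketch; names and addresses assumed)
auto lo
iface lo inet loopback

iface ens18 inet manual

# bridge shared by cluster management and VM traffic
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.51
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports ens18
    bridge_stp off
    bridge_fd 0

# second NIC dedicated to the ceph storage network
auto ens19
iface ens19 inet static
    address 192.168.170.51
    netmask 255.255.255.0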

Important: change the software repository first. The default is the enterprise (subscription) repository; if you do not change it, running pveceph init for the ceph initialization and installation later will destroy the entire environment. Remember to switch to the no-subscription repository, as follows:

Comment out the only record in the /etc/apt/sources.list.d/pve-enterprise.list file:

# deb https://enterprise.proxmox.com/debian/pve stretch pve-enterprise

Then add the no-subscription repository and upgrade:

wget -q -O- 'http://download.proxmox.com/debian/pve/dists/stretch/proxmox-ve-release-5.x.gpg' | apt-key add -
echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
apt update && apt dist-upgrade

Install ntp. Ceph clusters have strict time-synchronization requirements, so the ntp service needs to be installed:

apt-get install ntp -y

After installation, synchronization against the Debian time servers starts automatically. Note that this assumes the nodes can reach the internet; if they cannot, you need to point ntp at a time server on the internal network, which is not covered here.
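As a quick sanity check (the ntpq utility ships with the ntp package), you can verify that each node is actually syncing:

# list the configured peers; the line prefixed with '*' is the currently selected time source
ntpq -p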

For ways to eliminate the "No valid subscription" prompt, refer to the following URL:

https://johnscs.com/remove-proxmox51-subscription-notice/

That is, after logging in to the server via ssh, execute the following command:

sed -i.bak "s/data.status !== 'Active'/false/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js && systemctl restart pveproxy.service

After the cluster is created, as shown below:

Next, configure the ceph storage:

Refer to the official website:

https://www.proxmox.com/en/training/video-tutorials/item/install-ceph-server-on-proxmox-ve

This is a detailed video tutorial, and its basic design is close to my environment: three nodes, except that the tutorial uses three genuinely separate networks, whereas, due to limited resources, I merged the cluster management network and the virtual machine business network into one. Everything else is the same.

1. First install the ceph package on each node:

pveceph install --version luminous

Note: the official video uses the jewel release, but the PVE version installed here is 5.3, whose bundled ceph is already the luminous release.
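You can confirm the installed release on each node afterwards (optional check):

# print the ceph version that pveceph installed
ceph --version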

2. Initialize the ceph storage network:

pveceph init --network 192.168.170.0/24

3. Create mon

pveceph createmon

Log in to the web interface on pve2 and pve3 to create a mon:
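Equivalently, if you prefer the command line, the monitors on the other two nodes can be created over ssh (node names as in this setup):

# run from pve1, or log in to each node and run pveceph createmon locally
ssh pve2 pveceph createmon
ssh pve3 pveceph createmon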

4. Create an OSD:

Add sdb and sdc as OSDs. Do the same on pve2 and pve3; once finished it looks as follows:
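The web interface step above corresponds to the following commands (a sketch; the device names follow the 4-disk layout described earlier):

# on each node, turn the two spare 32 GB disks into OSDs
pveceph createosd /dev/sdb
pveceph createosd /dev/sdc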

5. Create pools. The pool name is ceph-vm; adjust the parameters to your needs: size=3 is the number of replicas, min_size=2 is the minimum number of replicas that must be available for I/O, and pg_num=128 is the number of placement groups (the logical storage units). For details on how to choose these values, see the relevant ceph documentation:
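From the command line the same pools can be created with pveceph (a sketch using the parameters above):

# main pool for virtual machine disks
pveceph createpool ceph-vm --size 3 --min_size 2 --pg_num 128
# small pool used only for benchmarking
pveceph createpool test --size 3 --min_size 2 --pg_num 128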

Create another storage pool named test for benchmarking. After creation, the overall status of ceph can be seen through the web interface, as follows:
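The same status can also be checked from the shell on any node:

# overall cluster health, mon/osd status and pool usage
ceph -s
ceph df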

You can use the test storage pool to run a write benchmark:

root@pve1:~# rados -p test bench 10 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_pve1_7867
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16        23         7   28.0015        28     0.98669    0.886351
    2      16        36        20   39.9226        52     1.98944     1.22191
    3      16        43        27   35.9512        28     0.57703     1.17363
    4      16        54        38   37.9597        44     1.64156     1.29788
    5      16        60        44   35.1689        24     2.00744     1.37846
    6      16        68        52   34.6399        32      2.2513     1.50074
    7      16        72        56   31.9779        16      2.1816     1.56136
    8      16        79        63   31.4803        28     2.38234     1.70338
    9      16        87        71   31.5374        32     1.63451      1.8151
   10      16        95        79   31.5831        32     2.82273     1.86001
   11       8        95        87   31.6205        32     1.66285     1.84418
Total time run:         11.064831
Total writes made:      95
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     34.343
Stddev Bandwidth:       9.54225
Max bandwidth (MB/sec): 52
Min bandwidth (MB/sec): 16
Average IOPS:           8
Stddev IOPS:            2
Max IOPS:               13
Min IOPS:               4
Average Latency(s):     1.85056
Stddev Latency(s):      0.714602
Max latency(s):         3.46178
Min latency(s):         0.57703
root@pve1:~#

Because the test runs inside PVE virtual machines built on 7200 rpm disks, with the OSDs being virtual disks carved from those same 7200 rpm disks, the performance is poor. In a real production environment with SSDs, bandwidth can reach 400 MB/s and IOPS can exceed 100.

Here is the read test:

root@pve1:~# rados -p test bench 10 seq
hints = 1
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16        61        45   179.942             0.509001    0.276338
    2      14        95        81   161.953   161.953   0.565471    0.309453
Total time run:       2.127305
Total reads made:     95
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   178.63
Average IOPS:         44
Stddev IOPS:          6
Max IOPS:             45
Min IOPS:             36
Average Latency(s):   0.350031
Max latency(s):       0.844504
Min latency(s):       0.0595479
root@pve1:~#

The bandwidth and IOPS of the read test are much higher than those of the write test. Again, the test environment drags the numbers down; with SSDs, around 2.5 GB/s of bandwidth and 600 IOPS are achievable, which is quite good.
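Since the write benchmark was run with --no-cleanup, the benchmark objects stay in the pool; they can be removed afterwards:

# delete the objects left behind by rados bench
rados -p test cleanup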

6. Create an RBD storage pool:
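For reference, the resulting storage definition ends up in /etc/pve/storage.cfg and looks roughly like this (a sketch; the storage ID and options shown are assumptions for a PVE-managed pool):

rbd: ceph-vm
        content images
        krbd 0
        pool ceph-vm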

Next, use this RBD storage pool to create a centos7 virtual machine:
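If you prefer the command line over the web wizard, a roughly equivalent VM definition can be created with qm (a sketch; the VM ID, ISO file name and sizes are assumptions):

# create a CentOS 7 VM whose disk lives on the ceph-vm RBD storage
qm create 201 --name centos7-test --memory 2048 --cores 2 \
    --net0 virtio,bridge=vmbr0 \
    --scsihw virtio-scsi-pci --scsi0 ceph-vm:32 \
    --ide2 local:iso/CentOS-7-x86_64-Minimal.iso,media=cdrom \
    --ostype l26
qm start 201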

When the newly created virtual machine is started, an error reports that KVM hardware virtualization is not supported. This is because the test is being run inside KVM virtual machines, and we are now trying to nest another KVM virtual machine inside them:

The solution: first shut down all virtual machines on the PVE nodes, then execute the following commands on the ProxmoxVE physical host (note: on the physical host, not on the three nested nodes pve1/pve2/pve3):

root@pve:~# modprobe -r kvm_intel
(If this reports "... busy", virtual machines are still running; shut down all virtual machines on this PVE node first.)
root@pve:~# modprobe kvm_intel nested=1
root@pve:~# cat /sys/module/kvm_intel/parameters/nested
Y
root@pve:~# echo "options kvm_intel nested=1" >> /etc/modprobe.d/modprobe.conf
root@pve:~# qm showcmd 111
/usr/bin/kvm -id 111 -name pve-1 -chardev 'socket,id=qmp,path=/var/run/qemu-server/111.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/111.pid -daemonize -smbios 'type=1,uuid=d9eb0729-f0ee-4176-836d-308b70d13754' -smp '3,sockets=1,cores=3,maxcpus=3' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga std -vnc unix:/var/run/qemu-server/111.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 12000 -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:b48afece2d1' -drive 'file=...,if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/dev/pvevg2/vm-111-disk-7,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -drive 'file=/dev/pvevg2/vm-111-disk-2,if=none,id=drive-scsi1,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1' -drive 'file=/dev/pvevg2/vm-111-disk-3,if=none,id=drive-scsi2,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi2,id=scsi2' -drive 'file=/dev/pvevg2/vm-111-disk-6,if=none,id=drive-scsi3,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=3,drive=drive-scsi3,id=scsi3' -netdev 'type=tap,id=net0,ifname=tap111i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=76:60:17:9D:6A:FF,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -netdev 'type=tap,id=net1,ifname=tap111i1,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=6A:93:EB:0E:A8:84,netdev=net1,bus=pci.0,addr=0x13,id=net1,bootindex=301'

Find the "-cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce" part and add the parameter "+vmx," before "enforce", so that it reads:

-cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,+vmx,enforce

Then run "qm stop 111" to stop the virtual machine, and start it again by executing the modified command above. After the virtual machine starts, ssh into it and run "grep vmx /proc/cpuinfo" to see whether there is any output, as follows:

root@pve-1:~# grep vmx /proc/cpuinfo
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc nopl xtopology cpuid pni vmx cx16 x2apic hypervisor lahf_lm cpuid_fault pti tpr_shadow vnmi flexpriority ept vpid
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc nopl xtopology cpuid pni vmx cx16 x2apic hypervisor lahf_lm cpuid_fault pti tpr_shadow vnmi flexpriority ept vpid
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc nopl xtopology cpuid pni vmx cx16 x2apic hypervisor lahf_lm cpuid_fault pti tpr_shadow vnmi flexpriority ept vpid

The vmx flag shows that nested virtualization is now supported.

Restart the virtual machine created earlier; this time it starts successfully.

The test was successful!

Summary:

ProxmoxVE really is remarkably convenient. Even for something as relatively complex as ceph distributed storage, it pushes ease of deployment about as far as it can go: only a few simple initialization commands are needed, and everything else is configured and managed through the web interface. Even better, the web interface shows the various states of the ceph storage in real time, effectively folding ceph management completely into PVE's unified management.

The prerequisite for using ceph, however, is understanding the basic concepts of ceph distributed storage itself: ceph's storage rules, placement groups (pg), buckets, mon, osd, pool, and so on. If these are unfamiliar, first read through the documentation on the ceph community website and build a standalone ceph storage experiment; once you are familiar with ceph, coming back to the ceph management built into pve is easy.

The experiment above also shows that ProxmoxVE can quickly set up a nested-virtualization environment on its own platform, which makes it easy for PVE enthusiasts to get familiar with PVE quickly. In my personal experience, the testing effort and complexity of openstack are many times higher than those of PVE: if a complete openstack experiment takes an experienced person 5 days, the equivalent PVE experiment, covering the same basic virtualization functions, may take only a day or even half a day.
