ProxmoxVE stand-alone installation (2 servers are not clustered)

2025-01-16 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--


The company has two test servers, Huawei RH2288 V3 machines, each with 64 GB of memory, two 16-core CPUs, and four 1 TB SAS disks, previously used as standalone machines. To make full use of the hardware, both servers will be installed with PVE 5.3 for virtualization. The design is as follows:

The two servers are mainly used for the company's internal development and test environment, and cloud desktops are also planned for them, so some security needs to be considered. The specific plan:

1. Storage: on each server, the first two of the four drives form a RAID 1 array on which the PVE system is installed; the cloud-desktop VMs and other important VMs also live on this array. Of the remaining two disks, one becomes an LVM volume for VM storage and the other becomes an NFS share used for VM backups. Backups are crossed: VMs on pve1 are backed up to the NFS disk on pve2, and VMs on pve2 to the NFS disk on pve1, so that if one server fails the data is protected as far as possible.

2. Network: there are two networks, a management network (the corosync cluster-management network) and a business network carrying VM traffic. Since Ceph distributed storage is not used, no storage network is designed here. Because the company's internal traffic is light, VMs will also use the management network: each VM gets two network cards, one attached to each network.

3. Relationship between the two servers: with only two physical servers, building a cluster is not appropriate. A cluster really needs at least three nodes to maintain quorum; a two-node cluster risks split-brain as soon as one node shuts down. Moreover, to maximize hardware use and save power, one or both machines are switched off over weekends and holidays, so my environment is not suited to clustering, and the two nodes run standalone.
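The cross-backup idea in point 1 can be sketched with vzdump once each node has the other node's NFS share added as a backup storage; the VM IDs and storage IDs below are assumptions for illustration:

```shell
# On pve1: back up VM 101 to the NFS storage that physically lives on pve2
vzdump 101 --storage pve2-nfs --mode snapshot --compress lzo

# On pve2: back up VM 201 to the NFS storage that physically lives on pve1
vzdump 201 --storage pve1-nfs --mode snapshot --compress lzo
```

The same commands can be scheduled from the PVE web interface as backup jobs, so the cross-backup runs automatically.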

First, build a RAID 1 array from the first two disks of the physical machine, as shown below:

Then install PVE 5.3. The process will not be described in detail; it mainly involves setting the root password and the IP address, as shown below:

Installing a PVE node takes about 20 minutes. Once the installation is complete, you can log in to the PVE web management interface (https://<node-IP>:8006) and start using it, as shown below:

Next, configure the network: create a new bridge, give it an address in the 192.168.1.0/24 segment, and bridge it to the second network card, as shown below:

Note that the network configuration does not take effect until the server is restarted. No gateway is configured on this bridge because the other bridge already has one; configuring a gateway here as well would cause an error, since only one default gateway is allowed.
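For reference, the resulting bridge definitions in /etc/network/interfaces look roughly like this; the interface names and addresses are assumptions for illustration:

```shell
auto vmbr0
iface vmbr0 inet static
    address 10.0.0.11
    netmask 255.255.255.0
    gateway 10.0.0.1            # only this bridge carries the default gateway
    bridge_ports eno1
    bridge_stp off
    bridge_fd 0

auto vmbr1
iface vmbr1 inet static
    address 192.168.1.11
    netmask 255.255.255.0       # no gateway here, or PVE reports an error
    bridge_ports eno2
    bridge_stp off
    bridge_fd 0
```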

Next, configure the storage as follows:

pvcreate /dev/sdb                                    # turn the second disk into a PV
vgcreate pve1-vg-sdb /dev/sdb                        # create the VG
lvcreate --thin -L 900G -n pve1-lvm-vm pve1-vg-sdb   # give the VG's space to a thin-type LV named pve1-lvm-vm

After that, the corresponding LVM-Thin storage can be created in the PVE web management interface, as shown below:
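The same storage can also be added from the command line with pvesm; the storage ID is an assumption:

```shell
# register the thin pool created above as a PVE storage for VM disks
pvesm add lvmthin pve1-lvm-vm --vgname pve1-vg-sdb --thinpool pve1-lvm-vm --content images,rootdir
```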

Next, set up an NFS server and export the remaining disk (seen by the system as /dev/sdc) through the NFS service as a shared disk.

pvcreate /dev/sdc                                    # turn the disk into a PV
vgcreate pve1-vg-sdc /dev/sdc                        # create the VG
lvcreate -l 100%VG -n pve1-lvm-nfs pve1-vg-sdc       # give all of the VG's space to the LV pve1-lvm-nfs
mkfs.xfs /dev/pve1-vg-sdc/pve1-lvm-nfs               # format the LV
mkdir -p /data/pve1-nfs                              # create the mount point
mount /dev/pve1-vg-sdc/pve1-lvm-nfs /data/pve1-nfs

Add the following line to /etc/fstab so that it is mounted automatically at boot:

/dev/pve1-vg-sdc/pve1-lvm-nfs /data/pve1-nfs xfs defaults 0 0

Install the NFS server:

apt-get install nfs-common nfs-kernel-server

Configure the shared directory by adding the following line to /etc/exports:

/data/pve1-nfs *(rw,sync,no_root_squash,no_subtree_check,insecure)

Start the NFS server:

/etc/init.d/nfs-kernel-server start

Note: if /etc/exports is modified later, run exportfs -a to apply the changes.

In this way, the NFS directory exported above can be added as storage on the web interface, as shown below:
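Mounting the share on the other node, as the cross-backup plan requires, can also be done with pvesm; the server address and storage ID are assumptions:

```shell
# On pve2: add pve1's NFS export as a backup storage
pvesm add nfs pve1-nfs --server 192.168.1.11 --export /data/pve1-nfs --content backup
```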

Finally, change the software repository. The default is the enterprise (subscription) repository, which requires a paid license; switch it to the no-subscription repository as follows:

Comment out the only record in the /etc/apt/sources.list.d/pve-enterprise.list file:

# deb https://enterprise.proxmox.com/debian/pve stretch pve-enterprise

Then add the no-subscription repository key and source, and upgrade:

wget -q -O - 'http://download.proxmox.com/debian/pve/dists/stretch/proxmox-ve-release-5.x.gpg' | apt-key add -
echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
apt update && apt dist-upgrade

At this point, a PVE node is done!

The second PVE node is set up basically the same way. The difference is that this server does not use RAID; to make full use of the storage, the sdb and sdc disks are combined into one VG whose thin LV serves as the VM storage pool:

pvcreate /dev/sdb                                    # turn the second disk into a PV
pvcreate /dev/sdc                                    # turn the third disk into a PV
vgcreate pve2-vg-vm /dev/sdb /dev/sdc                # create the VG spanning both disks
lvcreate --thin -L 1800G -n pve2-lvm-vm pve2-vg-vm   # give the VG's space to a thin-type LV named pve2-lvm-vm

The second server likewise runs an NFS server, with the fourth disk exported through the NFS service as a shared disk.

pvcreate /dev/sdd                                    # turn the fourth disk into a PV
vgcreate pve2-vg-nfs /dev/sdd                        # create the VG
lvcreate -l 100%VG -n pve2-lvm-nfs pve2-vg-nfs       # give all of the VG's space to the LV pve2-lvm-nfs
mkfs.xfs /dev/pve2-vg-nfs/pve2-lvm-nfs               # format the LV
mkdir -p /data/pve2-nfs                              # create the mount point
mount /dev/pve2-vg-nfs/pve2-lvm-nfs /data/pve2-nfs

Add the following line to /etc/fstab so that it is mounted automatically at boot:

/dev/pve2-vg-nfs/pve2-lvm-nfs /data/pve2-nfs xfs defaults 0 0

Install the NFS server:

apt-get install nfs-common nfs-kernel-server

Configure the shared directory by adding the following line to /etc/exports:

/data/pve2-nfs *(rw,sync,no_root_squash,no_subtree_check,insecure)

Start the NFS server:

/etc/init.d/nfs-kernel-server start

Note: if /etc/exports is modified later, run exportfs -a to apply the changes.

Now you can upload ISO images and create virtual machines. If you use containers (LXC) instead, download a CT template image; that is not covered here.
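Creating a VM from an uploaded ISO can also be scripted with qm; the VM ID, name, ISO filename, and storage IDs below are assumptions for illustration:

```shell
# Create a VM with two NICs (one per bridge, per the network plan),
# a 32 GB disk on the thin pool, and the uploaded ISO as install media
qm create 100 --name test-vm --memory 4096 --cores 4 \
    --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 \
    --scsihw virtio-scsi-pci --scsi0 pve1-lvm-vm:32 \
    --cdrom local:iso/debian-9.iso --ostype l26
qm start 100
```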
