This article describes how to create a Proxmox VE (PVE) cluster and use its internally integrated Ceph as storage.
The cluster consists of five hosts: three of them run Ceph and act as storage nodes, providing storage for the remaining two, which act as compute nodes.
All five hosts belong to the same cluster.
Software versions used in this article:
VMware Workstation 15.5
PVE (Proxmox VE) 6.0.1
Ceph: the latest Nautilus release
SSH client: Xshell 6
1. Environment configuration
We need to create a cluster of five PVE hosts, as follows (an example /etc/hosts mapping follows the node list):
Storage nodes:
pve-store1: 4 CPU cores, 2 GB RAM, 20 GB + 500 GB disks, bridged network, IP address 10.8.20.241
pve-store2: 4 CPU cores, 2 GB RAM, 20 GB + 500 GB disks, bridged network, IP address 10.8.20.242
pve-store3: 4 CPU cores, 2 GB RAM, 20 GB + 500 GB disks, bridged network, IP address 10.8.20.243
Compute nodes:
pve-compute1: 4 CPU cores, 4 GB RAM, 20 GB disk, bridged network, IP address 10.8.20.244
pve-compute2: 4 CPU cores, 4 GB RAM, 20 GB disk, bridged network, IP address 10.8.20.245
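As a convenience (not strictly required), the five hosts can be given name resolution in /etc/hosts on every node; a minimal sketch using the addresses listed above:
# /etc/hosts (sketch, same on every node; adjust to your environment)
10.8.20.241  pve-store1
10.8.20.242  pve-store2
10.8.20.243  pve-store3
10.8.20.244  pve-compute1
10.8.20.245  pve-compute2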
2. Installation and configuration of the storage nodes
Create the storage nodes first, using pve-store1 as the manager (mgr) node to create the PVE cluster.
(1) Create a virtual machine
CPU: remember to enable virtualization.
virtual machine properties
(2) Install PVE
The system is installed on the 20 GB hard drive.
installation summary
(3) Configure PVE
Connect to the newly installed host using Xshell 6 and perform the following tasks:
Install vim
apt update
apt install vim -y
Switch to domestic mirror sources (Alibaba Cloud)
cd /etc/apt
vim sources.list.d/pve-enterprise.list
Change its contents to:
deb http://download.proxmox.wiki/debian/pve buster pve-no-subscription
cd /etc/apt/
vim sources.list
Change its contents to:
deb http://mirrors.aliyun.com/debian buster main contrib
deb http://mirrors.aliyun.com/debian buster-updates main contrib
#Security Update
deb http://mirrors.aliyun.com/debian-security/ buster/updates main contrib
Update the system
apt update
apt upgrade -y
Remove the web UI subscription prompt
sed -i "s/data.status !== 'Active'/false/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
restart the system
init 6
Install pve-store2 and pve-store3 in the same way, then continue with the following steps.
(4) Create the PVE cluster
Log in to the web management interface of pve-store1 (https://10.8.20.241:8006), create a cluster named pve-cluster, and add the other two storage hosts to it. For details, see the first article in this blog.
The completed PVE cluster
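The same cluster can also be built from the command line; a sketch, assuming root shells on the respective hosts:
# on pve-store1: create the cluster
pvecm create pve-cluster
# on pve-store2 and pve-store3: join, pointing at pve-store1's IP
pvecm add 10.8.20.241
# on any node: verify membership and quorum
pvecm status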
(5) Create the Ceph cluster
1. Install Ceph on pve-store1, pve-store2, and pve-store3 by executing the following command:
pveceph install (without a version number, the latest Nautilus release is installed)
Wait a moment; the prompt below indicates a successful installation.
Ceph installed successfully
Note: Compute nodes do not require ceph installation.
2. Initialize the Ceph cluster network; execute the following on all three storage nodes:
pveceph init -network 10.8.20.0/24
3. Create the Ceph monitors (MON); execute the following on all three storage nodes:
pveceph createmon
Create monitor mon
Ceph Monitor: as the name suggests, the monitor watches over the Ceph cluster, maintains its health status, and maintains the various maps in the cluster, such as the OSD Map, Monitor Map, PG Map, and CRUSH Map. These maps are collectively referred to as the Cluster Map, a key RADOS data structure that records all cluster members, their relationships and attributes, and the distribution of data. For example, when data needs to be stored in the Ceph cluster, the latest maps are first obtained from a Monitor, and the final storage location of the data is then computed from the maps and the object ID.
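Once the monitors are up, their quorum can be checked from any storage node, for example:
# list the monitors and show quorum status
ceph mon stat
# overall cluster health (monitors, managers, OSDs, PGs)
ceph -s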
4. Create the Ceph OSD service; execute the following on all three storage nodes (the additional 500 GB hard disk must have been added to each storage node beforehand):
pveceph createosd /dev/sdb
Creating OSD
Ceph OSD: OSD stands for Object Storage Device. Its main functions are storing, replicating, rebalancing, and recovering data, exchanging heartbeats with other OSDs, and reporting changes to the Ceph Monitors. In general, one hard disk corresponds to one OSD, which manages that disk's storage; a single partition can also serve as an OSD.
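After the OSDs have been created on all three nodes, their placement and utilization can be inspected, for instance:
# show the CRUSH tree: which OSD sits on which host
ceph osd tree
# per-OSD capacity and usage
ceph osd df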
5. Create a cluster storage resource pool that combines the three OSDs above to provide storage services.
Execute this only on pve-store1, which acts as the mgr:
ceph osd pool create pve-pool 128 128
How 128 (pg_num) is chosen:
With fewer than 5 OSDs, pg_num can be set to 128.
With 5 to 10 OSDs, pg_num can be set to 512.
With 10 to 50 OSDs, pg_num can be set to 4096.
(A short calculation and verification sketch follows the figure below.)
Create storage pool pve-pool
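The usual rule of thumb behind these values is (number of OSDs x 100) / replica count, rounded up to the nearest power of two; with 3 OSDs and 3 replicas this gives 100, which rounds up to 128. The pool's settings can be verified afterwards, for example:
# confirm the placement-group count of the new pool
ceph osd pool get pve-pool pg_num
# list all pools with their details
ceph osd pool ls detail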
6. Create RBD block device storage
RBD block storage is the most widely used and most stable of the three storage types Ceph provides. An RBD block device behaves like a disk and can be attached to physical or virtual machines. Here the RBD storage is attached to the PVE hosts as shared storage.
Log in to the web management page of pve-store1 and open, in turn: Data Center → Storage → Add → RBD.
This opens the Add RBD dialog box:
ID: pve-rbd, the ID of the RBD storage
Resource pool: pve-pool, the pool it belongs to
Monitor: pve-store1 pve-store2 pve-store3, the monitor hosts
Nodes: for now add pve-store1, pve-store2, and pve-store3; this determines which hosts in the cluster can use the block device
Content: the content types of the storage; select disk images and containers so that Ceph can store virtual machine images and downloaded LXC containers
After adding, the storage appears under each cluster host (the corresponding /etc/pve/storage.cfg entry is sketched below).
rbd block device storage
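Behind the scenes, the web dialog writes an entry to /etc/pve/storage.cfg; for an internal (hyperconverged) Ceph setup it looks roughly like the following sketch (values taken from the dialog above; the exact set of keys may differ slightly between PVE versions):
rbd: pve-rbd
        content images,rootdir
        pool pve-pool
        krbd 0
        nodes pve-store1,pve-store2,pve-store3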
You can view status information for the entire Ceph cluster:
ceph cluster state
(6) Ceph-related operations
1. Grant application-related permissions on the pool
Execute on pve-store1:
ceph osd pool application enable pve-pool rgw rbd
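To check which applications are now enabled on the pool, a quick verification:
ceph osd pool application get pve-pool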
2. Check which storage pools ceph has
ceph osd lspools
3. Check the space size and available size of ceph
ceph df
You can see that the current Ceph storage cluster consists of three 500 GB hard disks, with a maximum available space of about 473 GB (≈500 GB).
4. Check the number of replicas in Ceph
ceph osd pool get pve-pool size
The default is 3, meaning data stored in the Ceph cluster is kept in three copies, so the usable space of the whole cluster is roughly (500 GB x 3 OSDs) / 3 replicas = 500 GB (the actual calculation is more complex, which is why the figure shown is slightly lower, 473 GB).
Number of replicas in Ceph
To change the number of replicas to 2:
ceph osd pool set pve-pool size 2
After this change, it may take a long time for the Ceph cluster to rebalance and return to a healthy state, so do not do this unless necessary.
5. Install the Ceph Dashboard for the cluster
Execute on pve-store1:
apt install ceph-mgr-dashboard -y
ceph mgr module enable dashboard
ceph dashboard create-self-signed-cert
ceph dashboard ac-user-create admin admin123 administrator (admin is the username, admin123 the password, and administrator the role)
systemctl restart ceph-mgr@pve-store1.service
Visit https://10.8.20.241:8443 and log in with the username admin and password admin123.
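If the dashboard URL is unclear, the service endpoints exposed by the active mgr (including the dashboard) can be listed, for example:
# show service endpoints of the active mgr, including the dashboard URL
ceph mgr services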
ceph Dashboard login interface
ceph Dashboard Main Interface
ceph Dashboard -Hosts
ceph Dashboard - Storage pools; you can see there are 3 replicas
Editing the storage pool to add two more applications
In short, with this Dashboard you can manage Ceph independently of the PVE web UI (though in practice this is rarely necessary).
3. Installation and configuration of the compute nodes
Virtual machine creation
Virtual machine name: pve-compute1
CPU: 4 cores, virtualization enabled
Memory: 4 GB
Hard disk: 20 GB
Network: bridged mode
Compute node virtual machine configuration
Then install and configure PVE: set its hostname to pve-compute1 and its IP address to 10.8.20.244/24, switch to the domestic sources, and finally update the system.
Update system successful
Add this host to the cluster created above.
In the web management platform of pve-store1, click Data Center → Cluster → Join Information and copy the join information.
Cluster join information
Then open the web management platform of pve-compute1, click Data Center → Cluster → Join Cluster, paste the cluster join information into the dialog box, and enter the root password of pve-store1 to join the cluster.
join the cluster
The newly added host can now be seen in the web management platform of pve-store1.
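Alternatively, the compute node can be joined from its own shell instead of the web UI; a sketch, run as root on pve-compute1:
# join the existing cluster by pointing at one of its members
pvecm add 10.8.20.241
# confirm that all nodes are now members
pvecm nodes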
On pve-store1, select Data Center → Storage → pve-rbd, and then click the Edit button above.
In the Nodes field, keep only pve-compute1.
After this modification, a storage entry appears under the pve-compute1 node on the left.
This means the compute node pve-compute1 can now use Ceph for its storage (see the check below).
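Whether the compute node can actually reach the Ceph-backed storage can be confirmed from its shell, for instance:
# list all storages visible to this node with their status and capacity
pvesm status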
Create the second compute node pve-compute2 in the same way, and add it to the pve-rbd storage's node list as described above.
Finally, create and install a virtual machine on pve-compute1 or pve-compute2 to test the cluster. Remember to use pve-rbd as the storage for the virtual machine's disk (a command-line sketch follows).
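For a quick test from the command line, a VM with its disk on pve-rbd could be created roughly like this (the VMID, name, and ISO path are placeholders; adjust them to what is available on your nodes):
# create a test VM whose 32 GB disk lives on the Ceph-backed pve-rbd storage
qm create 100 --name test-vm --memory 2048 --cores 2 \
    --net0 virtio,bridge=vmbr0 \
    --scsihw virtio-scsi-pci --scsi0 pve-rbd:32 \
    --cdrom local:iso/debian-10.2.0-amd64-netinst.iso
qm start 100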