
PVE + Ceph Hyper-Converged Cluster (2)


Create PVE Virtualization Cluster

1. First install three PVE hosts. For the installation process, refer to the previous article on this site.

2. Once the PVE hosts are installed, create the PVE virtualization cluster. The cluster has to be created from the PVE shell.
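Before creating the cluster, make sure the three nodes can resolve one another by name. A minimal /etc/hosts for each node is sketched below; the hostnames and addresses are the ones that appear in the command output later in this article, so adjust them to your own environment.

192.168.77.160 pve-1
192.168.77.170 pve-2
192.168.77.180 pve-3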

3. Execute the following command on the PVE-1 node:

pvecm create vclusters

4. Execute the following command on the other two nodes to join them to the cluster (192.168.77.160 is the address of the PVE-1 node):

pvecm add 192.168.77.160

If the output shows successfully added node 'pve-2' to cluster, the node has joined successfully.

5. Check the status of the PVE cluster and confirm that it has been established correctly.

On the PVE-1 node, run:

pvecm status

root@pve-1:~# pvecm status
Quorum information
Date:             Wed Jul 3 11:27:05 2019
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000001
Ring ID:          1/12
Quorate:          Yes

Votequorum information
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate

Membership information
    Nodeid      Votes Name
0x00000001          1 192.168.77.160 (local)
0x00000002          1 192.168.77.170
0x00000003          1 192.168.77.180

All three nodes have joined the cluster.

Install Ceph on all nodes with the following command. (Ceph can also be installed on its own, and a separately installed Ceph cluster can be used for other purposes as needed.)

pveceph install --version luminous
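To confirm that the Ceph packages landed on each node (a supplementary check, not part of the original procedure), print the version of the installed binaries; it should report a 12.2.x (Luminous) release:

ceph --version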

Configure the Ceph cluster storage network. Execute the following on all nodes:

pveceph init --network 192.168.77.0/24
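The init step writes a cluster-wide Ceph configuration. On a standard PVE 5.x installation this ends up in /etc/pve/ceph.conf (linked from /etc/ceph/ceph.conf), and after running the command above its [global] section should contain network settings roughly like the sketch below; this is what to expect rather than output captured from the original environment:

[global]
    cluster network = 192.168.77.0/24
    public network = 192.168.77.0/24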

Create the Ceph Mon (monitor) services. Execute the following command on all nodes:

pveceph createmon
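Once the monitors have been created on all three nodes, a quick way to verify them (again, a supplementary check rather than a step from the original article) is:

ceph -s

The output should list all three monitors in quorum before you continue with the OSDs.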

View the hard disk information:

root@pve-1:~# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0    20G  0 disk
├─sda1               8:1    0  1007K  0 part
├─sda2               8:2    0   512M  0 part
└─sda3               8:3    0  19.5G  0 part
  ├─pve-swap       253:0    0   2.4G  0 lvm  [SWAP]
  ├─pve-root       253:1    0   4.8G  0 lvm  /
  ├─pve-data_tmeta 253:2    0     1G  0 lvm
  │ └─pve-data     253:4    0     8G  0 lvm
  └─pve-data_tdata 253:3    0     8G  0 lvm
    └─pve-data     253:4    0     8G  0 lvm
sdb                  8:16   0    50G  0 disk
sdc                  8:32   0    50G  0 disk
sdd                  8:48   0    50G  0 disk
sr0                 11:0    1 655.3M  0 rom
rbd0               252:0    0    32G  0 disk

Create the Ceph OSD services. Execute the following on all nodes. Because each node has three simulated data disks, add all three of them:

pveceph createosd /dev/sdb

pveceph createosd /dev/sdc

pveceph createosd /dev/sdd
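If a disk has been used before, pveceph createosd may refuse to initialize it. In that case, wiping the old partition data first usually helps; this is a generic precaution rather than a step from the original article, using the ceph-volume tool that ships with Luminous:

ceph-volume lvm zap /dev/sdb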

After creation completes, check the status of the OSDs and make sure they are all running normally:

root@pve-1:~# ceph osd tree
ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       0.43822 root default
-7       0.14607     host pve-1
 6   hdd 0.04869         osd.6      up  1.00000 1.00000
 7   hdd 0.04869         osd.7      up  1.00000 1.00000
 8   hdd 0.04869         osd.8      up  1.00000 1.00000
-3       0.14607     host pve-2
 0   hdd 0.04869         osd.0      up  1.00000 1.00000
 2   hdd 0.04869         osd.2      up  1.00000 1.00000
 3   hdd 0.04869         osd.3      up  1.00000 1.00000
-5       0.14607     host pve-3
 1   hdd 0.04869         osd.1      up  1.00000 1.00000
 4   hdd 0.04869         osd.4      up  1.00000 1.00000
 5   hdd 0.04869         osd.5      up  1.00000 1.00000

6. Create a cluster storage resource pool. PVE's built-in Ceph uses the RBD model: the outermost layer is the pool, which is comparable to a disk; the default pool name is rbd. Each pool can hold multiple images, comparable to folders, and each image can be mapped to a block device and mounted. Execute the following command on the PVE-1 node:

ceph osd pool create mytest 128 128

128 is the number of placement groups (PGs). Each PG is a virtual shard whose data is distributed across different OSDs; when an OSD goes down, its PGs are remapped to other OSDs, which provides automatic high availability.
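As a rough sanity check on the number 128 (a commonly used rule of thumb, not something stated in the original article): total PGs across all pools ≈ (number of OSDs × 100) / replica count, rounded to a power of two. With 9 OSDs and the default replica count of 3 this gives 9 × 100 / 3 = 300, so roughly 256 PGs in total; 128 for a single small test pool is comfortably within that budget.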

7. Copy the Ceph admin keyring to the filename PVE expects for this storage ID. On the PVE-1 node:

cd /etc/pve/priv/

cp /etc/ceph/ceph.client.admin.keyring ceph/my-ceph-storage.keyring

At this point the Ceph distributed storage is basically complete; the final step, adding the storage to PVE, is done from the web UI.
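For reference, the RBD storage entry that the web UI creates is stored in /etc/pve/storage.cfg. A sketch of what it might look like for this cluster is shown below; the storage ID has to match the keyring filename copied above (my-ceph-storage), and the monitor addresses are the three node IPs:

rbd: my-ceph-storage
    content images
    krbd 0
    monhost 192.168.77.160 192.168.77.170 192.168.77.180
    pool mytest
    username admin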
