Common commands of ceph operation and maintenance staff

2025-04-09 Update From: SLTechnology News&Howtos


Contents: starting ceph processes; checking cluster health and status; ceph storage space; keys and users; mon status, information, and operations; mds status, information, and operations; osd status, information, and operations; PG information and operations; pool information and operations; rados commands (view, create, delete); rbd commands (images, snapshots, exporting an image from a ceph pool)

The cluster starts a ceph process

Start the mon process

service ceph start mon.node1

Start the mds process

service ceph start mds.node1

Start the osd process

service ceph start osd.0

Check the cluster health status [root@client ~] # ceph health

HEALTH_OK
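Since `ceph health` prints a status word first, a small wrapper can turn it into an exit code for monitoring scripts. This is a sketch; the `check_ceph_health` function and the hard-coded sample string are my additions (in practice you would pass in `$(ceph health)`):

```shell
#!/bin/sh
# Return 0 if the health string starts with HEALTH_OK, nonzero otherwise.
check_ceph_health() {
    case "$1" in
        HEALTH_OK*)   echo "cluster healthy"; return 0 ;;
        HEALTH_WARN*) echo "cluster degraded: $1"; return 1 ;;
        *)            echo "cluster error: $1"; return 2 ;;
    esac
}

# Sample input; in real use: check_ceph_health "$(ceph health)"
check_ceph_health "HEALTH_OK"
```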

View the real-time running status of ceph

[root@client] # ceph -w

cluster be1756f2-54f7-4d8f-8790-820c82721f17

health HEALTH_OK

monmap e2: 3 mons at {node1=10.240.240.211:6789/0,node2=10.240.240.212:6789/0,node3=10.240.240.213:6789/0}, election epoch 294, quorum 0,1,2 node1,node2,node3

mdsmap e95: 1/1/1 up {0=node2=up:active}, 1 up:standby

osdmap e88: 3 osds: 3 up, 3 in

pgmap v1164: 448 pgs, 4 pools, 10003 MB data, 2520 objects

23617 MB used, 37792 MB / 61410 MB avail

448 active+clean

2014-06-30 00:48 mon.0 [INF] pgmap v1163: 448 pgs: 448 active+clean; 10003 MB data, 23617 MB used, 37792 MB / 61410 MB avail

Check ceph status information

[root@client] # ceph -s

cluster be1756f2-54f7-4d8f-8790-820c82721f17

health HEALTH_OK

monmap e2: 3 mons at {node1=10.240.240.211:6789/0,node2=10.240.240.212:6789/0,node3=10.240.240.213:6789/0}, election epoch 294, quorum 0,1,2 node1,node2,node3

mdsmap e95: 1/1/1 up {0=node2=up:active}, 1 up:standby

osdmap e88: 3 osds: 3 up, 3 in

pgmap v1164: 448 pgs, 4 pools, 10003 MB data, 2520 objects

23617 MB used, 37792 MB / 61410 MB avail

448 active+clean

[root@client ~] #

View ceph storage space [root@client ~] # ceph df

GLOBAL:

SIZE AVAIL RAW USED %RAW USED

61410M 37792M 23617M 38.46

POOLS:

NAME ID USED %USED OBJECTS

data 0 10000M 16.28 2500

metadata 1 3354k 0 20

rbd 2 0 0 0

jiayuan 3 0 0 0

[root@client ~] #
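The 38.46 figure in the GLOBAL line is simply raw used divided by total size. A quick sanity check with awk (the sample numbers are copied from the output above; in practice you would pipe `ceph df` output in instead of an echoed string):

```shell
# Recompute %RAW USED from SIZE and RAW USED (both in MB here).
# df_global stands in for the GLOBAL line of `ceph df`.
df_global="61410 37792 23617"   # SIZE AVAIL RAW_USED
echo "$df_global" | awk '{ printf "%.2f%% raw used\n", $3 / $1 * 100 }'
# prints: 38.46% raw used
```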

Remove all ceph packages from a node

[root@node1 ~] # ceph-deploy purge node1

[root@node1 ~] # ceph-deploy purgedata node1

Keys and users

Create an admin user for ceph and a key for the admin user, and save the key to the /etc/ceph directory:

# ceph auth get-or-create client.admin mds 'allow' osd 'allow *' mon 'allow *' > /etc/ceph/ceph.client.admin.keyring

Or

# ceph auth get-or-create client.admin mds 'allow' osd 'allow *' mon 'allow *' -o /etc/ceph/ceph.client.admin.keyring

Create a user for osd.0 and create a key

# ceph auth get-or-create osd.0 mon 'allow rwx' osd 'allow *' -o /var/lib/ceph/osd/ceph-0/keyring

Create a user for mds.node1 and create a key

# ceph auth get-or-create mds.node1 mon 'allow rwx' osd 'allow *' mds 'allow' -o /var/lib/ceph/mds/ceph-node1/keyring

View the authenticated users and their keys in the ceph cluster

ceph auth list

Delete an authenticated user in the cluster

ceph auth del osd.0
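The per-OSD keyring commands all follow one pattern, so they can be generated in a loop. A dry-run sketch (the `gen_osd_auth_cmd` helper is my addition; it only prints the commands, mirroring the osd.0 example above — pipe the output to `sh` to actually run them):

```shell
#!/bin/sh
# Print (not execute) the get-or-create command for each OSD id.
gen_osd_auth_cmd() {
    id="$1"
    echo "ceph auth get-or-create osd.$id mon 'allow rwx' osd 'allow *' -o /var/lib/ceph/osd/ceph-$id/keyring"
}

for i in 0 1 2; do
    gen_osd_auth_cmd "$i"
done
```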

View the detailed configuration of the cluster

[root@node1 ~] # ceph daemon mon.node1 config show | more

View details of cluster health status

[root@admin ~] # ceph health detail

HEALTH_WARN 12 pgs down; 12 pgs peering; 12 pgs stuck inactive; 12 pgs stuck unclean

pg 3.3b is stuck inactive since forever, current state down+peering, last acting [1,2]

pg 3.36 is stuck inactive since forever, current state down+peering, last acting [1,2]

pg 3.79 is stuck inactive since forever, current state down+peering, last acting [1,0]

pg 3.5 is stuck inactive since forever, current state down+peering, last acting [1,2]

pg 3.30 is stuck inactive since forever, current state down+peering, last acting [1,2]

pg 3.1a is stuck inactive since forever, current state down+peering, last acting [1,0]

pg 3.2d is stuck inactive since forever, current state down+peering, last acting [1,0]

pg 3.16 is stuck inactive since forever, current state down+peering, last acting [1,2]

View the directory where the ceph log is located

[root@node1] # ceph-conf --name mon.node1 --show-config-value log_file

/var/log/ceph/ceph-mon.node1.log

Mon status and information

View the status information of mon

[root@client ~] # ceph mon stat

e2: 3 mons at {node1=10.240.240.211:6789/0,node2=10.240.240.212:6789/0,node3=10.240.240.213:6789/0}, election epoch 294, quorum 0,1,2 node1,node2,node3

View the election status of mon

[root@client ~] # ceph quorum_status

{"election_epoch": 294, "quorum": [0,1,2], "quorum_names": ["node1", "node2", "node3"], "quorum_leader_name": "node1", "monmap": {"epoch": 2, "fsid": "be1756f2-54f7-4d8f-8790-820c82721f17", "modified": "2014-06-26 18:43:51.671106", "created": "0.000000", "mons": [{"rank": 0, "name": "node1", "addr": "10.240.240.211:6789\/0"}, {"rank": 1, "name": "node2", "addr": "10.240.240.212:6789\/0"}, {"rank": 2, "name": "node3", "addr": "10.240.240.213:6789\/0"}]}}

View the mapping information of mon

[root@client ~] # ceph mon dump

dumped monmap epoch 2

epoch 2

fsid be1756f2-54f7-4d8f-8790-820c82721f17

last_changed 2014-06-26 18:43:51.671106

created 0.000000

0: 10.240.240.211:6789/0 mon.node1

1: 10.240.240.212:6789/0 mon.node2

2: 10.240.240.213:6789/0 mon.node3
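The numbered lines at the end of `ceph mon dump` parse easily with awk into a plain list of monitor names and addresses. A sketch against the sample above (the `mon_dump` variable stands in for the saved `ceph mon dump` output):

```shell
#!/bin/sh
# Extract "name address" pairs from the rank lines of a mon dump.
mon_dump='0: 10.240.240.211:6789/0 mon.node1
1: 10.240.240.212:6789/0 mon.node2
2: 10.240.240.213:6789/0 mon.node3'

# Strip the ":port/nonce" suffix from the address field.
echo "$mon_dump" | awk '{ sub(/:[0-9]+\/[0-9]+$/, "", $2); print $3, $2 }'
# prints one "mon.nodeN 10.240.240.21N" line per monitor
```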

Mon operation

Delete a mon node

[root@node1 ~] # ceph mon remove node1

Removed mon.node1 at 10.39.101.1:6789/0, there are now 3 monitors

2014-07-07 18:11:04.974188 7f4d16bfd700 0 monclient: hunting for new mon

Get a running mon map and save it in the 1.txt file

[root@node3] # ceph mon getmap -o 1.txt

Got monmap epoch 6

Check the map obtained above

[root@node3 ~] # monmaptool --print 1.txt

monmaptool: monmap file 1.txt

epoch 6

fsid 92552333-a0a8-41b8-8b45-c93a8730525e

last_changed 2014-07-07 18:22:51.927205

created 0.000000

0: 10.39.101.1:6789/0 mon.node1

1: 10.39.101.2:6789/0 mon.node2

2: 10.39.101.3:6789/0 mon.node3

[root@node3 ~] #

Inject the above mon map into the newly added node

ceph-mon -i node4 --inject-monmap 1.txt

View the admin socket of mon

[root@node1 ~] # ceph-conf --name mon.node1 --show-config-value admin_socket

/var/run/ceph/ceph-mon.node1.asok

View the detailed status of mon

[root@node1 ~] # ceph daemon mon.node1 mon_status

{ "name": "node1",
  "rank": 0,
  "state": "leader",
  "election_epoch": 96,
  "quorum": [0, 1, 2],
  "outside_quorum": [],
  "extra_probe_peers": ["10.39.101.4:6789\/0"],
  "sync_provider": [],
  "monmap": { "epoch": 6,
    "fsid": "92552333-a0a8-41b8-8b45-c93a8730525e",
    "modified": "2014-07-07 18:22:51.927205",
    "created": "0.000000",
    "mons": [
      { "rank": 0, "name": "node1", "addr": "10.39.101.1:6789\/0" },
      { "rank": 1, "name": "node2", "addr": "10.39.101.2:6789\/0" },
      { "rank": 2, "name": "node3", "addr": "10.39.101.3:6789\/0" } ] } }

Delete a mon node

[root@os-node1 ~] # ceph mon remove os-node1

Removed mon.os-node1 at 10.40.10.64:6789/0, there are now 3 monitors

Mds status and information

View mds status

[root@client ~] # ceph mds stat

e95: 1/1/1 up {0=node2=up:active}, 1 up:standby

View the mapping information of mds

[root@client ~] # ceph mds dump

dumped mdsmap epoch 95

epoch 95

flags 0

created 2014-06-26 18:41:57.686801

modified 2014-06-30 00:24:11.749967

tableserver 0

root 0

session_timeout 60

session_autoclose 300

max_file_size 1099511627776

last_failure 84

last_failure_osd_epoch 81

compat compat={},rocompat={},incompat={1=base v0.20}

max_mds 1

in 0

up {0=5015}

failed

stopped

data_pools 0

metadata_pool 1

inline_data disabled

5015: 10.240.240.212:6808/3032 'node2' mds.0.12 up:active seq 30

5012: 10.240.240.211:6807/3459 'node1' mds.-1.0 up:standby seq 38

Mds operation

Delete a mds node

[root@node1 ~] # ceph mds rm 0 mds.node1

mds gid 0 dne

Osd status and information

View ceph osd running status

[root@client ~] # ceph osd stat

osdmap e88: 3 osds: 3 up, 3 in

View osd mapping information

[root@client ~] # ceph osd dump

epoch 88

fsid be1756f2-54f7-4d8f-8790-820c82721f17

created 2014-06-26 18:41:57.687442

modified 2014-06-30 00:46:27.179793

flags

pool 0 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 flags hashpspool crash_replay_interval 45 stripe_width 0

pool 1 'metadata' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 flags hashpspool stripe_width 0

pool 2 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 flags hashpspool stripe_width 0

pool 3 'jiayuan' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 73 owner 0 flags hashpspool stripe_width 0

max_osd 3

osd.0 up in weight 1 up_from 65 up_thru 75 down_at 64 last_clean_interval [53,55) 10.240.240.211:6800/3089 10.240.240.211:6801/3089 exists,up

osd.1 up in weight 1 up_from 59 up_thru 74 down_at 58 last_clean_interval 10.240.240.212:6800/2696 10.240.240.212:6801/2696 exists,up

osd.2 up in weight 1 up_from 62 up_thru 74 down_at 61 last_clean_interval 10.240.240.213:6800/2662 10.240.240.213:6801/2662 exists,up

[root@client ~] #

View the directory tree of osd

[root@client ~] # ceph osd tree

# id weight type name up/down reweight

-1 3 root default

-2 1 host node1

0 1 osd.0 up 1

-3 1 host node2

1 1 osd.1 up 1

-4 1 host node3

2 1 osd.2 up 1
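The tree output is easy to post-process, for example to count how many OSDs are up. A sketch (the `osd_tree` sample is adapted from the output above, with osd.2 flipped to down so the count is non-trivial; in practice you would pipe `ceph osd tree` in):

```shell
#!/bin/sh
# Count up/total OSDs from `ceph osd tree`-style output.
osd_tree=' 0 1 osd.0 up 1
 1 1 osd.1 up 1
 2 1 osd.2 down 1'

echo "$osd_tree" | awk '$3 ~ /^osd\./ { total++; if ($4 == "up") up++ }
                        END { printf "%d/%d osds up\n", up, total }'
# prints: 2/3 osds up
```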

Osd operation

Mark an osd disk down

[root@node1 ~] # ceph osd down 0 # mark osd.0 down

Delete an osd hard drive in the cluster

[root@node4 ~] # ceph osd rm 0

Removed osd.0

Delete an osd's crush map entry in the cluster

[root@node1 ~] # ceph osd crush rm osd.0

Delete an osd host node from the crush map

[root@node1 ~] # ceph osd crush rm node1

Removed item id-2 name 'node1' from crush map
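The removal commands above (out, crush rm, auth del, rm) are normally run in a fixed order for a single OSD. A dry-run sketch that only prints the sequence (the `print_osd_removal` helper and the daemon-stop step are my additions, not from the article; pipe the output to `sh` to execute for real):

```shell
#!/bin/sh
# Print the usual removal order for one OSD: mark out, stop the daemon,
# remove it from the crush map, delete its auth key, delete the osd id.
print_osd_removal() {
    id="$1"
    echo "ceph osd out osd.$id"
    echo "service ceph stop osd.$id"
    echo "ceph osd crush rm osd.$id"
    echo "ceph auth del osd.$id"
    echo "ceph osd rm $id"
}

print_osd_removal 0
```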

Check the maximum number of osds

[root@node1 ~] # ceph osd getmaxosd

max_osd = 4 in epoch 514 # default maximum is 4 osd nodes

Set the maximum number of osds (this value must be increased when adding osd nodes)

[root@node1 ~] # ceph osd setmaxosd 10

Set the crush weight of an osd

ceph osd crush set {id} {weight} [{loc1} [{loc2} ...]]

For example:

[root@admin ~] # ceph osd crush set 3 3.0 host=node4

Set item id 3 name 'osd.3' weight 3 at location {host=node4} to crush map

[root@admin ~] # ceph osd tree

# id weight type name up/down reweight

-1 6 root default

-2 1 host node1

0 1 osd.0 up 1

-3 1 host node2

1 1 osd.1 up 1

-4 1 host node3

2 1 osd.2 up 1

-5 3 host node4

3 3 osd.3 up 0.5

Or in the following way

[root@admin ~] # ceph osd crush reweight osd.3 1.0

Reweighted item id 3 name 'osd.3' to 1 in crush map

[root@admin ~] # ceph osd tree

# id weight type name up/down reweight

-1 4 root default

-2 1 host node1

0 1 osd.0 up 1

-3 1 host node2

1 1 osd.1 up 1

-4 1 host node3

2 1 osd.2 up 1

-5 1 host node4

3 1 osd.3 up 0.5

Set the weight of osd

[root@admin ~] # ceph osd reweight 3 0.5

Reweighted osd.3 to 0.5 (32768)

[root@admin ~] # ceph osd tree

# id weight type name up/down reweight

-1 4 root default

-2 1 host node1

0 1 osd.0 up 1

-3 1 host node2

1 1 osd.1 up 1

-4 1 host node3

2 1 osd.2 up 1

-5 1 host node4

3 1 osd.3 up 0.5
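`ceph osd reweight` stores the weight internally as a fixed-point fraction of 0x10000 (65536), which is why the confirmation echoes a raw integer alongside the decimal value. A quick check of the conversion:

```shell
#!/bin/sh
# A reweight of 0.5 corresponds to 0.5 * 65536 = 32768 in raw units.
weight=0.5
awk -v w="$weight" 'BEGIN { printf "%d\n", w * 65536 }'
# prints: 32768
```

Note that this `reweight` (a 0..1 override used to shift data off a device) is distinct from the crush weight set by `ceph osd crush reweight`, which usually reflects the device's capacity.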

Drive an osd node out of the cluster

[root@admin ~] # ceph osd out osd.3

Marked out osd.3.

[root@admin ~] # ceph osd tree

# id weight type name up/down reweight

-1 4 root default

-2 1 host node1

0 1 osd.0 up 1

-3 1 host node2

1 1 osd.1 up 1

-4 1 host node3

2 1 osd.2 up 1

-5 1 host node4

3 1 osd.3 up 0 # when the reweight of osd.3 becomes 0, data is no longer allocated to it, but the device is still alive

Add the evicted osd back into the cluster

[root@admin ~] # ceph osd in osd.3

Marked in osd.3.

[root@admin ~] # ceph osd tree

# id weight type name up/down reweight

-1 4 root default

-2 1 host node1

0 1 osd.0 up 1

-3 1 host node2

1 1 osd.1 up 1

-4 1 host node3

2 1 osd.2 up 1

-5 1 host node4

3 1 osd.3 up 1

Pause osd (the whole cluster will no longer receive data after pausing)

[root@admin ~] # ceph osd pause

set pauserd,pausewr

Unpause osd (the cluster starts receiving data again)

[root@admin ~] # ceph osd unpause

unset pauserd,pausewr

View the configuration parameters of osd.2 in the cluster

ceph --admin-daemon /var/run/ceph/ceph-osd.2.asok config show | less

PG information

View mapping information for pg groups

[root@client ~] # ceph pg dump

dumped all in format plain

version 1164

stamp 2014-06-30 00:48:29.754714

last_osdmap_epoch 88

last_pg_scan 73

full_ratio 0.95

nearfull_ratio 0.85

pg_stat objects mip degr unf bytes log disklog state state_stamp v reported up up_primary acting acting_primary last_scrub scrub_stamp last_deep_scrub deep_scrub_stamp

(per-PG rows omitted; each row lists the pg id, object counts, bytes, state such as active+clean, the up/acting OSD sets, and scrub timestamps)

View the map of a PG

[root@client ~] # ceph pg map 0.3f

osdmap e88 pg 0.3f (0.3f) -> up [0,2] acting [0,2] # [0,2] means the pg is stored on osd.0 and osd.2, with osd.0 holding the primary replica

View PG status

[root@client ~] # ceph pg stat

v1164: 448 pgs: 448 active+clean; 10003 MB data, 23617 MB used, 37792 MB / 61410 MB avail

Query the details of a pg

[root@client ~] # ceph pg 0.26 query

View the status of stuck in pg

[root@client ~] # ceph pg dump_stuck unclean

Ok

[root@client ~] # ceph pg dump_stuck inactive

Ok

[root@client ~] # ceph pg dump_stuck stale

Ok

Show all pg statistics in a cluster

ceph pg dump --format plain

Pg operation

Recover a missing pg

ceph pg {pg-id} mark_unfound_lost revert

Show pgs in an abnormal state

ceph pg dump_stuck inactive|unclean|stale

Pool information

List the pools in the ceph cluster

[root@admin ~] # ceph osd lspools

0 data,1 metadata,2 rbd

Create a pool in the ceph cluster

ceph osd pool create jiayuan 100 # 100 is the number of placement groups (PGs)
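The 100 above is a choice, not a fixed rule. A commonly cited heuristic (my addition, not from the article) is roughly 100 PGs per OSD divided by the replica count, rounded up to a power of two; the `suggest_pg_num` helper sketches it:

```shell
#!/bin/sh
# Rule-of-thumb PG count: (osds * 100) / replicas, rounded up to the
# next power of two. With 3 osds and 2 replicas this suggests 256.
suggest_pg_num() {
    awk -v o="$1" -v r="$2" 'BEGIN {
        target = o * 100 / r
        pg = 1
        while (pg < target) pg *= 2
        print pg
    }'
}

suggest_pg_num 3 2
# prints: 256
```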

Configure a quota for a ceph pool

ceph osd pool set-quota data max_objects 10000

Delete a pool in the cluster

ceph osd pool delete jiayuan jiayuan --yes-i-really-really-mean-it

# the pool name must be given twice

Displays the details of the pool in the cluster

[root@admin ~] # rados df

pool name category KB objects clones degraded unfound rd rd KB wr wr KB

data - 475764704 116155 0 0 0 0 0 116379 475764704

metadata - 5606 21 0 0 0 0 0 314 5833

rbd - 0 0 0 0 0 0 0 0 0

total used 955852448 116176

total avail 639497596

total space 1595350044

[root@admin ~] #

Pool operation

Create a snapshot of a pool

[root@admin ~] # ceph osd pool mksnap data date-snap

Created pool data snap date-snap

Delete a snapshot of a pool

[root@admin ~] # ceph osd pool rmsnap data date-snap

Removed pool data snap date-snap

View the number of pg of the data pool

[root@admin ~] # ceph osd pool get data pg_num

pg_num: 64

Set the maximum storage space of the data pool to 100T (default is 1T)

[root@admin ~] # ceph osd pool set data target_max_bytes 100000000000000

Set pool 0 target_max_bytes to 100000000000000

Set the number of replicas for the data pool to 3

[root@admin ~] # ceph osd pool set data size 3

Set pool 0 size to 3

Set the minimum number of replicas at which the data pool still accepts writes to 2

[root@admin ~] # ceph osd pool set data min_size 2

Set pool 0 min_size to 2

View the replica counts of all pools in the cluster

[root@admin mycephfs] # ceph osd dump | grep 'replicated size'

pool 0 'data' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 26 owner 0 flags hashpspool crash_replay_interval 45 target_bytes 100000000000000 stripe_width 0

pool 1 'metadata' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 flags hashpspool stripe_width 0

pool 2 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 flags hashpspool stripe_width 0

Set the number of pg for a pool

[root@admin ~] # ceph osd pool set data pg_num 100

Set pool 0 pg_num to 100

Set the number of pgp for a pool

[root@admin ~] # ceph osd pool set data pgp_num 100

Set pool 0 pgp_num to 100

Rados commands

List the pools in the ceph cluster (pool names only)

[root@node-44 ~] # rados lspools

data

metadata

rbd

images

volumes

.rgw.root

compute

.rgw.control

.rgw

.rgw.gc

.users.uid

Check how many pools there are in the ceph cluster, and the capacity and utilization of each pool

[root@node-44 ~] # rados df

pool name category KB objects clones degraded unfound rd rd KB wr wr KB

.rgw - 0 0 0 0 0 0 0 0 0

.rgw.control - 0 8 0 0 0 0 0 0 0

.rgw.gc - 0 32 0 0 0 57172 57172 38136 0

.rgw.root - 1 4 0 0 0 75 46 10 10

.users.uid - 1 1 0 0 0 0 0 2 1

compute - 67430708 16506 0 0 0 398128 75927848 1174683 222082706

data - 0 0 0 0 0 0 0 0 0

images - 250069744 30683 0 0 0 50881 195328724 65025 388375482

metadata - 0 0 0 0 0 0 0 0 0

rbd - 0 0 0 0 0 0 0 0 0

volumes - 79123929 19707 0 0 0 2575693 63437000 1592456 163812172

total used 799318844 66941

total avail 11306053720

total space 12105372564

[root@node-44 ~] #

Create

Create a pool

[root@node-44 ~] # rados mkpool test

View the objects in a ceph pool (an rbd image is stored as many block objects)

[root@node-44 ~] # rados ls -p volumes | more

rbd_data.348f21ba7021.0000000000000866

rbd_data.32562ae8944a.0000000000000c79

rbd_data.589c2ae8944a.00000000000031ba

rbd_data.58c9151ff76b.00000000000029af

rbd_data.58c9151ff76b.0000000000002c19

rbd_data.58c9151ff76b.0000000000000a5a

rbd_data.58c9151ff76b.0000000000001c69

rbd_data.58c9151ff76b.000000000000281d

rbd_data.58c9151ff76b.0000000000002de1

rbd_data.58c9151ff76b.0000000000002dae

Create an object

[root@admin-node] # rados create test-object -p test

[root@admin-node ~] # rados -p test ls

test-object

Delete

Delete an object

[root@admin-node] # rados rm test-object-1 -p test

View the usage of the rbd command

View all the images in a pool in ceph

[root@node-44 ~] # rbd ls images

2014-05-24 17:17:37.043659 7f14caa6e700 0 -- :/1025604 >> 10.49.101.9:6789/0 pipe(0x6c5400 sd=3 :0 pgs=0 cs=0 l=1 c=0x6c5660).fault

2182d9ac-52f4-4f5d-99a1-ab3ceacbf0b9

34e1a475-5b11-410c-b4c4-69b5f780f03c

476a9f3b-4608-4ffd-90ea-8750e804f46e

60eae8bf-dd23-40c5-ba02-266d5b942767

72e16e93-1fa5-4e11-8497-15bd904eeffe

74cb427c-cee9-47d0-b467-af217a67e60a

8f181a53-520b-4e22-af7c-de59e8ccca78

9867a580-22fe-4ed0-a1a8-120b8e8d18f4

ac6f4dae-4b81-476d-9e83-ad92ff25fb13

d20206d7-ff31-4dce-b59a-a622b0ea3af6

[root@node-44 ~] # rbd ls volumes

2014-05-24 17:22:18.649929 7f9e98733700 0 -- :/1010725 >> 10.49.101.9:6789/0 pipe(0x96a400 sd=3 :0 pgs=0 cs=0 l=1 c=0x96a660).fault

volume-0788fc6c-0dd4-4339-bad4-e9d78bd5365c

volume-0898c5b4-4072-4cae-affc-ec59c2375c51

volume-2a1fb287-5666-4095-8f0b-6481695824e2

volume-35c6aad4-8ea4-4b8d-95c7-7c3a8e8758c5

volume-814494cc-5ae6-4094-9d06-d844fdf485c4

volume-8a6fb0db-35a9-4b3b-9ace-fb647c2918ea

volume-8c108991-9b03-4308-b979-51378bba2ed1

volume-8cf3d206-2cce-4579-91c5-77bcb4a8a3f8

volume-91fc075c-8bd1-41dc-b5ef-844f23df177d

volume-b1263d8b-0a12-4b51-84e5-74434c0e73aa

volume-b84fad5d-16ee-4343-8630-88f265409feb

volume-c03a2eb1-06a3-4d79-98e5-7c62210751c3

volume-c17bf6c0-80ba-47d9-862d-1b9e9a48231e

volume-c32bca55-7ec0-47ce-a87e-a883da4b4ccd

volume-df8961ef-11d6-4dae-96ee-f2df8eb4a08c

volume-f1c38695-81f8-44fd-9af0-458cddf103a3

View the information of an image in a ceph pool

[root@node-44] # rbd info -p images --image 74cb427c-cee9-47d0-b467-af217a67e60a

rbd image '74cb427c-cee9-47d0-b467-af217a67e60a':

size 1048 MB in 131 objects

order 23 (8192 KB objects)

block_name_prefix: rbd_data.95c7783fc0d0

format: 2

features: layering

Image operations

Create a 10000 MB image named zhanguo in the test pool

[root@node-44] # rbd create -p test --size 10000 zhanguo

[root@node-44 ~] # rbd -p test info zhanguo # view the newly created image

rbd image 'zhanguo':

size 10000 MB in 2500 objects

order 22 (4096 KB objects)

block_name_prefix: rb.0.127d2.2ae8944a

format: 1

[root@node-44 ~] #

Delete an image

[root@node-44 ~] # rbd rm -p test lizhanguo

Removing image: 100% complete... Done.

Resize an image

[root@node-44] # rbd resize -p test --size 20000 zhanguo

Resizing image: 100% complete... Done.

[root@node-44 ~] # rbd -p test info zhanguo # view the resized image

rbd image 'zhanguo':

size 20000 MB in 5000 objects

order 22 (4096 KB objects)

block_name_prefix: rb.0.127d2.2ae8944a

format: 1

[root@node-44 ~] #

Snapshot operations

Create a snapshot of an image

[root@node-44 ~] # rbd snap create test/zhanguo@zhanguo123 # pool/image@snapshot

[root@node-44 ~] # rbd snap ls -p test zhanguo

SNAPID NAME SIZE

2 zhanguo123 20000 MB

[root@node-44 ~] #

[root@node-44 ~] # rbd info test/zhanguo@zhanguo123

rbd image 'zhanguo':

size 20000 MB in 5000 objects

order 22 (4096 KB objects)

block_name_prefix: rb.0.127d2.2ae8944a

format: 1

protected: False

[root@node-44 ~] #

List the snapshots of an image

[root@os-node101] # rbd snap ls -p volumes volume-7687988d-16ef-4814-8a2c-3fbd85e928e4

SNAPID NAME SIZE

5 snapshot-ee7862aa-825e-4004-9587-879d60430a12 102400 MB

Delete a snapshot of an image

[root@os-node101] # rbd snap rm volumes/volume-7687988d-16ef-4814-8a2c-3fbd85e928e4@snapshot-ee7862aa-825e-4004-9587-879d60430a12

rbd: snapshot 'snapshot-60586eba-b0be-4885-81ab-010757e50efb' is protected from removal.

2014-08-18 19:23:42.099301 7fd0245ef760 -1 librbd: removing snapshot from header failed: (16) Device or resource busy

The error above means the snapshot cannot be deleted because it is write-protected. Remove the write protection first, then delete it:

[root@os-node101] # rbd snap unprotect volumes/volume-7687988d-16ef-4814-8a2c-3fbd85e928e4@snapshot-ee7862aa-825e-4004-9587-879d60430a12

[root@os-node101] # rbd snap rm volumes/volume-7687988d-16ef-4814-8a2c-3fbd85e928e4@snapshot-ee7862aa-825e-4004-9587-879d60430a12
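The unprotect-then-remove sequence above can be scripted. A dry-run sketch (the `print_snap_removal` helper is my addition; it only prints the two commands, pipe the output to `sh` to execute):

```shell
#!/bin/sh
# Print the unprotect + remove commands for a pool/image@snapshot spec.
print_snap_removal() {
    spec="$1"
    echo "rbd snap unprotect $spec"
    echo "rbd snap rm $spec"
}

print_snap_removal "volumes/volume-7687988d-16ef-4814-8a2c-3fbd85e928e4@snapshot-ee7862aa-825e-4004-9587-879d60430a12"
```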

Delete all snapshots of an image

[root@os-node101] # rbd snap purge -p volumes volume-7687988d-16ef-4814-8a2c-3fbd85e928e4

Removing all snapshots: 100% complete... Done.

Export an image from a ceph pool

Export an image

[root@node-44] # rbd export -p images --image 74cb427c-cee9-47d0-b467-af217a67e60a /root/aaa.img

2014-05-24 17:16:15.197695 7ffb47a9a700 0 -- :/1020493 >> 10.49.101.9:6789/0 pipe(0x1368400 sd=3 :0 pgs=0 cs=0 l=1 c=0x1368660).fault

Exporting image: 100% complete... Done.

Export a cloud disk

[root@node-44] # rbd export -p volumes --image volume-470fee37-b950-4eef-a595-d7def334a5d6 /var/lib/glance/ceph-pool/volumes/Message-JiaoBenJi-10.40.212.24

2014-05-24 17:28:18.940402 7f14ad39f700 0 -- :/1032237 >> 10.49.101.9:6789/0 pipe(0x260a400 sd=3 :0 pgs=0 cs=0 l=1 c=0x260a660).fault

Exporting image: 100% complete... Done.

Import an image into ceph (note: an image imported this way is not registered in OpenStack and is therefore not visible there)

[root@node-44] # rbd import /root/aaa.img -p images --image 74cb427c-cee9-47d0-b467-af217a67e60a

Importing image: 100% complete... Done.
