2025-01-31 Update From: SLTechnology News&Howtos
This article is a quick walkthrough of installing Ceph L (Luminous). It is fairly detailed and should serve as a useful reference; interested readers are encouraged to read it through.
Ceph role assignment

172.31.68.241    admin-node / ceph-deploy / mon / mgr / mds / rgw
172.31.68.242    osd.0 / mon
172.31.68.243    osd.1 / mon

Configure ssh password-less login
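The ceph-deploy commands below refer to the nodes by hostname (ceph2, ceph3, ceph4). Assuming those hostnames correspond to the role table above (an assumption; the article never states the mapping explicitly), each node would carry an /etc/hosts entry along these lines:

```
172.31.68.241    ceph2    # admin-node / mon / mgr / mds / rgw
172.31.68.242    ceph3    # osd.0 / mon
172.31.68.243    ceph4    # osd.1 / mon
```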
The admin-node must be able to log in to the OSD machines over ssh without a password. If you run ceph-deploy as an ordinary (non-root) user, that user also needs sudo permission, assigned as follows:
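Distributing an ssh key is what actually makes the login password-less. A minimal sketch, assuming the cephadmin user created below and the ceph3/ceph4 hostnames; the key is generated into a scratch directory so the sketch is side-effect free, and the ssh-copy-id commands are only printed (on the real admin-node, use ~/.ssh and run them interactively):

```shell
# Generate a passphrase-less key pair in a scratch directory
# (on the real admin-node this would be ~/.ssh/id_rsa).
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -N '' -f "$KEYDIR/id_rsa" -q

# Print the command to push the public key to each OSD node;
# run these interactively once the cephadmin user exists there.
for node in ceph3 ceph4; do
    echo "ssh-copy-id -i $KEYDIR/id_rsa.pub cephadmin@$node"
done
```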
useradd -d /home/cephadmin -m cephadmin
echo "cephadmin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadmin
chmod 0440 /etc/sudoers.d/cephadmin

Quick install

ceph-deploy new ceph2
ceph-deploy install ceph2 ceph3 ceph4
ceph-deploy --overwrite-conf mon create-initial
ceph-deploy admin ceph2 ceph3 ceph4
ceph-deploy mgr create ceph2
ceph-deploy osd create --data /dev/vdb1 ceph3
ceph-deploy osd create --data /dev/vdb1 ceph4
ceph health
ceph -s

Extension

ceph-deploy mds create ceph2
ceph-deploy mon add ceph3
ceph-deploy mon add ceph4
ceph quorum_status --format json-pretty
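For orientation: ceph-deploy new ceph2 writes a skeleton ceph.conf into the working directory, which the admin step then distributes to the nodes. With the roles above it looks roughly like the following sketch (the fsid is a generated UUID, shown here as a placeholder):

```
[global]
fsid = <generated-uuid>
mon_initial_members = ceph2
mon_host = 172.31.68.241
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
```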
Install rgw

ceph-deploy rgw create ceph2

Adjust the configuration:

[client.rgw.ceph2]
rgw_frontends = "civetweb port=8080"

ceph-deploy --overwrite-conf admin ceph2 ceph3 ceph4 ceph5
systemctl restart ceph-radosgw@rgw.ceph2.service
curl http://172.31.68.241:8080 -I

Simulate a client

apt-get install ceph
ceph-deploy admin ceph5
Object storage

echo 'hello ceph object storage' > testfile.txt

Create a pool:
ceph osd pool create mytest 8

Upload the file:
rados put test-object-1 testfile.txt --pool=mytest
rados -p mytest ls

Get the file:
rados get test-object-1 testfile.txt.1 --pool=mytest

View the mapping location:
ceph osd map mytest test-object-1

Delete the file:
rados rm test-object-1 --pool=mytest

Delete the pool:
ceph osd pool rm mytest mytest --yes-i-really-really-mean-it

Block storage

Execute on admin:
ceph osd pool create rdb 8
rbd pool init rdb
rbd create foo --size 512 --image-feature layering -p rdb

rbd map foo --name client.admin -p rdb

CephFS

Execute on admin:
ceph osd pool create cephfs_data 4
ceph osd pool create cephfs_metadata 4
ceph osd lspools
ceph fs new cephfs cephfs_metadata cephfs_data

The admin.secret content is part of the ceph.client.admin.keyring content:
AQDhRX1baLeFFxAAskNapEuyipJ7SqS7Q1mh/Q==

Kernel-level mount:
mkdir /mnt/mycephfs
mount -t ceph 172.31.68.241:6789,172.31.68.242:6789,172.31.68.243:6789:/ /mnt/mycephfs -o name=admin,secretfile=admin.secret

Experiment:
cd /mnt/mycephfs
echo 'hello ceph CephFS' > hello.txt
cd ~

Unmount:
umount -lf /mnt/mycephfs
rm -rf /mnt/mycephfs

User-level mount:
mkdir /mnt/mycephfs
ceph-fuse -m 172.31.68.241:6789 /mnt/mycephfs

RGW S3 storage

ceph osd pool create .rgw 8 8
ceph osd pool create .rgw.control 8 8
ceph osd pool create .rgw.gc 8 8
ceph osd pool create .rgw.buckets 8 8
ceph osd pool create .rgw.buckets.index 8 8
ceph osd pool create .rgw.buckets.room 8 8
ceph osd pool create .log 8 8
ceph osd pool create .intent-log 8 8
ceph osd pool create .usage 8 8
ceph osd pool create .users 8 8
ceph osd pool create .users.email 8 8
ceph osd pool create .users.swift 8 8
ceph osd pool create .users.uid 8 8

Attach data disks to the VMs

cd /opt/vm/data_image
qemu-img create -f qcow2 ubuntu16.04-2-data.img 2G
qemu-img create -f qcow2 ubuntu16.04-3-data.img 2G
virsh attach-disk [--domain] $DOMAIN [--source] $SOURCEFILE [--target] $TARGET --subdriver qcow2 --config --live
virsh attach-disk Ubuntu16.04-2 /opt/vm/data_image/ubuntu16.04-2-data.img vdb --subdriver qcow2
virsh attach-disk Ubuntu16.04-3 /opt/vm/data_image/ubuntu16.04-3-data.img vdb --subdriver qcow2

Clear the installation packages:
ceph-deploy purge ceph2 ceph3 ceph4

Clear the configuration information:
ceph-deploy purgedata ceph2 ceph3 ceph4
ceph-deploy forgetkeys

On each node, delete the residual configuration files:
rm -rf /var/lib/ceph/osd/*
rm -rf /var/lib/ceph/mon/*
rm -rf /var/lib/ceph/mds/*
rm -rf /var/lib/ceph/bootstrap-mds/*
rm -rf /var/lib/ceph/bootstrap-osd/*
rm -rf /var/lib/ceph/bootstrap-mon/*
rm -rf /var/lib/ceph/tmp/*
rm -rf /etc/ceph/*
rm -rf /var/run/ceph/*

Clear the LVM configuration:
vgscan
vgdisplay -v
lvremove
vgremove
pvremove

That is all of "How to install Ceph L quickly". Thank you for reading! I hope the content is helpful; for more related knowledge, follow the industry information channel.