
Steps of building and configuring Ceph


This article explains the steps of building and configuring Ceph. The explanation is simple and clear, and easy to follow. Work through it to learn how to build and configure a small Ceph cluster.

Notes on Ceph setup and configuration

Platform: VirtualBox 4.3.12

Virtual machines: CentOS 6.5, kernel Linux 2.6.32-504.3.3.el6.x86_64

(1) Preparatory work

Modify the hostname and configure the IP address

Note: perform the following steps on both store01 and store02, adjusting values as appropriate.

[root@store01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
UUID=82e3956c-6850-426a-afd7-977a26a77dab
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
IPADDR=192.168.1.179
NETMASK=255.255.128.0
GATEWAY=192.168.1.1
HWADDR=08:00:27:65:4B:DD
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"

[root@store01 ~]# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 08:00:27:65:4B:DD
          inet addr:192.168.1.179  Bcast:192.168.127.255  Mask:255.255.128.0
          inet6 addr: fe80::a00:27ff:fe65:4bdd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:75576 errors:0 dropped:0 overruns:0 frame:0
          TX packets:41422 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:88133010 (84.0 MiB)  TX bytes:4529474 (4.3 MiB)

[root@store01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.179   store01
192.168.1.190   store02
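The transcript above covers only the IP address; the hostname half of this step is not shown. A minimal sketch of the permanent hostname change on CentOS 6 (assuming the same store01/store02 names) would be:

[root@store01 ~]# vim /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=store01
[root@store01 ~]# hostname store01    # apply to the running system without a reboot

Repeat on store02 with HOSTNAME=store02.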

Configure NTP time synchronization

[root@store01 ~]# yum install ntp ntpdate
[root@store01 ~]# service ntpd start
Starting ntpd: [OK]
[root@store01 ~]# chkconfig ntpd on
[root@store01 ~]# netstat -tunlp | grep 123
udp  0  0 192.168.1.179:123            0.0.0.0:*   12254/ntpd
udp  0  0 127.0.0.1:123                0.0.0.0:*   12254/ntpd
udp  0  0 0.0.0.0:123                  0.0.0.0:*   12254/ntpd
udp  0  0 fe80::a00:27ff:fe65:4bdd:123 :::*        12254/ntpd
udp  0  0 ::1:123                      :::*        12254/ntpd
udp  0  0 :::123                       :::*        12254/ntpd
[root@store01 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+gus.buptnet.edu 202.112.31.197   3 u    7   64  377  115.339    4.700  46.105
*dns2.synet.edu. 202.118.1.46     2 u   69   64  373   44.619    1.680   6.667

[root@store02 ~]# yum install ntp ntpdate
[root@store02 ~]# vim /etc/ntp.conf
server store01 iburst
[root@store02 ~]# service ntpd start
Starting ntpd: [OK]
[root@store02 ~]# chkconfig ntpd on
[root@store02 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 store01         202.112.10.36    4 u   56   64    1    0.412    0.354   0.000
[root@store02 ~]# netstat -tunlp | grep 123
udp  0  0 192.168.1.190:123            0.0.0.0:*   12971/ntpd
udp  0  0 127.0.0.1:123                0.0.0.0:*   12971/ntpd
udp  0  0 0.0.0.0:123                  0.0.0.0:*   12971/ntpd
udp  0  0 fe80::a00:27ff:fead:71b:123  :::*        12971/ntpd
udp  0  0 ::1:123                      :::*        12971/ntpd
udp  0  0 :::123                       :::*        12971/ntpd
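If ntpd starts with a large clock offset it can take a long time to converge, so a one-shot sync before starting the daemon is common practice. A sketch (not part of the original transcript), run on store02 against store01:

[root@store02 ~]# service ntpd stop
[root@store02 ~]# ntpdate store01     # step the clock once while ntpd is stopped
[root@store02 ~]# service ntpd start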

Disable SELinux and iptables

[root@store01 ~]# /etc/init.d/iptables stop
iptables: Setting chains to policy ACCEPT: filter [OK]
iptables: Flushing firewall rules: [OK]
iptables: Unloading modules: [OK]
[root@store01 ~]# /etc/init.d/ip6tables stop
ip6tables: Setting chains to policy ACCEPT: filter [OK]
ip6tables: Flushing firewall rules: [OK]
ip6tables: Unloading modules: [OK]
[root@store01 ~]# chkconfig iptables off
[root@store01 ~]# chkconfig ip6tables off
[root@store01 ~]# setenforce 0
[root@store01 ~]# vim /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
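Note that SELINUX=disabled in /etc/selinux/config only takes effect after a reboot; setenforce 0 covers the current session. A quick check that enforcement is off:

[root@store01 ~]# getenforce
Permissive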

Set up passwordless SSH access for the root user (refer to another post on this blog)
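That post is not reproduced here, but a minimal sketch of passwordless root SSH from store01 to store02 (assuming default key paths and an empty passphrase) looks like:

[root@store01 ~]# ssh-keygen -t rsa            # accept the defaults
[root@store01 ~]# ssh-copy-id root@store02
[root@store01 ~]# ssh store02 hostname         # should print store02 with no password prompt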

(2) Install Ceph

Add the yum repository (Ceph version: 0.72)

# vim /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-emperor/el6/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-emperor/el6/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://ceph.com/rpm-emperor/el6/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
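Because gpgcheck=1 is set, yum will verify packages against the release key; importing it up front and rebuilding the metadata cache can avoid surprises. A sketch:

[root@store01 ~]# rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
[root@store01 ~]# yum clean all && yum makecache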

Install Ceph

[root@store01 ~]# yum install ceph ceph-deploy
[root@store01 ~]# ceph-deploy --version
1.5.11
[root@store01 ~]# ceph --version
ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)
[root@store02 ~]# yum install ceph

(3) Configure Ceph

[root@store01 ~]# mkdir my-cluster
[root@store01 ~]# cd my-cluster/
[root@store01 my-cluster]# ls
[root@store01 my-cluster]# ceph-deploy new store01
[ceph_deploy.conf][DEBUG] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO] Invoked (1.5.11): /usr/bin/ceph-deploy new store01
[ceph_deploy.new][DEBUG] Creating new cluster named ceph
[ceph_deploy.new][DEBUG] Resolving host store01
[ceph_deploy.new][DEBUG] Monitor store01 at 192.168.1.179
[ceph_deploy.new][INFO] making sure passwordless SSH succeeded
[ceph_deploy.new][DEBUG] Monitor initial members are ['store01']
[ceph_deploy.new][DEBUG] Monitor addrs are ['192.168.1.179']
[ceph_deploy.new][DEBUG] Creating a random mon key...
[ceph_deploy.new][DEBUG] Writing initial config to ceph.conf...
[ceph_deploy.new][DEBUG] Writing monitor keyring to ceph.mon.keyring...
[root@store01 my-cluster]# ls
ceph.conf ceph.log ceph.mon.keyring
[root@store01 my-cluster]# cat ceph.conf
[global]
auth_service_required = cephx
filestore_xattr_use_omap = true
auth_client_required = cephx
auth_cluster_required = cephx
mon_host = 192.168.1.179
mon_initial_members = store01
fsid = b45a03be-3abf-4736-8475-f238e1f2f479
[root@store01 my-cluster]# vim ceph.conf
[global]
auth_service_required = cephx
filestore_xattr_use_omap = true
auth_client_required = cephx
auth_cluster_required = cephx
mon_host = 192.168.1.179
mon_initial_members = store01
fsid = b45a03be-3abf-4736-8475-f238e1f2f479
osd pool default size = 2
[root@store01 my-cluster]# ceph-deploy mon create-initial
[root@store01 my-cluster]# ll
total 28
-rw-r--r-- 1 root root   72 Dec 29 10:34 ceph.bootstrap-mds.keyring
-rw-r--r-- 1 root root   72 Dec 29 10:34 ceph.bootstrap-osd.keyring
-rw-r--r-- 1 root root   64 Dec 29 10:34 ceph.client.admin.keyring
-rw-r--r-- 1 root root  257 Dec 29 10:34 ceph.conf
-rw-r--r-- 1 root root 5783 Dec 29 10:34 ceph.log
-rw-r--r-- 1 root root   73 Dec 29 10:33 ceph.mon.keyring
[root@store01 my-cluster]# ceph-deploy disk list store01 store02
[ceph_deploy.conf][DEBUG] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO] Invoked (1.5.11): /usr/bin/ceph-deploy disk list store01 store02
[store01][DEBUG] connected to host: store01
[store01][DEBUG] detect platform information from remote host
[store01][DEBUG] detect machine type
[ceph_deploy.osd][INFO] Distro info: CentOS 6.6 Final
[ceph_deploy.osd][DEBUG] Listing disks on store01...
[store01][DEBUG] find the location of an executable
[store01][INFO] Running command: /usr/sbin/ceph-disk list
[store01][DEBUG] /dev/sda:
[store01][DEBUG]  /dev/sda1 other, ext4, mounted on /boot
[store01][DEBUG]  /dev/sda2 other, LVM2_member
[store01][DEBUG] /dev/sdb other, unknown
[store01][DEBUG] /dev/sdc other, unknown
[store01][DEBUG] /dev/sr0 other, unknown
[store02][DEBUG] connected to host: store02
[store02][DEBUG] detect platform information from remote host
[store02][DEBUG] detect machine type
[ceph_deploy.osd][INFO] Distro info: CentOS 6.6 Final
[ceph_deploy.osd][DEBUG] Listing disks on store02...
[store02][DEBUG] find the location of an executable
[store02][INFO] Running command: /usr/sbin/ceph-disk list
[store02][DEBUG] /dev/sda:
[store02][DEBUG]  /dev/sda1 other, ext4, mounted on /boot
[store02][DEBUG]  /dev/sda2 other, LVM2_member
[store02][DEBUG] /dev/sdb other, unknown
[store02][DEBUG] /dev/sdc other, unknown
[store02][DEBUG] /dev/sr0 other, unknown
[root@store01 my-cluster]# ceph-deploy disk zap store01:sd{b,c}
[root@store01 my-cluster]# ceph-deploy disk zap store02:sd{b,c}
[root@store01 my-cluster]# ceph-deploy osd create store01:sd{b,c}
[root@store01 my-cluster]# ceph-deploy osd create store02:sd{b,c}
[root@store01 my-cluster]# ceph status
    cluster e5c2f7f3-2c8a-4ae0-af26-ab0cf5f67343
     health HEALTH_OK
     monmap e1: 1 mons at {store01=192.168.1.179:6789/0}, election epoch 1, quorum 0 store01
     osdmap e18: 4 osds: 4 up, 4 in
      pgmap v28: 192 pgs, 3 pools, 0 bytes data, 0 objects
            136 MB used, 107 GB / 107 GB avail
                 192 active+clean
[root@store01 my-cluster]# ceph osd tree
# id    weight  type name       up/down reweight
-1      0.12    root default
-2      0.06            host store01
1       0.03                    osd.1   up      1
0       0.03                    osd.0   up      1
-3      0.06            host store02
3       0.03                    osd.3   up      1
2       0.03                    osd.2   up      1

(4) Create pools and users

Format: ceph osd pool create {pool-name} pg_num

Note on choosing pg_num:
Less than 5 OSDs: set pg_num to 128
Between 5 and 10 OSDs: set pg_num to 512
Between 10 and 50 OSDs: set pg_num to 4096
More than 50 OSDs: you need to understand the tradeoffs and calculate the pg_num value yourself

[root@store01 my-cluster]# ceph osd pool create volumes 128
pool 'volumes' created
[root@store01 my-cluster]# ceph osd pool create images 128
pool 'images' created
[root@store01 my-cluster]# ceph osd lspools
0 data,1 metadata,2 rbd,3 volumes,4 images,
[root@store01 my-cluster]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
[root@store01 my-cluster]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
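The two ceph auth get-or-create commands print the generated keys to stdout. To hand them to the OpenStack services they are normally saved as keyring files; a sketch (the file paths follow the usual /etc/ceph naming convention, adjust as needed):

[root@store01 my-cluster]# ceph auth get-or-create client.glance | tee /etc/ceph/ceph.client.glance.keyring
[root@store01 my-cluster]# ceph auth get-or-create client.cinder | tee /etc/ceph/ceph.client.cinder.keyring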

At this point, Ceph is configured.

You can now configure this Ceph cluster as a storage backend for OpenStack's Cinder, Nova, and Glance services.
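As one illustration (not from the original post), the Cinder side of such a setup typically points the RBD driver at the volumes pool created above; the excerpt below is an assumption based on the OpenStack releases contemporary with Ceph 0.72:

# /etc/cinder/cinder.conf (illustrative excerpt)
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
glance_api_version = 2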

Thank you for reading. That covers the steps of building and configuring Ceph. After studying this article you should have a deeper understanding of the process, though specific usage still needs to be verified in practice. More articles on related topics will follow; you are welcome to keep reading!
