This article introduces the installation methods of distributed storage Ceph. It is quite detailed and has some reference value; interested readers should find it worth following along.
I. Source code installation
Note: installing from source is a good way to learn about the system's components, but the process is quite laborious, mainly because of the large number of dependencies. At the time I tried it on both CentOS and Ubuntu, and it can be installed on both.
1. Download ceph from http://ceph.com/download/
wget http://ceph.com/download/ceph-0.72.tar.gz
2. Install the build tools
apt-get install automake autoconf libtool make
3. Unpack the source and generate the build scripts
# tar zxvf ceph-0.72.tar.gz
# cd ceph-0.72
# ./autogen.sh
4. Install the dependency packages first
# apt-get install autotools-dev autoconf automake cdbs g++ gcc git libatomic-ops-dev libboost-dev \
libcrypto++-dev libcrypto++ libedit-dev libexpat1-dev libfcgi-dev libfuse-dev \
libgoogle-perftools-dev libgtkmm-2.4-dev libtool pkg-config uuid-dev libkeyutils-dev \
btrfs-tools
Errors that may be encountered during the build:
4.1 fuse:
apt-get install fuse-devel
4.2 tcmalloc:
wget https://gperftools.googlecode.com/files/gperftools-2.1.zip
Install google-perftools
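A typical build of the downloaded archive (a sketch using the standard autotools steps; the directory name is assumed from the archive name):
unzip gperftools-2.1.zip
cd gperftools-2.1
./configure
make
sudo make install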
4.3 libedit:
apt-get install libedit-devel
4.4 no libatomic-ops found
apt-get install libatomic_ops-devel
4.5 snappy:
https://snappy.googlecode.com/files/snappy-1.1.1.tar.gz
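Snappy builds the same way (a sketch, assuming the tarball unpacks to snappy-1.1.1):
tar zxvf snappy-1.1.1.tar.gz
cd snappy-1.1.1
./configure
make
sudo make install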
4.6 libleveldb not found:
https://leveldb.googlecode.com/files/leveldb-1.14.0.tar.gz
make
cp libleveldb.* /usr/lib
cp -r include/leveldb /usr/local/include
4.7 libaio
apt-get install libaio-dev
4.8 boost
apt-get install libboost-dev
apt-get install libboost-thread-dev
apt-get install libboost-program-options-dev
4.9 g++
apt-get install g++
5. Compile and install
# ./configure --prefix=/opt/ceph/
# make
# make install
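If the build succeeds, the binaries land under the --prefix given above; a quick sanity check (the path is assumed from that prefix):
/opt/ceph/bin/ceph -v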
II. Installing the ceph packages bundled with Ubuntu 12.04 (the bundled version may be 0.41)
Environment:
Two machines, one server and one client, both running Ubuntu 12.04.
When installing the server, create two extra partitions to serve as the storage for osd0 and osd1. If you did not, you can still use loop devices to simulate the two partitions after the system is installed (see the sketch below).
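A minimal sketch of the loop-device approach (file paths and sizes here are only examples):
dd if=/dev/zero of=/srv/osd0.img bs=1M count=4096
dd if=/dev/zero of=/srv/osd1.img bs=1M count=4096
losetup /dev/loop0 /srv/osd0.img
losetup /dev/loop1 /srv/osd1.img
mkfs.xfs -f /dev/loop0
mkfs.xfs -f /dev/loop1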
1. Install CEPH (MON, MDS, OSD) on the server
apt-cache search ceph
apt-get install ceph
apt-get install ceph-common
2. Add key to APT, update sources.list, and install ceph
wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
echo deb http://ceph.com/debian/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update && sudo apt-get install ceph
3. View the version
# ceph -v   # displays the ceph version and key information
If it is not displayed, execute the following command:
# sudo apt-get update && sudo apt-get upgrade
4. Configuration file
# vim /etc/ceph/ceph.conf
[global]
# For version 0.55 and beyond, you must explicitly enable
# or disable authentication with "auth" entries in [global].
auth cluster required = none
auth service required = none
auth client required = none
[osd]
osd journal size = 1000
# The following assumes ext4 filesystem.
filestore xattr use omap = true
# For Bobtail (v0.56) and subsequent versions, you may
# add settings for mkcephfs so that it will create and mount
# the file system on a particular OSD for you. Remove the comment `#`
# character for the following settings and replace the values
# in braces with appropriate values, or leave the following settings
# commented out to accept the default values. You must specify the
# --mkfs option with mkcephfs in order for the deployment script to
# utilize the following settings, and you must define the 'devs'
# option for each osd instance; see below.
osd mkfs type = xfs
osd mkfs options xfs = -f   # default for xfs is "-f"
osd mount options xfs = rw,noatime   # default mount option is "rw,noatime"
# For example, for ext4, the mount options might look like this:
# osd mount options ext4 = user_xattr,rw,noatime
# Execute $ hostname to retrieve the name of your host,
# and replace {hostname} with the name of your host.
# For the monitor, replace {ip-address} with the IP
# address of your host.
[mon.a]
host = ceph2
mon addr = 192.168.1.1:6789
[osd.0]
host = ceph2
# For Bobtail (v0.56) and subsequent versions, you may
# add settings for mkcephfs so that it will create and mount
# the file system on a particular OSD for you. Remove the comment `#`
# character for the following setting for each OSD and specify
# a path to the device if you use mkcephfs with the --mkfs option.
devs = /dev/sdb1
[mds.a]
host = ceph2
5. Perform initialization
sudo mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.keyring
Note: the old data directories must be emptied before each initialization.
rm -rf /var/lib/ceph/osd/ceph-0/*
rm -rf /var/lib/ceph/osd/ceph-1/*
rm -rf /var/lib/ceph/mon/ceph-a/*
rm -rf /var/lib/ceph/mds/ceph-a/*
mkdir -p /var/lib/ceph/osd/ceph-0
mkdir -p /var/lib/ceph/osd/ceph-1
mkdir -p /var/lib/ceph/mon/ceph-a
mkdir -p /var/lib/ceph/mds/ceph-a
6. Start
sudo service ceph -a start
7. Check the health of the cluster
ceph health
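On a healthy cluster this simply prints HEALTH_OK; the HEALTH_WARN and HEALTH_ERR states shown later in this article indicate degraded or stuck placement groups.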
8. Mounting the disk formatted as ext4 failed with error 5, so I later used
mkfs.xfs -f /dev/sda7
and after that it basically worked.
9. Operate on the client:
sudo mkdir /mnt/mycephfs
sudo mount -t ceph {ip-address-of-monitor}:6789:/ /mnt/mycephfs
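To confirm the mount succeeded (the mount point is the one created above):
df -h /mnt/mycephfs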
III. Ceph-deploy installation
1. Download
https://github.com/ceph/ceph-deploy/archive/master.zip
2. Install python-virtualenv and bootstrap ceph-deploy
apt-get install python-virtualenv
./bootstrap
3.
ceph-deploy install ubuntu1
4.
ceph-deploy new ubuntu1
5.
ceph-deploy mon create ubuntu1
6.
ceph-deploy gatherkeys ubuntu1
If the error prompt says there is no keyring, run:
ceph-deploy forgetkeys
The following keyrings will be generated:
{cluster-name}.client.admin.keyring
{cluster-name}.bootstrap-osd.keyring
{cluster-name}.bootstrap-mds.keyring
7.
ceph-deploy osd create ubuntu1:/dev/sdb1   (the disk path)
Errors you may encounter:
1. The disk is already mounted; unmount it first with umount.
2. Disk formatting problems; partition the disk with fdisk and format it with mkfs.xfs -f /dev/sdb1.
8.
ceph -s
Errors may be encountered:
There is no osd:
health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
Then execute ceph osd create.
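To see how many OSDs exist and whether they are up, the standard subcommands can be used:
ceph osd tree
ceph osd stat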
9.
cluster faf5e4ae-65ff-4c95-ad86-f1b7cbff8c9a
health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
monmap e1: 1 mons at {ubuntu1=12.0.0.115:6789/0}, election epoch 1, quorum 0 ubuntu1
osdmap e10: 3 osds: 1 up, 1 in
pgmap v17: 192 pgs, 3 pools, 0 bytes data, 0 objects
1058 MB used, 7122 MB / 8181 MB avail
192 active+degraded
10. Mounting on the client
Note: you need to mount with a user name and key.
10.1 View the key
cat /etc/ceph/ceph.client.admin.keyring
ceph-authtool --print-key ceph.client.admin.keyring
AQDNE4xSyN1WIRAApD1H/glMB5VSLwmmnt7UDw==
10.2 Mount
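The article does not show the mount command itself; a sketch based on the standard CephFS mount syntax, reusing the monitor address from step 9 and the key printed above:
sudo mount -t ceph 12.0.0.115:6789:/ /mnt/mycephfs -o name=admin,secret=AQDNE4xSyN1WIRAApD1H/glMB5VSLwmmnt7UDw==
Passing the key on the command line leaves it in the shell history; the secretfile option reads it from a file instead:
sudo mount -t ceph 12.0.0.115:6789:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret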
Other notes:
1. Set up passwordless SSH authentication between the machines with ssh-keygen (see the sketch after this list).
2. It is best to give the storage a separate disk partition; there are several different ways to format it.
3. You will always run into errors of all kinds; each one has to be analyzed and resolved on its own.
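A sketch of the passwordless SSH setup from point 1 (the host name follows the ubuntu1 example used above; adjust the user as needed):
ssh-keygen -t rsa
ssh-copy-id root@ubuntu1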
That is all the content of this article on the installation methods of distributed storage Ceph. Thank you for reading, and I hope it helps.