In this article, the editor shares a walkthrough of compiling and packaging Ceph from source. I hope you will get something out of it; let's work through it together.
1.1 Clone the repository
The source is built and packaged on a 7u docker machine:
lvjian100081200005.et15sqa
git clone --recursive https://github.com/ceph/ceph.git
# or check out a specific release version
git clone -b v10.2.2 https://github.com/ceph/ceph.git
cd ceph
git status
git submodule update --init --recursive
git checkout master
1.2 Compile Ceph
Resolve the dependency packages:
# ceph-related dependencies can be listed from the spec file:
cat ceph.spec | grep -i require | sort -nk1 | uniq
sudo yum install libudev-devel libblkid-devel libuuid-devel libaio-devel fuse-devel \
    xfsprogs-devel expat-devel fcgi-devel libatomic_ops-devel python-libs python-babel python-sphinx gperftools-devel \
    boost-devel libunwind-devel libcurl-devel nss-devel Cython keyutils-libs-devel openssl-devel openldap-devel glibc-devel glibc-static -b current
# leveldb- and rocksdb-related dependencies
sudo yum install snappy-devel bzip2-devel gflags-devel leveldb-devel
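Alternatively, assuming the yum-utils package is available, yum-builddep can pull in everything listed in the spec's BuildRequires in one step:
# resolve and install all BuildRequires from the spec (requires yum-utils)
sudo yum-builddep ceph.spec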
You do not need to install rocksdb separately, because it is already bundled in the ceph source tree. Its compilation may fail with an error like "gflags namespace has been found as google"; work around it by adding a namespace alias before the main function:
//
// before main()
//
namespace google = gflags;

int main(int argc, char** argv) {
  google::SetUsageMessage(std::string("\nUSAGE:\n") + std::string(argv[0]) +
                          " [OPTIONS]...");
  google::ParseCommandLineFlags(&argc, &argv, false);
Compile rocksdb:
sudo yum install devtoolset-2-gcc devtoolset-2-gcc-c++ -b current
sudo yum install devtoolset-2-binutils -b current
source /opt/rh/devtoolset-2/enable
export CC=/opt/rh/devtoolset-2/root/usr/bin/gcc
export CPP=/opt/rh/devtoolset-2/root/usr/bin/cpp
export CXX=/opt/rh/devtoolset-2/root/usr/bin/c++
make -j2
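Before building, it is worth confirming that the devtoolset compiler is actually the one in use:
# quick sanity check that devtoolset-2's gcc is first on PATH
which gcc
gcc --version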
Compile ceph:
./do_cmake.sh
cd build
cmake -DCMAKE_INSTALL_PREFIX=/opt/ceph -DCMAKE_C_FLAGS="-O0 -g3" ..
make -j24
For version 10.2.2, modify do_autogen.sh or pass parameters to configure so that the installation directory goes under /opt/ceph:
./configure \
    --prefix=/opt/ceph --sbindir=/opt/ceph/sbin --localstatedir=/opt/ceph/var --sysconfdir=/opt/ceph/etc \
    --with-debug $with_profiler --with-nss --without-cryptopp --with-radosgw \
Also point the ceph-detect-init install directory at /opt/ceph: open src/ceph-detect-init/Makefile.am and apply the same settings:
./configure \
    --prefix=/opt/ceph --sbindir=/opt/ceph/sbin --localstatedir=/opt/ceph/var --sysconfdir=/opt/ceph/etc \
    --with-debug $with_profiler --with-nss --without-cryptopp --with-radosgw \
Then build and install:
./autogen.sh
./configure    # or: ./do_autogen.sh -J -L -D1
make -j24
make install DESTDIR=/opt/ceph
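Note that make install prepends DESTDIR to the configured prefix, so with both set to /opt/ceph the installed tree actually lands under /opt/ceph/opt/ceph. A quick check:
# binaries should appear under $DESTDIR$prefix
ls /opt/ceph/opt/ceph/bin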
1.3 RPM packaging
Resolve the packages that building the RPMs depends on:
# dependencies encountered while building the RPM packages
sudo yum install valgrind-devel sharutils libxml2-devel libxslt-devel -b current
sudo yum install python-nose python-requests python-virtualenv python-devel python-setuptools Cython python-pip binutils-devel -b current
sudo yum install redhat-lsb-core yum-utils -b current
sudo yum install java-devel junit hdparm -b current
# setuptools is also required (provided by python-setuptools above)
# the following packages are not available on 7u and need to be downloaded and installed manually
sudo rpm -ivh jq-1.5-4.fc25.x86_64.rpm
sudo rpm -ivh yasm-1.3.0-3.fc24.x86_64.rpm
sudo rpm -ivh xmlstarlet-1.6.1-6.fc24.x86_64.rpm
sudo rpm -ivh oniguruma-6.0.0-1.fc25.x86_64.rpm
sudo rpm -ivh fcgi-devel-2.4.0-15.3.x86_64.rpm
sudo rpm -ivh lib64fcgi0-2.4.0-15.mga4.x86_64.rpm
sudo rpm -ivh fcgi-2.4.0-15.3.x86_64.rpm
sudo rpm -ivh jemalloc-4.2.1-1.fc26.x86_64.rpm
sudo rpm -ivh jemalloc-devel-4.2.1-1.fc26.x86_64.rpm
In addition, change "WITH_PYTHON3=ON" to "WITH_PYTHON3=OFF" in the ceph.spec file.
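One way to flip that flag in place (a sketch, assuming the string appears verbatim in the spec):
# switch the python3 build flag off
sed -i 's/WITH_PYTHON3=ON/WITH_PYTHON3=OFF/' ceph.spec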
./make_dist.sh
Disable generation of the debug package:
echo "%debug_package %{nil}" > ~/.rpmmacros
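You can confirm the macro override took effect; with the override in place, the macro expands to nothing:
# prints nothing if %debug_package is now %{nil}
rpm --eval '%debug_package'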
Modify the ceph.spec file:
%global _prefix /opt/ceph
%global _libexecdir %{_exec_prefix}/lib
%global __libtool_path ^%{buildroot}%{_libdir}/.*\.la$
%global _sysconfdir %{_prefix}%{_sysconfdir}
%global _localstatedir %{_prefix}%{_localstatedir}
%debug_package %{nil}
#mv %{buildroot}/sbin/mount.ceph %{buildroot}/usr/sbin/mount.ceph
#mv %{buildroot}/sbin/mount.fuse.ceph %{buildroot}/usr/sbin/mount.fuse.ceph
mv %{buildroot}/sbin/mount.ceph %{buildroot}%{_prefix}/sbin/mount.ceph
mv %{buildroot}/sbin/mount.fuse.ceph %{buildroot}%{_prefix}/sbin/mount.fuse.ceph
mv %{buildroot}/usr/sbin/ceph-disk %{buildroot}%{_prefix}/sbin/ceph-disk
mv %{buildroot}/usr/bin/ceph-detect-init %{buildroot}%{_prefix}/bin/ceph-detect-init
rpmbuild -ba ceph.spec
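By default rpmbuild writes its output under ~/rpmbuild (the %_topdir macro), so the resulting packages can be listed with:
# binary RPMs and the source RPM
ls ~/rpmbuild/RPMS/x86_64/ ~/rpmbuild/SRPMS/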
Package with t-abs:
t-abs ceph.spec
sudo rpm -ivh librados2-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh libradosstriper1-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh python-rados-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh ceph-radosgw-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh ceph-base-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh ceph-common-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh librbd1-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh rbd-fuse-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh rbd-mirror-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh rbd-nbd-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh python-rbd-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh librgw2-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh libcephfs1-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh ceph-fuse-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh python-cephfs-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh ceph-libs-compat-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh python-ceph-compat-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh ceph-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh ceph-osd-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh ceph-mds-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh ceph-mon-10.2.2-0.rhel7.x86_64.rpm
# the devel-related packages do not have to be installed
sudo rpm -ivh librados2-devel-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh librgw2-devel-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh libradosstriper1-devel-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh librbd1-devel-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh libcephfs1-devel-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh ceph-devel-compat-10.2.2-0.rhel7.x86_64.rpm
# the test, debug, and java related packages do not have to be installed
sudo rpm -ivh ceph-test-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh ceph-debuginfo-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh libcephfs_jni1-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh libcephfs_jni1-devel-10.2.2-0.rhel7.x86_64.rpm
sudo rpm -ivh cephfs-java-10.2.2-0.rhel7.x86_64.rpm
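Installing the packages one at a time means following the dependency order by hand; passing the set you want to rpm in a single transaction lets it work out the ordering itself (a sketch, run from the directory containing the packages):
# one transaction; rpm resolves inter-package install order
sudo rpm -ivh *.rpm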
1.4 Test
Test the ceph cluster locally:
# start a local cluster with vstart
./vstart.sh -d -n -x -l
./bin/ceph -s
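When you are finished, the vstart cluster can be shut down from the same directory as vstart.sh:
# stop all daemons started by vstart.sh
./stop.sh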
If the local file system is ext4, the following values need to be adjusted:
filestore_max_xattr_value_size_other = 4086
osd max object name len = 1024
osd max object namespace len = 256
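These settings go in the [osd] section of the cluster's ceph.conf (for a vstart cluster, the ceph.conf generated in the working directory), for example:
[osd]
filestore_max_xattr_value_size_other = 4086
osd max object name len = 1024
osd max object namespace len = 256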
Start the osd nodes:
init-ceph start osd.*
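After starting them, confirm that the OSDs report as up and in:
# every OSD should show up/in in the tree
./bin/ceph osd tree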
Run the rados bench test locally:
$ rados -p rbd bench 30 write
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 30 seconds or 0 objects
Object prefix: benchmark_data_lvjian100081200005.et15sqa_44248
  sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat(s)  avg lat(s)
    0       0         0         0         0         0            -          0
    1      16        38        22   87.9956        88     0.233292   0.179826
    2      16        38        22   43.9967         0            -   0.179826
    3      16        38        22   29.3311         0            -   0.179826
    4      16        38        22   21.9984         0            -   0.179826
    5      16        38        22   17.5987         0            -   0.179826
    6      16        82        66   43.9966      35.2     0.136914    1.34638
    7      16        82        66   37.7113         0            -    1.34638
    8      16        82        66   32.9974         0            -    1.34638
    9      16        82        66    29.331         0            -    1.34638
   10      16        82        66   26.3979         0            -    1.34638
   11      16       113        97   35.2699      24.8      0.16824    1.77199
   12      16       113        97   32.3307         0            -    1.77199
   13      16       113        97   29.8437         0            -    1.77199
   14      16       113        97    27.712         0            -    1.77199
   15      16       113        97   25.8646         0            -    1.77199
   16      16       137       121   30.2476      19.2    0.0883345    2.09566
   17      16       138       122   28.7035         4     0.164464    2.07983
   18      16       138       122   27.1089         0            -    2.07983
   19      16       138       122   25.6821         0            -    2.07983
2016-10-18 14:04:15 min lat: 0.0330393 max lat: 5.18978 avg lat: 2.07983
  sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat(s)  avg lat(s)
   20      16       138       122    24.398         0            -    2.07983
   21      16       149       133   25.3312        11      5.10087     2.3338
   22      16       172       156   28.3612        92     0.181053    2.16421
   23      16       172       156   27.1281         0            -    2.16421
   24      16       172       156   25.9978         0            -    2.16421
   25      16       172       156   24.9579         0            -    2.16421
   26      16       172       156   23.9979         0            -    2.16421
   27      16       203       187   27.7013      24.8     0.133892    2.24739
   28      16       203       187    26.712         0            -    2.24739
   29      16       203       187   25.7909         0            -    2.24739
   30      16       203       187   24.9312         0            -    2.24739
   31      16       203       187    24.127         0            -    2.24739
Total time run:         31.372199
Total writes made:      204
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     26.0103
Stddev Bandwidth:       23.3087
Max bandwidth (MB/sec): 92
Min bandwidth (MB/sec): 0
Average IOPS:           6
Stddev IOPS:            5
Max IOPS:               23
Min IOPS:               0
Average Latency(s):     2.46023
Stddev Latency(s):      2.47093
Max latency(s):         5.30316
Min latency(s):         0.0224793
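A sequential-read pass can be benchmarked against the same objects; this requires keeping the write bench's objects around first (a sketch using standard rados bench options):
# write without deleting the benchmark objects, read them back, then clean up
rados -p rbd bench 30 write --no-cleanup
rados -p rbd bench 30 seq
rados -p rbd cleanup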
Create an rbd image:
[jianshu.ljs@lvjian100081200005.et15sqa /home/jianshu.ljs/ceph_10.2.2/src]
$ rbd create rbd_test_image_01 --size 1000
[jianshu.ljs@lvjian100081200005.et15sqa /home/jianshu.ljs/ceph_10.2.2/src]
$ rbd ls
rbd_test_image_01
[jianshu.ljs@lvjian100081200005.et15sqa /home/jianshu.ljs/ceph_10.2.2/src]
$ rbd info rbd_test_image_01
rbd image 'rbd_test_image_01':
        size 1000 MB in 250 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.102e74b0dc51
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        flags:
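To actually use the image, it can be mapped through the kernel rbd client. Note that older kernels reject the newer image features shown above, in which case they have to be disabled first (a sketch):
# older kernel clients only support layering; disable the rest before mapping
rbd feature disable rbd_test_image_01 exclusive-lock object-map fast-diff deep-flatten
sudo rbd map rbd_test_image_01
rbd showmapped
sudo rbd unmap /dev/rbd0    # device name as reported by showmapped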
After reading this article, I believe you have a working picture of how Ceph is compiled from source and packaged. Thank you for reading!