2025-01-18 Update From: SLTechnology News & Howtos (Shulou.com)
Because of the blog's length limit, this guide is split into parts. Part 1, Hadoop 3.0.0-alpha2 installation (1), is at:
http://laowafang.blog.51cto.com/251518/1912342
5. FAQ
1. Problem: the native library is inconsistent with the current operating system version:
$ /data/hadoop/bin/hadoop checknative -a  # warning output:
2017-03-27 18:02:12,116 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Native library checking:
hadoop:  false
zlib:    false
zstd:    false
snappy:  false
lz4:     false
bzip2:   false
openssl: false
ISA-L:   false
Diagnosis:
(1) Check the GLIBC versions the Hadoop native library was built against:
# strings /data/hadoop/lib/native/libhadoop.so.1.0.0 | grep GLIBC
GLIBC_2.2.5
GLIBC_2.12
GLIBC_2.7
GLIBC_2.14
GLIBC_2.6
GLIBC_2.4
GLIBC_2.3.4
(2) Check the GLIBC versions provided by the local Linux system:
# strings /lib64/libc.so.6 | grep GLIBC
GLIBC_2.2.5
GLIBC_2.2.6
GLIBC_2.3
GLIBC_2.3.2
GLIBC_2.3.3
GLIBC_2.3.4
GLIBC_2.4
GLIBC_2.5
GLIBC_2.6
GLIBC_2.7
GLIBC_2.8
GLIBC_2.9
GLIBC_2.10
GLIBC_2.11
GLIBC_2.12
GLIBC_PRIVATE
Comparing the two lists shows that the local system does not provide GLIBC_2.14, which libhadoop.so requires, hence the error. The fix is either to upgrade glibc, or to recompile the Hadoop source on this machine against the local C library so that the local C library is used when running Hadoop.
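This comparison can be scripted. A minimal sketch: the function below diffs two version lists; in practice you would feed it the `strings ... | grep GLIBC` output from the two commands above, but here the sample data is hard-coded from the lists shown, so the example runs anywhere.

```shell
#!/bin/bash
# Print GLIBC version tags that appear in the "required" list but not in the
# "provided" list, i.e. versions the local libc lacks.
export LC_ALL=C   # deterministic sort order for comm

missing_glibc() {
    comm -23 <(printf '%s\n' "$1" | sort -u) <(printf '%s\n' "$2" | sort -u)
}

# Sample data taken from the output above (abbreviated):
required="GLIBC_2.2.5
GLIBC_2.12
GLIBC_2.14"
provided="GLIBC_2.2.5
GLIBC_2.12
GLIBC_2.7"
missing_glibc "$required" "$provided"
```

On the real machine, replace the two variables with the outputs of `strings /data/hadoop/lib/native/libhadoop.so.1.0.0 | grep GLIBC` and `strings /lib64/libc.so.6 | grep GLIBC`.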
Solution 1: build and install glibc 2.14:
# tar -jxvf glibc-2.14.tar.bz2
# cd glibc-2.14
# tar -jxvf ../glibc-linuxthreads-2.5.tar.bz2
# cd ..
# export CFLAGS="-g -O2"
# ./glibc-2.14/configure --prefix=/usr \
    --disable-profile --enable-add-ons \
    --with-headers=/usr/include \
    --with-binutils=/usr/bin
# make -j `grep processor /proc/cpuinfo | wc -l`
# make install
# Notes on the build:
(1) Extract glibc-linuxthreads into the glibc source directory.
(2) configure cannot be run from inside the glibc source directory; run it from a separate directory, as above.
(3) Set the optimization flags (export CFLAGS="-g -O2"), otherwise the build fails.
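After `make install`, it is worth confirming which C library version the system now reports before re-running the Hadoop check; a quick sanity check (works on any glibc-based system):

```shell
#!/bin/bash
# Report the C library version the dynamic loader will use.
getconf GNU_LIBC_VERSION        # e.g. "glibc 2.14" after a successful install
ldd --version | head -n1
```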
# /data/hadoop/bin/hadoop checknative -a  # verify again
2017-03-28 09:… INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
2017-03-28 09:… INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop:  true /data/hadoop-3.0.0-alpha2/lib/native/libhadoop.so.1.0.0
zlib:    true /lib64/libz.so.1
zstd:    false
snappy:  true /usr/lib64/libsnappy.so.1
lz4:     true revision:10301
bzip2:   true /lib64/libbz2.so.1
openssl: true /usr/lib64/libcrypto.so
ISA-L:   false libhadoop was built without ISA-L support
2017-03-28 09:43:19 INFO util.ExitUtil: Exiting with status 1
[root@master opt]# file /data/hadoop-3.0.0-alpha2/lib/native/libhadoop.so.1.0.0
/data/hadoop-3.0.0-alpha2/lib/native/libhadoop.so.1.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped
# Note: the remaining false entries above (highlighted in red in the original post) are still unresolved; they do not affect normal use for now. If you know the fix, please let me know, thanks.
$ ./start-all.sh  # restart; output:
WARNING: Attempting to start all Apache Hadoop daemons as hadoop in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [master]
Starting datanodes
Starting secondary namenodes [master]
Starting resourcemanager
Starting nodemanagers
Solution 2: recompile the Hadoop native library on this machine. # I have not tested this; see:
http://zkread.com/article/1187940.html
http://forevernull.com/category/%E9%97%AE%E9%A2%98%E8%A7%A3%E5%86%B3/
6. Miscellaneous
6.1 Compression formats
Four compression formats are widely used with Hadoop at present: lzo, gzip, snappy, and bzip2. Based on practical experience, here are the advantages, disadvantages, and typical use cases of each, so you can choose the right format for your situation.
1. gzip
Advantages: fairly high compression ratio and fast compression/decompression; supported by Hadoop itself, so applications process gzip files exactly like plain text; has a Hadoop native library; most Linux systems ship the gzip command, so it is easy to use.
Disadvantages: does not support splitting.
Use cases: consider gzip when each file compresses to within about 130 MB (one block). For example, compress each day's or each hour's logs into one gzip file, and get concurrency in MapReduce by running over many gzip files. Hive, streaming, and Java MapReduce programs handle them exactly like text, and no existing program needs modification.
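As a concrete sketch of the one-gzip-file-per-hour pattern above (the file name and HDFS path are made up for illustration; the `hadoop fs -put` line assumes a running cluster, so it is left commented out):

```shell
#!/bin/bash
# Compress one hour's log into a single non-splittable .gz;
# MapReduce will assign one mapper per such file.
printf 'GET /index.html 200\n' > access-2017032718.log
gzip -9 access-2017032718.log            # produces access-2017032718.log.gz
# hadoop fs -put access-2017032718.log.gz /logs/2017-03-27/   # hypothetical HDFS path
```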
2. lzo
Advantages: fast compression/decompression with a reasonable ratio; supports splitting, making it the most popular splittable format in Hadoop; has a Hadoop native library; the lzop command can be installed on Linux, so it is easy to use.
Disadvantages: lower compression ratio than gzip; not supported by Hadoop out of the box, so it must be installed; lzo files need special handling in applications (an index must be built to support splitting, and the input format must be set to the lzo InputFormat).
Use cases: large text files that are still bigger than about 200 MB after compression; the larger the single file, the more pronounced lzo's advantage.
3. snappy
Advantages: high compression speed and a reasonable ratio; has a Hadoop native library.
Disadvantages: does not support splitting; lower compression ratio than gzip; not supported by Hadoop out of the box, so it must be installed; no standard command-line tool on Linux.
Use cases: when a MapReduce job's map output is large, use snappy for the intermediate data between map and reduce, or for the output of one MapReduce job that feeds the input of another.
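The map-to-reduce intermediate-compression setup described above is normally configured in mapred-site.xml. A sketch using Hadoop's standard property names (verify them against your Hadoop version's mapred-default.xml):

```xml
<!-- mapred-site.xml: compress intermediate map output with Snappy -->
<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
```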
4. bzip2
Advantages: supports splitting; higher compression ratio than gzip; supported by Hadoop itself; Linux ships the bzip2 command, so it is easy to use.
Disadvantages: slow compression/decompression; no Hadoop native library support.
Use cases: as the output format of a MapReduce job when speed matters less than a high compression ratio; when output data is large and the processed data must be compressed and archived to save disk space while being accessed rarely; or when a single large text file must be compressed to save space while still supporting splitting and remaining compatible with existing applications (no code changes).
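The speed-versus-ratio trade-off is easy to observe locally. A minimal sketch comparing gzip and bzip2 on the same generated sample file (file names are made up for illustration):

```shell
#!/bin/bash
# Build a repetitive sample file, then compress it both ways, keeping the original.
yes 'sample log line' | head -n 10000 > sample.txt
gzip  -k -f -9 sample.txt      # produces sample.txt.gz
bzip2 -k -f -9 sample.txt      # produces sample.txt.bz2 (slower, usually smaller)
ls -l sample.txt sample.txt.gz sample.txt.bz2
```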
6.2 Cleanup
While testing the configuration you will restart frequently. It is recommended to stop the daemons each time, clean up the logs, and recreate the directories created below:
# mkdir -p /data/{hdfsname1,hdfsname2}/hdfs/name
# mkdir -p /data/{hdfsdata1,hdfsdata2}/hdfs/data
# rm -rf /data/hadoop/tmp
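These steps can be wrapped in a small reset helper. A sketch that assumes the daemons are already stopped; BASE defaults to the article's /data, but can be pointed elsewhere to try it safely:

```shell
#!/bin/bash
# Wipe the test HDFS name/data directories and tmp, then recreate the layout.
BASE="${BASE:-/data}"

reset_hdfs_dirs() {
    rm -rf "$BASE"/{hdfsname1,hdfsname2}/hdfs/name \
           "$BASE"/{hdfsdata1,hdfsdata2}/hdfs/data \
           "$BASE"/hadoop/tmp
    mkdir -p "$BASE"/{hdfsname1,hdfsname2}/hdfs/name \
             "$BASE"/{hdfsdata1,hdfsdata2}/hdfs/data
}
# Usage (after stop-all.sh):  BASE=/data reset_hdfs_dirs
```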
# There is still a lot to do; I will continue to improve this when I have time... whenever that may be ^_^
Political Commissar Liu 2017-04-01