How to install snappy on Hadoop+HBase
This article explains how to install snappy on Hadoop+HBase, a question that comes up often in day-to-day operations. Drawing on a variety of sources, the steps below lay out a simple, easy-to-follow procedure; hopefully it resolves your doubts. Let's work through it.
1. Check whether the snappy package is installed
The command is: bin/hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/test.txt snappy
If the output looks like this:
12-12-03 10:30:02 WARN metrics.SchemaConfigured: Could not determine table and column family of the HFile path file:/tmp/test.txt. Expecting at least 5 path components.
12-12-03 10:30:02 WARN snappy.LoadSnappy: Snappy native library not loaded
Exception in thread "main" java.lang.RuntimeException: native snappy library not available
    at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:123)
    at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:100)
    at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:112)
    at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.getCompressor(Compression.java:264)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.<init>(HFileBlock.java:739)
    at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.finishInit(HFileWriterV2.java:127)
    at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.<init>(HFileWriterV2.java:118)
    at org.apache.hadoop.hbase.io.hfile.HFileWriterV2$WriterFactoryV2.createWriter(HFileWriterV2.java:101)
    at org.apache.hadoop.hbase.io.hfile.HFile$WriterFactory.create(HFile.java:394)
    at org.apache.hadoop.hbase.util.CompressionTest.doSmokeTest(CompressionTest.java:108)
then the native Snappy library is not installed.
2. Download a snappy-*.tar.gz source package (any version compatible with your HBase will do; mine was snappy-1.1.1.tar.gz) and extract it, as sketched below.
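A minimal sketch of the download and extraction. The snappy project has changed hosts over the years, so the URL below is only a placeholder; fetch the tarball from whichever mirror actually hosts your version:

# download the source tarball (placeholder URL -- substitute your real mirror)
wget https://example.com/snappy-1.1.1.tar.gz
# unpack the source tree
tar -xzf snappy-1.1.1.tar.gz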
3. Go to the snappy directory and compile it with two commands:
./configure
make
4. After make finishes, a libsnappy.so file is generated (this is the library we need!). Normally it ends up inside the build tree at ./.libs/libsnappy.so, but the build does not always follow that routine and the file sometimes lands in another folder. As long as make reported no errors, the file exists somewhere; search for it from the root directory, as shown below, and you will find it.
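One quick way to search, assuming you have read access from the root (redirecting stderr silences "permission denied" noise; a whole-filesystem search can be slow, so narrow the starting path if you know roughly where the build ran):

# locate the freshly built library anywhere on the filesystem
find / -name "libsnappy.so*" 2>/dev/null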
5. Copy the generated libsnappy.so into HBase's lib/native/Linux-ARCH directory, where Linux-ARCH is Linux-amd64-64 on 64-bit systems or Linux-i386-32 on 32-bit ones. Note that on amd64 this directory may not ship with HBase; in that case, create it manually:
mkdir -p /opt/hbase-0.98.6.1/lib/native/Linux-amd64-64
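Then copy the library in. A minimal sketch, assuming the libtool build layout from step 4 and the example HBase path above (adjust both to your setup):

# copy the freshly built library into HBase's native-library directory
cp snappy-1.1.1/.libs/libsnappy.so /opt/hbase-0.98.6.1/lib/native/Linux-amd64-64/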
6. If you are still not sure where HBase looks for native libraries, you can change the log level to DEBUG in HBase's log4j configuration and watch the output; a hedged example follows.
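For instance, in conf/log4j.properties (the exact logger keys vary by HBase version, so treat this as an assumption rather than the stock configuration):

# in /opt/hbase-0.98.6.1/conf/log4j.properties -- raise HBase logging to DEBUG
log4j.logger.org.apache.hadoop.hbase=DEBUG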
7. Rerun the command from step 1. The output should now look like this:
12-12-03 10:34:35 INFO util.ChecksumType: Checksum can use java.util.zip.CRC32
12-12-03 10:34:35 INFO util.ChecksumType: org.apache.hadoop.util.PureJavaCrc32C not available.
12-12-03 10:34:35 DEBUG util.FSUtils: Creating file:file:/tmp/test.txt with permission:rwxrwxrwx
12-12-03 10:34:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... Using builtin-java classes where applicable
12-12-03 10:34:35 WARN metrics.SchemaConfigured: Could not determine table and column family of the HFile path file:/tmp/test.txt. Expecting at least 5 path components.
12-12-03 10:34:35 WARN snappy.LoadSnappy: Snappy native library is available
12-12-03 10:34:35 WARN snappy.LoadSnappy: Snappy native library not loaded
Exception in thread "main" java.lang.RuntimeException: native snappy library not available
    at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:123)
    at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:100)
    at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:112)
    at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.getCompressor(Compression.java:264)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.<init>(HFileBlock.java:739)
    at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.finishInit(HFileWriterV2.java:127)
    at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.<init>(HFileWriterV2.java:118)
    at org.apache.hadoop.hbase.io.hfile.HFileWriterV2$WriterFactoryV2.createWriter(HFileWriterV2.java:101)
    at org.apache.hadoop.hbase.io.hfile.HFile$WriterFactory.create(HFile.java:394)
    at org.apache.hadoop.hbase.util.CompressionTest.doSmokeTest(CompressionTest.java:108)
    at org.apache.hadoop.hbase.util.CompressionTest.main(CompressionTest.java:138)
8. As you can see, Snappy is now found ("available") but still not loaded. To get it loaded, you also need to copy Hadoop's native library into the same directory as libsnappy.so. Hadoop's native library lives at:
hadoop-1.2.1/lib/native/Linux-ARCH/libhadoop.so
If the file is not present at that path, download the Hadoop tarball matching the version you are running from https://archive.apache.org/dist/hadoop/core/ and extract it; you will find the required file inside. Then copy it next to libsnappy.so, as shown below.
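A minimal sketch using the versions and paths already shown in this article (adjust them to your own layout):

# place Hadoop's native library alongside libsnappy.so in the HBase native directory
cp hadoop-1.2.1/lib/native/Linux-amd64-64/libhadoop.so /opt/hbase-0.98.6.1/lib/native/Linux-amd64-64/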
9. Run the test command from step 1 once more; this time you get:
12-12-03 10:37:48 INFO util.ChecksumType: org.apache.hadoop.util.PureJavaCrc32 not available.
12-12-03 10:37:48 INFO util.ChecksumType: Checksum can use java.util.zip.CRC32
12-12-03 10:37:48 INFO util.ChecksumType: org.apache.hadoop.util.PureJavaCrc32C not available.
12-12-03 10:37:48 DEBUG util.FSUtils: Creating file:file:/tmp/test.txt with permission:rwxrwxrwx
12-12-03 10:37:48 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12-12-03 10:37:48 WARN metrics.SchemaConfigured: Could not determine table and column family of the HFile path file:/tmp/test.txt. Expecting at least 5 path components.
12-12-03 10:37:48 WARN snappy.LoadSnappy: Snappy native library is available
12-12-03 10:37:48 INFO snappy.LoadSnappy: Snappy native library loaded
12-12-03 10:37:48 INFO compress.CodecPool: Got brand-new compressor
12-12-03 10:37:48 DEBUG hfile.HFileWriterV2: Initialized with CacheConfig:disabled
12-12-03 10:37:49 WARN metrics.SchemaConfigured: Could not determine table and column family of the HFile path file:/tmp/test.txt. Expecting at least 5 path components.
12-12-03 10:37:49 INFO compress.CodecPool: Got brand-new decompressor
SUCCESS
Seeing SUCCESS indicates that the installation succeeded and the Snappy package is ready to use. Done.
This concludes the study of how to install snappy on Hadoop+HBase; hopefully it has answered your questions. Pairing theory with practice is the best way to learn, so go give it a try! If you want to keep learning, stay tuned to this site for more practical articles.