
Install snappy for hadoop cdh version


1. Install protobuf

On an Ubuntu system:

First, make sure the build tools are installed: gcc, g++ (the gcc-c++ package on RHEL-style systems), libtool, and cmake.
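For reference, on Ubuntu these can be installed with apt; a minimal sketch, assuming the stock package names gcc, g++, make, cmake, and libtool:

sudo apt-get update
sudo apt-get install -y gcc g++ make cmake libtool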

1. Create a file named libprotobuf.conf in the /etc/ld.so.conf.d/ directory containing the single line /usr/local/lib, then run ldconfig. Otherwise protoc will fail with "error while loading shared libraries: libprotoc.so.8: cannot open shared object file".

2. Run ./configure, then make && make install.
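Putting these steps together, a minimal sketch (it assumes the protobuf 2.5.0 source was downloaded as protobuf-2.5.0.tar.gz, matching the version verified below):

tar -xzf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0
./configure
make && sudo make install
echo "/usr/local/lib" | sudo tee /etc/ld.so.conf.d/libprotobuf.conf
sudo ldconfig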

Verify that the installation is complete:

protoc --version

libprotoc 2.5.0

2. Install the snappy native library

http://www.filewatcher.com/m/snappy-1.1.1.tar.gz.1777992-0.html

Download snappy-1.1.1.tar.gz

Extract the archive, then run ./configure

make && make install
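As a consolidated sketch of this step (assuming the tarball from the link above was saved as snappy-1.1.1.tar.gz in the current directory):

tar -xzf snappy-1.1.1.tar.gz
cd snappy-1.1.1
./configure
make && sudo make install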

Check /usr/local/lib for the following files:

libsnappy.a
libsnappy.la
libsnappy.so
libsnappy.so.1
libsnappy.so.1.2.0

3. Compile the CDH Hadoop source code (to add snappy support)

Download link: http://archive.cloudera.com/cdh5/cdh/5/

hadoop-2.6.0-cdh5.11.0-src.tar.gz

Extract the archive and compile it with Maven.
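The article does not give the exact Maven command; a typical invocation for a native build that bundles snappy looks like the following sketch (the -Pdist,native profile and the require.snappy / snappy.lib / bundle.snappy properties are standard Hadoop build options; /usr/local/lib is where snappy was installed above):

cd hadoop-2.6.0-cdh5.11.0
mvn clean package -Pdist,native -DskipTests -Dtar -Drequire.snappy -Dsnappy.lib=/usr/local/lib -Dbundle.snappy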

4. Check the compiled files

hadoop-2.6.0-cdh5.11.0/hadoop-dist/target/hadoop-2.6.0-cdh5.11.0/lib/native

Confirm that the hadoop and snappy native libraries are present in this directory.

5. Copy the files in this directory to Hadoop's lib/native directory on the cluster, and to HBase's lib/native/Linux-amd64-64 directory (create the directory if it does not exist). Every node needs its own copy.

cp ~/apk/hadoop-2.6.0-cdh5.11.0/hadoop-dist/target/hadoop-2.6.0-cdh5.11.0/lib/native/* ~/app/hadoop/lib/native/

6. Synchronize the native libraries to the other nodes
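For example, with scp in a loop (a sketch; the node names are placeholders, and the ~/app/hadoop and ~/app/hbase paths follow the layout used above):

for node in node2 node3; do
  ssh $node "mkdir -p ~/app/hadoop/lib/native ~/app/hbase/lib/native/Linux-amd64-64"
  scp ~/app/hadoop/lib/native/* $node:~/app/hadoop/lib/native/
  scp ~/app/hadoop/lib/native/* $node:~/app/hbase/lib/native/Linux-amd64-64/
done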

7. Configure core-site.xml for hadoop

Add the following property:

<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec</value>
</property>

Configure mapred-site.xml

Add the following properties:

<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
<property>
  <name>mapreduce.admin.user.env</name>
  <value>LD_LIBRARY_PATH=/home/hadoop/app/hadoop/lib/native</value>
</property>

Configure hbase-site.xml for hbase

Add the following property:

<property>
  <name>hbase.block.data.cachecompressed</name>
  <value>true</value>
</property>

8. Restart Hadoop's HDFS and YARN

9. Verify that snappy works

hadoop checknative

18/03/07 17:33:36 WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library system-native, will use pure-Java version

18/03/07 17:33:36 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library

Native library checking:

hadoop:  true /home/hadoop/app/hadoop/lib/native/libhadoop.so
zlib:    true /lib/x86_64-linux-gnu/libz.so.1
snappy:  true /home/hadoop/app/hadoop/lib/native/libsnappy.so.1
lz4:     true revision:10301
bzip2:   false
openssl: true /usr/lib/x86_64-linux-gnu/libcrypto.so

You can see that snappy is now supported.

Run a MapReduce task:

hadoop jar ~/app/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.11.0.jar wordcount /input/gisData /output
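Once the job finishes, you can take a quick look at the result (a sketch; the /output path matches the command above, and part-r-00000 is the usual name of the first reducer's output file):

hadoop fs -ls /output
hadoop fs -cat /output/part-r-00000 | head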

If the job runs normally, snappy is working. If instead you see:

Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z

check the native library configuration in mapred-site.xml (the mapreduce.admin.user.env setting above).

10. Start HBase

First create a snappy-compressed table:

create 'snappyTest', {NAME => 'f', COMPRESSION => 'SNAPPY'}

describe 'snappyTest'

In the output you should see TTL => 'FOREVER', COMPRESSION => 'SNAPPY', MIN_VERSIONS => '0'; the COMPRESSION => 'SNAPPY' entry is what confirms it.

The more important case is enabling compression on existing tables.

This can be done from outside the HBase shell:

$echo "disable 'snappyTest2'" | hbase shell # disable the table

$echo "desc 'snappyTest2'" | hbase shell # View table structure

| $echo "alter 'snappyTest2', {NAME= >' fallow journal compression = > 'SNAPPY'}" | hbase shell # compression is modified to snappy |

$echo "enable 'snappyTest2'" | hbase shell # uses this table

$echo "major_compact 'snappyTest2'" | hbase shell # it is best to make the table region compact once

You can also enter the hbase shell and run the compaction manually. After compaction you will find the data has a compression ratio of about 40%.
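To see the effect on disk, you can compare the table's size on HDFS before and after the compaction (a sketch; the path assumes the default hbase.rootdir of /hbase and the default namespace):

hdfs dfs -du -s -h /hbase/data/default/snappyTest2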

In Java code, you only need to set this when creating the HBase table:

HColumnDescriptor HColumnDesc = new HColumnDescriptor("data");
HColumnDesc.setCompressionType(Algorithm.SNAPPY); // this line is the key
