Shulou (shulou.com) | SLTechnology News & Howtos > Internet Technology
2025-01-16 Update
This article explains how to install and use Apache Kylin. I found the process practical, so I am sharing it here for reference.
Environment
I chose Kylin version 1.5.4 because I bought a book, the "Apache Kylin Authoritative Guide", which is based on 1.5.x. To avoid unnecessary pitfalls, I kept my version consistent with the book.
For the Kylin installation environment, refer to the official "Hadoop Environment" page. Below is my own setup. It is for learning only, so everything is installed pseudo-distributed, with no attention to high availability.
Ubuntu 14.04.5 LTS
hadoop-2.7.1.tar.gz
jdk-8u172-linux-x64.tar.gz
hbase-1.2.5-bin.tar.gz
apache-kylin-1.5.4-HBase1.x-bin.tar.gz
apache-hive-1.2.1-bin.tar.gz
Pay special attention to:
The Kylin version must match the HBase version; see the official site ("Hadoop Environment"). You can also read the match off the Kylin package name.
The Hadoop and HBase versions must be compatible (see "hbase hadoop version").
The JDK and HBase versions must be compatible (see "hbase jdk version").
The Hive and JDK versions must be compatible (see "hive jdk version").
It is best to install in a Linux environment. On macOS the Kylin startup script reports an error; you can patch the script, but I did not solve it ("mac cannot start kylin"). Ubuntu is not smooth either: starting Kylin also reports an error until you change the script. CentOS worked for me without any errors, so it is the safest choice.
Installation
Download the installation packages. The main Apache download link carries all Apache packages, but it is slow and some older packages are missing; those can be downloaded from the Apache Software Foundation Distribution Directory. Then decompress them.
Set environment variables:
export JAVA_HOME=/root/jdk1.8.0_172
export HADOOP_HOME=/root/hadoop-2.7.1
export HIVE_HOME=/root/hive-1.2.1
export HBASE_HOME=/root/hbase-1.2.5
export KYLIN_HOME=/root/kylin-1.5.4
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$HBASE_HOME/bin:$KYLIN_HOME/bin
To install Hadoop, the main files to edit are core-site.xml, hadoop-env.sh, hdfs-site.xml, mapred-site.xml, and yarn-site.xml (all under $HADOOP_HOME/etc/hadoop; see "hadoop Pseudo-Distributed Operation").
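For reference, a minimal pseudo-distributed sketch of two of those files, using the standard Hadoop single-node quickstart values (these values are assumptions consistent with the hbase.rootdir used later, not copied from the original article):

```xml
<!-- core-site.xml: point the default filesystem at a local HDFS -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- hdfs-site.xml: a single node, so keep one replica -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```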
Then format the NameNode and start HDFS and YARN. To install Hive, create hive-site.xml and replace ${system:java.io.tmpdir} and ${system:user.name} with /tmp and ${user.name}, respectively. You also need to add the mysql-connector-java jar to Hive's lib directory; be sure to use version 5.x, not 6.x. Finally, run bin/hive.
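The placeholder replacement in hive-site.xml can be done with sed. A minimal sketch, using a made-up sample file and path (the property shown is one of several in the real hive-site.xml that use these placeholders):

```shell
# Demo of the hive-site.xml placeholder fix: replace ${system:java.io.tmpdir}
# with /tmp and ${system:user.name} with ${user.name}.
# /tmp/hive-site-demo.xml is a stand-in, not the real config path.
cat > /tmp/hive-site-demo.xml <<'EOF'
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>${system:java.io.tmpdir}/${system:user.name}</value>
</property>
EOF
sed -i -e 's|${system:java\.io\.tmpdir}|/tmp|g' \
       -e 's|${system:user\.name}|${user.name}|g' /tmp/hive-site-demo.xml
cat /tmp/hive-site-demo.xml
```

On the real file, run the same two substitutions against $HIVE_HOME/conf/hive-site.xml.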
Install hbase: the main modified files are hbase-env.sh, hbase-site.xml (quickstart)
Modify hbase-env.sh and add export JAVA_HOME=/root/jdk1.8.0_172
Modify hbase-site.xml
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://localhost:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/root/tmp/hbase/zookeeper</value>
</property>
In particular, for this pseudo-distributed installation, hbase.cluster.distributed should be set to true. In addition, HBase's built-in ZooKeeper is used here. Finally, execute bin/start-hbase.sh to start HBase.
Install kylin
Check the environment: run bin/check-env.sh first. Generally speaking, if you configured the environment variables described above, the check passes, but the script fails when executed on macOS and Ubuntu. I did not solve the macOS problem; on Ubuntu the cause is that get-properties.sh misbehaves, and I fixed it by editing that script. CentOS has no such problem (see the installation guide).
## original file
if [ $# != 1 ]
then
    echo 'invalid input'
    exit -1
fi
IFS=$'\n'
result=
for i in `cat ${KYLIN_HOME}/conf/kylin.properties | grep -w "^$1" | grep -v '^#' | awk -F= '{n = index($0, "="); print substr($0, n+1)}' | cut -c 1-`
do
    :
    result=$i
done
echo $result

## modified file
if [ $# != 1 ]
then
    echo 'invalid input'
    exit -1
fi
IFS=$'\n'
result=`cat ${KYLIN_HOME}/conf/kylin.properties | grep -w "^$1" | grep -v '^#' | awk -F= '{n = index($0, "="); print substr($0, n+1)}' | cut -c 1-`
# for i in `cat ${KYLIN_HOME}/conf/kylin.properties | grep -w "^$1" | grep -v '^#' | awk -F= '{n = index($0, "="); print substr($0, n+1)}' | cut -c 1-`
# do
#     :
#     result=$i
# done
echo $result
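Both versions rely on the same pipeline to pull one key's value out of kylin.properties: match the key as a whole word, drop comment lines, then print everything after the first '='. A self-contained sketch of that pipeline, using a made-up properties file and keys for illustration:

```shell
# Emulate get-properties.sh: extract the value of one key from a
# kylin.properties file. The directory, file contents, and key names
# below are invented for this demo.
mkdir -p /tmp/kylin-demo/conf
cat > /tmp/kylin-demo/conf/kylin.properties <<'EOF'
# a comment line that must be ignored
kylin.rest.timezone=GMT+8
kylin.server.port=7070
EOF
KYLIN_HOME=/tmp/kylin-demo
# Match the key, drop comments, print everything after the first '='.
result=`cat ${KYLIN_HOME}/conf/kylin.properties | grep -w "^kylin.server.port" | grep -v '^#' | awk -F= '{n = index($0, "="); print substr($0, n+1)}' | cut -c 1-`
echo $result
```

The Ubuntu failure was in the for-loop iteration, not in this pipeline, which is why assigning the pipeline's output to result directly fixes the script.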
I am currently using apache-kylin-1.5.4-HBase1.x-bin.tar.gz, in which I commented out the compression-related configurations in the conf directory: kylin_hive_conf.xml, kylin_job_conf_inmem.xml, kylin_job_conf.xml, and kylin.properties. With 1.5.3 I had not commented them out, which caused "no snappy" errors when building the cube.
# Compression codec for htable, valid value [none, snappy, lzo, gzip, lz4]
# 1.5.3 defaults to snappy, but the hadoop I use was not built with snappy
# support, so either comment out the compression-related configuration
# or rebuild hadoop with snappy
kylin.hbase.default.compression.codec=none
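To confirm nothing in the conf directory still requests a codec, you can grep the active (non-comment) lines. A small sketch, run here against a stand-in properties file rather than the real conf directory:

```shell
# Count active lines that set a compression codec other than 'none'.
# /tmp/kylin-conf-demo is a stand-in for $KYLIN_HOME/conf.
mkdir -p /tmp/kylin-conf-demo
cat > /tmp/kylin-conf-demo/kylin.properties <<'EOF'
#kylin.hbase.default.compression.codec=snappy
kylin.hbase.default.compression.codec=none
EOF
hits=$(grep -v '^#' /tmp/kylin-conf-demo/kylin.properties \
       | grep 'compression.codec' | grep -cv '=none')
echo $hits
```

A count of 0 means every remaining codec setting is either commented out or set to none, so a Hadoop build without snappy will not trip over it.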
After that, run bin/kylin.sh start. Once it starts successfully, open http://ip:7070/kylin and log in with user name ADMIN and password KYLIN. You can then run bin/sample.sh to try Kylin out: after sample.sh finishes, restart Kylin and build the sample cube.
Thank you for reading! This concludes "how to install and use kylin". I hope the content above is of some help to you; if you found the article useful, feel free to share it.