The read-write separation architecture provided by the community is as follows:
From the architecture diagram you can see that Kylin accesses the HDFS of both clusters. The two clusters' NameServices must not be the same: especially when NameNode HA is enabled, an identical NameService leaves components unable to tell the two apart when accessing HDFS across clusters.
Two clusters:
Cluster1 (Hive cluster): HDFS, Hive, YARN, ZooKeeper, MapReduce
Cluster2 (HBase cluster): HDFS, HBase, ZooKeeper, YARN, Kylin
First, set KYLIN_HOME to Kylin's installation directory. Our Kylin is installed on cluster2, so the environment variable only needs to be configured on cluster2.
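A minimal sketch, assuming Kylin is unpacked under /opt/apache-kylin (a hypothetical path; substitute your actual install directory):

# hypothetical install path on cluster2; adjust to your environment
export KYLIN_HOME=/opt/apache-kylin
export PATH=$KYLIN_HOME/bin:$PATH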
Next, configure cluster1's Hadoop parameters on cluster2. I create a separate directory under $KYLIN_HOME called hadoop_conf and put the needed files in it.
Here is which files take cluster1's parameters and which take cluster2's; any file that uses cluster1's parameters can simply be copied over from cluster1. All of these files live in the $KYLIN_HOME/hadoop_conf directory (a copy sketch follows the list):
core-site.xml -- cluster1 (this is where the HDFS address is configured)
hbase-site.xml -- cluster2
hdfs-site.xml -- cluster2 (holds the nameservice parameters; without it the nameservice cannot be resolved)
hive-site.xml -- cluster1
mapred-site.xml -- cluster1
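As a sketch of assembling the directory, assuming cluster1's client configs are reachable over scp from a hypothetical host cluster1-client and cluster2's configs sit in the usual /etc client paths (both are assumptions; adjust to your layout):

mkdir -p $KYLIN_HOME/hadoop_conf
# cluster1 (Hive side) files, pulled from a hypothetical cluster1 client host
scp cluster1-client:/etc/hadoop/conf/core-site.xml $KYLIN_HOME/hadoop_conf/
scp cluster1-client:/etc/hadoop/conf/mapred-site.xml $KYLIN_HOME/hadoop_conf/
scp cluster1-client:/etc/hive/conf/hive-site.xml $KYLIN_HOME/hadoop_conf/
# cluster2 (HBase side) files, copied from the local client configuration
cp /etc/hadoop/conf/hdfs-site.xml $KYLIN_HOME/hadoop_conf/
cp /etc/hbase/conf/hbase-site.xml $KYLIN_HOME/hadoop_conf/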
We start the service as the kylin user, so configure the kylin user's environment variables by editing ~/.bashrc and adding the following:
export HBASE_CONF_DIR=$KYLIN_HOME/hadoop_conf
export HIVE_CONF=$KYLIN_HOME/hadoop_conf
export HADOOP_CONF_DIR=$KYLIN_HOME/hadoop_conf
export HBASE_CONF_DIR=$KYLIN_HOME/hadoop_conf
This HBASE_CONF_DIR is especially important: Kylin goes through HBase's hdfs-site.xml and core-site.xml to resolve the HDFS environment, and if the variable is not set it falls back to the HBase configuration under the CDH directory by default. This one detail kept me stuck for several days before I tracked it down, and the official Kylin documentation does not mention it at all.
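To confirm the variables are picked up, a quick check (assuming a standard Apache Kylin binary package, which ships bin/check-env.sh):

source ~/.bashrc
echo $HBASE_CONF_DIR              # should print .../hadoop_conf
$KYLIN_HOME/bin/check-env.sh      # Kylin's bundled environment sanity check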
Configure conf/kylin.properties and tomcat/conf/server.xml
conf/kylin.properties is configured according to your own needs, mainly the parameters related to Hive and HBase.
There are two main points to pay attention to in tomcat/conf/server.xml:
1. keystore: you need to generate the corresponding keystore file, or simply comment that section out (see the sketch after this list).
2. I did not modify this in the test environment and it ran normally, but when it was deployed on the production machine, the front-end UI came up unable to load models, configuration, or environment variables, and a "failed to take actions" prompt popped up in the foreground. After several days of searching, it turned out the front end hit a problem decompressing resources, so turn compression off.
In the <Connector> element of tomcat/conf/server.xml, change compression="on" to compression="off".
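For reference, a sketch of the relevant <Connector> elements in tomcat/conf/server.xml after both changes; the ports and the other attribute values are illustrative stock defaults, not necessarily this deployment's:

<!-- HTTPS connector: comment it out like this if you do not generate a keystore -->
<!--
<Connector port="7443" protocol="org.apache.coyote.http11.Http11Protocol"
           SSLEnabled="true" scheme="https" secure="true"
           keystoreFile="conf/.keystore" keystorePass="changeit"
           clientAuth="false" sslProtocol="TLS" />
-->
<Connector port="7070" protocol="HTTP/1.1"
           connectionTimeout="20000"
           compression="off"
           compressionMinSize="2048"
           compressableMimeType="application/json,text/html" />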
Modify $KYLIN_HOME/conf/kylin.properties:
kylin.source.hive.client=beeline
## change the JDBC URL to the Hive address of cluster1
kylin.source.hive.beeline-params=-n root --hiveconf hive.security.authorization.sqlstd.confwhitelist.append='mapreduce.job.*|dfs.*' -u jdbc:hive2://stream3:25002
## change to the HDFS address of cluster2
kylin.storage.hbase.cluster-fs=hdfs://stream-master1:8020
In addition, I failed at step 16 of the build task because not enough resources were allocated, so add the two parameters mapreduce.map.memory.mb and mapreduce.reduce.memory.mb to $KYLIN_HOME/conf/kylin_job_conf.xml and raise their values a bit.
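A sketch of the two properties in kylin_job_conf.xml; the 4096 MB figure is illustrative, size it to your cluster:

<property>
    <name>mapreduce.map.memory.mb</name>
    <value>4096</value> <!-- illustrative; raise if map tasks die for lack of memory -->
</property>
<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>4096</value> <!-- illustrative -->
</property>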
Then start Kylin.
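That is, with the control script bundled in the Kylin binary package:

$KYLIN_HOME/bin/kylin.sh start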