How to install and configure HUE


This article shows, step by step, how to install and configure HUE.

HUE installation and configuration

1. Download HUE: http://cloudera.github.io/hue/docs-3.0.0/manual.html#_hadoop_configuration
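If you prefer building from source instead of a tarball, one option is to clone the repository and check out a release tag (a sketch; the tag release-3.10.0 is an assumption chosen to match the version referenced later in this article):

$ git clone https://github.com/cloudera/hue.git /opt/hue
$ cd /opt/hue
$ git checkout release-3.10.0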

2. Install the HUE dependencies (as root). The packages differ by distribution; install commands follow the table.

Redhat                                  Ubuntu
gcc                                     gcc
gcc-c++ (g++)                           g++
libxml2-devel                           libxml2-dev
libxslt-devel                           libxslt-dev
cyrus-sasl-devel                        libsasl2-dev
cyrus-sasl-gssapi                       libsasl2-modules-gssapi-mit
mysql-devel                             libmysqlclient-dev
python-devel                            python-dev
python-setuptools                       python-setuptools
python-simplejson                       python-simplejson
sqlite-devel                            libsqlite3-dev
ant                                     ant
krb5-devel                              libkrb5-dev
libtidy (for unit tests only)           libtidy-0.99-0
mvn (from maven2 package or tarball)    mvn
openldap-devel                          libldap2-dev

On Redhat:

$ yum install -y gcc gcc-c++ libxml2-devel libxslt-devel cyrus-sasl-devel cyrus-sasl-gssapi mysql-devel python-devel python-setuptools python-simplejson sqlite-devel ant krb5-devel openldap-devel

(libtidy is only needed for unit tests; install mvn from a maven2 package or tarball, as noted in the table.)
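On Ubuntu, the equivalent is (a sketch built from the Ubuntu column above; exact package names can vary by release):

$ apt-get install -y gcc g++ libxml2-dev libxslt-dev libsasl2-dev libsasl2-modules-gssapi-mit libmysqlclient-dev python-dev python-setuptools python-simplejson libsqlite3-dev ant libkrb5-dev libtidy-0.99-0 libldap2-dev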

3. Modify the pom.xml file

$ vim /opt/hue/maven/pom.xml

A.) Modify the hadoop and spark versions:

<hadoop-mr1.version>2.6.0</hadoop-mr1.version>
<hadoop.version>2.6.0</hadoop.version>
<spark.version>1.4.0</spark.version>

B.) Change hadoop-core to hadoop-common:

<artifactId>hadoop-common</artifactId>

C.) Change the version of hadoop-test to 1.2.1:

<artifactId>hadoop-test</artifactId>
<version>1.2.1</version>

D.) Delete the two ThriftJobTrackerPlugin.java files in the following two directories:

/usr/hdp/hue/desktop/libs/hadoop/java/src/main/java/org/apache/hadoop/thriftfs/ThriftJobTrackerPlugin.java
/usr/hdp/hue/desktop/libs/hadoop/java/src/main/java/org/apache/hadoop/mapred/ThriftJobTrackerPlugin.java
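For example (assuming the source tree sits under /usr/hdp/hue as in the paths above; adjust the base path if you unpacked HUE elsewhere, e.g. /opt/hue):

$ rm /usr/hdp/hue/desktop/libs/hadoop/java/src/main/java/org/apache/hadoop/thriftfs/ThriftJobTrackerPlugin.java
$ rm /usr/hdp/hue/desktop/libs/hadoop/java/src/main/java/org/apache/hadoop/mapred/ThriftJobTrackerPlugin.java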

4. Compile

$ cd /opt/hue
$ make apps
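If you want the runtime to live outside the source tree, the PREFIX convention from the HUE manual can be used with the install target (a sketch, assuming that convention applies to this build):

$ PREFIX=/usr/share make install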

5. Start the HUE service

$ ./build/env/bin/supervisor

To find and stop a running instance:

$ ps aux | grep "hue"
$ kill -9 <PID>
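To keep the service running after you log out, one option is to run the supervisor in the background with nohup (a sketch; the log path is arbitrary):

$ nohup ./build/env/bin/supervisor > /var/log/hue-supervisor.log 2>&1 &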

6. hue.ini parameter configuration

$ vim /usr/hdp/hue/hue-3.10.0/desktop/conf/hue.ini

A.) [desktop] configuration

[desktop]

# Webserver listens on this address and port
http_host=xx.xx.xx.xx
http_port=8888

# Time zone name
time_zone=Asia/Shanghai

# Webserver runs as this user
server_user=hue
server_group=hue

# This should be the Hue admin and proxy user
default_user=hue

# This should be the hadoop cluster admin
default_hdfs_superuser=hdfs

[hadoop]

[[hdfs_clusters]]

[[[default]]]

# Enter the filesystem uri.
# If HDFS is not configured with HA:
fs_defaultfs=hdfs://xx.xx.xx.xx:8020  ## hadoop NameNode node
# If HDFS is configured with HA, use the logical name, consistent with fs.defaultFS in core-site.xml:
fs_defaultfs=hdfs://mycluster

# NameNode logical name.
## logical_name=carmecluster

# Use WebHdfs/HttpFs as the communication mechanism.
# Domain should be the NameNode or HttpFs host.
# Default port is 14000 for HttpFs.
# If HDFS is not configured with HA:
webhdfs_url=http://xx.xx.xx.xx:50070/webhdfs/v1
# If HDFS is configured with HA, HUE can only access HDFS through Hadoop-httpfs.
# Install it manually ($ sudo yum install hadoop-httpfs) and start the service ($ ./hadoop-httpfs start &), then use:
webhdfs_url=http://xx.xx.xx.xx:14000/webhdfs/v1
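To verify the endpoint behind webhdfs_url before pointing HUE at it, you can query the WebHDFS/HttpFS REST API directly (a quick check; substitute your host and port, and note that user.name=hue assumes the proxy user configured later in this article):

$ curl -s "http://xx.xx.xx.xx:14000/webhdfs/v1/?op=LISTSTATUS&user.name=hue"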

[[yarn_clusters]]

[[[default]]]

# Enter the host on which you are running the ResourceManager
resourcemanager_host=xx.xx.xx.xx

# The port where the ResourceManager IPC listens on
resourcemanager_port=8050

# Whether to submit jobs to this cluster
submit_to=True

# Resource Manager logical name (required for HA)
## logical_name=

# Change this if your YARN cluster is Kerberos-secured
## security_enabled=false

# URL of the ResourceManager API
resourcemanager_api_url=http://xx.xx.xx.xx:8088

# URL of the ProxyServer API
proxy_api_url=http://xx.xx.xx.xx:8088

# URL of the HistoryServer API
history_server_api_url=http://xx.xx.xx.xx:19888

# URL of the Spark History Server
## spark_history_server_url=http://localhost:18088

[[mapred_clusters]]

[[[default]]]

# Enter the host on which you are running the Hadoop JobTracker
jobtracker_host=xx.xx.xx.xx

# The port where the JobTracker IPC listens on
jobtracker_port=8021

# JobTracker logical name for HA
## logical_name=

# Thrift plug-in port for the JobTracker
thrift_port=9290

# Whether to submit jobs to this cluster
submit_to=False

[beeswax]

# Host where HiveServer2 is running.
# If Kerberos security is enabled, use fully-qualified domain name (FQDN).
hive_server_host=xx.xx.xx.xx

# Port where HiveServer2 Thrift server runs on.
hive_server_port=10000

# Hive configuration directory, where hive-site.xml is located
hive_conf_dir=/etc/hive/conf

# Timeout in seconds for thrift calls to Hive service
## server_conn_timeout=120

[hbase]

# Comma-separated list of HBase Thrift servers for clusters in the format of '(name|host:port)'.
# Use full hostname with security.
# If using Kerberos we assume GSSAPI SASL, not PLAIN.
hbase_clusters=(Cluster|xx.xx.xx.xx:9090)

# If an error occurs when connecting to HBase, start the Thrift service: $ nohup hbase thrift start &

[zookeeper]

[[clusters]]

[[[default]]]

# Zookeeper ensemble. Comma separated list of Host/Port.
# e.g. localhost:2181,localhost:2182,localhost:2183
host_ports=xx.xx.xx.xx:2181,xx.xx.xx.xx:2181,xx.xx.xx.xx:2181

[liboozie]

# The URL where the Oozie service runs on. This is required in order for
# users to submit jobs. Empty value disables the config check.
oozie_url=http://xx.xx.xx.xx:11000/oozie
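After saving hue.ini, restart the HUE service from step 5 so the new settings take effect:

$ ps aux | grep "hue"
$ kill -9 <PID>
$ ./build/env/bin/supervisor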

B.) Hadoop related configuration

hdfs-site.xml configuration:

<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>

core-site.xml configuration:

<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>
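The proxyuser change can be pushed to a running NameNode without a full HDFS restart (a sketch; restarting HDFS also works):

$ hdfs dfsadmin -refreshSuperUserGroupsConfiguration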

If the HUE server is outside the Hadoop cluster, it can still reach HDFS through an HttpFS server; the HttpFS service only needs one port open toward the cluster.

httpfs-site.xml configuration:

<property>
  <name>httpfs.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>httpfs.proxyuser.hue.groups</name>
  <value>*</value>
</property>

C.) MapReduce 0.20 (MR1) related configuration

HUE communicates with the JobTracker through a plugin jar under mapreduce's lib folder.

If the JobTracker and HUE are on the same host, copy it over:

$ cd /usr/share/hue
$ cp desktop/libs/hadoop/java-lib/hue-plugins-*.jar /usr/lib/hadoop-0.20-mapreduce/lib

If the JobTracker runs on a different host, scp the Hue plugins jar to the JobTracker host instead, as in the sketch below.
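A sketch (jobtracker-host is a placeholder for the real hostname):

$ scp desktop/libs/hadoop/java-lib/hue-plugins-*.jar jobtracker-host:/usr/lib/hadoop-0.20-mapreduce/lib/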

Add the following to the mapred-site.xml configuration file, then restart the JobTracker:

<property>
  <name>jobtracker.thrift.address</name>
  <value>0.0.0.0:9290</value>
</property>
<property>
  <name>mapred.jobtracker.plugins</name>
  <value>org.apache.hadoop.thriftfs.ThriftJobTrackerPlugin</value>
  <description>Comma-separated list of jobtracker plug-ins to be activated.</description>
</property>

D.) Oozie related configuration

oozie-site.xml configuration:

<property>
  <name>oozie.service.ProxyUserService.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>oozie.service.ProxyUserService.proxyuser.hue.groups</name>
  <value>*</value>
</property>
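After restarting Oozie, you can confirm the server is up at the URL configured in [liboozie] (a quick check using the standard Oozie CLI; substitute your host):

$ oozie admin -oozie http://xx.xx.xx.xx:11000/oozie -status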
