
The installation process of Spark SQL

2025-04-05 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/01 report

This article walks through the installation process of Spark SQL. Many people run into problems at these steps in practice, so follow along carefully and try each step yourself.

1. Start ZooKeeper, start the HDFS cluster, and start the Hive metastore service (hive --service metastore).

2. Modify the hive-site.xml configuration file under Spark's conf directory (it is the same as Hive's client-side configuration file).

Note: this configuration file only needs to be placed on one host, which then acts as the client.
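As a sketch, a minimal client-side hive-site.xml only needs to point at the metastore service. The host and port below are assumptions for illustration (node11 follows the hostnames used later in this article; 9083 is Hive's default metastore port):

```xml
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <!-- Assumed metastore location; 9083 is Hive's default metastore port -->
    <value>thrift://node11:9083</value>
  </property>
</configuration>
```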

3. Start the Spark cluster.

4. Start Spark SQL: ./bin/spark-sql --master spark://node11:7077 --executor-memory 512m

Note: if spark-env.sh is configured with an IP address, the command must use the IP; if it is configured with a hostname, the command must use the hostname.

Setting up the Spark SQL Thrift server

1. Add the following properties to hive-site.xml:

<property>
  <name>hive.server2.thrift.min.worker.threads</name>
  <value>5</value>
  <description>Minimum number of Thrift worker threads</description>
</property>
<property>
  <name>hive.server2.thrift.max.worker.threads</name>
  <value>500</value>
  <description>Maximum number of Thrift worker threads</description>
</property>
<property>
  <name>hive.server2.thrift.port</name>
  <value>10000</value>
  <description>Port number of the HiveServer2 Thrift interface. Can be overridden by setting $HIVE_SERVER2_THRIFT_PORT</description>
</property>
<property>
  <name>hive.server2.thrift.bind.host</name>
  <value>node12</value>
  <description>Bind host on which to run the HiveServer2 Thrift interface. Can be overridden by setting $HIVE_SERVER2_THRIFT_BIND_HOST</description>
</property>
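To sanity-check a hive-site.xml fragment like the one above before starting the Thrift server, a short stdlib-only Python script can parse the property/value pairs. The XML embedded in the script is just example data matching this section; this is a convenience sketch, not part of the official setup:

```python
import xml.etree.ElementTree as ET

# Example hive-site.xml fragment (the Thrift-server properties from this section).
HIVE_SITE = """
<configuration>
  <property>
    <name>hive.server2.thrift.port</name>
    <value>10000</value>
  </property>
  <property>
    <name>hive.server2.thrift.bind.host</name>
    <value>node12</value>
  </property>
</configuration>
"""

def parse_properties(xml_text):
    """Return hive-site.xml <property> entries as a {name: value} dict."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value")
            for p in root.findall("property")}

props = parse_properties(HIVE_SITE)
print(props["hive.server2.thrift.bind.host"], props["hive.server2.thrift.port"])
# prints: node12 10000
```

The same function works on a real hive-site.xml read from disk, so you can confirm the bind host and port before pointing Beeline at them.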

2. Start the Spark Thrift server:

./sbin/start-thriftserver.sh --master spark://192.168.57.4:7077 --executor-memory 512m

3. After startup, you can connect to it with Beeline from the bin directory:

./bin/beeline

!connect jdbc:hive2://node12:10000

Note: at this point you can enter the Spark SQL console, but queries will fail with an error. You need to copy core-site.xml and hdfs-site.xml from the HDFS cluster into Spark's conf directory on every Spark cluster node.
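The note above boils down to syncing two Hadoop client files into Spark's conf directory. A minimal local sketch of that copy step is below; the directory paths in the usage example are hypothetical, and on a real cluster you would repeat this on every Spark node (for example via scp):

```python
import os
import shutil

def sync_hdfs_client_configs(hadoop_conf_dir, spark_conf_dir):
    """Copy the HDFS client configs Spark SQL needs into Spark's conf directory.

    On a real cluster this must be done on every Spark node.
    Returns the list of files that were actually copied.
    """
    copied = []
    for name in ("core-site.xml", "hdfs-site.xml"):
        src = os.path.join(hadoop_conf_dir, name)
        if os.path.exists(src):
            shutil.copy2(src, os.path.join(spark_conf_dir, name))
            copied.append(name)
    return copied
```

Usage (both paths are assumptions; adjust to your installation): sync_hdfs_client_configs("/opt/hadoop/etc/hadoop", "/opt/spark/conf")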

That concludes "The installation process of Spark SQL". Thank you for reading, and keep an eye on this site for more practical articles.
