This article explains how to connect Spark to Hive locally on Windows. It is fairly detailed and should be a useful reference; interested readers are encouraged to read it to the end!
Solution 1: connect to Hive directly over JDBC
PS: first confirm that the Hive services are running. Log in to the Hive server and do the following.
1. Start the metastore: hive --service metastore &
2. Start hiveserver2 (the default port is 10000): hive --service hiveserver2 &
3. Verify that it started: netstat -ntulp | grep 10000
Output such as: tcp 0 0 0.0.0.0:10000 0.0.0.0:* LISTEN 27799/java indicates the service started successfully.
Code implementation:
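The original code screenshot is not reproduced here; below is a minimal sketch of the JDBC approach, assuming a HiveServer2 at the hypothetical address hive-server:10000, the default database, and the hive-jdbc driver on the classpath:

import java.sql.DriverManager

object HiveJdbcDemo {
  def main(args: Array[String]): Unit = {
    // Register the HiveServer2 JDBC driver (from the hive-jdbc artifact).
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    // "hive-server" is a hypothetical host name; 10000 is the HiveServer2 default port.
    val conn = DriverManager.getConnection("jdbc:hive2://hive-server:10000/default", "hive", "")
    try {
      val stmt = conn.createStatement()
      val rs = stmt.executeQuery("SHOW TABLES")
      // Print each table name returned by HiveServer2.
      while (rs.next()) println(rs.getString(1))
      rs.close()
      stmt.close()
    } finally conn.close()
  }
}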
Solution 2: connect to Hive directly through SparkSession
Initialize and create the SparkSession. Code implementation:
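A minimal sketch of the initialization, assuming hive-site.xml is on the classpath; the appName and local master values are illustrative:

import org.apache.spark.sql.SparkSession

// enableHiveSupport() makes the SparkSession use the Hive metastore
// described by the hive-site.xml found on the classpath.
val spark = SparkSession.builder()
  .appName("SparkConnectHive")  // illustrative application name
  .master("local[*]")           // run locally on the Windows machine
  .enableHiveSupport()
  .getOrCreate()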
Use the SparkSession to run a query:
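For example (demo_db.demo_table is a hypothetical table; substitute one that exists in your metastore):

// Run HiveQL through the SparkSession and print the result to the console.
spark.sql("SELECT * FROM demo_db.demo_table LIMIT 10").show()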
Query result
Note: with this approach there is no need to register a JDBC dialect in order to connect to Hive.
Note: if hive-site.xml is not loaded, the connection must be configured through config(). The value comes from the corresponding configuration item in conf/hive-site.xml on the Hive server:
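A sketch of that configuration, assuming the metastore address is taken from hive.metastore.uris in the server's conf/hive-site.xml (the host name below is hypothetical):

val spark = SparkSession.builder()
  .appName("SparkConnectHive")
  .master("local[*]")
  // Value copied from hive.metastore.uris in the server's conf/hive-site.xml;
  // "hive-server" is a placeholder for the real metastore host.
  .config("hive.metastore.uris", "thrift://hive-server:9083")
  .enableHiveSupport()
  .getOrCreate()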
The local hosts file also needs to be configured so that the cluster host names resolve.
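For example, entries like the following (hypothetical IPs and host names) would go into C:\Windows\System32\drivers\etc\hosts:

192.168.0.101  hadoop-master
192.168.0.102  hadoop-worker1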
Exceptions in the Windows Spark development environment, and their solutions
Exception 1: Caused by: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwx-
Solution:
1. Configure the Hadoop environment variables locally (HADOOP_HOME must point to a Hadoop distribution that contains winutils.exe).
2. Open a cmd window and change into the local spark-2.3.1-bin-hadoop2.7\bin directory.
3. Run the following three commands:
%HADOOP_HOME%\bin\winutils.exe ls \tmp\hive
%HADOOP_HOME%\bin\winutils.exe chmod 777 \tmp\hive
%HADOOP_HOME%\bin\winutils.exe ls \tmp\hive
4. Verify the effect: after chmod 777, the final ls should show rwxrwxrwx for \tmp\hive.
Exception 2: Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: HzCluster
Cause: the local Spark cannot resolve the Hadoop cluster (here the HDFS nameservice HzCluster) when connecting to the Hive cluster, because the HDFS configuration has not been loaded.
Solution:
1. Copy the core-site.xml and hdfs-site.xml files from the hadoop/conf directory into the project's ${path}/conf directory.
2. Copy the hive-site.xml file from the hive/conf directory into the project's ${path}/conf directory.
[Important] Modify the contents of the hive-site.xml file, keeping only the following configuration:
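A minimal sketch of the trimmed file, assuming the metastore listens at the hypothetical address thrift://hive-server:9083:

<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://hive-server:9083</value>
  </property>
</configuration>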
3. Load the Hive and HDFS configuration files when initializing the SparkSession:
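A sketch of this step, assuming the XML files were copied into a conf directory at the project root; hive-site.xml is picked up from the classpath (e.g. src/main/resources) when enableHiveSupport() is used, while the HDFS files are added to the Hadoop configuration explicitly:

import org.apache.hadoop.fs.Path
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("SparkConnectHive")
  .master("local[*]")
  .enableHiveSupport()
  .getOrCreate()

// Teach the local Spark about the HDFS nameservice (e.g. HzCluster)
// by loading the cluster's core-site.xml and hdfs-site.xml.
spark.sparkContext.hadoopConfiguration.addResource(new Path("conf/core-site.xml"))
spark.sparkContext.hadoopConfiguration.addResource(new Path("conf/hdfs-site.xml"))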
That is all the content of "How Spark connects to Hive locally in Windows". Thank you for reading, and I hope it helps you!