This article explains what to do when spark.sql fails while operating on Hive tables. It walks through the error in detail and should be a useful reference; interested readers, read on!
The following error occurs only when spark.sql operates on Hive; the same query runs fine from the Hive terminal.
The fix: copy hive-hcatalog-core-2.3.4.jar from Hive's lib directory into Spark's /spark/jars/ directory, then restart the Spark session.
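Alternatively, if copying files into spark/jars/ is inconvenient, the jar can be registered when the SparkSession is created via the spark.jars option. Below is a minimal sketch; the Hive install path and app name are hypothetical, so adjust them to your cluster:

# Minimal sketch: register hive-hcatalog-core when building the session.
# The jar path below is a hypothetical example; point it at your Hive lib directory.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-hcatalog-demo")
    .config("spark.jars", "/usr/local/hive/lib/hive-hcatalog-core-2.3.4.jar")
    .enableHiveSupport()
    .getOrCreate()
)
spark.sql("select * from user_action limit 2").show()

Either way, the jar must be on the classpath before the JVM starts; adding it to an already-running session will not resolve the ClassNotFoundException.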
sqlDF = oa.spark.sql("select * from user_action limit 2")
sqlDF.show()
Py4JJavaError                             Traceback (most recent call last)
In
      1 sqlDF = oa.spark.sql("select * from user_action limit 2")
->    2 sqlDF.show()

~/bigdata/spark/python/pyspark/sql/dataframe.py in show(self, n, truncate)
    334         """
    335         if isinstance(truncate, bool) and truncate:
->  336             print(self._jdf.showString(n, 20))
    337         else:
    338             print(self._jdf.showString(n, int(truncate)))

~/bigdata/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258
   1259         for temp_arg in temp_args:

~/bigdata/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
->   63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

~/bigdata/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
->  328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(
Py4JJavaError: An error occurred while calling o1022.showString.
: java.lang.RuntimeException: java.lang.ClassNotFoundException: org.apache.hive.hcatalog.data.JsonSerDe
    at org.apache.hadoop.hive.ql.plan.TableDesc.getDeserializerClass(TableDesc.java:74)
    at org.apache.spark.sql.hive.execution.HiveTableScanExec.addColumnMetadataToConf(HiveTableScanExec.scala:121)
    at org.apache.spark.sql.hive.execution.HiveTableScanExec.hadoopConf$lzycompute(HiveTableScanExec.scala:99)
    at org.apache.spark.sql.hive.execution.HiveTableScanExec.hadoopConf(HiveTableScanExec.scala:96)
    at org.apache.spark.sql.hive.execution.HiveTableScanExec.org$apache$spark$sql$hive$execution$HiveTableScanExec$$hadoopReader$lzycompute(HiveTableScanExec.scala:108)
    at org.apache.spark.sql.hive.execution.HiveTableScanExec.org$apache$spark$sql$hive$execution$HiveTableScanExec$$hadoopReader(HiveTableScanExec.scala:103)
    at org.apache.spark.sql.hive.execution.HiveTableScanExec$$anonfun$11.apply(HiveTableScanExec.scala:192)
    at org.apache.spark.sql.hive.execution.HiveTableScanExec$$anonfun$11.apply(HiveTableScanExec.scala:192)
    at org.apache.spark.util.Utils$.withDummyCallSite(Utils.scala:2475)
    at org.apache.spark.sql.hive.execution.HiveTableScanExec.doExecute(HiveTableScanExec.scala:191)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
    at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:228)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:311)
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:2865)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2154)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2154)
    at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2846)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2845)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:2154)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2367)
    at org.apache.spark.sql.Dataset.showString(Dataset.scala:241)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: org.apache.hive.hcatalog.data.JsonSerDe
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.apache.hadoop.hive.ql.plan.TableDesc.getDeserializerClass(TableDesc.java:71)
    ... 38 more
A related error can appear when running Spark SQL against MySQL: The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH.
The fix is the same: put the JDBC driver jar into Spark's jars folder.
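For reference, once the driver jar is on Spark's classpath, a JDBC read looks like the sketch below. The URL, database, table, and credentials are hypothetical placeholders:

# Minimal sketch: read a MySQL table over JDBC once the driver jar is on the classpath.
df = (
    spark.read.format("jdbc")
    # Hypothetical connection details; replace with your own MySQL instance.
    .option("url", "jdbc:mysql://localhost:3306/testdb")
    .option("driver", "com.mysql.jdbc.Driver")
    .option("dbtable", "user_action")
    .option("user", "root")
    .option("password", "secret")
    .load()
)
df.show()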
That is everything in this article on what to do when spark.sql fails while operating on Hive. Thank you for reading! We hope the content helps you; for more related knowledge, welcome to follow the industry information channel!