2025-01-17 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 06/02 Report
This article explains how to resolve the problem that code using the HBase-Spark module fails to compile under Spark2. It walks through the error, the underlying cause, and a working fix. I hope you find it useful!
1. Compilation Exception
When code that uses the hbase-spark dependency to access HBase is compiled against Spark2, compilation fails with the following error:
```
Error: scalac: missing or invalid dependency detected while loading class file 'HBaseContext.class'.
Could not access type Logging in package org.apache.spark,
because it (or its dependencies) are missing. Check your build definition for
missing or conflicting dependencies. (Re-run with `-Ylog-classpath` to see the problematic classpath.)
A full rebuild may help if 'HBaseContext.class' was compiled against an incompatible version of org.apache.spark.
```

The key message is "Could not access type Logging in package org.apache.spark".
2. Problem Analysis
Searching for this error turns up a JIRA issue on the HBase project:
https://issues.apache.org/jira/browse/HBASE-16428
The JIRA explains that compiling a Spark2 application that references HBaseContext fails because HBaseContext references org.apache.spark.Logging, and in Spark2 the Logging trait was moved into the private org.apache.spark.internal package. A Spark JIRA documents that move:
https://issues.apache.org/jira/browse/SPARK-13928
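To illustrate, here is a minimal sketch of Spark2 code that triggers the error when the hbase-spark module is on the compile classpath. The object and app names are placeholders; merely constructing an HBaseContext is enough, because scalac must then load HBaseContext.class, which references the now-internal Logging trait:

```scala
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.spark.SparkContext
import org.apache.spark.sql.SparkSession
import org.apache.hadoop.hbase.spark.HBaseContext

object HBaseSparkDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("hbase-spark-demo").getOrCreate()
    val sc: SparkContext = spark.sparkContext
    val conf = HBaseConfiguration.create()

    // Compiling this line fails under Spark2: HBaseContext.class refers to
    // org.apache.spark.Logging, which no longer exists at that location.
    val hbaseContext = new HBaseContext(sc, conf)

    spark.stop()
  }
}
```

This sketch requires the spark-core, spark-sql, and hbase-spark dependencies and is not runnable standalone; it only shows where the compiler trips.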
3. Solution
To get the project to compile, Fayson uses the following workaround:
1. Create an org.apache.spark package in your own project.
2. Under the org.apache.spark package, create a Logging.scala file defining a Logging trait; its contents are taken from the Spark2 source code.
3. Copy the body of the org.apache.spark.internal.Logging trait from the spark-core module into the org.apache.spark.Logging trait created in your project.
4. Run the compilation again; it now completes successfully.
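The steps above can be sketched as follows. Note this is a simplified illustration of the shape of the shim: in practice you should copy the body of org.apache.spark.internal.Logging from the exact Spark 2.x version you build against, changing only the package declaration. Like Spark's own trait, this sketch assumes SLF4J on the classpath:

```scala
// src/main/scala/org/apache/spark/Logging.scala
// Simplified sketch -- replace the body with the code of
// org.apache.spark.internal.Logging from your Spark version.
package org.apache.spark

import org.slf4j.{Logger, LoggerFactory}

trait Logging {
  @transient private var log_ : Logger = null

  // Strip the trailing "$" that Scala appends to companion-object class names.
  protected def logName: String = this.getClass.getName.stripSuffix("$")

  protected def log: Logger = {
    if (log_ == null) {
      log_ = LoggerFactory.getLogger(logName)
    }
    log_
  }

  protected def logDebug(msg: => String): Unit = if (log.isDebugEnabled) log.debug(msg)
  protected def logInfo(msg: => String): Unit = if (log.isInfoEnabled) log.info(msg)
  protected def logWarning(msg: => String): Unit = if (log.isWarnEnabled) log.warn(msg)
  protected def logError(msg: => String): Unit = if (log.isErrorEnabled) log.error(msg)
}
```

Because this trait has exactly the fully qualified name the compiler is looking for (org.apache.spark.Logging), scalac resolves the reference from HBaseContext.class and compilation succeeds.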
4. Summary
1. The analysis shows that the Logging trait was made private in Spark2, so creating an identically named Logging trait under org.apache.spark in your own project resolves the compilation error.
2. The content of the Logging trait should match the code of the exact Spark2 version you build against, to avoid obscure runtime problems.
That is the end of "how to solve the problem that HBase-Spark cannot be compiled in Spark2". Thank you for reading.