How to use IntelliJ IDEA to import and compile the latest Spark source code
Many newcomers are unsure how to import the latest Spark source code into IntelliJ IDEA and compile it. This article walks through the process step by step; I hope you find it useful.
Preparatory work
First of all, you need a JDK (1.6 or later) and Scala installed on your system. Download the latest version of IntelliJ IDEA and install it (on first launch it will recommend installing the Scala plug-in; accept that). At this point you should be able to run Scala from the command line; a quick sanity check follows the list below. My environment is:
Mac OS X (10.9.5)
JDK 1.7.71
Scala 2.10.4
IntelliJ IDEA 14
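Before importing the project, it is worth confirming that these tools are actually reachable from your shell. A minimal check (the version numbers shown are just my environment; yours may differ):

# Verify that the JDK, Scala, and git are on the PATH
java -version     # should report 1.7.x (or at least 1.6)
scala -version    # should report 2.10.x
git --version     # needed for cloning the Spark repository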
Finally, I recommend first using a pre-built Spark distribution to understand how Spark runs and is used, writing a few Spark applications, and only then reading the source code and trying to modify and compile it by hand.
Import a Spark project from Github
After opening IntelliJ IDEA, select VCS → Check out from Version Control → Git in the menu bar, fill in the address of the Spark project (https://github.com/apache/spark.git) under Git Repository URL, and specify a local path.
Click Clone in this window to start cloning the project from Github. Depending on your network speed, this takes roughly 3-10 minutes.
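If you prefer the command line, the same clone can be done outside the IDE and the resulting directory opened in IntelliJ IDEA afterwards; the URL below is the official Apache Spark repository on GitHub:

# Clone the Spark source tree into a local directory of your choice
git clone https://github.com/apache/spark.git
cd spark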
Compile Spark
When the clone finishes, IntelliJ IDEA will ask whether to open the project's pom.xml file. Choose to open the pom.xml file directly; the IDE will then resolve the project dependencies automatically. How long this step takes depends on your network and system environment.
After this step completes, manually edit the pom.xml file in the Spark root directory and find the line that sets the Java version (the java.version property). If you are using JDK 1.7, change its value from the default 1.6 to 1.7; a sketch of this edit follows below.
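For reference, this is a one-line edit to the java.version property in the top-level pom.xml. A minimal sketch using sed on Mac OS X, assuming the property currently reads 1.6 as described above (on Linux, drop the empty string after -i):

# From the Spark root directory: point java.version at JDK 1.7
sed -i '' 's|<java.version>1.6</java.version>|<java.version>1.7</java.version>|' pom.xml
# Confirm the change took effect
grep '<java.version>' pom.xml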
Next, in a shell terminal, change into the root directory of the Spark project you just imported and run:
sbt/sbt assembly
This command compiles Spark with the default configuration. If you want to pin the versions of related components, see Building Spark (http://spark.apache.org/docs/latest/building-spark.html) on the official Spark website for all the commonly used build options; an example appears after this paragraph. The build does not require a VPN. To estimate how far along the build is, you can open a second shell terminal and watch the size of the spark project directory grow. Using the default configuration, my spark directory was about 2.0 GB after a successful build.
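As an example of overriding the defaults, the Building Spark page documents Maven profiles and properties for selecting a Hadoop version. A sketch of such a build; the exact profile and version names depend on the Spark release you checked out, so treat these values as illustrative:

# Build against a specific Hadoop version with Maven instead of sbt
# (profile and version are illustrative; see the Building Spark page for your release)
mvn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package

# While a build runs, track the growing size of the source tree from another terminal
du -sh .

Whichever tool you choose, the result is the same source tree compiled end to end; once the build succeeds you can start reading and modifying the source with a working baseline to fall back on.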