Shulou (Shulou.com), SLTechnology News & Howtos — 06/03 report, updated 2025-01-23
Running Spark Pi from IntelliJ IDEA on Windows 7 against a remote Spark cluster — my own experiment.
The Windows 7 machine is at 192.168.0.2; Spark runs on an Ubuntu virtual machine at 192.168.0.3.
Connecting to Spark remotely. The source code:
package main.scala.sogou

/** Created by danger on 2016/9/16. */

import org.apache.spark.SparkContext._
import org.apache.spark.{SparkConf, SparkContext}

object RemoteDebug {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Spark Pi")
      .setMaster("spark://192.168.0.3:7077")
      .setJars(List("D:\\scalasrc\\out\\artifacts\\scalasrc.jar"))
    val spark = new SparkContext(conf)
    val slices = if (args.length > 0) args(0).toInt else 2
    val n = 100000 * slices
    val count = spark.parallelize(1 to n, slices).map { i =>
      val x = Math.random * 2 - 1
      val y = Math.random * 2 - 1
      if (x * x + y * y < 1) 1 else 0
    }.reduce(_ + _)
    println("Pi is roughly " + 4.0 * count / n)
    spark.stop()
  }
}

The parts you need to adapt are the master URL, spark://192.168.0.3:7077, and the jar path passed to setJars, D:\\scalasrc\\out\\artifacts\\scalasrc.jar.
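The Monte Carlo logic can be sanity-checked locally without a cluster. A minimal plain-Scala sketch of the same dartboard estimate (the object name and sample count here are mine, for illustration only):

```scala
// Monte Carlo estimate of Pi: throw random points into the square [-1, 1]^2
// and count how many land inside the inscribed unit circle.
object LocalPi {
  def estimate(n: Int): Double = {
    val count = (1 to n).map { _ =>
      val x = Math.random() * 2 - 1
      val y = Math.random() * 2 - 1
      if (x * x + y * y < 1) 1 else 0
    }.sum
    4.0 * count.toDouble / n
  }

  def main(args: Array[String]): Unit =
    println("Pi is roughly " + estimate(200000))
}
```

With 200,000 samples the estimate typically lands within a few hundredths of Pi, which matches the "Pi is roughly 3.1385" output the cluster run prints later.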
Also, I did not import any math package from Spark, so the random numbers come from Java's Math.random.
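For what it's worth, Math.random here is just java.lang.Math's static method; Scala's standard library also offers scala.util.Random as an alternative. A quick check that both yield uniform doubles in [0, 1) (the object name is my own):

```scala
// java.lang.Math.random and scala.util.Random both yield uniform doubles in [0, 1).
object RandomCheck {
  def main(args: Array[String]): Unit = {
    val a = Math.random()                        // Java static method, as used above
    val b = new scala.util.Random().nextDouble() // Scala's own RNG, an alternative
    assert(a >= 0.0 && a < 1.0)
    assert(b >= 0.0 && b < 1.0)
  }
}
```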
In the Run > Edit Configurations dialog, set the parameters. The important one is the Main class:
main.scala.sogou.RemoteDebug
Then run it as a stand-alone application.
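Because the master URL and jar path are hardcoded, another option is to pass them through the run configuration's program arguments. A hedged sketch (the argument convention and object name are my own; the defaults mirror the values used above):

```scala
// Resolve (master URL, jar path) from program arguments, falling back to
// the hardcoded values from this article when no arguments are given.
object RunSettings {
  def resolve(args: Array[String]): (String, String) = {
    val master = if (args.length > 0) args(0) else "spark://192.168.0.3:7077"
    val jar    = if (args.length > 1) args(1) else "D:\\scalasrc\\out\\artifacts\\scalasrc.jar"
    (master, jar)
  }
}
```

The resulting pair would then feed setMaster and setJars, so switching clusters or rebuilding the jar elsewhere needs no recompile.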
The remote job churned away for a while, then after a pause this appeared:
Process finished with exit code 0
So it worked — but where is the result? Searching through the output, here it is:
16/09/16 09:40:57 INFO DAGScheduler: ResultStage 0 (reduce at RemoteDebug.scala:19) finished in 75.751 s
16/09/16 09:40:57 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/09/16 09:40:57 INFO DAGScheduler: Job 0 finished: reduce at RemoteDebug.scala:19, took 76.071948 s
Pi is roughly 3.1385
16/09/16 09:40:57 INFO SparkUI: Stopped Spark web UI at http://192.168.0.2:4040
16/09/16 09:40:57 INFO DAGScheduler: Stopping DAGScheduler
16/09/16 09:40:57 INFO SparkDeploySchedulerBackend: Shutting down all executors
16/09/16 09:40:57 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
16/09/16 09:40:57 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/09/16 09:40:57 INFO MemoryStore: MemoryStore cleared
Very cool — a success. For reference, see blog.csdn.net/javastart/article/details/43372977.