2025-03-30 Update From: SLTechnology News&Howtos
Shulou(Shulou.com) 06/03 Report--
Remote operation of Spark with IDEA under Win7
Main.scala (reconstructed from the garbled extraction; the stripped string and numeric literals follow the standard SogouQ example):

package main.scala

import org.apache.spark.SparkContext._
import org.apache.spark.{SparkConf, SparkContext}

object SogouResult {
  def main(args: Array[String]) {
    if (args.length == 0) {
      System.err.println("Usage: SogouResult <input> <output>")
      System.exit(1)
    }
    val conf = new SparkConf().setAppName("SogouResult").setMaster("local")
    val sc = new SparkContext(conf)
    // split each line, keep well-formed records, count by key, sort by count descending
    val rdd1 = sc.textFile(args(0)).map(_.split("\t")).filter(_.length == 6)
    val rdd2 = rdd1.map(x => (x(1), 1)).reduceByKey(_ + _).map(x => (x._2, x._1)).sortByKey(false).map(x => (x._2, x._1))
    rdd2.saveAsTextFile(args(1))
    sc.stop()
  }
}

The cluster address is fs.defaultFS = hdfs://192.168.0.3:9000. pom.xml (reconstructed; stray source line numbers removed):

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>HdfsTest</groupId>
  <artifactId>HdfsTest</artifactId>
  <version>1.0-SNAPSHOT</version>
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>2.6.4</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
      <version>2.6.4</version>
    </dependency>
    <dependency>
      <groupId>commons-cli</groupId>
      <artifactId>commons-cli</artifactId>
      <version>1.2</version>
    </dependency>
  </dependencies>
  <build>
    <finalName>${project.artifactId}</finalName>
  </build>
</project>
The running parameters are as follows:
hdfs://192.168.0.3:9000/input/SogouQ1 hdfs://192.168.0.3:9000/output/sogou1
Error:
"C:\Program Files\Java\jdk1.7.0_79\bin\java" -Didea.launcher.port=7535 -Didea.launcher.bin.path=D:\Java\IntelliJ\bin -Dfile.encoding=UTF-8 -classpath "C:\Program Files\Java\jdk1.7.0_79\jre\lib\charsets.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\deploy.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\ext\access-bridge-64.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\ext\dnsns.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\ext\jaccess.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\ext\localedata.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\ext\sunec.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\ext\sunjce_provider.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\ext\sunmscapi.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\ext\zipfs.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\javaws.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\jce.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\jfr.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\jfxrt.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\jsse.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\management-agent.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\plugin.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\resources.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\rt.jar;D:\scalasrc\HdfsTest\target\classes;D:\scalasrc\lib\datanucleus-core-3.2.10.jar;D:\scalasrc\lib\datanucleus-rdbms-3.2.9.jar;D:\scalasrc\lib\spark-1.5.0-yarn-shuffle.jar;D:\scalasrc\lib\datanucleus-api-jdo-3.2.6.jar;D:\scalasrc\lib\spark-assembly-1.5.0-hadoop2.6.0.jar;D:\scalasrc\lib\spark-examples-1.5.0-hadoop2.6.0.jar;D:\Java\scala210\lib\scala-actors-migration.jar;D:\Java\scala210\lib\scala-actors.jar;D:\Java\scala210\lib\scala-library.jar;D:\Java\scala210\lib\scala-reflect.jar;D:\Java\scala210\lib\scala-swing.jar;D:\Java\IntelliJ\lib\idea_rt.jar" com.intellij.rt.execution.application.AppMain
main.scala.SogouResult hdfs://192.168.0.3:9000/input/SogouQ1 hdfs://192.168.0.3:9000/output/sogou1
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/D:/scalasrc/lib/spark-assembly-1.5.0-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/D:/scalasrc/lib/spark-examples-1.5.0-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16-09-16 12:00:43 INFO SparkContext: Running Spark version 1.5.0
16-09-16 12:00:44 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... Using builtin-java classes where applicable
16-09-16 12:00:44 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:355)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:370)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:363)
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:79)
at org.apache.hadoop.security.Groups.parseStaticMapping(Groups.java:104)
at org.apache.hadoop.security.Groups.<init>(Groups.java:86)
at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:271)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:248)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:763)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:748)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:621)
at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2084)
at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2084)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2084)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:310)
at main.scala.SogouResult$.main(SogouResult.scala:16)
at main.scala.SogouResult.main(SogouResult.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
16-09-16 12:00:44 INFO SecurityManager: Changing view acls to: danger
16-09-16 12:00:44 INFO SecurityManager: Changing modify acls to: danger
16-09-16 12:00:44 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(danger); users with modify permissions: Set(danger)
16-09-16 12:00:45 INFO Slf4jLogger: Slf4jLogger started
16-09-16 12:00:45 INFO Remoting: Starting remoting
16-09-16 12:00:45 INFO Remoting: Remoting started; listening on addresses: [akka.tcp://sparkDriver@192.168.0.2:55944]
16-09-16 12:00:45 INFO Utils: Successfully started service 'sparkDriver' on port 55944.
16-09-16 12:00:45 INFO SparkEnv: Registering MapOutputTracker
16-09-16 12:00:45 INFO SparkEnv: Registering BlockManagerMaster
16-09-16 12:00:45 INFO DiskBlockManager: Created local directory at C:\Users\danger\AppData\Local\Temp\blockmgr-281e23a9-a059-4670-a1b0-0511e63c55a3
16-09-16 12:00:45 INFO MemoryStore: MemoryStore started with capacity 481.1 MB
16-09-16 12:00:45 INFO HttpFileServer: HTTP File server directory is C:\Users\danger\AppData\Local\Temp\spark-84f74e01-9ea2-437c-b532-a5cfec898bc8\httpd-876c9027-ebb3-44c6-8256-bd4a555eaeaf
16-09-16 12:00:45 INFO HttpServer: Starting HTTP Server
16-09-16 12:00:46 INFO Utils: Successfully started service 'HTTP file server' on port 55945.
16-09-16 12:00:46 INFO SparkEnv: Registering OutputCommitCoordinator
16-09-16 12:00:46 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16-09-16 12:00:46 INFO SparkUI: Started SparkUI at http://192.168.0.2:4040
16-09-16 12:00:46 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
16-09-16 12:00:46 INFO Executor: Starting executor ID driver on host localhost
16-09-16 12:00:46 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 55964.
16-09-16 12:00:46 INFO NettyBlockTransferService: Server created on 55964
16-09-16 12:00:46 INFO BlockManagerMaster: Trying to register BlockManager
16-09-16 12:00:46 INFO BlockManagerMasterEndpoint: Registering block manager localhost:55964 with 481.1 MB RAM, BlockManagerId(driver, localhost, 55964)
16-09-16 12:00:46 INFO BlockManagerMaster: Registered BlockManager
16-09-16 12:00:47 INFO MemoryStore: ensureFreeSpace(157320) called with curMem=0, maxMem=504511856
16-09-16 12:00:47 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 153.6 KB, free 481.0 MB)
16-09-16 12:00:47 INFO MemoryStore: ensureFreeSpace(14301) called with curMem=157320, maxMem=504511856
16-09-16 12:00:47 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 14.0 KB, free 481.0 MB)
16-09-16 12:00:47 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:55964 (size: 14.0 KB, free: 481.1 MB)
16-09-16 12:00:47 INFO SparkContext: Created broadcast 0 from textFile at SogouResult.scala:18
16-09-16 12:00:48 WARN: Your hostname, danger-PC resolves to a loopback/non-reachable address: fe80:0:0:0:0:5efe:ac1b:2301%24, but we couldn't find any external IP address!
Exception in thread "main" java.net.ConnectException: Call From danger-PC/192.168.0.2 to 192.168.0.3:9000 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy19.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy20.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1988)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
at org.apache.hadoop.fs.Globber.glob(Globber.java:252)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1644)
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:257)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:207)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:65)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:290)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:290)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:306)
at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:289)
at main.scala.SogouResult$.main(SogouResult.scala:19)
at main.scala.SogouResult.main(SogouResult.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
Caused by: java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
at org.apache.hadoop.ipc.Client.call(Client.java:1438)
... 61 more
16-09-16 12:00:50 INFO SparkContext: Invoking stop() from shutdown hook
16-09-16 12:00:50 INFO SparkUI: Stopped Spark web UI at http://192.168.0.2:4040
16-09-16 12:00:50 INFO DAGScheduler: Stopping DAGScheduler
16-09-16 12:00:51 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16-09-16 12:00:51 INFO MemoryStore: MemoryStore cleared
16-09-16 12:00:51 INFO BlockManager: BlockManager stopped
16-09-16 12:00:51 INFO BlockManagerMaster: BlockManagerMaster stopped
16-09-16 12:00:51 INFO SparkContext: Successfully stopped SparkContext
16-09-16 12:00:51 INFO ShutdownHookManager: Shutdown hook called
16-09-16 12:00:51 INFO ShutdownHookManager: Deleting directory C:\Users\danger\AppData\Local\Temp\spark-84f74e01-9ea2-437c-b532-a5cfec898bc8
Process finished with exit code 1
I hadn't worked on this setup in a long time, but the first error clearly points to a missing winutils.exe.
I found a copy on the Internet and put it under hadoop/bin.
On the next run that error was gone, but the job still could not connect.
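As a side note (a sketch, not part of the original run): Hadoop's Shell class looks winutils.exe up via the hadoop.home.dir system property, so besides dropping the binary into hadoop/bin one can point at it in code before the SparkContext is created. The D:\hadoop path below is a placeholder.

```scala
// Hypothetical path: the directory must contain bin\winutils.exe.
// Must run before "new SparkContext(conf)" so Shell's static init finds it.
System.setProperty("hadoop.home.dir", "D:\\hadoop")
```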
telnet 192.168.0.3 9000 also reported that it was unable to connect,
so that had to be the problem.
The Hadoop configuration files referred to the machines by hostname,
so I changed all the hostnames to IP addresses and ran the job again.
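For reference, a sketch of what the hostname-to-IP change looks like in core-site.xml (the property name is standard Hadoop; the IP is this cluster's NameNode):

```xml
<!-- core-site.xml: refer to the NameNode by IP instead of hostname -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://192.168.0.3:9000</value>
</property>
```

The same substitution applies wherever hostnames appear (hdfs-site.xml, slaves); alternatively, the hostname could be mapped to the IP in the Windows client's hosts file.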
"C:\Program Files\Java\jdk1.7.0_79\bin\java" -Didea.launcher.port=7536 -Didea.launcher.bin.path=D:\Java\IntelliJ\bin -Dfile.encoding=UTF-8 -classpath "C:\Program Files\Java\jdk1.7.0_79\jre\lib\charsets.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\deploy.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\ext\access-bridge-64.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\ext\dnsns.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\ext\jaccess.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\ext\localedata.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\ext\sunec.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\ext\sunjce_provider.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\ext\sunmscapi.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\ext\zipfs.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\javaws.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\jce.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\jfr.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\jfxrt.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\jsse.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\management-agent.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\plugin.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\resources.jar;C:\Program Files\Java\jdk1.7.0_79\jre\lib\rt.jar;D:\scalasrc\HdfsTest\target\classes;D:\scalasrc\lib\datanucleus-core-3.2.10.jar;D:\scalasrc\lib\datanucleus-rdbms-3.2.9.jar;D:\scalasrc\lib\spark-1.5.0-yarn-shuffle.jar;D:\scalasrc\lib\datanucleus-api-jdo-3.2.6.jar;D:\scalasrc\lib\spark-assembly-1.5.0-hadoop2.6.0.jar;D:\scalasrc\lib\spark-examples-1.5.0-hadoop2.6.0.jar;D:\Java\scala210\lib\scala-actors-migration.jar;D:\Java\scala210\lib\scala-actors.jar;D:\Java\scala210\lib\scala-library.jar;D:\Java\scala210\lib\scala-reflect.jar;D:\Java\scala210\lib\scala-swing.jar;D:\Java\IntelliJ\lib\idea_rt.jar" com.intellij.rt.execution.application.AppMain
main.scala.SogouResult hdfs://192.168.0.3:9000/input/SogouQ1 hdfs://192.168.0.3:9000/output/sogou1
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/D:/scalasrc/lib/spark-assembly-1.5.0-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/D:/scalasrc/lib/spark-examples-1.5.0-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16-09-16 14:04:45 INFO SparkContext: Running Spark version 1.5.0
16-09-16 14:04:46 INFO SecurityManager: Changing view acls to: danger
16-09-16 14:04:46 INFO SecurityManager: Changing modify acls to: danger
16-09-16 14:04:46 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(danger); users with modify permissions: Set(danger)
16-09-16 14:04:47 INFO Slf4jLogger: Slf4jLogger started
16-09-16 14:04:47 INFO Remoting: Starting remoting
16-09-16 14:04:47 INFO Remoting: Remoting started; listening on addresses: [akka.tcp://sparkDriver@192.168.0.2:51172]
16-09-16 14:04:47 INFO Utils: Successfully started service 'sparkDriver' on port 51172.
16-09-16 14:04:47 INFO SparkEnv: Registering MapOutputTracker
16-09-16 14:04:47 INFO SparkEnv: Registering BlockManagerMaster
16-09-16 14:04:47 INFO DiskBlockManager: Created local directory at C:\Users\danger\AppData\Local\Temp\blockmgr-087e9166-2258-4f45-b449-d184c92702a3
16-09-16 14:04:47 INFO MemoryStore: MemoryStore started with capacity 481.1 MB
16-09-16 14:04:47 INFO HttpFileServer: HTTP File server directory is C:\Users\danger\AppData\Local\Temp\spark-0d6662f5-0bfa-4e6f-a256-c97bc6ce5f47\httpd-a2355600-9a68-417d-bd52-2ccdcac7bb13
16-09-16 14:04:47 INFO HttpServer: Starting HTTP Server
16-09-16 14:04:48 INFO Utils: Successfully started service 'HTTP file server' on port 51173.
16-09-16 14:04:48 INFO SparkEnv: Registering OutputCommitCoordinator
16-09-16 14:04:48 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16-09-16 14:04:48 INFO SparkUI: Started SparkUI at http://192.168.0.2:4040
16-09-16 14:04:48 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
16-09-16 14:04:48 INFO Executor: Starting executor ID driver on host localhost
16-09-16 14:04:48 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 51192.
16-09-16 14:04:48 INFO NettyBlockTransferService: Server created on 51192
16-09-16 14:04:48 INFO BlockManagerMaster: Trying to register BlockManager
16-09-16 14:04:48 INFO BlockManagerMasterEndpoint: Registering block manager localhost:51192 with 481.1 MB RAM, BlockManagerId(driver, localhost, 51192)
16-09-16 14:04:48 INFO BlockManagerMaster: Registered BlockManager
16-09-16 14:04:49 INFO MemoryStore: ensureFreeSpace(157320) called with curMem=0, maxMem=504511856
16-09-16 14:04:49 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 153.6 KB, free 481.0 MB)
16-09-16 14:04:49 INFO MemoryStore: ensureFreeSpace(14301) called with curMem=157320, maxMem=504511856
16-09-16 14:04:49 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 14.0 KB, free 481.0 MB)
16-09-16 14:04:49 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:51192 (size: 14.0 KB, free: 481.1 MB)
16-09-16 14:04:49 INFO SparkContext: Created broadcast 0 from textFile at SogouResult.scala:18
16-09-16 14:04:50 WARN: Your hostname, danger-PC resolves to a loopback/non-reachable address: fe80:0:0:0:0:5efe:ac1b:2301%24, but we couldn't find any external IP address!
16-09-16 14:04:52 INFO FileInputFormat: Total input paths to process: 1
16-09-16 14:04:52 INFO SparkContext: Starting job: sortByKey at SogouResult.scala:19
16-09-16 14:04:52 INFO DAGScheduler: Registering RDD 4 (map at SogouResult.scala:19)
16-09-16 14:04:52 INFO DAGScheduler: Got job 0 (sortByKey at SogouResult.scala:19) with 2 output partitions
16-09-16 14:04:52 INFO DAGScheduler: Final stage: ResultStage 1 (sortByKey at SogouResult.scala:19)
16-09-16 14:04:52 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
16-09-16 14:04:52 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
16-09-16 14:04:52 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[4] at map at SogouResult.scala:19), which has no missing parents
16-09-16 14:04:52 INFO MemoryStore: ensureFreeSpace(4208) called with curMem=171621, maxMem=504511856
16-09-16 14:04:52 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.1 KB, free 481.0 MB)
16-09-16 14:04:52 INFO MemoryStore: ensureFreeSpace(2347) called with curMem=175829, maxMem=504511856
16-09-16 14:04:52 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.3 KB, free 481.0 MB)
16-09-16 14:04:52 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:51192 (size: 2.3 KB, free: 481.1 MB)
16-09-16 14:04:52 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:861
16-09-16 14:04:52 INFO DAGScheduler: Submitting 2 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[4] at map at SogouResult.scala:19)
16-09-16 14:04:52 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
16-09-16 14:04:52 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, ANY, 2135 bytes)
16-09-16 14:04:53 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
16-09-16 14:04:53 INFO HadoopRDD: Input split: hdfs://192.168.0.3:9000/input/SogouQ1:0+134217728
16-09-16 14:04:53 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
16-09-16 14:04:53 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
16-09-16 14:04:53 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
16-09-16 14:04:53 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
16-09-16 14:04:53 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
16-09-16 14:04:54 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums(IILjava/nio/ByteBuffer;ILjava/nio/ByteBuffer;IILjava/lang/String;JZ)V
at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums(Native Method)
at org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:59)
at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:301)
at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:216)
at org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:146)
at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:693)
at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:749)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:806)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:847)
at java.io.DataInputStream.read(DataInputStream.java:100)
at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
at org.apache.hadoop.mapred.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:206)
at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:244)
at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:47)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:248)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:216)
at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:203)
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
16-09-16 14:04:54 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-0,5,main]
java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums(IILjava/nio/ByteBuffer;ILjava/nio/ByteBuffer;IILjava/lang/String;JZ)V
at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums(Native Method)
at org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:59)
at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:301)
at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:216)
at org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:146)
at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:693)
at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:749)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:806)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:847)
at java.io.DataInputStream.read(DataInputStream.java:100)
at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
at org.apache.hadoop.mapred.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:206)
at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:244)
at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:47)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:248)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:216)
at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:203)
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
16-09-16 14:04:54 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, ANY, 2135 bytes)
16-09-16 14:04:54 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
16-09-16 14:04:54 INFO SparkContext: Invoking stop() from shutdown hook
16-09-16 14:04:54 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums(IILjava/nio/ByteBuffer;ILjava/nio/ByteBuffer;IILjava/lang/String;JZ)V
at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums(Native Method)
at org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:59)
at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:301)
at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:216)
at org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:146)
at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:693)
at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:749)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:806)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:847)
at java.io.DataInputStream.read(DataInputStream.java:100)
at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
at org.apache.hadoop.mapred.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:206)
at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:244)
at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:47)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:248)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:216)
at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:203)
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
16-09-16 14:04:54 INFO HadoopRDD: Input split: hdfs://192.168.0.3:9000/input/SogouQ1:134217728+17788332
16-09-16 14:04:54 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
16-09-16 14:04:54 ERROR Executor: Exception in task 1.0 in stage 0.0 (TID 1)
java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums(IILjava/nio/ByteBuffer;ILjava/nio/ByteBuffer;IILjava/lang/String;JZ)V
At org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums (Native Method)
At org.apache.hadoop.util.NativeCrc32.verifyChunkedSums (NativeCrc32.java:59)
At org.apache.hadoop.util.DataChecksum.verifyChunkedSums (DataChecksum.java:301)
At org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket (RemoteBlockReader2.java:216)
At org.apache.hadoop.hdfs.RemoteBlockReader2.read (RemoteBlockReader2.java:146)
At org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead (DFSInputStream.java:693)
At org.apache.hadoop.hdfs.DFSInputStream.readBuffer (DFSInputStream.java:749)
At org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy (DFSInputStream.java:806)
At org.apache.hadoop.hdfs.DFSInputStream.read (DFSInputStream.java:847)
At java.io.DataInputStream.read (DataInputStream.java:100)
At org.apache.hadoop.util.LineReader.fillBuffer (LineReader.java:180)
At org.apache.hadoop.util.LineReader.readDefaultLine (LineReader.java:216)
At org.apache.hadoop.util.LineReader.readLine (LineReader.java:174)
At org.apache.hadoop.mapred.LineRecordReader.&lt;init&gt; (LineRecordReader.java:134)
At org.apache.hadoop.mapred.TextInputFormat.getRecordReader (TextInputFormat.java:67)
At org.apache.spark.rdd.HadoopRDD$$anon$1.&lt;init&gt; (HadoopRDD.scala:239)
At org.apache.spark.rdd.HadoopRDD.compute (HadoopRDD.scala:216)
At org.apache.spark.rdd.HadoopRDD.compute (HadoopRDD.scala:101)
At org.apache.spark.rdd.RDD.computeOrReadCheckpoint (RDD.scala:297)
At org.apache.spark.rdd.RDD.iterator (RDD.scala:264)
At org.apache.spark.rdd.MapPartitionsRDD.compute (MapPartitionsRDD.scala:38)
At org.apache.spark.rdd.RDD.computeOrReadCheckpoint (RDD.scala:297)
At org.apache.spark.rdd.RDD.iterator (RDD.scala:264)
At org.apache.spark.rdd.MapPartitionsRDD.compute (MapPartitionsRDD.scala:38)
At org.apache.spark.rdd.RDD.computeOrReadCheckpoint (RDD.scala:297)
At org.apache.spark.rdd.RDD.iterator (RDD.scala:264)
At org.apache.spark.rdd.MapPartitionsRDD.compute (MapPartitionsRDD.scala:38)
At org.apache.spark.rdd.RDD.computeOrReadCheckpoint (RDD.scala:297)
At org.apache.spark.rdd.RDD.iterator (RDD.scala:264)
At org.apache.spark.rdd.MapPartitionsRDD.compute (MapPartitionsRDD.scala:38)
At org.apache.spark.rdd.RDD.computeOrReadCheckpoint (RDD.scala:297)
At org.apache.spark.rdd.RDD.iterator (RDD.scala:264)
At org.apache.spark.scheduler.ShuffleMapTask.runTask (ShuffleMapTask.scala:73)
At org.apache.spark.scheduler.ShuffleMapTask.runTask (ShuffleMapTask.scala:41)
At org.apache.spark.scheduler.Task.run (Task.scala:88)
At org.apache.spark.executor.Executor$TaskRunner.run (Executor.scala:214)
At java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1145)
At java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:615)
At java.lang.Thread.run (Thread.java:745)
16-09-16 14:04:54 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread [Executor task launch worker-1,5,main]
java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums(IILjava/nio/ByteBuffer;ILjava/nio/ByteBuffer;IILjava/lang/String;JZ)V
At org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums (Native Method)
... (the remaining stack frames are identical to the trace above)
16-09-16 14:04:54 INFO TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1) on executor localhost: java.lang.UnsatisfiedLinkError (org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums(IILjava/nio/ByteBuffer;ILjava/nio/ByteBuffer;IILjava/lang/String;JZ)V) [duplicate 1]
16-09-16 14:04:54 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16-09-16 14:04:54 INFO TaskSchedulerImpl: Cancelling stage 0
16-09-16 14:04:54 INFO SparkUI: Stopped Spark web UI at http://192.168.0.2:4040
16-09-16 14:04:54 INFO DAGScheduler: ShuffleMapStage 0 (map at SogouResult.scala:19) failed in 1.350 s
16-09-16 14:04:54 INFO DAGScheduler: Job 0 failed: sortByKey at SogouResult.scala:19, took 1.693803 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums (IILjava/nio/ByteBuffer;ILjava/nio/ByteBuffer;IILjava/lang/String;JZ) V
At org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums (Native Method)
... (the remaining stack frames are identical to the first trace above)
Driver stacktrace:
At org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages (DAGScheduler.scala:1280)
At org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply (DAGScheduler.scala:1268)
At org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply (DAGScheduler.scala:1267)
At scala.collection.mutable.ResizableArray$class.foreach (ResizableArray.scala:59)
At scala.collection.mutable.ArrayBuffer.foreach (ArrayBuffer.scala:47)
At org.apache.spark.scheduler.DAGScheduler.abortStage (DAGScheduler.scala:1267)
At org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply (DAGScheduler.scala:697)
At org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply (DAGScheduler.scala:697)
At scala.Option.foreach (Option.scala:236)
At org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed (DAGScheduler.scala:697)
At org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive (DAGScheduler.scala:1493)
At org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive (DAGScheduler.scala:1455)
At org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive (DAGScheduler.scala:1444)
At org.apache.spark.util.EventLoop$$anon$1.run (EventLoop.scala:48)
At org.apache.spark.scheduler.DAGScheduler.runJob (DAGScheduler.scala:567)
At org.apache.spark.SparkContext.runJob (SparkContext.scala:1813)
At org.apache.spark.SparkContext.runJob (SparkContext.scala:1826)
At org.apache.spark.SparkContext.runJob (SparkContext.scala:1839)
At org.apache.spark.SparkContext.runJob (SparkContext.scala:1910)
At org.apache.spark.rdd.RDD$$anonfun$collect$1.apply (RDD.scala:905)
At org.apache.spark.rdd.RDDOperationScope$.withScope (RDDOperationScope.scala:147)
At org.apache.spark.rdd.RDDOperationScope$.withScope (RDDOperationScope.scala:108)
At org.apache.spark.rdd.RDD.withScope (RDD.scala:306)
At org.apache.spark.rdd.RDD.collect (RDD.scala:904)
At org.apache.spark.RangePartitioner$.sketch (Partitioner.scala:264)
At org.apache.spark.RangePartitioner.&lt;init&gt; (Partitioner.scala:126)
At org.apache.spark.rdd.OrderedRDDFunctions$$anonfun$sortByKey$1.apply (OrderedRDDFunctions.scala:62)
At org.apache.spark.rdd.OrderedRDDFunctions$$anonfun$sortByKey$1.apply (OrderedRDDFunctions.scala:61)
At org.apache.spark.rdd.RDDOperationScope$.withScope (RDDOperationScope.scala:147)
At org.apache.spark.rdd.RDDOperationScope$.withScope (RDDOperationScope.scala:108)
At org.apache.spark.rdd.RDD.withScope (RDD.scala:306)
At org.apache.spark.rdd.OrderedRDDFunctions.sortByKey (OrderedRDDFunctions.scala:61)
At main.scala.SogouResult$.main (SogouResult.scala:19)
At main.scala.SogouResult.main (SogouResult.scala)
At sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
At sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:57)
At sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
At java.lang.reflect.Method.invoke (Method.java:606)
At com.intellij.rt.execution.application.AppMain.main (AppMain.java:144)
Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums (IILjava/nio/ByteBuffer;ILjava/nio/ByteBuffer;IILjava/lang/String;JZ) V
At org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums (Native Method)
... (the remaining stack frames are identical to the first trace above)
16-09-16 14:04:54 INFO DAGScheduler: Stopping DAGScheduler
16-09-16 14:04:54 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16-09-16 14:04:54 INFO MemoryStore: MemoryStore cleared
16-09-16 14:04:54 INFO BlockManager: BlockManager stopped
16-09-16 14:04:54 INFO BlockManagerMaster: BlockManagerMaster stopped
16-09-16 14:04:54 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16-09-16 14:04:54 INFO SparkContext: Successfully stopped SparkContext
16-09-16 14:04:54 INFO ShutdownHookManager: Shutdown hook called
16-09-16 14:04:54 INFO ShutdownHookManager: Deleting directory C:\Users\danger\AppData\Local\Temp\spark-0d6662f5-0bfa-4e6f-a256-c97bc6ce5f47
16-09-16 14:04:54 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16-09-16 14:04:54 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
Process finished with exit code 50
Hehe, at least it shows some progress:
16-09-16 14:04:54 INFO HadoopRDD: Input split: hdfs://192.168.0.3:9000/input/SogouQ1:134217728+17788332
16-09-16 14:04:54 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
The key error continues as follows:
Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums(IILjava/nio/ByteBuffer;ILjava/nio/ByteBuffer;IILjava/lang/String;JZ)V
Searching Baidu turned up this post:
http://blog.csdn.net/glad_xiao/article/details/48825391
It points to the official Hadoop issue HADOOP-11064. From the issue's description, this error comes from a mismatch between the Spark version and the Hadoop version, and it typically hits people who download the precompiled binary packages from the Spark official website.
So there are two solutions:
1. Re-download a Spark build precompiled against the matching Hadoop version and reconfigure it
2. Download the Spark source code from the official website and compile it against the locally installed Hadoop version (after all, configuring Spark is much easier than configuring Hadoop)
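The error class itself is worth understanding: the JVM resolves a native method only when it is first invoked, so a missing or mismatched hadoop.dll surfaces deep inside a running task rather than at startup. A minimal, Hadoop-free sketch of the same failure mode (illustration only, not Hadoop code):

```java
public class NativeLinkDemo {
    // No System.loadLibrary(...) anywhere, so calling this throws
    // java.lang.UnsatisfiedLinkError at invocation time, just like
    // NativeCrc32.nativeComputeChunkedSums does when hadoop.dll is
    // missing or built for a different Hadoop version.
    private static native int nativeChecksum(byte[] data);

    static String trigger() {
        try {
            nativeChecksum(new byte[] {1, 2, 3});
            return "no error";
        } catch (UnsatisfiedLinkError e) {
            return e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(trigger()); // prints "UnsatisfiedLinkError"
    }
}
```

This is also why the job got as far as scheduling tasks and reading input splits before failing: nothing touches the native CRC32 code until the first block checksum is verified.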
I then read another article:
http://www.cnblogs.com/marost/p/4372778.html
It also mentions that the native libraries from hadoop 2.6.4 onward are not compatible with earlier versions, so I went straight to CSDN and downloaded
http://download.csdn.net/detail/ylhlly/9485201
a Hadoop 2.6 plugin package (hadoop.dll, winutils.exe) for the 64-bit Windows platform.
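Before replacing the binaries, it helps to confirm where the Hadoop client will look for them. Here is a small self-check sketch of my own (not from either article); the %HADOOP_HOME%\bin layout it assumes is the usual Hadoop-on-Windows convention:

```java
import java.io.File;

public class WinutilsCheck {
    // Returns a human-readable diagnosis of the native-binary setup.
    static String diagnose(String hadoopHome) {
        if (hadoopHome == null || hadoopHome.isEmpty()) {
            return "HADOOP_HOME is not set";
        }
        File bin = new File(hadoopHome, "bin");
        // winutils.exe and hadoop.dll are the two files the CSDN
        // plugin package replaces.
        for (String name : new String[] {"winutils.exe", "hadoop.dll"}) {
            File f = new File(bin, name);
            if (!f.exists()) {
                return "missing " + f.getPath();
            }
        }
        return "native binaries found under " + bin.getPath();
    }

    public static void main(String[] args) {
        System.out.println(diagnose(System.getenv("HADOOP_HOME")));
    }
}
```

If HADOOP_HOME is unset or the files are absent, the check tells you before Spark fails mid-task.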
After replacing the two files and running again, a new error appeared:
Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: user=danger, access=WRITE, inode="/output":dyq:supergroup:drwxr-xr-x
At org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission (FSPermissionChecker.java:271)
At org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check (FSPermissionChecker.java:257)
At org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check (FSPermissionChecker.java:238)
At org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission (FSPermissionChecker.java:179)
At org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission (FSNamesystem.java:6545)
At org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission (FSNamesystem.java:6527)
At org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess (FSNamesystem.java:6479)
At org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal (FSNamesystem.java:4290)
At org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt (FSNamesystem.java:4260)
Back to Baidu. The first suggestion was to loosen the HDFS permissions:
dyq@ubuntu:/opt/hadoop-2.6.4$ hadoop fs -chmod 777 /input
dyq@ubuntu:/opt/hadoop-2.6.4$ hadoop fs -chmod 777 /output
Still no luck, so I kept looking.
http://www.cnblogs.com/fang-s/p/3777784.html
This problem occurs when using Eclipse locally to write a file to HDFS.
Solution: add the following property to hdfs-site.xml:
<property>
  <name>dfs.permissions</name>
  <value>false</value>
  <description>If "true", enable permission checking in HDFS. If "false", permission checking is turned off, but all other behavior is unchanged. Switching from one parameter value to the other does not change the mode, owner or group of files or directories.</description>
</property>
Restart HDFS after the modification.
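Disabling dfs.permissions works, but it turns off checking for the whole cluster. A less invasive workaround that is commonly suggested (I did not use it here) is to have the Windows client identify itself as the directory's owner ("dyq" in the error above); the Hadoop client picks the remote user up from the HADOOP_USER_NAME environment variable or system property:

```java
public class HadoopUserDemo {
    // Sets the identity the Hadoop client will present to HDFS.
    // This must happen before the first FileSystem/SparkContext is
    // created, because the user is resolved and cached on first use.
    static String effectiveUser() {
        System.setProperty("HADOOP_USER_NAME", "dyq");
        return System.getProperty("HADOOP_USER_NAME");
    }

    public static void main(String[] args) {
        System.out.println("Will talk to HDFS as: " + effectiveUser());
    }
}
```

Note this only applies to clusters using simple (non-Kerberos) authentication, which is the default setup described in this post.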
I finally saw it:
Process finished with exit code 0
http://192.168.0.3:50070/explorer.html#/output
Checking Hadoop's file system shows that the output sogou2 was generated successfully.
Haha, it took a whole day to fix.
It seems the cost of getting into big data is really high.