

What to do when Spark reports java.lang.ClassNotFoundException while reading an LZO compressed file?




In this issue, the editor looks at what to do when Spark reports java.lang.ClassNotFoundException while reading an LZO compressed file. The article gives a detailed, professional analysis of the problem; I hope you get something out of it after reading.

Well, it's a problem I had never paid attention to before, but let's write it down today.

Configuration information

Hadoop core-site.xml configuration

<property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.LzmaCodec</value>
</property>
<property>
    <name>io.compression.codec.lzo.class</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>

So io.compression.codecs lists the LZO codecs, and io.compression.codec.lzo.class is set to com.hadoop.compression.lzo.LzoCodec.
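For reference, here is a minimal Scala sketch of mine (assuming core-site.xml is on the classpath, e.g. inside spark-shell) of the very call that fails in the stack trace further down: Hadoop's CompressionCodecFactory.getCodecClasses resolves every class name listed in io.compression.codecs and errors out as soon as one of them cannot be loaded.

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.compress.CompressionCodecFactory

val conf = new Configuration()  // picks up core-site.xml from the classpath
// Throws IllegalArgumentException("Compression codec ... not found"), wrapping a
// ClassNotFoundException, if any listed codec class is missing from the classpath.
val codecClasses = CompressionCodecFactory.getCodecClasses(conf)
println(codecClasses)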

spark-env.sh configuration

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/cluster/apps/hadoop/lib/native
export SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:/home/cluster/apps/hadoop/lib/native
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/cluster/apps/hadoop/share/hadoop/yarn/:/home/cluster/apps/hadoop/share/hadoop/yarn/lib/:/home/cluster/apps/hadoop/share/hadoop/common/:/home/cluster/apps/hadoop/share/hadoop/common/lib/:/home/cluster/apps/hadoop/share/hadoop/hdfs/:/home/cluster/apps/hadoop/share/hadoop/hdfs/lib/:/home/cluster/apps/hadoop/share/hadoop/mapreduce/:/home/cluster/apps/hadoop/share/hadoop/mapreduce/lib/:/home/cluster/apps/hadoop/share/hadoop/tools/lib/:/home/cluster/apps/spark/spark-1.4.1/lib/

Operation information

Start spark-shell

Execute the following code

val lzoFile = sc.textFile("/tmp/data/lzo/part-m-00000.lzo")
lzoFile.count

Specific error information:

java.lang.RuntimeException: Error in configuring object
    at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
    at org.apache.spark.rdd.HadoopRDD.getInputFormat(HadoopRDD.scala:190)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:203)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1781)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:885)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:884)
    at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:105)
    at org.apache.spark.sql.hive.HiveContext$QueryExecution.stringResult(HiveContext.scala:503)
    at org.apache.spark.sql.hive.thriftserver.AbstractSparkSQLDriver.run(AbstractSparkSQLDriver.scala:58)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:283)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:218)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:665)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:170)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:193)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
    ... 45 more
Caused by: java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.
    at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:135)
    at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:175)
    at org.apache.hadoop.mapred.TextInputFormat.configure(TextInputFormat.java:45)
    ... 50 more
Caused by: java.lang.ClassNotFoundException: Class com.hadoop.compression.lzo.LzoCodec not found
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1803)
    at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:128)
    ... 52 more

Then how to solve it?

At first I was a little suspicious of the format of the hadoop core-site.xml configuration, so I asked a colleague to trace through the hadoop source code for me. That confirmed it was not a hadoop problem.

Then I thought about it: I had run into a similar problem before, and this is how I had configured spark-env.sh back then.

export SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:/home/stark_summer/opt/hadoop/hadoop-2.3.0-cdh6.1.0/lib/native/Linux-amd64-64/*:/home/stark_summer/opt/hadoop/hadoop-2.3.0-cdh6.1.0/share/hadoop/common/hadoop-lzo-0.4.15-cdh6.1.0.jar:/home/stark_summer/opt/spark/spark-1.3.1-bin-hadoop2.3/lib/*
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/stark_summer/opt/hadoop/hadoop-2.3.0-cdh6.1.0/share/hadoop/common/hadoop-lzo-0.4.15-cdh6.1.0.jar:/home/stark_summer/opt/spark/spark-1.3.1-bin-hadoop2.3/lib/*

That configuration was the fix from back then, but it was so long ago that I had already forgotten it. This is the advantage of blogging regularly: every problem you run into gets recorded.

Huh? Specifying an individual .jar file works fine, but to pull in all the jars under a directory, does Spark really require the * wildcard? Well, this really is different from hadoop: in hadoop, to add every jar package under a directory you simply write /xxx/yyy/lib/.

Spark, however, requires /xxx/yyy/lib/* to load the jars in that directory; otherwise you get the error above.
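To see whether the wildcard actually took effect, one quick check (a sketch of mine, assuming a Java 7/8-era JVM such as the ones Spark 1.x ran on, where the system class loader is a URLClassLoader) is to list the driver's classpath entries from inside spark-shell:

// Lists every classpath URL the driver's system class loader knows about,
// then keeps only the entries mentioning "lzo"; the hadoop-lzo jar should show up.
val urls = ClassLoader.getSystemClassLoader.asInstanceOf[java.net.URLClassLoader].getURLs
urls.filter(_.toString.contains("lzo")).foreach(println)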

Modified spark-env.sh configuration file:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/cluster/apps/hadoop/lib/native
export SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:/home/cluster/apps/hadoop/lib/native
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/cluster/apps/hadoop/share/hadoop/yarn/*:/home/cluster/apps/hadoop/share/hadoop/yarn/lib/*:/home/cluster/apps/hadoop/share/hadoop/common/*:/home/cluster/apps/hadoop/share/hadoop/common/lib/*:/home/cluster/apps/hadoop/share/hadoop/hdfs/*:/home/cluster/apps/hadoop/share/hadoop/hdfs/lib/*:/home/cluster/apps/hadoop/share/hadoop/mapreduce/*:/home/cluster/apps/hadoop/share/hadoop/mapreduce/lib/*:/home/cluster/apps/hadoop/share/hadoop/tools/lib/*:/home/cluster/apps/spark/spark-1.4.1/lib/*

Executing the code above again now completes without any problem.
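As a sanity check, a minimal sketch in spark-shell (using the same file path as above) confirms the codec class now resolves on the driver before any job runs:

// Throws ClassNotFoundException if the hadoop-lzo jar is still missing from the driver classpath.
Class.forName("com.hadoop.compression.lzo.LzoCodec")
val lzoFile = sc.textFile("/tmp/data/lzo/part-m-00000.lzo")
println(lzoFile.count())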

But...

If I change /home/cluster/apps/hadoop/lib/native to /home/cluster/apps/hadoop/lib/native/*:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/cluster/apps/hadoop/lib/native/*
export SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:/home/cluster/apps/hadoop/lib/native/*
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/cluster/apps/hadoop/share/hadoop/yarn/*:/home/cluster/apps/hadoop/share/hadoop/yarn/lib/*:/home/cluster/apps/hadoop/share/hadoop/common/*:/home/cluster/apps/hadoop/share/hadoop/common/lib/*:/home/cluster/apps/hadoop/share/hadoop/hdfs/*:/home/cluster/apps/hadoop/share/hadoop/hdfs/lib/*:/home/cluster/apps/hadoop/share/hadoop/mapreduce/*:/home/cluster/apps/hadoop/share/hadoop/mapreduce/lib/*:/home/cluster/apps/hadoop/share/hadoop/tools/lib/*:/home/cluster/apps/spark/spark-1.4.1/lib/*

Then it reports an error as follows:

spark.repl.class.uri=http://10.32.24.78:52753
error [Ljava.lang.StackTraceElement;@4efb0b1f
2015-09-11 17:52:02,357 ERROR [main] spark.SparkContext (Logging.scala:logError(96)) - Error initializing SparkContext.
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:68)
    at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
    at org.apache.spark.scheduler.EventLoggingListener.<init>(EventLoggingListener.scala:69)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:513)
    at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
    at $line3.$read$$iwC$$iwC.<init>(<console>:9)
    at $line3.$read$$iwC.<init>(<console>:18)
    at $line3.$read.<init>(<console>:20)
    at $line3.$read$.<init>(<console>:24)
    at $line3.$read$.<clinit>(<console>)
    at $line3.$eval$.<init>(<console>:7)
    at $line3.$eval$.<clinit>(<console>)
    at $line3.$eval.$print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:123)
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:122)
    at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
    at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:122)
    at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
    at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:157)
    at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
    at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:106)
    at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
    at org.apache.spark.repl.Main$.main(Main.scala:31)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:665)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:170)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:193)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.IllegalArgumentException
    at org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:155)
    ... 56 more

This time the failure is in Spark's own compression setup (SnappyCompressionCodec fails to initialize), presumably because LD_LIBRARY_PATH and SPARK_LIBRARY_PATH expect plain directories for the loader to search rather than glob patterns, so appending /* keeps the native libraries from being found.

The above is the editor's walkthrough of what to do when Spark reports java.lang.ClassNotFoundException while reading an LZO compressed file. If you happen to have similar doubts, you may refer to the analysis above. If you want to learn more, you are welcome to follow the industry information channel.



