Example Analysis of hive executing spark Task

2025-04-05 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article walks through an example of executing a Spark task from Hive-backed Java code. The content is straightforward and easy to follow; hopefully it resolves some doubts as we study the example together.

public static void main(String[] args) throws Exception {
    if (args.length < 1) {
        System.err.println("Usage: JavaWordCount");
        System.out.println("example: ./bin/spark-submit --name \"WorktrendJob\""
                + " --master spark://192.168.0.61:7077 --executor-memory 1G"
                + " --class et.theme.vis.job.WorktrendJob spark-1.jar"
                + " /data/china/china.txt file:///data/china");
        System.exit(1);
    }

    SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
    DATE_MATH = sdf.format(new Date());
    System.out.println("--------- WorktrendJob ---------");
    System.out.println("--------------------------------");
    System.out.println("----- spark starts calculating -----");

    // job name
    SparkConf sparkConf = new SparkConf().setAppName("MyCustomerJob");
    // spark connection
    JavaSparkContext ctx = new JavaSparkContext(sparkConf);
    // create a hive connection
    HiveContext hiveContext = new HiveContext(ctx);
    // mysql configuration
    Properties connectionProperties = new Properties();
    connectionProperties.setProperty("user", MYSQL_USER);
    connectionProperties.setProperty("password", MYSQL_PASSWORD);
    // query all
    DataFrame queryall = queryAll(hiveContext, null);
    // register a temporary table
    queryall.registerTempTable("first");
    // query against the temporary table, calculation 1
    String sql = "";
    // query, calculation 2
    String sql1 = "";
    // run the queries; the results come back as DataFrames
    DataFrame sql_a = hiveContext.sql(sql);
    DataFrame sql_b = hiveContext.sql(sql1);
    // merge the two DataFrames, equivalent to a left join
    DataFrame join = sql_a.join(sql_b, sql_b.col(DATE_END).equalTo(sql_a.col(DATE_END)), "left_outer");
    // write the result into a mysql table
    sql_a.write().mode(SaveMode.Append).jdbc(MYSQL_JDBC_URL, "test", connectionProperties);
    // close
    ctx.stop();
}
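The date-stamp and JDBC-credential steps in main() use only plain JDK classes, so they can be sketched and checked in isolation, with no Spark cluster required. The user name and password values below are placeholder assumptions, not values from the original job:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Properties;

public class JdbcSetupSketch {
    public static void main(String[] args) {
        // same pattern as the job: format today's date as yyyy-MM-dd
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
        String dateMath = sdf.format(new Date());
        System.out.println("date=" + dateMath);

        // JDBC credentials as consumed by DataFrameWriter.jdbc();
        // the keys must be "user" and "password" (values are assumed placeholders)
        Properties connectionProperties = new Properties();
        connectionProperties.setProperty("user", "hive_user");     // assumed
        connectionProperties.setProperty("password", "hive_pass"); // assumed
        System.out.println("user=" + connectionProperties.getProperty("user"));
    }
}
```

Note that the original listing passed an empty string as the property key for the password; the JDBC driver only picks up the credential when the key is literally "password".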

public static DataFrame queryAll(HiveContext hiveContext, String arg) {
    String sql = "";
    DataFrame queryAll = hiveContext.sql(sql);
    // convert the query result into an RDD
    JavaRDD<CustomerInfo> name = queryAll.javaRDD().map(new Function<Row, CustomerInfo>() {
        @Override
        public CustomerInfo call(Row v1) throws Exception {
            // copy the row fields into the vo class
            CustomerInfo customerInfo = new CustomerInfo();
            customerInfo.setCity(v1.getString(0));
            return customerInfo;
        }
    });
    // convert the vo RDD back into a DataFrame and return it
    DataFrame df = hiveContext.createDataFrame(name, CustomerInfo.class);
    return df;
}
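The "left_outer" join in main() keeps every row of the left DataFrame and fills nulls where the right side has no matching key. That semantics can be illustrated with plain Java maps; the dates and numbers below are invented sample data, not output from the job:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LeftJoinSketch {
    public static void main(String[] args) {
        // left side: date -> metric A (invented sample data)
        Map<String, Integer> left = new LinkedHashMap<>();
        left.put("2016-01-01", 10);
        left.put("2016-01-02", 20);

        // right side: date -> metric B; note 2016-01-02 has no match
        Map<String, Integer> right = new LinkedHashMap<>();
        right.put("2016-01-01", 7);

        // left outer join: every left key survives;
        // an unmatched right value becomes null
        Map<String, Integer[]> joined = new LinkedHashMap<>();
        for (Map.Entry<String, Integer> e : left.entrySet()) {
            Integer b = right.get(e.getKey()); // null when unmatched
            joined.put(e.getKey(), new Integer[] { e.getValue(), b });
        }
        System.out.println(java.util.Arrays.toString(joined.get("2016-01-01")));
    }
}
```

This is the same behavior the Spark call produces at DataFrame scale: rows of sql_a always survive, and sql_b's columns are null where DATE_END finds no match.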

That covers the full content of "Example Analysis of hive executing spark Tasks". Thank you for reading! I hope the material shared here has been helpful; if you would like to learn more, welcome to follow the industry information channel.
