2025-04-25 Update From: SLTechnology News&Howtos
Lab environment: Linux CentOS 6.7 on a VMware virtual machine
Spark-1.5.1-bin-hadoop-2.1.0
apache-hive-1.2.1
IDE: Eclipse or IntelliJ IDEA (Eclipse is used here)
Code:
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.hive.HiveContext;

public class SparkOnHiveDemo {
    public static void main(String[] args) {
        // First create the SparkConf
        SparkConf conf = new SparkConf().setAppName("HiveDataSource");
        // Create the JavaSparkContext
        JavaSparkContext sc = new JavaSparkContext(conf);
        // Create the HiveContext. Note that it takes a SparkContext as its
        // parameter, not a JavaSparkContext
        HiveContext hiveContext = new HiveContext(sc.sc());

        // HiveContext.sql(...) executes HiveQL statements.
        // 1. Drop and (re)create the tables stars_infos and stars_scores
        hiveContext.sql("DROP TABLE IF EXISTS stars_infos");
        hiveContext.sql("CREATE TABLE IF NOT EXISTS stars_infos (name STRING, age INT) "
                + "row format delimited fields terminated by ','");
        // 2. Load data into the tables
        hiveContext.sql("LOAD DATA LOCAL INPATH '/root/book/stars_infos.txt' "
                + "INTO TABLE stars_infos");
        hiveContext.sql("DROP TABLE IF EXISTS stars_scores");
        hiveContext.sql("CREATE TABLE IF NOT EXISTS stars_scores (name STRING, score INT) "
                + "row format delimited fields terminated by ','");
        hiveContext.sql("LOAD DATA LOCAL INPATH '/root/book/stars_score.txt' "
                + "INTO TABLE stars_scores");

        // 3. Query the existing Hive tables; the result comes back as a DataFrame
        DataFrame superStarDataFrame = hiveContext.sql(
                "SELECT si.name, si.age, ss.score "
                + "FROM stars_infos si "
                + "JOIN stars_scores ss ON si.name = ss.name "
                + "WHERE ss.score >= 90");

        // 4. Persist the DataFrame into Hive as a table.
        //    Do not confuse this with registerTempTable
        hiveContext.sql("DROP TABLE IF EXISTS superStar");
        superStarDataFrame.saveAsTable("superStar");

        // 5. Read the table back directly from Hive and show it
        hiveContext.table("superStar").show();
        sc.close();
    }
}
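As an aside, the semantics of the join in step 3 can be illustrated without a cluster. The following plain-Java sketch (sample rows are hypothetical; no Spark dependency) parses comma-delimited lines in the same format as stars_infos.txt and stars_score.txt, then applies the same join-on-name and score filter as the HiveQL query:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class JoinSketch {
    // Inner-join stars_infos rows with stars_scores rows on name,
    // keeping only scores >= 90, mirroring the HiveQL query above.
    public static List<String> joinAndFilter(String[] infos, String[] scores) {
        // Build a name -> age lookup from the stars_infos rows
        Map<String, Integer> ageByName = new HashMap<>();
        for (String line : infos) {
            String[] f = line.split(",");
            ageByName.put(f[0], Integer.parseInt(f[1]));
        }
        // Keep only rows whose name also appears in stars_infos (inner join)
        // and whose score is at least 90 (the WHERE clause)
        List<String> result = new ArrayList<>();
        for (String line : scores) {
            String[] f = line.split(",");
            int score = Integer.parseInt(f[1]);
            Integer age = ageByName.get(f[0]);
            if (age != null && score >= 90) {
                result.add(f[0] + "," + age + "," + score);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Hypothetical sample rows in the comma-delimited input format
        String[] infos = {"Tom,25", "Jerry,30", "Spike,41"};
        String[] scores = {"Tom,95", "Jerry,88", "Spike,90"};
        for (String row : joinAndFilter(infos, scores)) {
            System.out.println(row); // prints "Tom,25,95" then "Spike,41,90"
        }
    }
}
```

Jerry is dropped by the score filter, which is exactly what the `WHERE ss.score >= 90` clause does in the Spark version.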
Metadata: you can download the attachment and upload it to the specified directory.
Package the program into a jar, upload it to the specified directory on the Linux machine, and write a launch script (a sample script is included in the attachment; adjust its contents as needed).
Then simply run the script. Before doing so, make sure the MySQL database and the Hive service are both working normally.
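For reference, a minimal launch script might look like the sketch below. All paths, the class name, and the master URL are assumptions for illustration; substitute the values for your own environment (the attachment contains the author's actual script):

```shell
#!/bin/bash
# Hypothetical launch script: adjust SPARK_HOME, jar path, class name,
# and master URL to match your cluster.
/usr/local/spark/bin/spark-submit \
  --class SparkOnHiveDemo \
  --master spark://master:7077 \
  --files /usr/local/hive/conf/hive-site.xml \
  /root/sparkOnHive.jar
```

Shipping hive-site.xml with `--files` is one common way to let the driver and executors reach the Hive metastore; if your metastore is backed by MySQL, the MySQL JDBC driver jar must also be on the classpath.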
Attachment: http://down.51cto.com/data/2366931