Getting started:

object SparkSqlTest {
  def main(args: Array[String]): Unit = {
    // silence redundant logs
    Logger.getLogger("org.apache.hadoop").setLevel(Level.WARN)
    Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
    Logger.getLogger("org.project-spark").setLevel(Level.WARN)
    // build the programming entry point
    val conf: SparkConf = new SparkConf()
    conf.setAppName("SparkSqlTest").setMaster("local[2]")
    val spark: SparkSession = SparkSession.builder().config(conf).getOrCreate()
    /*
     * Note: since Spark 2.0, the primary constructors of
     *   val sqlContext = new SQLContext(sparkContext)
     *   val hiveContext = new HiveContext(sparkContext)
     * are private, so both can only be obtained through a SparkSession object.
     */
    // create the SQLContext object
    val sqlContext: SQLContext = spark.sqlContext
    // load data as a DataFrame; JSON data is loaded here
    // data format: {name:'', age:18}
    val perDF: DataFrame = sqlContext.read.json("hdfs://zzy/data/person.json")
    // view the two-dimensional table structure
    perDF.printSchema()
    // view the data (20 rows by default)
    perDF.show()
    // more complex queries
    perDF.select("name").show()                                        // query specific fields
    perDF.select(new Column("name"), new Column("age").>(18)).show()   // query with a condition expression
    perDF.select("name", "age").where(new Column("age").>(18)).show()  // query with a filter condition
    perDF.select("age").groupBy("age").avg("age")                      // aggregation
  }
}
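The same queries read more naturally with Spark's $-column syntax; a minimal sketch, assuming the spark session and perDF from the example above (the implicit import is what enables the $"col" interpolator):

// equivalent queries using $-columns (a sketch, not from the original)
import spark.implicits._

perDF.select($"name", $"age").where($"age" > 18).show()
perDF.groupBy($"age").avg("age").show()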
If the introductory case is not entirely clear, the following sections walk through it step by step:
(1) Converting between RDD / DataSet / DataFrame / List
There are two ways to convert an RDD or a collection to a DataFrame/DataSet:
- convert an RDD or external collection to a DataFrame/DataSet by reflection
- programmatically (dynamically) convert an external collection or RDD to a DataFrame or DataSet
Note: a DataFrame pairs with a Java bean, while a DataSet pairs with a case class.
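To make that pairing concrete, here is a hypothetical bean-style Student (not from the original) next to the case class used below: createDataFrame reflects on a bean's getters, while a Dataset encoder is derived from the case class fields.

// hypothetical Java-bean-style class for the DataFrame/reflection path
import scala.beans.BeanProperty

class StudentBean(@BeanProperty var name: String,
                  @BeanProperty var birthday: String,
                  @BeanProperty var province: String) extends Serializable {
  def this() = this(null, null, null) // bean reflection needs a no-arg constructor
}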
Converting an RDD or external collection to a DataFrame/DataSet by reflection
Data preparation:
case class Student(name: String, birthday: String, province: String)

val stuList = List(
  new Student("Wei xx", "1998-11-11", "Shanxi"),
  new Student("Wu xx", "1999-06-08", "Henan"),
  new Student("Qi xx", "2000-03-08", "Shandong"),
  new Student("Wang xx", "1997-07-09", "Anhui"),
  new Student("Xue xx", "2002-08-09", "Liaoning"))
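The snippets below also share a common set of imports that the original omits; roughly the following should be in scope (a sketch for Spark 2.x with log4j; exact packages can vary by version):

import java.util
import java.util.Properties

import org.apache.log4j.{Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Column, DataFrame, Dataset, Row, SQLContext, SaveMode, SparkSession}
import org.apache.spark.sql.types.{DataTypes, StructField, StructType}

import scala.collection.JavaConversions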
List --> DataFrame:
// silence redundant logs
Logger.getLogger("org.apache.hadoop").setLevel(Level.WARN)
Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
Logger.getLogger("org.project-spark").setLevel(Level.WARN)
// build the programming entry point
val conf: SparkConf = new SparkConf()
conf.setAppName("SparkSqlTest")
  .setMaster("local[2]")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array(classOf[Student])) // register the classes Kryo serializes
val spark: SparkSession = SparkSession.builder().config(conf).getOrCreate()
// create the SQLContext object
val sqlContext: SQLContext = spark.sqlContext
/*
 * List ---> DataFrame:
 * first convert the Scala collection to a Java collection
 */
val javaList: util.List[Student] = JavaConversions.seqAsJavaList(stuList)
val stuDF: DataFrame = sqlContext.createDataFrame(javaList, classOf[Student])
val count = stuDF.count()
println(count)
RDD --> DataFrame:
// silence redundant logs
Logger.getLogger("org.apache.hadoop").setLevel(Level.WARN)
Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
Logger.getLogger("org.project-spark").setLevel(Level.WARN)
// build the programming entry point
val conf: SparkConf = new SparkConf()
conf.setAppName("SparkSqlTest")
  .setMaster("local[2]")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array(classOf[Student]))
val spark: SparkSession = SparkSession.builder().config(conf).getOrCreate()
// create the SQLContext object
val sqlContext: SQLContext = spark.sqlContext
// create the SparkContext
val sc: SparkContext = spark.sparkContext
/*
 * RDD ---> DataFrame
 */
val stuRDD: RDD[Student] = sc.makeRDD(stuList)
val stuDF: DataFrame = sqlContext.createDataFrame(stuRDD, classOf[Student])
val count = stuDF.count()
println(count)
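A more idiomatic reflection-based route, sketched here as an alternative (the stuDF2/stuDS2 names are illustrative, not from the original): for an RDD of case-class instances, importing spark.implicits._ provides toDF()/toDS() directly.

import spark.implicits._

// note: Student must be defined at top level (outside the method) for the encoder to work
val stuDF2: DataFrame = stuRDD.toDF()         // column names come from the case class fields
val stuDS2: Dataset[Student] = stuRDD.toDS()  // typed Dataset via the implicit Encoder
stuDF2.printSchema()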
List --> DataSet:
// silence redundant logs
Logger.getLogger("org.apache.hadoop").setLevel(Level.WARN)
Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
Logger.getLogger("org.project-spark").setLevel(Level.WARN)
// build the programming entry point
val conf: SparkConf = new SparkConf()
conf.setAppName("SparkSqlTest")
  .setMaster("local[2]")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array(classOf[Student]))
val spark: SparkSession = SparkSession.builder().config(conf).getOrCreate()
// create the SQLContext object
val sqlContext: SQLContext = spark.sqlContext
// create the SparkContext
val sc: SparkContext = spark.sparkContext
/*
 * List ---> DataSet
 */
// creating a Dataset requires importing the implicit conversions below
import spark.implicits._
val stuDF: Dataset[Student] = sqlContext.createDataset(stuList)
stuDF.createTempView("student")
// query with a full SQL statement; with the reflection approach,
// only a Dataset can do this, not a DataFrame
val sql =
  """
    |select * from student
  """.stripMargin
spark.sql(sql).show()
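One advantage of the Dataset built above is that it keeps the Student type, so ordinary Scala functions can be used alongside SQL; a minimal sketch (the names/adults values are illustrative, not from the original):

val names: Dataset[String] = stuDF.map(_.name)        // field access checked at compile time
val adults = stuDF.filter(_.birthday < "2000-01-01")  // plain Scala predicate on the case class
names.show()
adults.show()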
RDD --> DataSet:
// silence redundant logs
Logger.getLogger("org.apache.hadoop").setLevel(Level.WARN)
Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
Logger.getLogger("org.project-spark").setLevel(Level.WARN)
// build the programming entry point
val conf: SparkConf = new SparkConf()
conf.setAppName("SparkSqlTest")
  .setMaster("local[2]")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array(classOf[Student]))
val spark: SparkSession = SparkSession.builder().config(conf).getOrCreate()
// create the SQLContext object
val sqlContext: SQLContext = spark.sqlContext
// create the SparkContext
val sc: SparkContext = spark.sparkContext
/*
 * RDD ---> DataSet
 */
// creating a Dataset requires importing the implicit conversions below
import spark.implicits._
val stuRDD: RDD[Student] = sc.makeRDD(stuList)
val stuDF: Dataset[Student] = sqlContext.createDataset(stuRDD)
stuDF.createTempView("student")
// query with a full SQL statement; with the reflection approach,
// only a Dataset can do this, not a DataFrame
val sql =
  """
    |select * from student
  """.stripMargin
spark.sql(sql).show()

Programmatically (dynamically) converting an external collection or RDD to a DataFrame or DataSet
List --> DataFrame:
// silence redundant logs
Logger.getLogger("org.apache.hadoop").setLevel(Level.WARN)
Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
Logger.getLogger("org.project-spark").setLevel(Level.WARN)
// build the programming entry point
val conf: SparkConf = new SparkConf()
conf.setAppName("SparkSqlTest")
  .setMaster("local[2]")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array(classOf[Student]))
val spark: SparkSession = SparkSession.builder().config(conf).getOrCreate()
// create the SQLContext object
val sqlContext: SQLContext = spark.sqlContext
// create the SparkContext
val sc: SparkContext = spark.sparkContext
// List ---> DataFrame
// 1. convert every element of the list to a Row
val RowList: List[Row] = stuList.map(item => Row(item.name, item.birthday, item.province))
// 2. build the metadata (schema)
val schema = StructType(List(
  StructField("name", DataTypes.StringType),
  StructField("birthday", DataTypes.StringType),
  StructField("province", DataTypes.StringType)))
// convert the Scala collection to a Java collection
val javaList = JavaConversions.seqAsJavaList(RowList)
val stuDF = spark.createDataFrame(javaList, schema)
stuDF.createTempView("student")
// query with a full SQL statement; with dynamic programming,
// both a Dataset and a DataFrame can do this
val sql =
  """
    |select * from student
  """.stripMargin
spark.sql(sql).show()
RDD --> DataFrame:
// silence redundant logs
Logger.getLogger("org.apache.hadoop").setLevel(Level.WARN)
Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
Logger.getLogger("org.project-spark").setLevel(Level.WARN)
// build the programming entry point
val conf: SparkConf = new SparkConf()
conf.setAppName("SparkSqlTest")
  .setMaster("local[2]")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array(classOf[Student]))
val spark: SparkSession = SparkSession.builder().config(conf).getOrCreate()
// create the SQLContext object
val sqlContext: SQLContext = spark.sqlContext
// create the SparkContext
val sc: SparkContext = spark.sparkContext
// RDD ---> DataFrame
// 1. convert every element of the RDD to a Row
val RowRDD: RDD[Row] = sc.makeRDD(stuList).map(item => Row(item.name, item.birthday, item.province))
// 2. build the metadata (schema)
val schema = StructType(List(
  StructField("name", DataTypes.StringType),
  StructField("birthday", DataTypes.StringType),
  StructField("province", DataTypes.StringType)))
val stuDF = spark.createDataFrame(RowRDD, schema)
stuDF.createTempView("student")
// query with a full SQL statement; with dynamic programming,
// both a Dataset and a DataFrame can do this
val sql =
  """
    |select * from student
  """.stripMargin
spark.sql(sql).show()
Since building a DataSet this way works exactly like building a DataFrame, it is not demonstrated separately here.
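For completeness, a minimal sketch of the Dataset version, assuming the RowRDD and schema from the previous snippet: build the DataFrame dynamically, then attach the Student type with as[...], which works when the schema's field names and types match the case class.

import spark.implicits._

val stuDS: Dataset[Student] = spark.createDataFrame(RowRDD, schema).as[Student]
stuDS.show()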
(2) How Spark SQL loads data

// silence redundant logs
Logger.getLogger("org.apache.hadoop").setLevel(Level.WARN)
Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
Logger.getLogger("org.project-spark").setLevel(Level.WARN)
// build the programming entry point
val conf: SparkConf = new SparkConf()
conf.setAppName("SparkSqlTest").setMaster("local[2]")
val spark: SparkSession = SparkSession.builder().config(conf).getOrCreate()
// create the SQLContext object
val sqlContext: SQLContext = spark.sqlContext
// create the SparkContext
val sc: SparkContext = spark.sparkContext
// load a parquet file (earlier-version API)
sqlContext.load("hdfs://zzy/hello.parquet")
// load JSON data
sqlContext.read.json("hdfs://zzy/hello.json")
// load a plain text file
sqlContext.read.text("hdfs://zzy/hello.txt")
// load a CSV file
sqlContext.read.csv("hdfs://zy/hello.csv")
// read data over JDBC
val url = "jdbc:mysql://localhost:3306/hello"
val properties = new Properties()
properties.setProperty("user", "root")
properties.setProperty("password", "123456")
val tableName = "book"
sqlContext.read.jdbc(url, tableName, properties)

(3) How Spark SQL lands (writes out) data

// silence redundant logs
Logger.getLogger("org.apache.hadoop").setLevel(Level.WARN)
Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
Logger.getLogger("org.project-spark").setLevel(Level.WARN)
// build the programming entry point
val conf: SparkConf = new SparkConf()
conf.setAppName("SparkSqlTest").setMaster("local[2]")
val spark: SparkSession = SparkSession.builder().config(conf).getOrCreate()
// create the SQLContext object
val sqlContext: SQLContext = spark.sqlContext
// create the SparkContext
val sc: SparkContext = spark.sparkContext
val testFD: DataFrame = sqlContext.read.text("hdfs://zzy/hello.txt")
// write to an ordinary file
testFD.write.format("json")      // output format
  .mode(SaveMode.Append)         // write mode
  .save("hdfs://zzy/hello.json") // output location
// write to a database
val url = "jdbc:mysql://localhost:3306/hello"
val table_name = "book"
val prots = new Properties()
prots.put("user", "root")
prots.put("password", "123456")
testFD.write.mode(SaveMode.Append).jdbc(url, table_name, prots)
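The write mode controls what happens when the target already exists; a short sketch of the four SaveMode options together with the generic format/load path, reusing the paths above (the round trip itself is illustrative, not from the original):

// SaveMode.Append        -- append new rows to any existing data
// SaveMode.Overwrite     -- replace existing data
// SaveMode.ErrorIfExists -- fail if the target already exists (the default)
// SaveMode.Ignore        -- silently skip the write if the target exists
val df = spark.read.format("json").load("hdfs://zzy/hello.json")
df.write.format("parquet").mode(SaveMode.Overwrite).save("hdfs://zzy/hello.parquet")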