This article introduces how Spark SQL connects to and uses a MySQL data source. Many people have questions about this, so the steps below lay out a simple, tested approach; follow along and try it yourself.
Spark SQL can connect to a database through standard JDBC and use it as a data source.
import java.text.SimpleDateFormat;
import java.util.Date;

import org.apache.spark.SparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class SparkSql {
    public static SimpleDateFormat sdf = new SimpleDateFormat("_yyyyMMdd_HH_mm_ss");
    private static final String appName = "spark sql test";
    private static final String master = "spark://192.168.1.21:7077";
    private static final String JDBCURL = "jdbc:mysql://192.168.1.18:3306/lng?user=root&password=123456";

    public static void main(String[] args) {
        SparkContext context = new SparkContext(master, appName);
        SQLContext sqlContext = new SQLContext(context);

        // Create a DataFrame from the MySQL table "tsys_user" via JDBC.
        DataFrame df = sqlContext.read()
                .format("jdbc")
                .option("url", JDBCURL)
                .option("dbtable", "tsys_user")
                .load();

        // Print the schema of this DataFrame.
        df.printSchema();

        // Count rows grouped by the "customStyle" column.
        DataFrame countsByAge = df.groupBy("customStyle").count();
        countsByAge.show();

        // Save the result to HDFS in JSON format.
        countsByAge.write()
                .format("json")
                .save("hdfs://192.168.1.17:9000/administrator/sql-result" + sdf.format(new Date()));
    }
}
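Once the DataFrame is loaded, it can also be queried with plain SQL. The following is a minimal sketch (not part of the original example) that continues from the df and sqlContext above; the temporary table name simply mirrors the MySQL table, and customStyle is the column already used in the grouping:

// Register the JDBC-backed DataFrame as a temporary table so it can be queried with SQL.
df.registerTempTable("tsys_user");

// Run an ordinary SQL query against it; the result is again a DataFrame.
DataFrame countsBySql = sqlContext.sql(
        "SELECT customStyle, COUNT(*) AS cnt FROM tsys_user GROUP BY customStyle");
countsBySql.show();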
If the MySQL JDBC driver is not available to Spark, you will hit a "No suitable driver found" error; see http://stackoverflow.com/questions/34764505/no-suitable-driver-found-for-jdbc-in-spark
You might want to assemble your application with your build manager (Maven, SBT), so you do not need to add the dependencies in your spark-submit command line. (That is, package the MySQL driver into the jar that is submitted to Spark.)
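As a rough sketch, assuming a Maven build that already uses the shade or assembly plugin to produce a fat jar, bundling the driver only requires declaring it as an ordinary dependency (the 5.1.36 version matches the jar used below; adjust to your setup):

<!-- MySQL JDBC driver; bundled into the application jar so spark-submit needs no extra classpath flags -->
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.36</version>
</dependency>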
You can also use the following option in your spark-submit command line. (Changed to the command below; tested and working. Alternatively, add export SPARK_CLASSPATH=$SPARK_CLASSPATH:/usr/local/spark-1.6.1-bin-hadoop2.6/conf/driverLib/mysql-connector-java-5.1.36.jar to conf/spark-env.sh.)
spark-submit --driver-class-path /usr/local/spark-1.6.1-bin-hadoop2.6/conf/driverLib/mysql-connector-java-5.1.36.jar --class com.xxx.SparkSql /usr/local/spark.jar
Explanation: supposing you keep all your jars in a lib directory under your project root, this reads those libraries and adds them to the application at submit time.
You can also try configuring the two properties spark.driver.extraClassPath and spark.executor.extraClassPath in the SPARK_HOME/conf/spark-defaults.conf file, setting each to the path of the driver jar. Make sure the same path exists on the worker nodes. (Tested; this did not work for me.)
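For reference, a minimal sketch of what those two lines in SPARK_HOME/conf/spark-defaults.conf might look like, using the same jar path as above (the jar must exist at this path on the driver machine and on every worker):

spark.driver.extraClassPath    /usr/local/spark-1.6.1-bin-hadoop2.6/conf/driverLib/mysql-connector-java-5.1.36.jar
spark.executor.extraClassPath  /usr/local/spark-1.6.1-bin-hadoop2.6/conf/driverLib/mysql-connector-java-5.1.36.jar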
That concludes this look at how Spark SQL connects to and uses a MySQL data source. Pairing the theory with hands-on practice is the best way to learn, so go ahead and try it out.