2025-03-07 Update From: SLTechnology News&Howtos
This article explains how to handle the time dimension in Apache Spark data modeling, walking through the approach step by step for readers who are new to the topic.
Data modeling is one of the key components of data analysis; a correctly built model helps users answer business questions more effectively. Over the past few decades, data modeling techniques have also been the foundation of SQL data warehouses.
Apache Spark is representative of the new generation of data warehouse technology, and we can apply these established data modeling techniques in Spark, which makes Spark data pipelines more efficient. Below, I discuss the different data modeling techniques in Spark.
Multiple date columns
A single date column is common in many datasets, but some datasets need to be analyzed across multiple date columns. In that case, the strategy discussed in the previous article is not enough, so we need to extend the date dimension logic to accommodate multiple date columns.
Add the issue date to the stock data
The following code adds a date column named issue_date to the stock data to simulate a scenario with multiple dates.
val appleStockDfWithIssueDate = appleStockDf.withColumn("issue_date", add_months(appleStockDf("Date"), -12))
Now suppose the user wants to analyze the data both by the Date column, which represents the transaction date, and by the issue_date column, which indicates when a given stock was issued. To do so, we need multiple date dimensions.
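The examples in this article assume a date dimension DataFrame named dateDf, built as in the previous article. As a reminder, here is a minimal sketch of such a dimension; the column names full_date_formatted, year, and quarter are assumptions inferred from how they are used in the joins and aggregations later on:

```scala
import org.apache.spark.sql.functions._

// Minimal sketch of a date dimension (hypothetical; the previous article's
// version may have more attributes). Requires Spark 3.x for the
// date_add(Column, Column) overload.
val dateDf = sparkSession.range(0, 365 * 30)
  .select(date_add(lit("2000-01-01").cast("date"), col("id").cast("int"))
    .as("full_date_formatted"))
  .withColumn("year", year(col("full_date_formatted")))
  .withColumn("quarter", quarter(col("full_date_formatted")))
```

Each row is one calendar day, with derived attributes such as year and quarter that the stock data can be grouped by after joining.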
Date dimension with new prefix
To analyze multiple dates, we need to join the date dimension multiple times. We create a copy of the date dimension with a different column prefix so that the same dimension can be joined more than once without ambiguous column names.
val issueDateSchema = StructType(dateDf.schema.fields.map(value => value.copy(name = "issue_" + value.name)))
val issueDf = sparkSession.createDataFrame(dateDf.rdd, issueDateSchema)
In the code above, we create a new DataFrame named issueDf that adds an issue_ prefix to every column, indicating that this copy of the date dimension will be joined on issue_date.
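A quick sanity check that the renaming worked as intended, sketched as:

```scala
// Every field of the prefixed copy should now carry the issue_ prefix,
// e.g. issue_full_date_formatted, issue_year, issue_quarter.
issueDf.schema.fields.foreach(f => assert(f.name.startsWith("issue_")))
```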
Three-way join
Once the new date dimension is ready, we can join on both date columns in the stock data.
val twoJoinDf = appleStockDfWithIssueDate
  .join(dateDf, appleStockDfWithIssueDate.col("Date") === dateDf.col("full_date_formatted"))
  .join(issueDf, appleStockDfWithIssueDate.col("issue_date") === issueDf.col("issue_full_date_formatted"))
Release date analysis
Once the joins are in place, we can analyze by release date as follows:
twoJoinDf.groupBy("issue_year", "issue_quarter").avg("Close").sort("issue_year", "issue_quarter").show()
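With both date dimensions joined, the same aggregation can also be run against the transaction-date side. The year and quarter column names below are assumptions carried over from the unprefixed date dimension:

```scala
// Average close grouped by the transaction date dimension
// instead of the issue date dimension.
twoJoinDf.groupBy("year", "quarter").avg("Close").sort("year", "quarter").show()
```

This is the payoff of the three-way join: one fact table can be sliced by either date semantics without duplicating the stock data.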
© 2024 shulou.com SLNews company. All rights reserved.