2025-04-04 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
This article explains how to use Spark's cache mechanism and observe the resulting improvement in efficiency.
Use Spark's cache mechanism to observe the improvement in efficiency.
Based on the above, let's execute the following statement:
We find that the calculation yields the same result: 15.
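The statement itself is not preserved in this copy of the article, but based on the variable name "sparks" used below and the count result of 15, it was presumably a count over a filtered dataset. The sketch below is illustrative only (the data and names are invented, and plain Scala collections stand in for RDDs so it runs without a cluster); in spark-shell the shape would be `sc.textFile(...).filter(_.contains("Spark")).count`:

```scala
// Pure-Scala sketch (no Spark cluster needed): the RDD API mirrors the
// Scala collections API, so the spark-shell step likely looked like this.
// The file contents are simulated with a small list; all names and data
// here are illustrative assumptions, not taken from the original article.
val lines = List(
  "Apache Spark is a fast engine",
  "Spark runs on the JVM",
  "Hadoop MapReduce predates it"
)

// Keep only the lines mentioning "Spark", then count them.
val sparks = lines.filter(_.contains("Spark"))
println(sparks.size)  // prints 2 for this toy data
```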
At this point, open the Spark Web console:
The console clearly shows that we performed two "count" operations.
Now let's perform a "cache" operation on the variable "sparks":
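Why does caching help? An un-cached RDD re-runs its whole lineage on every action, while a cached one materializes the result once and reuses it. The following is a pure-Scala analogy of that behavior (not the Spark API itself, and runnable without a cluster): a lazy view re-runs its transformation on every traversal, and materializing the result once plays the role of `cache`:

```scala
// Sketch of what `cache` buys you, using a side-effect counter to make
// recomputation visible. This is an analogy in plain Scala collections;
// in Spark you would call sparks.cache() and then run count repeatedly.
var evaluations = 0
val data = (1 to 1000).map(i => s"line $i")

// "Un-cached": a lazy view, so the filter predicate runs again per count.
val uncached = data.view.filter { l => evaluations += 1; l.endsWith("0") }
uncached.size
uncached.size
println(evaluations)  // 2000 -- the filter ran over all 1000 lines twice

// "Cached": materialize the filtered result once, then count it cheaply.
evaluations = 0
val cached = data.filter { l => evaluations += 1; l.endsWith("0") }
cached.size
cached.size
println(evaluations)  // 1000 -- the "lineage" ran only once
```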
After performing the count operation, view the Web console:
We find that the three count operations performed so far took 0.7 s, 0.3 s, and 0.5 s respectively.
Now we perform the count operation a fourth time and look at the effect in the Web console:
The console clearly shows that the fourth operation took only 17 ms, roughly 30 times faster than the previous three. This is the huge speedup that caching brings, and cache-based computation is one of the cornerstones of Spark!
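If you want to reproduce this kind of before/after measurement yourself rather than reading it off the Web console, a simple stopwatch around the action is enough. The helper below is a generic sketch (the workload is a stand-in; in spark-shell you would time `sparks.count` instead):

```scala
// Minimal timing helper: run any expression and report elapsed time.
def timed[A](body: => A): (A, Double) = {
  val start = System.nanoTime()
  val result = body
  (result, (System.nanoTime() - start) / 1e6)  // elapsed milliseconds
}

// Stand-in workload; replace with sparks.count in a real Spark session.
val (count, ms) = timed { (1 to 1000000).count(_ % 10 == 0) }
println(f"count = $count, took $ms%.1f ms")
```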
Step 3: build an IDE development environment for Spark
Step 1: The preferred IDE for Spark development is currently IntelliJ IDEA. Let's download IntelliJ IDEA:
The version downloaded here is 13.1.4, the latest at the time of writing:
Regarding version selection, the official guidance is as follows:
Here we choose the "Community Edition FREE" version for Linux, which fully meets our Scala development needs at any level of complexity.
After the download is completed, save it in the following local location:
Step 2: install IDEA and configure IDEA system environment variables
Create the "/usr/local/idea" directory:
Extract the downloaded IDEA archive into this directory:
After the installation is complete, to make the commands in its bin directory easy to use, configure it in "~/.bashrc":
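The steps above can be sketched as the following commands. Note that the archive file name is an assumption based on the version 13.1.4 mentioned earlier (the original article does not preserve it), so adjust it to match your actual download:

```shell
# Create the target directory (requires root).
sudo mkdir -p /usr/local/idea

# Extract the downloaded archive into it. The file name below is an
# assumed example -- substitute the name of the archive you downloaded.
sudo tar -xzf ideaIC-13.1.4.tar.gz -C /usr/local/idea --strip-components=1

# Add IDEA's bin directory to PATH via ~/.bashrc, then reload the shell config.
echo 'export PATH=$PATH:/usr/local/idea/bin' >> ~/.bashrc
source ~/.bashrc
```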
This concludes "how to use Spark's cache mechanism to observe the improvement of efficiency". The specifics should be verified in practice. Thank you for reading!
© 2024 shulou.com SLNews company. All rights reserved.