Core Terms of the Spark Kernel

This article introduces the core terminology of the Spark kernel. These concepts come up constantly in practical work, so the definitions below are worth reading carefully.
Application:
An Application is a user program that creates a SparkContext instance and contains the Driver program. For example, spark-shell is an application, because spark-shell creates a SparkContext object named sc.
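To make this concrete, here is a minimal sketch of a self-contained application; the master URL, app name, and input path are illustrative placeholders. Creating the SparkContext is what makes the program an application with its own Driver.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// A minimal standalone application: constructing the SparkContext
// makes this program a Spark application, and the JVM running
// main() acts as the Driver.
object WordCountApp {
  def main(args: Array[String]): Unit = {
    // "local[*]" and "input.txt" are illustrative placeholders.
    val conf = new SparkConf().setAppName("WordCountApp").setMaster("local[*]")
    val sc = new SparkContext(conf)
    val counts = sc.textFile("input.txt")
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
    counts.take(10).foreach(println)
    sc.stop()
  }
}
```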
Job:
A Job corresponds to a Spark action. Each action, such as count or saveAsTextFile, triggers a job instance consisting of many tasks computed in parallel.
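As a sketch of how actions map to jobs (run in spark-shell, where sc already exists; the output path is hypothetical):

```scala
// Transformations are lazy and launch no job by themselves;
// each action below triggers its own job.
val nums    = sc.parallelize(1 to 1000000)   // no job yet
val squares = nums.map(n => n.toLong * n)    // still no job: transformation
val total   = squares.count()                // job 1: triggered by the count action
squares.saveAsTextFile("squares_out")        // job 2: triggered by saveAsTextFile
```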
Driver Program:
The program that runs the main function and creates the SparkContext instance.
Cluster Manager:
An external service for managing cluster resources. Spark supports three cluster resource managers: standalone, YARN, and Mesos. Spark's standalone mode meets the cluster resource management needs of most Spark deployments; YARN and Mesos are generally worth considering only when a cluster runs multiple computing frameworks.
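The choice of cluster manager is expressed through the master URL. A minimal sketch, with placeholder hosts and ports:

```scala
import org.apache.spark.SparkConf

// The master URL selects the cluster resource manager.
val standaloneConf = new SparkConf().setMaster("spark://master-host:7077") // standalone
val yarnConf       = new SparkConf().setMaster("yarn")                     // YARN
val mesosConf      = new SparkConf().setMaster("mesos://mesos-host:5050")  // Mesos
```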
Worker Node:
A worker node in a cluster that can run application code, equivalent to a slave node in Hadoop.
Executor:
A worker process launched for an application on a Worker Node. It runs the tasks assigned to it and is responsible for storing data in memory or on disk. Note that each application has only one Executor on a given Worker Node, and that Executor processes the application's tasks concurrently using multiple threads.
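Since an application's tasks run as threads inside its Executor, the executor's cores and memory bound that concurrency. A minimal configuration sketch with illustrative values:

```scala
import org.apache.spark.SparkConf

// Per-application executor sizing; the core count caps how many
// of the application's tasks one Executor runs concurrently.
val conf = new SparkConf()
  .set("spark.executor.memory", "4g") // memory for task execution and cached data
  .set("spark.executor.cores", "4")   // up to 4 tasks as threads per Executor
```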
Stage:
A job is divided into many tasks, and each group of tasks is called a stage (very similar to the map and reduce task groups in MapReduce). A stage usually begins because it reads external data or shuffled data, and it ends either at a shuffle (for example, a reduceByKey operation) or at the end of the whole job, such as writing the data to a storage system like HDFS.
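For example, the word-count job below splits into two stages at the reduceByKey shuffle (the HDFS paths are placeholders):

```scala
// Stage 1 reads the input and produces map-side shuffle output;
// stage 2 reads the shuffled data and writes the final result.
val lines  = sc.textFile("hdfs:///data/input")
val counts = lines
  .flatMap(_.split("\\s+"))
  .map(w => (w, 1))
  .reduceByKey(_ + _)                        // shuffle: stage boundary
counts.saveAsTextFile("hdfs:///data/output") // action: triggers the job
```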
"Spark kernel core terminology how to parse" the content of the introduction here, thank you for reading. If you want to know more about industry-related knowledge, you can pay attention to the website. Xiaobian will output more high-quality practical articles for everyone!