Example Analysis of Spark Running Modes

The editor shares this example analysis of Spark's running modes. Since many readers are not very familiar with the topic, the article is offered for reference; I hope you learn a lot from reading it.
Local mode
The easiest way to run Spark is Local mode (that is, pseudo-distributed mode), in which everything runs on a single machine.
Run the command as follows:
./bin/run-example org.apache.spark.examples.SparkPi local
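In newer Spark releases, the same example is usually launched with spark-submit instead; a minimal sketch, assuming the bundled examples jar (the exact jar file name depends on your Spark version):
./bin/spark-submit --master local[4] --class org.apache.spark.examples.SparkPi examples/jars/spark-examples.jar 100
Here local[4] runs Spark with four worker threads in one JVM; local[*] would use all available cores.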
Spark Architecture and Job Execution Process in Standalone Mode
In Standalone mode, the cluster starts a Master and a number of Workers. The Master receives jobs submitted by clients, manages the Workers, and provides a web UI that shows cluster and job information.
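For reference, in recent Spark distributions these daemons are typically started with the bundled scripts; a sketch (script names vary by version, e.g. older releases use start-slave.sh instead of start-worker.sh):
./sbin/start-master.sh                      # start the Master; its web UI defaults to port 8080
./sbin/start-worker.sh spark://host:port    # start a Worker and register it with the Master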
A job can be submitted in two ways, depending on where the Driver runs: on a Worker, or on the client. The Driver is the master of the job; it parses the job, generates stages, and schedules tasks, and it contains the DAGScheduler. The two modes are described in turn below.
Driver runs on a Worker
The job is submitted through the org.apache.spark.deploy.Client class, with a command of the following form:
./bin/spark-class org.apache.spark.deploy.Client launch spark://host:port file:///jar_url org.apache.spark.examples.SparkPi spark://host:port
The job execution flow is shown in figure 1.
Figure 1
Job execution process description:
The client submits the job to the Master.
The Master asks a Worker to start the Driver, that is, the SchedulerBackend. The Worker creates a DriverRunner thread, and DriverRunner starts the SchedulerBackend process.
The Master also asks the remaining Workers to start Executors, that is, ExecutorBackends. Each Worker creates an ExecutorRunner thread, and ExecutorRunner starts the ExecutorBackend process.
When an ExecutorBackend starts, it registers with the Driver's SchedulerBackend. The SchedulerBackend process contains the DAGScheduler, which generates an execution plan from the user program and drives its execution. The tasks of each stage are held in the TaskScheduler; as ExecutorBackends report back to the SchedulerBackend, tasks from the TaskScheduler are dispatched to them for execution.
The job finishes once all stages have completed.
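Note that org.apache.spark.deploy.Client is the legacy submission path; in newer Spark versions the same Driver-on-a-Worker behavior is requested through spark-submit in cluster deploy mode. A sketch, with the master URL and jar location as placeholders:
./bin/spark-submit --master spark://host:port --deploy-mode cluster --class org.apache.spark.examples.SparkPi file:///path/to/spark-examples.jar 100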
Driver runs on the client side
The Spark job is executed directly on the client; an example command is as follows:
./bin/run-example org.apache.spark.examples.SparkPi spark://host:port
The job execution flow is shown in figure 2.
Figure 2
Job execution process description:
After the client starts, it runs the user program directly and brings up the Driver-side components: the DAGScheduler, the BlockManagerMaster, and so on.
The client's Driver registers with the Master.
The Master asks the Workers to start Executors, that is, ExecutorBackends. Each Worker creates an ExecutorRunner thread, and ExecutorRunner starts the ExecutorBackend process.
When an ExecutorBackend starts, it registers with the Driver's SchedulerBackend. The Driver's DAGScheduler parses the job and generates the corresponding stages, and the tasks contained in each stage are assigned to Executors for execution through the TaskScheduler.
The job finishes once all stages have completed.
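Equivalently, in newer Spark versions this Driver-on-the-client behavior is the client deploy mode of spark-submit (the default); only the --deploy-mode flag differs from the previous sketch:
./bin/spark-submit --master spark://host:port --deploy-mode client --class org.apache.spark.examples.SparkPi examples/jars/spark-examples.jar 100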
Spark Architecture and Job Execution Process Based on YARN
Here, the Spark ApplicationMaster plays the role of the SchedulerBackend in Standalone mode, and the Executor corresponds to Standalone's ExecutorBackend. The Spark ApplicationMaster contains the DAGScheduler and the YarnClusterScheduler.
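For reference, a sketch of submitting the same example to YARN; this assumes HADOOP_CONF_DIR (or YARN_CONF_DIR) points at the cluster's configuration directory:
./bin/spark-submit --master yarn --deploy-mode cluster --class org.apache.spark.examples.SparkPi examples/jars/spark-examples.jar 100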
That is all for the example analysis of Spark's running modes. Thank you for reading, and I hope it has been helpful.