2025-04-06 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 06/03 Report
1. Standalone mode
After deploying the fully distributed cluster according to Xiangpiaoye's documentation, submit a task to the Spark cluster and open hadoop01:8080. If you click through to view the history of a completed application, the following prompt appears:
Event logging is not enabled
No event logs were found for this application! To enable event logging, set spark.eventLog.enabled to true and spark.eventLog.dir to the directory to which your event logs are written.
Clearly, you need to follow the prompt and add the relevant configuration. First stop the Spark service, then add the following settings to the spark-defaults.conf file in the conf directory:
spark.eventLog.enabled true
spark.eventLog.dir hdfs://ns1/logs/spark
However, you need to create the relevant directories in hdfs in advance, synchronize the configuration files to each node, and then restart the Spark cluster.
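The preparation steps above can be sketched as shell commands. The HDFS path and nameservice come from this article; the worker node names and the use of $SPARK_HOME are assumptions about the deployment:

```shell
# Create the event-log directory in HDFS first (ns1 is the nameservice used in this article)
hdfs dfs -mkdir -p hdfs://ns1/logs/spark

# Sync the updated config file to the other nodes
# (node names hadoop02/hadoop03 and $SPARK_HOME are assumptions)
for node in hadoop02 hadoop03; do
  scp "$SPARK_HOME/conf/spark-defaults.conf" "$node:$SPARK_HOME/conf/"
done

# Restart the standalone cluster from the master node
"$SPARK_HOME/sbin/stop-all.sh"
"$SPARK_HOME/sbin/start-all.sh"
```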
Resubmit the task:
./spark-submit-standalone.sh spark-process-1.0-SNAPSHOT.jar cn.xpleaf.spark.scala.core.p1._01SparkWordCountOps
With that, event logging and history viewing work in standalone mode.
2. Yarn mode
2.1 Essential background
When running Spark applications with Spark on Yarn, you only need the Spark environment configured on the submitting node; there is no need to start Spark's Master or Worker processes, because the program ultimately runs on the Hadoop cluster and is scheduled by Yarn. Keep this in mind.
In this case, after submitting a task to Yarn, you can find the application through the ResourceManager's address. For example, if RM runs on the hadoop02 node, its Applications page is available at hadoop02:8088 by default. However, if you want the detailed Spark execution view you had in standalone mode, that page cannot provide it; even if you start Hadoop's history server, you can only see the container log output.
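Besides the RM web page, Yarn-side information can also be retrieved from the command line with the standard Yarn CLI (the application ID below is a placeholder, not a real one from this setup):

```shell
# List applications that have finished running
yarn application -list -appStates FINISHED

# Fetch the aggregated container logs of one application
# (replace the ID with a real one taken from the list above)
yarn logs -applicationId application_1400000000000_0001
```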
At this point, you need to start Spark's own history server and use it to view the Spark execution details of completed applications.
The description of this article is based on Spark 1.6.2, and later versions of Spark may be different.
2.2 Log (history) server configuration, startup and principle
On the node where Spark is currently installed, go to the conf directory and add the following configuration in the configuration file spark-defaults.conf:
# enable event logging
spark.eventLog.enabled true
# location where event logs are written
spark.eventLog.dir hdfs://ns1/logs/spark
# location from which the history server reads logs
spark.history.fs.logDirectory hdfs://ns1/logs/spark
The first two settings ensure that complete log information is saved to the specified location while a Spark program runs; the last one tells Spark's application history server where to read those logs from, so that it can display them in the same Web UI as standalone mode.
After the configuration is complete, use the following command to start the history server:
./sbin/start-history-server.sh
The history server listens on port 18080 by default, so you can access it at the host's address plus that port. For example, if the history server was configured and started on the hadoop01 node, it can be reached at:
hadoop01:18080
What you see after visiting is the same Web UI as in standalone mode. For more detailed configuration options, refer to the official documentation:
http://spark.apache.org/docs/1.6.2/monitoring.html#viewing-after-the-fact
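As the monitoring page linked above describes, the history server also exposes a REST API under /api/v1, which gives a quick way to check from the command line that it is up and serving logs (the hostname follows this article's setup):

```shell
# List the applications known to the history server, as JSON
curl http://hadoop01:18080/api/v1/applications
```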