Start Spark History Server UI
This code snippet shows a simple CLI command to start the Spark History Server service.
About Spark History Server
The Spark History Server can be used to view historical Spark jobs that completed successfully or failed. By default, Spark execution logs are saved into local temporary folders. You can add configuration items to spark-defaults.conf to save logs to HDFS. For example, the following configuration ensures the logs are stored in my local Hadoop environment.
spark.eventLog.enabled true
spark.eventLog.dir hdfs://localhost:9000/shared/spark-logs
spark.history.fs.logDirectory hdfs://localhost:9000/shared/spark-logs
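The settings above can be added with a few shell commands. The sketch below writes them to a temporary directory standing in for $SPARK_HOME/conf; the HDFS URL is the example value from above and should be adjusted to your cluster.

```shell
# Sketch: write the three event-log settings into spark-defaults.conf.
# CONF_DIR is a temporary stand-in for $SPARK_HOME/conf (an assumption
# for this demo); point it at your real Spark conf directory instead.
CONF_DIR="$(mktemp -d)"
cat > "$CONF_DIR/spark-defaults.conf" <<'EOF'
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://localhost:9000/shared/spark-logs
spark.history.fs.logDirectory    hdfs://localhost:9000/shared/spark-logs
EOF
cat "$CONF_DIR/spark-defaults.conf"
```

Remember to create the HDFS log directory itself before submitting jobs, otherwise applications will fail to write their event logs.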
In the code snippet,
SPARK_HOME is the environment variable that points to the location where Spark is installed. If this variable is not defined, you can directly specify the full path to the shell script (sbin/start-history-server.sh).
History Server URL
By default, the URL is http://localhost:18080/ in a local environment. You can replace localhost with the address of the server where the History Server is started. It is usually hosted on an edge server.
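Besides opening the UI in a browser, you can check whether the History Server is up by querying its monitoring REST API (the /api/v1/applications endpoint lists the applications it knows about). A minimal sketch, assuming the default local URL; replace localhost with your edge-server address:

```python
# Sketch: list applications known to the Spark History Server via its REST API.
# The base URL below is an assumption for a default local setup.
import json
import urllib.request

HISTORY_SERVER = "http://localhost:18080"

def list_applications(base_url):
    """Return the applications reported by the History Server, or None if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/v1/applications", timeout=5) as resp:
            return json.load(resp)
    except OSError:
        # History Server not reachable (e.g. not started yet)
        return None

apps = list_applications(HISTORY_SERVER)
if apps is None:
    print("History Server not reachable")
else:
    for app in apps:
        print(app["id"], app["name"])
```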
The UI looks like the following screenshot:
By clicking the link for each app, you can find the job details for each Spark application.
$SPARK_HOME/sbin/start-history-server.sh

# If no SPARK_HOME variable
/path/to/spark/sbin/start-history-server.sh