Spark submit --num-executors --executor-cores --executor-memory
The spark-submit script can be used to run your Spark applications in a target environment (standalone, YARN, Kubernetes, Mesos). There are three commonly used arguments: --num-executors, --executor-cores and --executor-memory.
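For example, a submission to YARN that sets all three arguments might look like the following sketch (the application jar name and values are illustrative):

spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 25 \
  --executor-cores 4 \
  --executor-memory 5g \
  my-app.jar

This would request 25 executors, each with 4 cores and 5 GiB of heap memory.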
--num-executors
This argument works on YARN and Kubernetes only. The value indicates the number of executors to launch. By default, the value is 2. If dynamic allocation is enabled (spark.dynamicAllocation.enabled = true), the initial number of executors will be at least this value.
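For instance, the following sketch (application file name illustrative) starts with at least 10 executors and lets dynamic allocation scale from there; note that on YARN dynamic allocation also requires the external shuffle service (or shuffle tracking) to be enabled:

spark-submit \
  --master yarn \
  --num-executors 10 \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  my_app.py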
--executor-cores
This argument works on Spark standalone, YARN and Kubernetes only. The value indicates the number of cores used by each executor. The default is 1 in YARN and Kubernetes modes, or all available cores on the worker in standalone mode.
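For example, on a standalone cluster you can cap each executor at 2 cores instead of letting it take all cores on a worker (master URL and values illustrative):

spark-submit \
  --master spark://master-host:7077 \
  --executor-cores 2 \
  --total-executor-cores 8 \
  my-app.jar

Since --total-executor-cores limits the total cores across all executors, this submission would run up to 4 executors of 2 cores each.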
--executor-memory
This argument represents the memory per executor (e.g. 1000M, 2G, 3T). The default value is 1G.
The actual amount of memory requested per executor is decided by the following formula (since Spark 2.3.0):
spark.executor.memoryOverhead + spark.executor.memory + spark.memory.offHeap.size + spark.executor.pyspark.memory
By default, spark.executor.memoryOverhead is calculated as executorMemory * 0.10, with a minimum of 384 MiB. spark.executor.pyspark.memory is not set by default.
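As a worked example, assuming the defaults above and no off-heap or PySpark memory configured, requesting 4 GiB per executor leads to a container request of roughly 4.4 GiB (application jar name illustrative):

spark-submit --master yarn --executor-memory 4g my-app.jar
# memoryOverhead = max(4096 MiB * 0.10, 384 MiB) ≈ 410 MiB
# total per executor ≈ 4096 MiB + 410 MiB = 4506 MiB (~4.4 GiB)
# YARN typically rounds this request up to its allocation increment.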
Set up these arguments dynamically
You can also set the above arguments dynamically when setting up the Spark session. The following code snippets provide examples of how to do that.
PySpark
from pyspark import SparkConf
from pyspark.sql import SparkSession

conf = SparkConf() \
    .setMaster("yarn") \
    .setAppName("Kontext") \
    .set("spark.executor.memory", "5g") \
    .set("spark.executor.cores", "4") \
    .set("spark.executor.instances", "25")

# Pass the SparkConf via the conf keyword argument; builder.config's
# first positional parameter is a property key, not a SparkConf.
spark = SparkSession.builder.config(conf=conf).getOrCreate()
Scala
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

val conf = new SparkConf()
  .setMaster("yarn")
  .setAppName("Kontext")
  .set("spark.executor.memory", "5g")
  .set("spark.executor.cores", "4")   // SparkConf.set takes String values
  .set("spark.executor.instances", "25")

val spark = SparkSession.builder.config(conf).getOrCreate()
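The SparkConf above is roughly equivalent to passing the corresponding flags to spark-submit (script name illustrative), with one caveat: properties set directly on a SparkConf take the highest precedence, followed by flags passed to spark-submit, followed by values in spark-defaults.conf.

spark-submit \
  --master yarn \
  --name Kontext \
  --num-executors 25 \
  --executor-cores 4 \
  --executor-memory 5g \
  kontext_app.py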
References
Configuration - Spark 3.2.1 Documentation (apache.org)
spark-submit command reference
spark-submit --help
Usage: spark-submit [options] <app jar | python file | R file> [app arguments]
Usage: spark-submit --kill [submission ID] --master [spark://...]
Usage: spark-submit --status [submission ID] --master [spark://...]
Usage: spark-submit run-example [options] example-class [example args]

Options:
  --master MASTER_URL         spark://host:port, mesos://host:port, yarn,
                              k8s://https://host:port, or local (Default: local[*]).
  --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or
                              on one of the worker machines inside the cluster ("cluster")
                              (Default: client).
  --class CLASS_NAME          Your application's main class (for Java / Scala apps).
  --name NAME                 A name of your application.
  --jars JARS                 Comma-separated list of jars to include on the driver
                              and executor classpaths.
  --packages                  Comma-separated list of maven coordinates of jars to include
                              on the driver and executor classpaths. Will search the local
                              maven repo, then maven central and any additional remote
                              repositories given by --repositories. The format for the
                              coordinates should be groupId:artifactId:version.
  --exclude-packages          Comma-separated list of groupId:artifactId, to exclude while
                              resolving the dependencies provided in --packages to avoid
                              dependency conflicts.
  --repositories              Comma-separated list of additional remote repositories to
                              search for the maven coordinates given with --packages.
  --py-files PY_FILES         Comma-separated list of .zip, .egg, or .py files to place
                              on the PYTHONPATH for Python apps.
  --files FILES               Comma-separated list of files to be placed in the working
                              directory of each executor. File paths of these files
                              in executors can be accessed via SparkFiles.get(fileName).
  --conf, -c PROP=VALUE       Arbitrary Spark configuration property.
  --properties-file FILE      Path to a file from which to load extra properties. If not
                              specified, this will look for conf/spark-defaults.conf.
  --driver-memory MEM         Memory for driver (e.g. 1000M, 2G) (Default: 1024M).
  --driver-java-options       Extra Java options to pass to the driver.
  --driver-library-path       Extra library path entries to pass to the driver.
  --driver-class-path         Extra class path entries to pass to the driver. Note that
                              jars added with --jars are automatically included in the
                              classpath.
  --executor-memory MEM       Memory per executor (e.g. 1000M, 2G) (Default: 1G).
  --proxy-user NAME           User to impersonate when submitting the application.
                              This argument does not work with --principal / --keytab.
  --help, -h                  Show this help message and exit.
  --verbose, -v               Print additional debug output.
  --version,                  Print the version of current Spark.

 Cluster deploy mode only:
  --driver-cores NUM          Number of cores used by the driver, only in cluster mode
                              (Default: 1).

 Spark standalone or Mesos with cluster deploy mode only:
  --supervise                 If given, restarts the driver on failure.

 Spark standalone, Mesos or K8s with cluster deploy mode only:
  --kill SUBMISSION_ID        If given, kills the driver specified.
  --status SUBMISSION_ID      If given, requests the status of the driver specified.

 Spark standalone, Mesos and Kubernetes only:
  --total-executor-cores NUM  Total cores for all executors.

 Spark standalone, YARN and Kubernetes only:
  --executor-cores NUM        Number of cores used by each executor. (Default: 1 in
                              YARN and K8S modes, or all available cores on the worker
                              in standalone mode).

 Spark on YARN and Kubernetes only:
  --num-executors NUM         Number of executors to launch (Default: 2).
                              If dynamic allocation is enabled, the initial number of
                              executors will be at least NUM.
  --principal PRINCIPAL       Principal to be used to login to KDC.
  --keytab KEYTAB             The full path to the file that contains the keytab for the
                              principal specified above.

 Spark on YARN only:
  --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
  --archives ARCHIVES         Comma separated list of archives to be extracted into the
                              working directory of each executor.