Turn off INFO logs in Spark


Spark is a robust framework with logging implemented in all of its modules. Sometimes the console output becomes too verbose when every INFO message is printed. This article shows you how to hide those INFO logs in the console output.

Spark logging level

The log level can be set using the function pyspark.SparkContext.setLogLevel.

The definition of this function in the PySpark source is:

def setLogLevel(self, logLevel):
    """
    Control our logLevel. This overrides any user-defined log settings.
    Valid log levels include: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, WARN
    """
    self._jsc.setLogLevel(logLevel)

Set log level to WARN

The following code sets the log level to WARN:

from pyspark.sql import SparkSession

appName = "Spark - Setting Log Level"
master = "local"

# Create Spark session
spark = SparkSession.builder \
    .appName(appName) \
    .master(master) \
    .getOrCreate()

# Set the log level to WARN to hide INFO (and DEBUG) messages
spark.sparkContext.setLogLevel("WARN")

When running the script with some actions, the console still prints out the INFO logs emitted before the setLogLevel function is called, because the default log level from Spark's log4j configuration applies until that point. To suppress those early messages, change the system-level configuration as described in the next section.

Change Spark logging config file

Follow these steps to configure system-level logging (you need access to the Spark conf folder):

  1. Navigate to the Spark home folder.
  2. Go to the sub folder conf, which holds all the configuration files.
  3. Create a log4j.properties file from the template file log4j.properties.template.
  4. Edit log4j.properties to change the default logging level to WARN, as shown in the snippet after this list.
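
The following snippet shows what the relevant lines look like. It is based on the log4j.properties.template that ships with Spark 2.x, where the root logger defaults to INFO, so the exact contents may differ for your Spark version; the key change is the root logger level from INFO to WARN:

# Set everything to be logged to the console at WARN level
log4j.rootCategory=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n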

Run the application again and the output is much cleaner, with the INFO messages no longer printed.


For Scala

The above system-level Spark configuration applies to all programming languages supported by Spark, including Scala.

If you want to change the log level programmatically, try the following code in Scala:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.getOrCreate()
spark.sparkContext.setLogLevel("WARN")

If you use the Spark shell, you can directly access the SparkContext via the predefined variable sc:

sc.setLogLevel("WARN")


Run Spark code

You can easily run Spark code on your Windows or UNIX-like (Linux, macOS) systems. If you don't have a Spark environment yet, follow the setup guides on this site to create one.
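
For example, assuming the PySpark script above is saved as spark_log_level.py (a hypothetical file name), you can submit it to a local Spark installation with spark-submit:

spark-submit --master local spark_log_level.py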
