Apache Spark 2.4.3 Installation on Windows 10 using Windows Subsystem for Linux


This page summarizes the steps to install the latest version (2.4.3) of Apache Spark on Windows 10 via Windows Subsystem for Linux (WSL).

Prerequisites

Follow either of the following pages to install WSL on a system or non-system drive on your Windows 10 machine.

I also recommend installing Hadoop 3.2.0 on your WSL by following the second page.

After the above installation, your WSL should already have OpenJDK 1.8 installed.
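You can quickly confirm that before moving on:

java -version

The output should mention OpenJDK 1.8 (i.e. version 1.8.0_x).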

Now let’s start to install Apache Spark 2.4.3 in WSL.

Download binary package

Visit the Downloads page on the Spark website to find the download URL.


For me, the closest location is: http://apache.mirror.serversaustralia.com.au/spark/spark-2.4.3/spark-2.4.3-bin-hadoop2.7.tgz.

Download the binary package using the following command:

wget http://apache.mirror.serversaustralia.com.au/spark/spark-2.4.3/spark-2.4.3-bin-hadoop2.7.tgz

Unpack the binary package

Unpack the package using the following command:

tar -xvzf spark-2.4.3-bin-hadoop2.7.tgz -C ~/hadoop
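Note that the -C option requires the target directory to exist. If ~/hadoop is not there yet (for example, if you skipped the Hadoop installation), create it first:

mkdir -p ~/hadoop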

Setup environment variables

Set up the SPARK_HOME environment variable and add the bin subfolder to the PATH variable.

Run the following command to change .bashrc file:

vi ~/.bashrc

Add the following lines to the end of the file:

export SPARK_HOME=~/hadoop/spark-2.4.3-bin-hadoop2.7
export PATH=$SPARK_HOME/bin:$PATH

Source the modified file to make it effective:

source ~/.bashrc
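To confirm the variables took effect, print SPARK_HOME and check that the Spark binaries are on the PATH:

echo $SPARK_HOME
which spark-shell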

Spark is now set up correctly.

Let's run a few tests.

Run Spark interactive shell

Run the following command to start Spark shell:

spark-shell

The interface looks like the following screenshot:

[Screenshot: Spark shell startup output]

By default, the master is set to local[*].
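local[*] means Spark runs locally with as many worker threads as logical cores. If you want to try a different master, pass it explicitly when starting the shell, for example:

spark-shell --master local[2]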

Run built-in examples

Run Spark Pi example via the following command:

run-example SparkPi 10
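run-example is a convenience wrapper around spark-submit. The equivalent direct invocation looks like the following (the examples jar name assumes the Scala 2.11 build shipped with this package; adjust it if your file name differs):

spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME/examples/jars/spark-examples_2.11-2.4.3.jar 10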

I've provided many Spark examples on this website; you can practice by following those guides.

Enable Hive support

If you’ve configured Hive in WSL, follow the steps below to enable Hive support in Spark.

Copy the Hadoop core-site.xml and hdfs-site.xml files and the Hive hive-site.xml file into the Spark configuration folder:

cp $HADOOP_HOME/etc/hadoop/core-site.xml $SPARK_HOME/conf/
cp $HADOOP_HOME/etc/hadoop/hdfs-site.xml $SPARK_HOME/conf/
cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf/

You can then run Spark with Hive support enabled (via the enableHiveSupport function):

from pyspark.sql import SparkSession

appName = "PySpark Hive Example"
master = "local[*]"

spark = SparkSession.builder \
    .appName(appName) \
    .master(master) \
    .enableHiveSupport() \
    .getOrCreate()

# Read data using Spark
df = spark.sql("show databases")
df.show()
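To run the same snippet as a standalone script, save it to a file (the name hive_example.py below is only an example) and submit it:

spark-submit hive_example.py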

For more details, please refer to this page: Read Data from Hive in Spark 1.x and 2.x.

Spark default configurations

Run the following command to create a Spark default config file from the template:

cp $SPARK_HOME/conf/spark-defaults.conf.template $SPARK_HOME/conf/spark-defaults.conf

Update the config file with default Spark configurations. These configurations are applied whenever a Spark job is submitted.

In the following configuration, I added the event log directory and the Spark history log directory.

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# Default system properties included when running spark-submit.
# This is useful for setting default environmental settings.

# Example:
# spark.master                     spark://master:7077
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://localhost:19000/spark-event-logs
spark.history.fs.logDirectory    hdfs://localhost:19000/spark-event-logs
# spark.serializer                 org.apache.spark.serializer.KryoSerializer
# spark.driver.memory              5g
# spark.executor.extraJavaOptions  -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
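Spark will not create the event log directory for you. Assuming your HDFS NameNode runs at localhost:19000 as configured above, create the directory before submitting any jobs:

hdfs dfs -mkdir -p /spark-event-logs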

Spark history server

Run the following command to start Spark history server:

$SPARK_HOME/sbin/start-history-server.sh

Open the history server UI (by default http://localhost:18080/) in a browser; you should be able to view all the submitted jobs.
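To stop the history server later, run the matching stop script:

$SPARK_HOME/sbin/stop-history-server.sh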

spark.eventLog.dir and spark.history.fs.logDirectory

These two configurations can point to the same or different locations. The first one is the directory a running Spark application writes event logs to, while the second one is the directory the history server reads event logs from.

Have fun with Spark in WSL!
