This page summarizes the steps required to run and debug PySpark (the Python API for Spark) in Visual Studio Code.

Install Python and pip

Install Python from the official website:

https://www.python.org/downloads/.

The version I am using is 3.6.4 (32-bit); pip ships with this Python version.
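
Once installed, you can verify both tools from a command prompt (the exact output depends on the versions you installed):

python --version
pip --version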

Install Spark standalone edition

Download Spark 2.3.3 from the following page:

https://www.apache.org/dyn/closer.lua/spark/spark-2.3.3/spark-2.3.3-bin-hadoop2.7.tgz

If you are not sure how to install it, follow this guide:

Install Spark 2.2.1 in Windows

*Remember to substitute the 2.3.3 package for the version used in that guide.

There is a known bug in the latest Spark release (2.4.0), so I am using 2.3.3 instead.
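
After extracting the package, make sure the SPARK_HOME environment variable points to the extracted folder and that Spark's bin folder is on your PATH; the guide above walks through this (including winutils.exe, which Spark needs on Windows). As a sketch, assuming you extracted Spark to C:\spark\spark-2.3.3-bin-hadoop2.7 (a hypothetical location; use your own path):

setx SPARK_HOME "C:\spark\spark-2.3.3-bin-hadoop2.7"
setx PATH "%PATH%;C:\spark\spark-2.3.3-bin-hadoop2.7\bin"

You can also set these through the System Properties dialog; note that setx writes the combined PATH into your user environment.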

Install pyspark package

Since the installed Spark version is 2.3.3, we need to install the matching pyspark version via the following command:

pip install pyspark==2.3.3

The versions need to be consistent; otherwise you may encounter errors from the py4j package.
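
To confirm that the installed package matches your Spark version, you can print its version string:

python -c "import pyspark; print(pyspark.__version__)"

This should print 2.3.3.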

Run PySpark code in Visual Studio Code

You can run a PySpark script through the context menu item Run Python File in Terminal.

[Screenshot: running a PySpark file via Run Python File in Terminal]
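
If you need something to run, the following minimal script (saved, for example, as hello-pyspark.py; the file name and data are just illustrations) creates a local SparkSession and performs a simple count:

from pyspark.sql import SparkSession

# Create a local SparkSession; local[*] runs Spark inside this
# Python process using all available CPU cores.
spark = SparkSession.builder \
    .master("local[*]") \
    .appName("HelloPySpark") \
    .getOrCreate()

# Build a tiny DataFrame and run basic actions to verify that
# Spark and the pyspark package are wired up correctly.
df = spark.createDataFrame([("Alice", 1), ("Bob", 2)], ["name", "value"])
df.show()
print("Row count:", df.count())

spark.stop()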

Alternatively, you can debug your application in VS Code, setting breakpoints as you would for any Python file, as shown in the following screenshot:

[Screenshot: debugging a PySpark application in VS Code]

Run Azure HDInsight PySpark code

You can install the Azure HDInsight Tools extension to submit Spark jobs from VS Code to your HDInsight cluster.

For more details, refer to the extension page:

https://marketplace.visualstudio.com/items?itemName=mshdinsight.azure-hdinsight
