In Spark, the SparkContext.parallelize function can be used to convert a Python list to an RDD, and the RDD can then be converted to a DataFrame. The following sample code is based on Spark 2.x.

On this page, I am going to show you how to convert the following list to a data frame:

data = [('Category A', 100, "This is category A"),
        ('Category B', 120, "This is category B"),
        ('Category C', 150, "This is category C")]

Import types

First, let’s import the data types we need for the data frame.

from pyspark.sql import SparkSession
from pyspark.sql.types import StructField, StructType, StringType, IntegerType

We import StringType and IntegerType because each record in the sample data has three attributes: two strings and one integer.

Create Spark session

Create a Spark session using the following code (the types were already imported above):

appName = "PySpark Example - Python Array/List to Spark Data Frame"
master = "local"

# Create Spark session
spark = SparkSession.builder \
    .appName(appName) \
    .master(master) \
    .getOrCreate()
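
The master local runs Spark with a single local thread; local[*] would use all available cores. As an optional sanity check (a small sketch, not part of the original example), printing the version confirms the session is live and which Spark release you are on:

# Optional sanity check: the session is live and reports its Spark version
print(spark.version)  # e.g. '2.4.0' on a Spark 2.x installation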

Define the schema

Let’s now define a schema for the data frame based on the structure of the Python list.

# Create a schema for the dataframe
schema = StructType([
    StructField('Category', StringType(), True),
    StructField('Count', IntegerType(), True),
    StructField('Description', StringType(), True)
])
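
As a side note, if your Spark version is 2.3 or later, the same schema can, to my knowledge, also be written as a DDL-formatted string and passed straight to createDataFrame. Treat the snippet below as a sketch to verify against your version:

# Sketch (assumes Spark 2.3+): DDL string instead of StructType
ddl_schema = "Category STRING, Count INT, Description STRING"
df_ddl = spark.createDataFrame(data, ddl_schema)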

Convert the list to a data frame

The list can be converted to an RDD through the parallelize function:

# Convert list to RDD
rdd = spark.sparkContext.parallelize(data)

# Create data frame
df = spark.createDataFrame(rdd, schema)
print(df.schema)
df.show()
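
Note that the explicit parallelize step is not strictly required: createDataFrame also accepts the Python list directly and parallelizes it internally, so the following is equivalent. If you pass only column names instead of a full schema, Spark will instead infer the types by sampling the data (Python ints then typically become LongType rather than IntegerType).

# Equivalent: create the data frame directly from the list
df_direct = spark.createDataFrame(data, schema)
df_direct.show()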

Complete script

from pyspark.sql import SparkSession
from pyspark.sql.types import StructField, StructType, StringType, IntegerType

appName = "PySpark Example - Python Array/List to Spark Data Frame"
master = "local"

# Create Spark session
spark = SparkSession.builder \
    .appName(appName) \
    .master(master) \
    .getOrCreate()

# List
data = [('Category A', 100, "This is category A"),
        ('Category B', 120, "This is category B"),
        ('Category C', 150, "This is category C")]

# Create a schema for the dataframe
schema = StructType([
    StructField('Category', StringType(), True),
    StructField('Count', IntegerType(), True),
    StructField('Description', StringType(), True)
])

# Convert list to RDD
rdd = spark.sparkContext.parallelize(data)

# Create data frame
df = spark.createDataFrame(rdd, schema)
print(df.schema)
df.show()

Sample output

StructType(List(StructField(Category,StringType,true),StructField(Count,IntegerType,true),StructField(Description,StringType,true)))
+----------+-----+------------------+
|  Category|Count|       Description|
+----------+-----+------------------+
|Category A|  100|This is category A|
|Category B|  120|This is category B|
|Category C|  150|This is category C|
+----------+-----+------------------+

Summary

For Python objects, we can convert them to an RDD first and then use the SparkSession.createDataFrame function to create the data frame from the RDD.
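
Equivalently, once a SparkSession is active, RDDs gain a toDF method that delegates to createDataFrame; a minimal sketch:

# Same result via RDD.toDF (available once a SparkSession exists)
df = spark.sparkContext.parallelize(data).toDF(schema)
df.show()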

The following data types are supported for defining the schema (a short sketch using ArrayType and MapType follows the list):

  • NullType
  • StringType
  • BinaryType
  • BooleanType
  • DateType
  • TimestampType
  • DecimalType
  • DoubleType
  • FloatType
  • ByteType
  • IntegerType
  • LongType
  • ShortType
  • ArrayType
  • MapType
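
For instance, ArrayType and MapType are parameterized by their element, key and value types. The snippet below is a hypothetical schema (the Tags and Metrics fields are invented for illustration, not part of the original example) showing both in use:

from pyspark.sql.types import ArrayType, MapType

# Hypothetical schema using the complex types above
complex_schema = StructType([
    StructField('Category', StringType(), True),
    StructField('Tags', ArrayType(StringType()), True),
    StructField('Metrics', MapType(StringType(), IntegerType()), True)
])
complex_data = [('Category A', ['new', 'popular'], {'clicks': 3})]
spark.createDataFrame(complex_data, complex_schema).show()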

For more information, please refer to the official API documentation for the pyspark.sql module.

