In Spark, it’s easy to convert a Spark DataFrame to a Pandas DataFrame with one line of code:

df_pd = df.toPandas()

On this page, I am going to show you how to convert a list of PySpark Row objects to a Pandas data frame.

Prepare the data frame

The following code snippet creates a data frame with this schema:

root
  |-- Category: string (nullable = false)
  |-- ItemID: integer (nullable = false)
  |-- Amount: decimal(10,2) (nullable = true)

from pyspark.sql import SparkSession

from pyspark.sql.functions import collect_list,struct
from pyspark.sql.types import ArrayType, StructField, StructType, StringType, IntegerType, DecimalType
from decimal import Decimal
import pandas as pd

appName = "Python Example - PySpark Row List to Pandas Data Frame"
master = "local"

# Create Spark session
spark = SparkSession.builder \
    .appName(appName) \
    .master(master) \
    .getOrCreate()

# List of tuples; construct Decimal from strings to avoid
# binary floating-point artifacts (Decimal(12.40) would not be exactly 12.40)
data = [('Category A', 1, Decimal('12.40')),
        ('Category B', 2, Decimal('30.10')),
        ('Category C', 3, Decimal('100.01')),
        ('Category A', 4, Decimal('110.01')),
        ('Category B', 5, Decimal('70.85'))
        ]

# Create a schema for the dataframe
schema = StructType([
    StructField('Category', StringType(), False),
    StructField('ItemID', IntegerType(), False),
    StructField('Amount', DecimalType(precision=10, scale=2), True)
])

# Convert list to RDD
rdd = spark.sparkContext.parallelize(data)

# Create data frame
df = spark.createDataFrame(rdd, schema)
df.printSchema()
df.show()
df_pd = df.toPandas()
df_pd.info()

The above code converts the list to a Spark data frame first and then converts it to a Pandas data frame.

The information of the Pandas data frame looks like the following:

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5 entries, 0 to 4
Data columns (total 3 columns):
Category    5 non-null object
ItemID      5 non-null int32
Amount      5 non-null object
dtypes: int32(1), object(2)
memory usage: 172.0+ bytes
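Note that the Amount column comes back with object dtype: Pandas has no native decimal dtype, so each cell holds a Python Decimal object. If you need fast numeric operations, you can cast the column to float64, trading exactness for speed. A minimal standalone sketch with hypothetical values:

```python
import pandas as pd
from decimal import Decimal

# Hypothetical Amount column of Decimal objects, as toPandas() would return it
df_pd = pd.DataFrame({'Amount': [Decimal('12.40'), Decimal('30.10')]})
print(df_pd['Amount'].dtype)   # object: Decimal values are stored as Python objects

# Cast to float64 for vectorized numeric operations
df_pd['Amount'] = df_pd['Amount'].astype(float)
print(df_pd['Amount'].dtype)   # float64
```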

Aggregate the data frame

It’s very common to do aggregations in Spark. For example, the following code snippet groups the above Spark data frame by category attribute.

# Aggregate but still keep all the raw attributes
df_agg = df.groupby("Category").agg(collect_list(struct("*")).alias('Items'))
df_agg.printSchema()

The schema of the new Spark data frame has two attributes: Category and Items.

root
  |-- Category: string (nullable = false)
  |-- Items: array (nullable = true)
  |    |-- element: struct (containsNull = true)
  |    |    |-- Category: string (nullable = false)
  |    |    |-- ItemID: integer (nullable = false)
  |    |    |-- Amount: decimal(10,2) (nullable = true)

The Items attribute is an array (list) of pyspark.sql.Row objects.

Convert pyspark.sql.Row list to Pandas data frame

Now we can convert the Items attribute of each row using the foreach function.

def to_pandas(row):
    print('Create a pandas data frame for category: ' + row["Category"])
    items = [item.asDict() for item in row["Items"]]
    df_pd_items = pd.DataFrame(items)
    print(df_pd_items)

# Convert Items for each Category to a pandas dataframe
df_agg.foreach(to_pandas)

In the above code snippet, the Row list is converted to a dictionary list first, and then that list is converted to a Pandas data frame using the pd.DataFrame function. Since each list element is a dictionary with keys, we don’t need to specify the columns argument for pd.DataFrame. Note that foreach runs on the executors, so on a real cluster the print output appears in the executor logs rather than the driver console; in local mode it prints to the same console.
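For comparison, once the whole data frame is on the driver via toPandas(), the same per-category split can be done purely in Pandas with groupby. A sketch with inline sample values (the frames dict name is my own, not from the article):

```python
import pandas as pd

# Hypothetical data, mirroring the article's sample rows
df_pd = pd.DataFrame({
    'Category': ['Category A', 'Category B', 'Category A'],
    'ItemID': [1, 2, 4],
    'Amount': [12.40, 30.10, 110.01],
})

# Build one Pandas data frame per category
frames = {cat: grp.reset_index(drop=True) for cat, grp in df_pd.groupby('Category')}
print(frames['Category A'])
```

This keeps everything on the driver, which is fine for small aggregated results but does not scale to data that cannot fit in driver memory, which is where the Spark-side groupby above has the advantage.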

Last modified by Raymond, 10 months ago.
