
Write and read parquet files in Python / Spark


Parquet is a columnar storage format published by Apache and is commonly used in the Hadoop ecosystem. APIs for many programming languages have been implemented to support writing and reading Parquet files.
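
For example, outside of Spark you can work with Parquet files directly in Python using the pandas library. A minimal sketch, assuming pandas and pyarrow are installed and that Sales.csv exists in the working directory (the Sales_pandas.parquet output path is illustrative):

Code snippet

import pandas as pd

# Load the CSV file into a pandas DataFrame
df = pd.read_csv("Sales.csv")

# Write the DataFrame out as Parquet using the pyarrow engine
df.to_parquet("Sales_pandas.parquet", engine="pyarrow")

# Read the Parquet file back and preview the first rows
df2 = pd.read_parquet("Sales_pandas.parquet", engine="pyarrow")
print(df2.head())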

You can also use PySpark to read or write parquet files.

Code snippet

from pyspark.sql import SparkSession

appName = "PySpark Parquet Example"
master = "local"

# Create a Spark session
spark = SparkSession.builder.appName(appName).master(master).getOrCreate()

# Load a CSV file with a header row into a DataFrame
df = spark.read.format("csv").option("header", "true").load("Sales.csv")

# Write the DataFrame out as Parquet
df.write.parquet("Sales.parquet")

# Read the Parquet file back and display its contents
df2 = spark.read.parquet("Sales.parquet")
df2.show()
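
If your dataset has a column suitable for partitioning, you can also write the Parquet output partitioned by that column, which lets Spark skip irrelevant partitions when queries filter on it. A sketch assuming a hypothetical Country column exists in Sales.csv:

Code snippet

# Partition the output by the (assumed) Country column;
# overwrite replaces any previous output at this path
df.write.partitionBy("Country").mode("overwrite").parquet("Sales_partitioned.parquet")

# Spark prunes partitions when the filter matches the partition column
df3 = spark.read.parquet("Sales_partitioned.parquet")
df3.where("Country = 'US'").show()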