
On this page, I'm going to demonstrate how to write and read Parquet files in Spark/Scala using the Spark SQLContext class.

Reference

What is parquet format?

Go to the following project site to learn more about Parquet:

https://parquet.apache.org/

Prerequisites

Spark

If you have not installed Spark, follow this page to set it up:

Install Big Data Tools (Spark, Zeppelin, Hadoop) in Windows for Learning and Practice

Hadoop (Optional)

In this example, I am going to read CSV files from HDFS. You can set up your local Hadoop instance via the same link above.

Alternatively, you can change the file path to a local file, as sketched below.
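
For example, once the sqlContext is created (see the code later in this article), the CSV can be loaded from a local path instead of HDFS. The path below is hypothetical, and the file:// prefix tells Spark to read from the local filesystem:

// Hypothetical local path used instead of the HDFS URI in the main example.
// The file:// prefix forces Spark to read from the local filesystem.
val df = sqlContext.read
  .format("csv")
  .option("header", "true")
  .load("file:///C:/data/Sales.csv")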

IntelliJ IDEA

I am using IntelliJ IDEA to write the Scala script. You can also use the Scala shell to test instead of an IDE. A Scala SDK is also required; in my case, I am using the Scala SDK distributed as part of my Spark installation.

JDK

A JDK is required to run Scala on the JVM.

Read and Write Parquet Files

In this example, I am using the Spark SQLContext object to read and write Parquet files.

Code

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{DataFrame, SQLContext}

object ParquetTest {
  def main(args: Array[String]): Unit = {
    // Run Spark locally with two threads (local[2])
    val conf: SparkConf = new SparkConf().setMaster("local[2]").setAppName("ParquetTest")
    val sc: SparkContext = new SparkContext(conf)
    val sqlContext: SQLContext = new SQLContext(sc)
    writeParquet(sc, sqlContext)
    readParquet(sqlContext)
  }

  def writeParquet(sc: SparkContext, sqlContext: SQLContext): Unit = {
    // Read the CSV file from HDFS; the DataFrameReader returns a DataFrame directly,
    // so no RDD-to-DataFrame conversion (toDF) is needed.
    val df: DataFrame = sqlContext.read.format("csv").option("header", "true").load("hdfs://0.0.0.0:19000/Sales.csv")
    // Write the DataFrame out in Parquet format
    df.write.parquet("Sales.parquet")
  }

  def readParquet(sqlContext: SQLContext): Unit = {
    // Read the Parquet files back into a DataFrame
    val newDataDF = sqlContext.read.parquet("Sales.parquet")
    // Show the contents
    newDataDF.show()
  }
}
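
Since Spark 2.0, SparkSession has superseded SQLContext as the preferred entry point. If you prefer it, the same flow looks roughly like the sketch below (a minimal sketch, assuming Spark 2.x and the same HDFS path):

import org.apache.spark.sql.SparkSession

object ParquetTestSession {
  def main(args: Array[String]): Unit = {
    // SparkSession wraps SparkContext and SQLContext in Spark 2.x
    val spark = SparkSession.builder().master("local[2]").appName("ParquetTest").getOrCreate()
    // Read the CSV file, write it out as Parquet, then read it back and show it
    val df = spark.read.format("csv").option("header", "true").load("hdfs://0.0.0.0:19000/Sales.csv")
    df.write.parquet("Sales.parquet")
    spark.read.parquet("Sales.parquet").show()
    spark.stop()
  }
}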

Before you run the code

Make sure the IntelliJ project has all the required SDKs and libraries set up. In my case:

  • JDK: version 1.8, installed on my C drive.
  • Scala SDK: version 2.11.8, as part of my Spark installation (spark-2.2.1-bin-hadoop2.7).
  • Jars: all libraries in my Spark jars folder (for the Spark libraries used in the sample code); alternatively, you can manage these with sbt, as sketched below.
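
If you would rather let sbt manage the dependencies instead of adding the Spark jars manually, a build.sbt along these lines should work (a sketch, assuming the Scala 2.11.8 and Spark 2.2.1 versions mentioned above):

// build.sbt (sketch) - versions match the Spark/Scala setup described in this article
name := "ParquetTest"
version := "0.1"
scalaVersion := "2.11.8"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.2.1",
  "org.apache.spark" %% "spark-sql" % "2.2.1"
)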


Run the code in IntelliJ

[Screenshot: output of running the code in IntelliJ]

What was created?

When the example code runs, a local folder named Sales.parquet is created, containing the Parquet part files:

[Screenshot: the generated Sales.parquet folder]
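
A quick way to check what was written is to read the folder back and inspect it (a small sketch, reusing the sqlContext from the example above):

// Quick sanity check on the generated Parquet folder
val check = sqlContext.read.parquet("Sales.parquet")
check.printSchema()   // column names come from the CSV header; without inferSchema the types are strings
println(s"Rows written: ${check.count()}")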

Run the code in Zeppelin

You can also run the same code in Zeppelin. If you don't have a Zeppelin instance to play with, you can follow the same link in the Prerequisites section to set one up.
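
In Zeppelin, the Spark interpreter already provides sc and sqlContext, so only the body of the example is needed in a paragraph. A minimal sketch, assuming the %spark interpreter and the same HDFS path:

%spark
// sc and sqlContext are provided by Zeppelin's Spark interpreter
val df = sqlContext.read.format("csv").option("header", "true").load("hdfs://0.0.0.0:19000/Sales.csv")
df.write.mode("overwrite").parquet("Sales.parquet")
sqlContext.read.parquet("Sales.parquet").show()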




Comments (2)

Raymond

@Ansh

Yes, you can.

For example, the following code reads Parquet files from a Hadoop cluster.

def readParquet(sqlContext: SQLContext): Unit = {
  // Read the Parquet files on the remote HDFS cluster into a DataFrame
  val newDataDF = sqlContext.read.parquet("hdfs://hdp-master:19000/user/hadoop/sqoop_test/blogs")
  // Show the contents
  newDataDF.show()
}

The cluster was set up by following this post:

Configure Hadoop 3.1.0 in a Multi Node Cluster

Of course, hdp-master:19000 needs to be accessible from the server that runs the Spark/Scala code.

At the moment, my HDFS is set as readable by all servers/users on the LAN. In a production environment, you may also need to manage permissions.

Furthermore, you can also run Spark apps on a Spark cluster instead of on a standalone or local machine. I will cover more about this in a future post.


Quoting Ansh (2 years ago), Re: Write and Read Parquet Files in Spark/Scala:

Can we connect and read a remotely located HDFS Parquet file by using the above code?