PySpark Read Multiple Lines Records from CSV


CSV is a common format used when extracting and exchanging data between systems and platforms. Once a CSV file is ingested into HDFS, you can easily read it as a DataFrame in Spark. However, there are a few options you need to pay attention to, especially if your source file:

  • Has records across multiple lines.
  • Has escaped characters in the field.
  • Fields contain delimiters.

This page shows you how to handle the above scenarios in Spark using Python as the programming language. If you prefer Scala or another Spark-compatible language, the APIs are very similar.

Sample data file

The CSV file content looks like the following:

ID,Text1,Text2
1,Record 1,Hello World!
2,Record 2,Hello Hadoop!
3,Record 3,"Hello 
Kontext!"
4,Record 4,Hello!
For the third record, field Text2 spans two lines.

The file is ingested into my Hadoop instance at the following location:
hadoop fs -copyFromLocal data.csv /data.csv
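Before touching Spark, it helps to confirm that the sample file is valid CSV in the first place: per RFC 4180, a quoted field may contain line breaks. Python's standard csv module (used here purely for illustration, not part of the Spark API) parses the sample into exactly 4 data records:

```python
import csv
import io

# Same content as the sample data file above
content = (
    'ID,Text1,Text2\n'
    '1,Record 1,Hello World!\n'
    '2,Record 2,Hello Hadoop!\n'
    '3,Record 3,"Hello \nKontext!"\n'
    '4,Record 4,Hello!\n'
)

rows = list(csv.reader(io.StringIO(content)))
header, records = rows[0], rows[1:]
print(len(records))         # 4 data records
print(repr(records[2][2]))  # the third record's Text2 keeps the line break
```

So the multi-line record is legal CSV; the question is simply whether the reader is configured to honour the quoting.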

Normal CSV file read

Let's create a Python script with the following code:

from pyspark.sql import SparkSession

appName = "Python Example - PySpark Read CSV"
master = 'local'

# Create Spark session
spark = SparkSession.builder \
    .master(master) \
    .appName(appName) \
    .getOrCreate()

# Read the CSV file into a DataFrame
df = spark.read.format('csv') \
                .option('header',True) \
                .option('sep', ',') \
                .load('/data.csv')
df.show()

In the above code snippet, we used the 'read' API with CSV as the format and specified the following options:

  • header = True: this means there is a header line in the data file.
  • sep=',': comma is the delimiter/separator. Since our file uses commas, we don't strictly need to specify this, as comma is the default.

The output looks like the following:

+---------+--------+-------------+
|       ID|   Text1|        Text2|
+---------+--------+-------------+
|        1|Record 1| Hello World!|
|        2|Record 2|Hello Hadoop!|
|        3|Record 3|       Hello |
|Kontext!"|    null|         null|
|        4|Record 4|       Hello!|
+---------+--------+-------------+

This isn't what we are looking for, as the multi-line record isn't parsed correctly.

Read multiple line records

Reading multi-line records in Spark is easy: we just need to set the multiLine option to True.

from pyspark.sql import SparkSession

appName = "Python Example - PySpark Read CSV"
master = 'local'

# Create Spark session
spark = SparkSession.builder \
    .master(master) \
    .appName(appName) \
    .getOrCreate()

# Read the CSV file into a DataFrame
df = spark.read.format('csv') \
                .option('header',True) \
                .option('multiLine', True) \
                .load('/data.csv')
df.show()
print(f'Record count is: {df.count()}')

The output looks like the following:

+---+--------+---------------+
| ID|   Text1|          Text2|
+---+--------+---------------+
|  1|Record 1|   Hello World!|
|  2|Record 2|  Hello Hadoop!|
|  3|Record 3|Hello
Kontext!|
|  4|Record 4|         Hello!|
+---+--------+---------------+
Record count is: 4

Different quote character 

Let's imagine the data file content looks like the following (double quote is replaced with @):

ID,Text1,Text2
1,Record 1,Hello World!
2,Record 2,Hello Hadoop!
3,Record 3,@Hello 
Kontext!@
4,Record 4,Hello!
Even if we specify the multiLine option, our previous script still reads the file as 5 records, because the @-quoted field is not recognised with the default quote character.

To fix this, we can specify another very useful option, 'quote':
from pyspark.sql import SparkSession

appName = "Python Example - PySpark Read CSV"
master = 'local'

# Create Spark session
spark = SparkSession.builder \
    .master(master) \
    .appName(appName) \
    .getOrCreate()

# Read the CSV file into a DataFrame
df = spark.read.format('csv') \
                .option('header',True) \
                .option('multiLine', True) \
                .option('quote','@') \
                .load('/data.csv')
df.show()
print(f'Record count is: {df.count()}')

The output looks like the following:

+---+--------+---------------+
| ID|   Text1|          Text2|
+---+--------+---------------+
|  1|Record 1|   Hello World!|
|  2|Record 2|  Hello Hadoop!|
|  3|Record 3|Hello
Kontext!|
|  4|Record 4|         Hello!|
+---+--------+---------------+
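As with Spark's quote option, Python's csv module also accepts a custom quotechar, which reproduces the same behaviour on the @-quoted sample (again just an illustration of the CSV semantics, not Spark code):

```python
import csv
import io

# The @-quoted sample from above
content = (
    'ID,Text1,Text2\n'
    '1,Record 1,Hello World!\n'
    '2,Record 2,Hello Hadoop!\n'
    '3,Record 3,@Hello \nKontext!@\n'
    '4,Record 4,Hello!\n'
)

# quotechar='@' plays the same role as Spark's .option('quote', '@')
records = list(csv.reader(io.StringIO(content), quotechar='@'))[1:]
print(len(records))         # 4
print(repr(records[2][2]))  # 'Hello \nKontext!'
```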

Escape double quotes

Another commonly used option is the escape character.

Let's assume your CSV content looks like the following:

ID,Text1,Text2
1,Record 1,Hello World!
2,Record 2,Hello Hadoop!
3,Record 3,"Hello 
""Kontext""!"
4,Record 4,Hello!

Let's change the read function to use the default quote character '"':

df = spark.read.format('csv') \
                .option('header',True) \
                .option('multiLine', True) \
                .option('quote','"') \
                .load('/data.csv')

It doesn't read the content properly though the record count is correct:

+---+--------+--------------------+
| ID|   Text1|               Text2|
+---+--------+--------------------+
|  1|Record 1|        Hello World!|
|  2|Record 2|       Hello Hadoop!|
|  3|Record 3|"Hello
""Kontext...|
|  4|Record 4|              Hello!|
+---+--------+--------------------+

To fix this, we can just specify the escape option:

df = spark.read.format('csv') \
                .option('header',True) \
                .option('multiLine', True) \
                .option('quote','"') \
                .option('escape','"') \
                .load('/data.csv')

It will output the correct format we are looking for:

+---+--------+-----------------+
| ID|   Text1|            Text2|
+---+--------+-----------------+
|  1|Record 1|     Hello World!|
|  2|Record 2|    Hello Hadoop!|
|  3|Record 3|Hello
"Kontext"!|
|  4|Record 4|           Hello!|
+---+--------+-----------------+

If your escape character is different, you can specify it accordingly.
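Setting escape='"' makes Spark treat a doubled double quote inside a quoted field as a literal quote, which is exactly the RFC 4180 convention. Python's csv module applies the same rule by default (its doublequote=True setting), so it can serve as a quick sanity check of the expected parse:

```python
import csv
import io

# Sample with doubled quotes inside a quoted field
content = (
    'ID,Text1,Text2\n'
    '1,Record 1,Hello World!\n'
    '2,Record 2,Hello Hadoop!\n'
    '3,Record 3,"Hello \n""Kontext""!"\n'
    '4,Record 4,Hello!\n'
)

# doublequote=True (the default) mirrors Spark's escape='"' here
records = list(csv.reader(io.StringIO(content)))[1:]
print(len(records))         # 4
print(repr(records[2][2]))  # 'Hello \n"Kontext"!'
```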

Multiple character quotes

If your fields are quoted using multiple characters, unfortunately Spark's CSV reader doesn't support that.

For example, let's assume the field is quoted with double double quotes:

ID,Text1,Text2
1,Record 1,Hello World!
2,Record 2,Hello Hadoop!
3,Record 3,""Hello 
Kontext!""
4,Record 4,Hello!

We will encounter an error if we use the following code to read it:

df = spark.read.format('csv') \
                .option('header',True) \
                .option('multiLine', True) \
                .option('quote','""') \
                .option('escape','"') \
                .load('/data.csv')

Error:

java.lang.RuntimeException: quote cannot be more than one character

Similarly, for escape character, it only supports one character.

To resolve these problems, you need to pre-process the file or implement your own text file deserializer.
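One simple workaround is to pre-process the text before parsing: replace the multi-character quote with a single, otherwise unused character, then parse normally with that character as the quote. A minimal sketch in plain Python (the sentinel '\x01' is an arbitrary choice, and this assumes the double-double-quote sequence only ever appears as a quote marker; with Spark you would apply the same substitution when staging the file):

```python
import csv
import io

# Sample with a double double quote ("") as the quote marker
content = (
    'ID,Text1,Text2\n'
    '1,Record 1,Hello World!\n'
    '2,Record 2,Hello Hadoop!\n'
    '3,Record 3,""Hello \nKontext!""\n'
    '4,Record 4,Hello!\n'
)

QUOTE = '\x01'  # sentinel character assumed absent from the data
fixed = content.replace('""', QUOTE)

# Parse with the single-character sentinel as the quote character
records = list(csv.reader(io.StringIO(fixed), quotechar=QUOTE))[1:]
print(len(records))         # 4
print(repr(records[2][2]))  # 'Hello \nKontext!'
```

The same rewritten file could then be read by Spark with .option('quote', '\x01'), since the quote is now a single character.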
