PySpark Read Multiple Lines Records from CSV


CSV is a common format used when extracting and exchanging data between systems and platforms. Once a CSV file is ingested into HDFS, you can easily read it as a DataFrame in Spark. However, there are a few options you need to pay attention to, especially if your source file:

  • Has records across multiple lines.
  • Has escaped characters in the field.
  • Has fields that contain the delimiter.

This page shows you how to handle the above scenarios in Spark using Python as the programming language. If you prefer Scala or another Spark-compatible language, the APIs are very similar.

Sample data file

The CSV file content looks like the following:

ID,Text1,Text2
1,Record 1,Hello World!
2,Record 2,Hello Hadoop!
3,Record 3,"Hello 
Kontext!"
4,Record 4,Hello!

For the third record, field Text2 spans two lines.

The file is ingested into my Hadoop instance at the following location:

hadoop fs -copyFromLocal data.csv /data.csv
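
To verify the upload, you can print the file content back from HDFS:

hadoop fs -cat /data.csv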

Normal CSV file read

Let's create a python script using the following code:

from pyspark.sql import SparkSession

appName = "Python Example - PySpark Read CSV"
master = 'local'

# Create Spark session
spark = SparkSession.builder \
    .master(master) \
    .appName(appName) \
    .getOrCreate()

# Read CSV file into a DataFrame
df = spark.read.format('csv') \
                .option('header',True) \
                .option('sep', ',') \
                .load('/data.csv')
df.show()

In the above code snippet, we used the 'read' API with CSV as the format and specified the following options:

  • header = True: this means there is a header line in the data file.
  • sep=',': comma is the delimiter/separator. Since our file uses commas, we don't need to specify this explicitly, as comma is the default.
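
Equivalently, these options can be passed as keyword arguments to the csv shortcut method of DataFrameReader; a minimal sketch of the same read:

df = spark.read.csv('/data.csv', header=True, sep=',')
df.show()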

The output looks like the following:

+---------+--------+-------------+
|       ID|   Text1|        Text2|
+---------+--------+-------------+
|        1|Record 1| Hello World!|
|        2|Record 2|Hello Hadoop!|
|        3|Record 3|       Hello |
|Kontext!"|    null|         null|
|        4|Record 4|       Hello!|
+---------+--------+-------------+

This isn't what we are looking for, as it doesn't parse the multi-line record correctly.

Read multiple line records

It's very easy to read multi-line CSV records in Spark; we just need to set the multiLine option to True.

from pyspark.sql import SparkSession

appName = "Python Example - PySpark Read CSV"
master = 'local'

# Create Spark session
spark = SparkSession.builder \
    .master(master) \
    .appName(appName) \
    .getOrCreate()

# Read CSV file into a DataFrame
df = spark.read.format('csv') \
                .option('header',True) \
                .option('multiLine', True) \
                .load('/data.csv')
df.show()
print(f'Record count is: {df.count()}')

The output looks like the following:

+---+--------+---------------+
| ID|   Text1|          Text2|
+---+--------+---------------+
|  1|Record 1|   Hello World!|
|  2|Record 2|  Hello Hadoop!|
|  3|Record 3|Hello
Kontext!|
|  4|Record 4|         Hello!|
+---+--------+---------------+
Record count is: 4
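
To confirm the embedded newline survived parsing, we can inspect the third record directly (note that ID is read as a string since we didn't enable schema inference); a quick check:

row = df.filter(df.ID == '3').first()
print(repr(row.Text2))

The printed representation should resemble 'Hello \nKontext!', with the newline preserved inside the field.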

Different quote character 

Let's imagine the data file content looks like the following (double quote is replaced with @):

ID,Text1,Text2
1,Record 1,Hello World!
2,Record 2,Hello Hadoop!
3,Record 3,@Hello 
Kontext!@
4,Record 4,Hello!

Even if we specify the multiLine option, our previous script still reads this file as 5 records.

To fix this, we can simply specify another very useful option, 'quote':

from pyspark.sql import SparkSession

appName = "Python Example - PySpark Read CSV"
master = 'local'

# Create Spark session
spark = SparkSession.builder \
    .master(master) \
    .appName(appName) \
    .getOrCreate()

# Read CSV file into a DataFrame
df = spark.read.format('csv') \
                .option('header',True) \
                .option('multiLine', True) \
                .option('quote','@') \
                .load('/data.csv')
df.show()
print(f'Record count is: {df.count()}')

The output looks like the following:

+---+--------+---------------+
| ID|   Text1|          Text2|
+---+--------+---------------+
|  1|Record 1|   Hello World!|
|  2|Record 2|  Hello Hadoop!|
|  3|Record 3|Hello
Kontext!|
|  4|Record 4|         Hello!|
+---+--------+---------------+

Escape double quotes

Another commonly used option is the escape character.

Let's assume your CSV content looks like the following:

ID,Text1,Text2
1,Record 1,Hello World!
2,Record 2,Hello Hadoop!
3,Record 3,"Hello 
""Kontext""!"
4,Record 4,Hello!

Let's change the read function to use the default quote character '"':

df = spark.read.format('csv') \
                .option('header',True) \
                .option('multiLine', True) \
                .option('quote','"') \
                .load('/data.csv')

It doesn't read the content properly, though the record count is correct:

+---+--------+--------------------+
| ID|   Text1|               Text2|
+---+--------+--------------------+
|  1|Record 1|        Hello World!|
|  2|Record 2|       Hello Hadoop!|
|  3|Record 3|"Hello
""Kontext...|
|  4|Record 4|              Hello!|
+---+--------+--------------------+

To fix this, we can just specify the escape option:

df = spark.read.format('csv') \
                .option('header',True) \
                .option('multiLine', True) \
                .option('quote','"') \
                .option('escape','"') \
                .load('/data.csv')

It will output the correct format we are looking for:

+---+--------+-----------------+
| ID|   Text1|            Text2|
+---+--------+-----------------+
|  1|Record 1|     Hello World!|
|  2|Record 2|    Hello Hadoop!|
|  3|Record 3|Hello
"Kontext"!|
|  4|Record 4|           Hello!|
+---+--------+-----------------+

If your escape character is different, you can specify it accordingly.
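
For example, in a hypothetical variant of the sample file where embedded quotes are escaped with a backslash instead of being doubled (e.g. "Hello \"Kontext\"!"), only the escape option changes:

df = spark.read.format('csv') \
                .option('header', True) \
                .option('multiLine', True) \
                .option('quote', '"') \
                .option('escape', '\\') \
                .load('/data.csv')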

Multiple character quotes

If your attributes are quoted using multiple characters in CSV, unfortunately Spark's built-in CSV reader doesn't support that.

For example, let's assume the field is quoted with double double quotes:

ID,Text1,Text2
1,Record 1,Hello World!
2,Record 2,Hello Hadoop!
3,Record 3,""Hello 
Kontext!""
4,Record 4,Hello!

We will encounter an error if we use the following code to read it:

df = spark.read.format('csv') \
                .option('header',True) \
                .option('multiLine', True) \
                .option('quote','""') \
                .option('escape','"') \
                .load('/data.csv')

Error:

java.lang.RuntimeException: quote cannot be more than one character

Similarly, the escape option only supports a single character.

To resolve these problems, you need to implement your own text file deserializer, or pre-process the file into a parseable form first.
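
One possible pre-processing workaround (a sketch, assuming the placeholder character '@' never appears in the real data and each input file fits comfortably in memory) is to normalize the two-character quote into a single character first, then parse with the standard CSV reader:

from pyspark.sql.functions import regexp_replace

# Read each file as a single row so multi-line records stay intact.
raw = spark.read.text('/data.csv', wholetext=True)

# Replace the two-character quote "" with the single placeholder '@'.
# Assumption: '@' does not occur anywhere in the real data.
fixed = raw.select(regexp_replace('value', '""', '@').alias('value'))

# Write the normalized text back out, then parse with a one-character quote.
fixed.write.mode('overwrite').text('/data_fixed')

df = spark.read.format('csv') \
               .option('header', True) \
               .option('multiLine', True) \
               .option('quote', '@') \
               .load('/data_fixed')
df.show()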
