Background

This page provides an example of loading a text file from HDFS through the SparkContext instance (sc) in Zeppelin.

Reference

Details about the APIs used in this example can be found at:

SparkContext.textFile

https://spark.apache.org/docs/2.2.1/api/java/org/apache/spark/SparkContext.html#textFile-java.lang.String-int-

SQLContext

https://spark.apache.org/docs/2.2.1/api/java/org/apache/spark/sql/SQLContext.html
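For example, SparkContext.textFile accepts the file path plus an optional minimum number of partitions, as in this minimal sketch (using the HDFS path from the steps below):

%spark
// textFile(path) uses the default minimum partitions; the optional
// second argument requests at least that many partitions for the RDD.
val rddDefault = sc.textFile("hdfs://0.0.0.0:19000/Sales.csv")
val rddSplit = sc.textFile("hdfs://0.0.0.0:19000/Sales.csv", 4)
println(rddSplit.getNumPartitions)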

Prerequisites

Hadoop and Zeppelin

If you don’t already have an environment to play with, refer to the following page to install Zeppelin and Hadoop:

Install Big Data Tools (Spark, Zeppelin, Hadoop) in Windows for Learning and Practice

Sample text file

In this example, I am going to use the file created in this tutorial:

Create a local CSV file

Step-by-step guide

Create a new note

Create a new note in Zeppelin with the Note Name ‘Test HDFS’:

(Screenshot of the Create New Note dialog.)

Create a DataFrame using the RDD.toDF function

%spark
import spark.implicits._

// Read file as RDD
val rdd = sc.textFile("hdfs://0.0.0.0:19000/Sales.csv")

// Convert rdd to dataframe using toDF
val df = rdd.toDF
z.show(df)

The output:

(Screenshot of the z.show output.)

As shown in the above screenshot, each line of the file is converted to a single row with one string column (named value by default).

Let’s convert each string row into a tuple of columns so that the DataFrame has separate, typed columns, as sketched below.
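The following is a minimal sketch, assuming the sample Sales.csv has two comma-separated columns (a month string and a numeric amount); the column names passed to toDF are illustrative only:

%spark
// Split each line into fields and build a two-element tuple per record.
// The column names below are assumptions based on the sample file.
val header = rdd.first()
val tuples = rdd.filter(_ != header)
  .map(_.split(","))
  .map(cols => (cols(0), cols(1).toDouble))
  .toDF("Month", "Amount")
z.show(tuples)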

Read CSV using spark.read

%spark
val df = spark.read.format("csv").option("header", "true").load("hdfs://0.0.0.0:19000/Sales.csv")
z.show(df)

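With this approach all columns are read as strings by default. If you also want Spark to infer column types, the CSV reader’s inferSchema option can be added. A small variation of the paragraph above (same file path):

%spark
// Ask the CSV reader to sample the file and infer column types
// (for example, reading the amount column as a number instead of a string).
val dfTyped = spark.read
  .format("csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .load("hdfs://0.0.0.0:19000/Sales.csv")
dfTyped.printSchema()
z.show(dfTyped)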

Alternative method for converting RDD[String] to DataFrame

For earlier Spark versions, or when you need full control over the schema, you can convert the RDD[String] to a DataFrame manually using map functions and an explicit schema:

%spark
import org.apache.spark.sql._
import org.apache.spark.sql.types._
import org.apache.spark.sql.SQLContext
// Needed for unix_timestamp and from_unixtime used below
import org.apache.spark.sql.functions._

// Read file as RDD
val rdd = sc.textFile("hdfs://0.0.0.0:19000/Sales.csv")
val header = rdd.first()
val records = rdd.filter(row => row != header)

// Create a data Row from one parsed line
def row(line: List[String]): Row = { Row(line(0), line(1).toDouble) }

// Build the schema explicitly; the column names are hard-coded here
def dfSchema(columnNames: List[String]): StructType = {
  StructType(
    Seq(StructField("MonthOld", StringType, true),
      StructField("Amount", DoubleType, false))
  )
}

val headerColumns = header.split(",").to[List]
val schema = dfSchema(headerColumns)
val data = records.map(_.split(",").to[List]).map(row)

//val df = spark.createDataFrame(data, schema)
//or
val sqlContext = new SQLContext(sc)
// Import implicits from the SQLContext so the $"column" syntax works
import sqlContext.implicits._
val df = sqlContext.createDataFrame(data, schema)
// Reformat the date string from dd/MM/yyyy to yyyy-MM-dd
val df2 = df.withColumn("Month", from_unixtime(unix_timestamp($"MonthOld", "dd/MM/yyyy"), "yyyy-MM-dd")).drop("MonthOld")

z.show(df2)

The result is similar to the previous one, except that the date column is also converted to yyyy-MM-dd format:

(Screenshot of the z.show output with the converted Month column.)
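As a side note, on Spark 2.2 and later the date conversion in the last step can also be done with the built-in to_date function instead of chaining unix_timestamp and from_unixtime. A minimal sketch reusing the df created above (this produces a DateType column rather than a formatted string):

%spark
// Parse the dd/MM/yyyy string directly into a DateType column.
val df3 = df.withColumn("Month", to_date($"MonthOld", "dd/MM/yyyy")).drop("MonthOld")
df3.printSchema()
z.show(df3)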
