
Read Text File from Hadoop in Zeppelin through Spark Context

Raymond Tang

zeppelin spark hadoop rdd

Background

This page provides an example of loading a text file from HDFS through SparkContext, which is available as sc in a Zeppelin note.

Reference

The details about these APIs can be found at the links below; a short usage sketch follows.

SparkContext.textFile

https://spark.apache.org/docs/2.2.1/api/java/org/apache/spark/SparkContext.html#textFile-java.lang.String-int-

SQLContext

https://spark.apache.org/docs/2.2.1/api/java/org/apache/spark/sql/SQLContext.html
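For instance, textFile accepts an optional second argument, the minimum number of partitions (the int in the signature linked above). A minimal sketch reusing the sample file path from later in this page; the partition count is purely illustrative:

%spark
// Read a text file with an explicit minimum number of partitions
val lines = sc.textFile("hdfs://0.0.0.0:19000/Sales.csv", 4)
println(lines.getNumPartitions)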

Prerequisites

Hadoop and Zeppelin

Refer to the following page to install Zeppelin and Hadoop if you don’t yet have an environment to play with.

Install Big Data Tools (Spark, Zeppelin, Hadoop) in Windows for Learning and Practice

Sample text file

In this example, I am going to use the file created in this tutorial:

Create a local CSV file
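The code below assumes this file has a header row, a month column in dd/MM/yyyy format, and a numeric amount column, along these lines (hypothetical header and values):

Month,Amount
01/01/2018,100.0
01/02/2018,200.0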

Step by step guide

Create a new note

Create a new note in Zeppelin with the Note Name ‘Test HDFS’:

(Screenshot: creating the ‘Test HDFS’ note)

Create data frame using RDD.toDF function

%spark
import spark.implicits._

// Read the file as an RDD of lines
val rdd = sc.textFile("hdfs://0.0.0.0:19000/Sales.csv")

// Convert the RDD to a data frame using toDF
val df = rdd.toDF
z.show(df)

The output:

(Screenshot: z.show output, one row per input line)

As shown in the above screenshot, each line is converted to one row.
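You can confirm this by printing the schema: calling toDF on an RDD of strings produces a data frame with a single string column named value.

%spark
// The converted data frame has a single string column named "value"
df.printSchema
// root
//  |-- value: string (nullable = true)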

Each row holds the whole line as one string; let’s parse the lines into proper columns instead.

Read CSV using spark.read

%spark
val df = spark.read.format("csv").option("header", "true").load("hdfs://0.0.0.0:19000/Sales.csv")
z.show(df)

(Screenshot: data frame with parsed CSV columns)
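If you also want typed columns rather than strings everywhere, the CSV reader can infer them through the standard inferSchema option; a minimal sketch (dfTyped is just an illustrative name):

%spark
// Let the CSV reader infer column types instead of defaulting to strings
val dfTyped = spark.read.format("csv").option("header", "true").option("inferSchema", "true").load("hdfs://0.0.0.0:19000/Sales.csv")
dfTyped.printSchema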

Alternative method for converting RDD[String] to DataFrame

For earlier Spark versions, you may need to convert an RDD[String] to a DataFrame manually using map functions.

%spark
import org.apache.spark.sql._
import org.apache.spark.sql.types._
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.functions._
import spark.implicits._

// Read the file as an RDD and separate the header from the data rows
val rdd = sc.textFile("hdfs://0.0.0.0:19000/Sales.csv")
val header = rdd.first()
val records = rdd.filter(row => row != header)

// Convert one parsed line into a data row
def row(line: List[String]): Row = Row(line(0), line(1).toDouble)

// Build the schema (hard-coded here; columnNames is kept for illustration)
def dfSchema(columnNames: List[String]): StructType = {
  StructType(
      Seq(StructField("MonthOld", StringType, true),
      StructField("Amount", DoubleType, false))
      )
}

val headerColumns = header.split(",").to[List]
val schema = dfSchema(headerColumns)
val data = records.map(_.split(",").to[List]).map(row)

// In Spark 2.x, spark.createDataFrame(data, schema) works as well
val df = new SQLContext(sc).createDataFrame(data, schema)

// Reformat the month from dd/MM/yyyy to yyyy-MM-dd
val df2 = df.withColumn("Month", from_unixtime(unix_timestamp($"MonthOld", "dd/MM/yyyy"), "yyyy-MM-dd")).drop("MonthOld")

z.show(df2)

The result is similar to the previous one, except that the date format is also converted:

(Screenshot: data frame with the converted Month column)
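As a side note, on Spark 2.2 and above the same conversion can be written more directly with to_date and date_format instead of the Unix timestamp round trip; a sketch under that version assumption:

%spark
// Spark 2.2+: parse the string as a date, then format it back as yyyy-MM-dd
val df3 = df.withColumn("Month", date_format(to_date($"MonthOld", "dd/MM/yyyy"), "yyyy-MM-dd")).drop("MonthOld")
z.show(df3)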
