Convert String to Date in Spark (Scala)


Context

This page demonstrates how to convert a string to java.util.Date in Spark via Scala.

Prerequisites

If you have not installed Spark, follow the page below to install it:

Install Big Data Tools (Spark, Zeppelin, Hadoop) in Windows for Learning and Practice

Sample code

The following code snippet uses the pattern yyyy-MM-dd to parse a string into a Date.

import java.text.SimpleDateFormat
import java.util.Date

// Create a formatter for the yyyy-MM-dd pattern.
val format = new SimpleDateFormat("yyyy-MM-dd")

// Parse the string into a java.util.Date.
val date = format.parse("2018-03-03")

Output

The Spark shell shows the parsed value as a java.util.Date representing Saturday, 3 March 2018; because the pattern contains only date fields, the time-of-day defaults to midnight in the JVM's local time zone.
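If you are working with DataFrames rather than individual values, the same conversion can be done with Spark SQL's built-in to_date function (available since Spark 2.2). The sketch below assumes it runs in spark-shell, where a SparkSession named spark already exists; the DataFrame and column names are illustrative.

import org.apache.spark.sql.functions.{col, to_date}

// In spark-shell a SparkSession named `spark` is predefined.
import spark.implicits._

// Illustrative DataFrame with a string column of dates.
val df = Seq("2018-03-03", "2018-03-04").toDF("date_str")

// to_date parses the strings into a DateType column using the given pattern.
val parsed = df.withColumn("date", to_date(col("date_str"), "yyyy-MM-dd"))
parsed.show()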

Retrieve the weekday name

The same Date object can be formatted with a day-of-week pattern; in SimpleDateFormat, four or more E letters in the pattern produce the full day name:

scala> val format2 = new SimpleDateFormat("EEEEE")
format2: java.text.SimpleDateFormat = java.text.SimpleDateFormat@3ecbf05

scala> format2.format(date)
res0: String = Saturday
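Note that SimpleDateFormat is not thread-safe, so for new code the java.time API is generally preferred. A minimal sketch of the same parse and weekday lookup using java.time (variable names here are illustrative):

import java.time.LocalDate
import java.time.format.{DateTimeFormatter, TextStyle}
import java.util.Locale

// DateTimeFormatter is immutable and thread-safe, unlike SimpleDateFormat.
val localDate = LocalDate.parse("2018-03-03", DateTimeFormatter.ofPattern("yyyy-MM-dd"))

// getDayOfWeek returns a DayOfWeek enum; getDisplayName renders it as text.
val dayName = localDate.getDayOfWeek.getDisplayName(TextStyle.FULL, Locale.ENGLISH)
// dayName: String = Saturday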

Summary

For the complete list of Java date and time patterns, please refer to the following link.

https://docs.oracle.com/javase/7/docs/api/java/text/SimpleDateFormat.html
