In Spark, the SparkContext.parallelize function can be used to convert a list of objects into an RDD, and the RDD can then be converted into a DataFrame through SparkSession.

Similar to PySpark, we can use the SparkContext.parallelize function to create an RDD; alternatively, the Scala API also provides SparkContext.makeRDD to convert a list to an RDD, as shown in the sketch below.
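
For instance, a minimal sketch of the makeRDD alternative, assuming a SparkSession named spark already exists (as created in the code snippet further below); the sample list here is purely illustrative:

/* makeRDD is only available in the Scala API and behaves like parallelize */
val rddAlt = spark.sparkContext.makeRDD(List("a", "b", "c"))
println(rddAlt.count())  // prints 3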

The output of the code snippet below looks like the following:

+----------+-----+------------------+
|  Category|Count|       Description|
+----------+-----+------------------+
|Category A|  100|This is category A|
|Category B|  120|This is category B|
|Category C|  150|This is category C|
+----------+-----+------------------+

Code snippet

import org.apache.spark.sql._
import org.apache.spark.sql.types._

val appName = "Scala Example - List to Spark Data Frame"
val master = "local"

/* Create Spark session */
val spark = SparkSession.builder.appName(appName).master(master).getOrCreate()

/* List */
val data = List(
  Row("Category A", 100, "This is category A"),
  Row("Category B", 120, "This is category B"),
  Row("Category C", 150, "This is category C")
)

val schema = StructType(List(
  StructField("Category", StringType, true),
  StructField("Count", IntegerType, true),
  StructField("Description", StringType, true)
))

/* Convert list to RDD */
val rdd = spark.sparkContext.parallelize(data)

/* Create data frame */
val df = spark.createDataFrame(rdd, schema)
println(df.schema)
df.show()
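
For lists of tuples (rather than Row objects), a more concise alternative is the toDF method brought in by Spark's implicits. A minimal sketch, with column names chosen to match the example above:

/* Alternative: convert a list of tuples directly, letting Spark infer the schema */
import spark.implicits._

val df2 = List(
  ("Category A", 100, "This is category A"),
  ("Category B", 120, "This is category B"),
  ("Category C", 150, "This is category C")
).toDF("Category", "Count", "Description")

df2.show()

With this approach, Spark infers the schema from the tuple element types, so the explicit StructType is not needed; note that primitive columns such as Count are inferred as non-nullable.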