Spark Partition Discovery
Spark supports partition discovery. All built-in file sources (Text/CSV/JSON/ORC/Parquet) support partition discovery and partition information inference.
Consider an example data set that is stored with two partition levels: month and country, as sketched below.
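A minimal sketch of how such a layout could be produced, assuming a base path of /data and hypothetical columns month, country, and amount (none of these names come from the original page); each distinct (month, country) pair becomes a nested directory such as /data/month=2021-01/country=US/:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partition-discovery-example").getOrCreate()

# Hypothetical sample data with the two partition columns.
sample = spark.createDataFrame(
    [("2021-01", "US", 100), ("2021-01", "CN", 200), ("2021-02", "US", 150)],
    ["month", "country", "amount"],
)

# partitionBy writes one directory level per partition column.
sample.write.mode("overwrite").partitionBy("month", "country").parquet("/data")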
The following code snippet reads all the underlying Parquet files:
# basePath tells Spark where the partitioned directory structure starts,
# so month and country are discovered as partition columns.
df = spark.read.option("basePath", "/data").parquet("/data")
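As a quick check (a sketch based on the hypothetical data above; the exact columns and inferred types depend on your data and on spark.sql.sources.partitionColumnTypeInference.enabled), the discovered partition columns are appended to the DataFrame schema:

df.printSchema()
# root
#  |-- amount: long (nullable = true)
#  |-- month: string (nullable = true)
#  |-- country: string (nullable = true)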