Spark supports partition discovery. All built-in file sources (Text/CSV/JSON/ORC/Parquet) support partition discovery and partition information inference.
The following shows an example data set stored under two partition levels: month and country.
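A two-level layout like the one described above typically looks like this on disk (paths and values are illustrative):

```
/data
├── month=2024-01
│   ├── country=US
│   │   └── part-00000.parquet
│   └── country=DE
│       └── part-00000.parquet
└── month=2024-02
    └── country=US
        └── part-00000.parquet
```

Spark infers `month` and `country` as columns of the resulting DataFrame from the `key=value` directory names.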
The following code snippet reads all of the underlying Parquet files:

df = spark.read.option("basePath", "/data").parquet("/data")