Right now I believe the only supported option is to pass a comma-delimited
list of paths.
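
For example (a minimal sketch only, assuming parquetFile accepts a
comma-delimited string of paths as described above; the file names below are
hypothetical placeholders):

// Sketch: read several specific Parquet files by listing them explicitly.
val parquetFiles = sqlContext.parquetFile(
  "data/part-00000.parquet,data/part-00001.parquet,data/part-00002.parquet")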

I've opened SPARK-3928: Support wildcard matches on Parquet files
<https://issues.apache.org/jira/browse/SPARK-3928> to request this feature.

Nick

On Mon, Oct 13, 2014 at 12:21 PM, Sadhan Sood <sadhan.s...@gmail.com> wrote:

> How can we read all Parquet files in a directory in Spark SQL? We are
> following this example, which shows a way to read one file:
>
> // Read in the parquet file created above. Parquet files are self-describing
> // so the schema is preserved.
> // The result of loading a Parquet file is also a SchemaRDD.
> val parquetFile = sqlContext.parquetFile("people.parquet")
>
> // Parquet files can also be registered as tables and then used in SQL statements.
> parquetFile.registerTempTable("parquetFile")
>
>
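
For reference, the temp table registered in the quoted example can then be
queried through sqlContext.sql (a minimal sketch; the query itself is
illustrative only):

// Sketch: run a SQL statement against the registered temp table.
val results = sqlContext.sql("SELECT * FROM parquetFile")
results.collect().foreach(println)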
