The SQL programming guide provides an example <http://spark.apache.org/docs/latest/sql-programming-guide.html#loading-data-programmatically> for creating a table using Spark SQL:
CREATE TEMPORARY TABLE parquetTable
USING org.apache.spark.sql.parquet
OPTIONS (
  path "examples/src/main/resources/people.parquet"
)

SELECT * FROM parquetTable

However, I don’t see where the full list of options is documented. The JDBC section <http://spark.apache.org/docs/latest/sql-programming-guide.html#jdbc-to-other-databases> lists a few specific options beyond just path. Are there other options for the different types of “storage” methods? For instance, if I am saving a table to S3, how do I specify server-side encryption? How do I overwrite existing files? Can I set the number of partitions?

Thanks,
Dan
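
P.S. To make the question concrete, here is a rough sketch of the kind of write I’m trying to do, using the DataFrame API rather than the OPTIONS clause. The bucket name is made up, and the fs.s3a.server-side-encryption-algorithm Hadoop property is my guess at where the encryption setting would go (it may not be the right knob), which is part of why I’m asking:

  import org.apache.spark.sql.{SQLContext, SaveMode}

  val sqlContext: SQLContext = ???  // existing SQLContext
  val df = sqlContext.read.parquet("examples/src/main/resources/people.parquet")

  // Guess: server-side encryption has to go through the Hadoop configuration
  // rather than the OPTIONS clause? The property name is an assumption on my part.
  sqlContext.sparkContext.hadoopConfiguration
    .set("fs.s3a.server-side-encryption-algorithm", "AES256")

  df.repartition(10)               // is this how I would control the number of output partitions/files?
    .write
    .mode(SaveMode.Overwrite)      // is this the supported way to overwrite existing files?
    .parquet("s3a://my-bucket/people.parquet")  // hypothetical bucket/path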