Spark SQL now supports Hive-style dynamic partitioning:
https://cwiki.apache.org/confluence/display/Hive/DynamicPartitions

This is a new feature, so you'll have to build from master or wait for 1.2.
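As a rough sketch of what that looks like (table names here are hypothetical, assuming the JSON data has been registered as a table called events_raw with a date column):

```sql
-- Dynamic partitioning must be enabled first (Hive defaults to strict mode):
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

-- Target table, partitioned on date:
CREATE TABLE events_partitioned (id STRING, payload STRING)
PARTITIONED BY (date STRING);

-- No explicit partition value: each row lands in the partition named by
-- its own date column, so one INSERT populates all partitions at once.
INSERT OVERWRITE TABLE events_partitioned PARTITION (date)
SELECT id, payload, date FROM events_raw;
```

Note that the dynamic partition column must come last in the SELECT list, matching the order declared in the PARTITION clause.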

On Wed, Oct 22, 2014 at 7:03 PM, raymond <rgbbones.m...@gmail.com> wrote:

> Hi
>
>         I have a json file that can be loaded by sqlContext.jsonFile into a
> table, but this table is not partitioned.
>
>         I wish to transform this table into a partitioned table, say on the
> field "date". What would be the best approach to do this? In Hive this is
> usually done by loading data directly into a dedicated partition, but I
> don't want to select the data out by a specific partition value and insert
> it once per partition field value. Is there a quicker way, and how would I
> do it in Spark SQL?
>
> raymond
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>