If you are writing to an existing Hive table, our INSERT INTO operator
follows Hive's requirement, which is:
"the dynamic partition columns must be specified last among the columns in
the SELECT statement and in the same order in which they appear in the
PARTITION() clause."
You can find this requirement in Hive's documentation.
Are you writing to an existing Hive ORC table?
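To make the ordering requirement concrete: whatever order the columns have in the source schema, the dynamic partition columns must end up last, in the same order as in the PARTITION() clause. As a minimal, Spark-free sketch of that reordering (the helper name `partition_last` and the column names are hypothetical, not part of any Spark or Hive API):

```python
def partition_last(columns, row, partition_cols):
    """Reorder a schema and a row so that the partition columns come last,
    in the order given by partition_cols, as Hive's dynamic-partition
    INSERT requires."""
    data_cols = [c for c in columns if c not in partition_cols]
    order = data_cols + list(partition_cols)
    index = {c: i for i, c in enumerate(columns)}
    return order, [row[index[c]] for c in order]

# Example: schema has the partition columns (year, month) first; Hive
# expects them last, so they are moved to the end of both schema and row.
cols, row = partition_last(["year", "month", "temp"],
                           [2015, 6, 21.5],
                           ["year", "month"])
# cols -> ["temp", "year", "month"], row -> [21.5, 2015, 6]
```

This mirrors the workaround reported below: moving the partitioned columns to the end of the schema string and of each Row makes the partitions come out correctly.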
On Wed, Jun 17, 2015 at 3:25 PM, Cheng Lian wrote:
> Thanks for reporting this. Would you mind helping to create a JIRA for this?
>
>
> On 6/16/15 2:25 AM, patcharee wrote:
>
>> I found if I move the partitioned columns in schemaString and in Row to
>> the end of the sequence, then it works correctly...
On 16. juni 2015 11:14, patcharee wrote:
Hi,
I am using spark 1.4 and HiveContext to append data into a partitioned
hive table. I found that the data insert into the table is correct, but
the partition(folder) created is totally wrong.
Below is my code snippet:
---