Thanks Ted. This looks like the issue, since I am running it on EMR and the
Hive version is 1.0.0.
Thanks,
Bijay
On Wed, May 4, 2016 at 10:29 AM, Ted Yu wrote:
> Looks like you were hitting HIVE-11940
>
> On Wed, May 4, 2016 at 10:02 AM, Bijay Kumar Pathak
> wrote:
>
>> Hello,
>>
>> I am writing a DataFrame of around 60+ GB into a partitioned Hive table
>> using hiveContext in parquet format. [...]

Looks like you were hitting HIVE-11940
On Wed, May 4, 2016 at 10:02 AM, Bijay Kumar Pathak
wrote:
> Hello,
>
> I am writing a DataFrame of around 60+ GB into a partitioned Hive table
> using hiveContext in parquet format. The Spark insert overwrite job
> completes in a reasonable amount of time, around [...]
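
For reference, a minimal sketch of the write pattern described above,
assuming Spark 1.x with a HiveContext; the input path and target table
name are placeholders, not taken from the thread:

  import org.apache.spark.sql.SaveMode
  import org.apache.spark.sql.hive.HiveContext

  val hiveContext = new HiveContext(sc)

  // Dynamic partitioning must be enabled for a partitioned
  // INSERT OVERWRITE from a DataFrame.
  hiveContext.setConf("hive.exec.dynamic.partition", "true")
  hiveContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")

  // Hypothetical ~60 GB input; the partition column must be the
  // last column of the DataFrame, matching the target table's schema.
  val df = hiveContext.read.parquet("/path/to/input")

  // Overwrite the (hypothetical) parquet-backed, partitioned table.
  df.write
    .mode(SaveMode.Overwrite)
    .insertInto("target_db.target_table")

The two dynamic-partition properties are needed because the table is
partitioned and the partition values come from the DataFrame rows rather
than from a static partition spec.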