> *From:* Suresh Kumar Sethuramaswamy [mailto:rock...@gmail.com]
> *Sent:* 13 December 2016 14:19
> *To:* user@hive.apache.org
> *Subject:* Re: PARTITION error because different columns size
>
> Hi Joaquin
>
> In Hive, when you run 'select * from employee' it is going to return the
> partition columns as well at the end, whereas you don't want those to be
> inserted into your ORC table, so your insert query should look like:
>
> INSERT INTO TABLE employee_orc PARTITION (country='USA', office='HQ-TX')
>
Hi Joaquin,
Suresh was faster than me ...
Also, you should check this:
https://cwiki.apache.org/confluence/display/Hive/Tutorial#Tutorial-Dynamic-PartitionInsert
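The tutorial linked above covers dynamic partition inserts, where Hive derives the partition values from the query instead of having them hard-coded. A hedged sketch of that pattern (the column names `id` and `name` are hypothetical; the two `SET` properties are the standard switches for enabling the feature):

```sql
-- Enable dynamic partitioning; 'nonstrict' allows all partition columns
-- to be dynamic (no static partition value required)
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

-- Hive fills country/office from the LAST two columns of the SELECT list,
-- so partition columns must come last (id, name are hypothetical)
INSERT INTO TABLE employee_orc PARTITION (country, office)
SELECT id, name, country, office
FROM employee;
```

With this form a single statement can populate many partitions at once, at the cost of the ordering requirement on the SELECT list.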
On Tue, Dec 13, 2016 at 3:19 PM, Suresh Kumar Sethuramaswamy <
rock...@gmail.com> wrote:
Hi Joaquin,

In Hive, when you run 'select * from employee' it is going to return the
partition columns as well at the end, whereas you don't want those to be
inserted into your ORC table, so your insert query should look like:
INSERT INTO TABLE employee_orc PARTITION (country='USA', office='HQ-TX')
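The quoted INSERT is cut off in the archive; a sketch of the full statement it implies, assuming hypothetical non-partition columns `id` and `name` in the `employee` table:

```sql
-- Static partition insert: the partition values are fixed in the
-- PARTITION clause, so the SELECT must list only the non-partition
-- columns (column names here are hypothetical)
INSERT INTO TABLE employee_orc PARTITION (country='USA', office='HQ-TX')
SELECT id, name
FROM employee
WHERE country='USA' AND office='HQ-TX';
```

This avoids the column-count mismatch: `SELECT *` would also emit `country` and `office` at the end, giving two more columns than `employee_orc` expects for its non-partition schema.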
Hi List,

I changed to Spark 2.0.2 and Hive 2.0.1.

I have the tables below, but the statement

INSERT INTO TABLE employee_orc PARTITION (country='USA', office='HQ-TX')
SELECT * FROM employee WHERE country='USA' AND office='HQ-TX';

is giving me --> Cannot insert into table `default`.`employee_orc` because the