Make sure you have set the table properties while creating the table
structure.
Also, it should not be a problem unless it is FixedLength. Try altering
the table to set it to the desired type, or else it will default to
SequenceFile.
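For example, a minimal sketch (the table and column names here are hypothetical):

CREATE TABLE tblname (id INT, payload STRING)
STORED AS SEQUENCEFILE;

-- or, for a table that already exists:
ALTER TABLE tblname SET FILEFORMAT SEQUENCEFILE;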
On Tue, Sep 24, 2013 at 7:51 PM, Artem Ervits wrote:
I was concentrating on your first suggestion and didn't realize that you
already answered my question in the 2nd part of your answer :). Thank you, I
will try that.
From: Artem Ervits [mailto:are9...@nyp.org]
Sent: Tuesday, September 24, 2013 12:59 PM
To: user@hive.apache.org
Subject: RE: load data stored as sequencefiles
I realize that I am using a part file. As for loading with Sqoop, I'm aware
that works, but we originally decided to load using Sqoop and leave the data
in HDFS, i.e. without the --hive-table flag. So my real question is: since we
made the decision to first load into HDFS as a SequenceFile, is there a way
to get that data into a Hive table?
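One possibility, sketched under the assumption that the files stay where
Sqoop wrote them (the column layout is hypothetical), is an external table
over that directory:

CREATE EXTERNAL TABLE tblname (id INT, payload STRING)
STORED AS SEQUENCEFILE
LOCATION '/TEST/SeqFiles/201308300700';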
If you look at your load command,
LOAD DATA INPATH '/TEST/SeqFiles/201308300700/part-m-1' INTO TABLE
tblname;
you are loading a single part file, which does not look correct.
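If the goal is to pick up everything Sqoop produced, a sketch that points
LOAD DATA at the whole directory instead of one part file:

LOAD DATA INPATH '/TEST/SeqFiles/201308300700' INTO TABLE tblname;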
Secondly,
why can't you just import using Sqoop? Why do you have to use LOAD DATA?
If you are importing to hdfs using sqoop, and t
So MSCK REPAIR TABLE plus dynamic-partitioning semantics looks like it fits
the bill for you.
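A sketch of both pieces (the partition column dt and the staging table are
hypothetical):

-- pick up partition directories copied directly into the table's location:
MSCK REPAIR TABLE tblname;

-- dynamic-partition insert:
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT INTO TABLE tblname PARTITION (dt)
SELECT col1, col2, dt FROM staging_tbl;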
Yeah, 300K partitions. That's getting up there on the scale of things with
Hive, I'd say, and close to over-partitioning. For archival purposes, maybe
older data doesn't need such a fine-grained partition?
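For instance (a sketch; the layout is hypothetical), partitioning archived
data by month instead of by hour:

CREATE TABLE archive_tbl (id INT, payload STRING)
PARTITIONED BY (yr INT, mon INT)
STORED AS SEQUENCEFILE;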
I think we need an example. If the column exists in the metadata but not
in the data, I'm pretty sure Hive will take it as a NULL. No example, though,
makes it hard to go further into this (at least for this reader).
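A quick sketch of that behavior (table and column names are hypothetical):

-- the column is added to the metadata only; existing files lack it:
ALTER TABLE t ADD COLUMNS (new_col STRING);
-- rows written before the change should come back as NULL:
SELECT new_col FROM t LIMIT 5;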
On Mon, Sep 23, 2013 at 4:35 PM, Gary Zhao wrote:
> Hello
>
> I'm querying a table
Anyone?
From: Artem Ervits [mailto:are9...@nyp.org]
Sent: Friday, September 20, 2013 11:18 AM
To: user@hive.apache.org
Subject: load data stored as sequencefiles
Hello all,
I'm a bit lost with using Hive and SequenceFiles. I loaded data using Sqoop
from an RDBMS and stored it as a SequenceFile. I jar
Hi everyone,
Thank you for your answers.
On 24.09.2013, at 0:36, Stephen Sprague wrote:
> If it's any help, I've done this kind of thing frequently:
>
> 1. create the table on the new cluster.
>
> 2. distcp the data right into the hdfs directory where the table resides on
> the new cluster -
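A sketch of the Hive side of those two steps (the schema is hypothetical;
distcp moves the files in between):

-- 1. on the new cluster:
CREATE TABLE tblname (id INT, payload STRING)
STORED AS SEQUENCEFILE;

-- 2. once distcp has landed the files in the table's directory,
--    recover any partitions:
MSCK REPAIR TABLE tblname;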
Thanks, I've added a note about case-insensitivity to the UDF doc and the
Tutorial:
- https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF
- https://cwiki.apache.org/confluence/display/Hive/Tutorial#Tutorial-Builtinoperatorsandfunctions
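For example, built-in function and operator names are case-insensitive, so
these are equivalent (the table t and its columns are hypothetical):

SELECT concat(col1, col2) FROM t;
SELECT CONCAT(col1, col2) FROM t;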
-- Lefty
On Tue, Sep 3, 2013 at 7