I am not sure about your question.
Do you mean the query runs very fast if you run something like 'select * from
hbase_table', but very slow for 'select * from hbase_table where row_key = ?'?
I think it should be the other way round, right?
Yong
Date: Wed, 19 Mar 2014 11:42:39 -0700
From: sunil_ra...@yahoo.com
We do not have a firm release date yet. The branch has been cut. I think
Harish said he’d like to have a first RC early next week. It usually takes 1
to 2 weeks after the first RC, depending on any show stoppers found in it, etc.
Alan.
On Mar 19, 2014, at 6:50 AM, Bryan Jeffrey wrote:
Sorry for another post on this thread. I had an error in my pig script: it was
splitting on the wrong Unicode character. Using STRSPLIT worked well.
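For reference, a minimal sketch of the kind of script that ends up working
here, assuming the Hive output under /path/to/output is \u0001-delimited with
\u0002 between the array elements; the field names (id, tags_raw) are made up
for illustration:

-- Load the Ctrl-A (\u0001) delimited Hive output; the array column is kept
-- as a single chararray for now. Depending on the Pig version, the delimiter
-- may need to be written as '\\u0001'.
raw = LOAD '/path/to/output' USING PigStorage('\u0001')
      AS (id:chararray, tags_raw:chararray);

-- STRSPLIT takes a regular expression, so the Ctrl-B (\u0002) separator can
-- be given in escaped form; the result is a tuple of the array elements.
with_array = FOREACH raw GENERATE id, STRSPLIT(tags_raw, '\\u0002') AS tags;

DUMP with_array;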
On Fri, Mar 21, 2014 at 8:46 AM, Jeff Storey wrote:
> Correction - it looks like the query uses \u0002 to separate array elements
> and \u0001 to separate the other fields.
Correction - it looks like the query uses \u0002 to separate array elements
and \u0001 to separate the other fields. The question is still essentially the
same, though: how can I load that array into Pig?
Note - If my data is formatted as a tsv with parentheses surrounding the
array:
(element1,elemen
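For that case, one hedged sketch would be to strip the parentheses and split
on the comma; the path and field names below are hypothetical:

-- Load the tab-separated file; the array column still looks like
-- '(element1,element2,...)' at this point.
rows = LOAD '/path/to/output_tsv' USING PigStorage('\t')
       AS (id:chararray, arr_raw:chararray);

-- REPLACE drops the surrounding parentheses (regex '[()]'), then STRSPLIT on
-- the comma turns the remainder into a tuple of elements.
cleaned = FOREACH rows GENERATE id,
          STRSPLIT(REPLACE(arr_raw, '[()]', ''), ',') AS arr;

DUMP cleaned;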
I'm executing a Hive query in which one of the fields is an array, and writing
it to a file using:
INSERT OVERWRITE DIRECTORY '/path/to/output' SELECT ...
This query works well. I would like to load this data into Pig, but I'm not
quite sure how to get the array properly into Pig.
My output file from the query do
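One possible way to handle this on the Pig side, sketched under the assumption
that the output file is \u0001-delimited with \u0002 between the array
elements (the path and field names are hypothetical):

-- Load the \u0001-delimited Hive output.
raw = LOAD '/path/to/output' USING PigStorage('\u0001')
      AS (id:chararray, tags_raw:chararray);

-- In recent Pig versions TOKENIZE accepts a custom delimiter and returns a
-- bag; FLATTEN then expands that bag into one record per array element.
exploded = FOREACH raw GENERATE id, FLATTEN(TOKENIZE(tags_raw, '\u0002')) AS tag;

DUMP exploded;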