ah, thanks Erek.
I can easily handle this in the where clause.
Thanks much.
Sonia
On Wed, Aug 29, 2012 at 2:52 PM, Erek Dyskant wrote:
> select blah
> from table a
> Left Join table b on (a.col1 = b.col2)
> where b.col2 is not null or a.col1 = 0
>
>
> On Wed, Aug 29, 2012 at 5:44 PM, sonia gehlot wrote:
select blah
from table a
Left Join table b on (a.col1 = b.col2)
where b.col2 is not null or a.col1 = 0
On Wed, Aug 29, 2012 at 5:44 PM, sonia gehlot wrote:
> Hi All,
>
> I am joining 2 tables in Hive with an OR condition. For example:
>
> select blah
> from table a
> Join table b
> on (a.col1 = b.col2 or a.col2 = 0)
How do you join two tables that aren't represented on both sides of the =?
Can you describe a bit more of what you are trying to get out of the data?
I am having a hard time wrapping my head around this...
On Wed, Aug 29, 2012 at 4:44 PM, sonia gehlot wrote:
> Hi All,
>
> I am joining 2 tables in Hive with an OR condition. For example:
Hi All,
I am joining 2 tables in Hive with an OR condition. For example:
select blah
from table a
Join table b
on (a.col1 = b.col2 or a.col2 = 0)
but this gives me an error saying that OR is not currently supported in Hive join conditions.
Any suggestions on how I can handle this in a Hive query?
Thanks,
Sonia
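A minimal sketch of the workaround suggested earlier in the thread, with hypothetical
table names table_a and table_b standing in for the placeholders above: rewrite the
OR-join as an equi-join that Hive accepts, and move the second branch of the OR into
the WHERE clause.

select blah
from table_a a
left join table_b b
  on (a.col1 = b.col2)           -- only the equality condition goes in ON
where b.col2 is not null         -- rows that actually matched the join
   or a.col2 = 0;                -- plus rows satisfying the other OR branch

Note this only approximates the original OR-join: a row with a.col2 = 0 is kept once
rather than paired with every row of table_b, so whether that is acceptable depends on
what the query is meant to return.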
Unfortunately get_json_object returns a string and not a Hive array.
On Wed, Aug 29, 2012 at 5:30 PM, Tom Brown wrote:
> I believe the "get_json_object" function will be suitable (though I've
> never personally tried it in the exact context you describe.)
>
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-getjsonobject
I am running some data that isn't huge per se, but I am performing processing
on it to get it into my final table (RCFile).
One of the challenges is that it comes in large blocks of data; for
example, I may have a 70MB chunk of binary data that I want to put in. My
process that generates this data hexes
I believe the "get_json_object" function will be suitable (though I've
never personally tried it in the exact context you describe.)
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-getjsonobject
--Tom
On Wed, Aug 29, 2012 at 7:49 AM, Aleksei Udatšnõi wrote:
I have a Hive column with JSON string like:
["9835","29825","73432","51365","14114","6527","779"]
I want to explode() it and use it with LATERAL VIEW. For that it
needs to be a Hive array. Is there any quick way to convert this
string into a Hive array?
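One common workaround, as a sketch assuming a hypothetical table my_table with the
JSON string in a column json_col: strip the brackets and quotes with regexp_replace,
split on commas to get a real Hive array, and explode that.

select item
from my_table
lateral view explode(
  split(regexp_replace(json_col, '\\[|\\]|"', ''), ',')
) t as item;

This treats every element as a plain string; if the elements could themselves contain
commas or escaped quotes, a JSON SerDe or a custom UDF would be safer.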
Hi,
I had the same error a few days back.
The difficulty now is finding which gz file is corrupt. It isn't corrupt
as such, but somehow Hadoop says it is. If you made the file in Windows and
then transferred it to Hadoop, that can cause this error. If you want to see which
file is corrupt, run a select count query
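A sketch of that kind of check, with a hypothetical table name: grouping a count by
Hive's INPUT__FILE__NAME virtual column usually narrows down which gz file the read
failure comes from.

select INPUT__FILE__NAME, count(*)   -- count rows per input file
from my_table
group by INPUT__FILE__NAME;

If the query dies partway through, the failed map task's log normally names the file
it was reading when it hit the bad gzip stream.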
Hi,
Sorry, it works now. Thank you.
But the value is not correct (it's about half the real number of rows).
Is this a sampled value?
It seems to count every row, as far as I can tell from TableScanOperator.java.
Thanks,
Hiroyuki
On Wed, Aug 29, 2012 at 5:39 PM, Hiroyuki Yamada wrote:
> Hi,
>
> Thank you
Hi,
Thank you for the reply.
I tried the following setting, but I got the same result (num_rows=0):
hive.stats.dbconnectionstring=jdbc:derby:;databaseName=/tmp/TempStatsStore;create=true
Is there any clue?
On Wed, Aug 29, 2012 at 4:09 PM, rohithsharma wrote:
> I resolved the issue in the following way.
I resolved the issue in the following way.
Configure
"hive.stats.dbconnectionstring=jdbc:derby:;databaseName=/home/TempStore".
This works only in a single-node cluster.
Please check HIVE-3324.
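A minimal sketch of re-checking the stats once the connection string is in place
(the table name here is hypothetical, and the property is assumed to be set in
hive-site.xml or passed with --hiveconf as above):

-- recompute basic statistics so num_rows is written to the stats store
analyze table my_table compute statistics;

After that, describe formatted my_table should show the updated numRows in the table
parameters.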
-----Original Message-----
From: Hiroyuki Yamada [mailto:mogwa...@gmail.com]
Sent: Wednesday, August 29