52 rate: 0.734
>
> 2012-04-05 10:41:07 Processing rows: 290 Hashtable size: 289 Memory usage: 1062065576 rate: 0.76
>
> Exception in thread "Thread-1" java.lang.OutOfMemoryError: Java heap space
>
Nguyen Thanh Binh (Mr)
Cell phone: (+84)98.226.0622
From: Nitin Pawar [mailto:nitinpawar...@gmail.com]
Sent: Thursday, April 05, 2012 5:36 PM
To: user@hive.apache.org
Subject: Re: Why BucketJoinMap consume too much memory
Can you try adding these settings?
set hive.enforc
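(The line above is cut off in the archive; the settings being suggested are presumably the usual bucket-map-join switches. A hedged sketch using Hive's documented property names, with illustrative values, since the original message does not survive:)

```sql
-- Commonly cited options for enabling bucket map join in Hive
-- (illustrative only; the original suggestion is truncated):
set hive.enforce.bucketing = true;       -- write data into the declared buckets
set hive.optimize.bucketmapjoin = true;  -- allow map-side joins on bucketed tables
```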
But it still created many hash tables and then threw a Java heap space error.
>
> Best regards,
>
> Nguyen Thanh Binh (Mr)
>
> Cell phone: (+84)98.226.0622
>
> *From:* Bejoy Ks [mailto:bejoy...@yahoo.com]
> *Sent:* Thursday, April 05
> *To:* user@hive.apache.org
> *Subject:* Re: Why BucketJoinMap consume too much memory
Hi Amit
Sorry for the delayed response, I had a terrible schedule. AFAIK, there
are no flags that would let you move the hash table creation, compression,
and loading into tmp files away from the client node. If the hash table
size is larger than the heap size specified for your client, it'd throw an
out-of-memory error.
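Since the hash table is built inside the client-side JVM, one common workaround (my assumption here, not something suggested in this thread) is to give the Hive client more heap before launching it, via the standard Hadoop launcher variables:

```shell
# Raise the Hive client JVM's heap (values are illustrative, tune to your data):
export HADOOP_HEAPSIZE=2048            # MB; read by the hadoop/hive launcher scripts
export HADOOP_CLIENT_OPTS="-Xmx2g"     # extra JVM options for client-side processes
```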
Regards
Bejoy KS
From: Amit Sharma
To: user@hive.apache.org; Bejoy Ks
Sent: Tuesday, April 3, 2012 11:06 PM
Subject: Re: Why BucketJoinMap consume too much memory
This would
> definitely blow your JVM. Bottom line: ensure your mappers are not
> heavily loaded by the bucketed data distribution.
>
> Regards
> Bejoy.K.S
> --
> *From:* binhnt22
> *To:* user@hive.apache.org
> *Sent:* Saturday, March 31,
From: binhnt22
To: user@hive.apache.org
Sent: Saturday, March 31, 2012 6:46 AM
Subject: Why BucketJoinMap consume too much memory
I have 2 tables, each with 6 million records, clustered into 10 buckets.
These tables are very simple, with 1 key column and 1 value column; all I
want is to get the keys that exist in both tables but with different values.
The normal join did the trick, taking only 141 secs.
select * from ra_md_cdr
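(The query above is cut off in the archive. For illustration only, a generic form of the comparison described, keys present in both tables with differing values, might look like the following; the table and column names here are hypothetical, since the real ones are not preserved in the thread:)

```sql
-- Hypothetical sketch; only ra_md_cdr survives from the original query
SELECT a.key, a.value AS value_a, b.value AS value_b
FROM table_a a
JOIN table_b b ON (a.key = b.key)
WHERE a.value <> b.value;
```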