Hi John,
You might be better off asking this on the CDH mailing list, since it's more
related to Cloudera Manager than to HBase.
In the meantime, can you try to update the "Map Task Maximum Heap Size"
parameter too?
JM
2013/11/1 John
> Hi,
>
> I have a problem with the memory. My use case is the fol
It says it spent 89 seconds doing a CMS concurrent mark, but it really spent
only 14 seconds of user CPU and 4 seconds of system CPU doing it. Where did
the other 70 seconds go? It's most often swapping; less likely, it can also
be CPU starvation.
J-D
On Fri, Nov 1, 2013 at 1:40 AM, Asaf Mesi
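The gap J-D is pointing at can be read straight off the quoted GC line. A small sketch (Python; assumes the standard HotSpot CMS log format quoted elsewhere in this thread):

```python
import re

# The CMS line quoted in this thread (standard HotSpot GC log format).
line = ("[CMS-concurrent-mark: 12.929/88.767 secs] "
        "[Times: user=14.30 sys=3.74, real=88.77 secs]")

m = re.search(r"user=([\d.]+) sys=([\d.]+), real=([\d.]+)", line)
user, sys_t, real = map(float, m.groups())

# Wall-clock time not accounted for by CPU work: if this gap is large,
# the JVM was waiting on something other than CPU (very often swap I/O).
gap = real - (user + sys_t)
print(round(gap, 2))  # 70.73
```

When `real` dwarfs `user + sys` like this, the process was runnable but not running (or blocked on page-ins), which is why swapping is the first suspect.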
Hi,
I have a problem with the memory. My use case is the following: I've created
a MapReduce job that iterates over every row. If a row has more than, for
example, 10k columns, I create a Bloom filter (a BitSet) for that row and
store it in the HBase structure. This has worked fine so far.
BU
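The per-row filter John describes could look roughly like this (a minimal Python sketch, purely illustrative; his actual job is Java MapReduce, and the sizes and hash counts below are assumptions, not values from the thread):

```python
import hashlib

# Minimal Bloom filter over a row's column qualifiers, stored as a
# bytearray that could later be written back into an HBase cell.
M_BITS = 8192   # filter size in bits (assumption)
K_HASHES = 3    # hash functions per key (assumption)

def _positions(key: bytes):
    # Derive K_HASHES bit positions from one key by salting the hash.
    for i in range(K_HASHES):
        h = hashlib.sha256(bytes([i]) + key).digest()
        yield int.from_bytes(h[:8], "big") % M_BITS

def add(filt: bytearray, key: bytes) -> None:
    for p in _positions(key):
        filt[p // 8] |= 1 << (p % 8)

def might_contain(filt: bytearray, key: bytes) -> bool:
    # False means definitely absent; True means present or a false positive.
    return all(filt[p // 8] & (1 << (p % 8)) for p in _positions(key))

filt = bytearray(M_BITS // 8)
for qualifier in (b"col:1", b"col:2", b"col:3"):
    add(filt, qualifier)

print(might_contain(filt, b"col:2"))  # True
print(might_contain(filt, b"col:9999"))  # almost certainly False
```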
Hi Asaf,
Might be a better question for the CDH mailing list ;)
The short answer is: CDH4.4 works well with Java 7, but Java 6 is what's
recommended and officially supported. If you want to use it with Java 7,
you definitely can.
JM
2013/11/1 Asaf Mesika
> Hi,
>
> I've been read
Can you please explain why this is suspicious?
On Monday, October 7, 2013, Jean-Daniel Cryans wrote:
> This line:
>
> [CMS-concurrent-mark: 12.929/88.767 secs] [Times: user=14.30 sys=3.74,
> real=88.77 secs]
>
> Is suspicious. Are you swapping?
>
> J-D
>
>
> On Mon, Oct 7, 2013 at 8:34 AM, prak
Hi,
I've been reading here that HBase 0.94.x has been working in production for
a few folks with Java 7.
I also read that CDH4.4 is not recommended to work with Java 7 in
production.
Does anybody have any idea why?
How many parallel GC threads were you using?
Regarding the block cache, just to make sure I understood this right: if you
are doing a massive read in HBase, is it better to turn off block caching
through the Scan attribute?
On Thursday, October 10, 2013, Otis Gospodnetic wrote:
> Hi Ramu,
>
> I think I saw mentio
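For reference, turning off block caching for a single scan is a one-line setting on the client-side Scan object. A non-runnable Java sketch (the table name and column family are illustrative; it assumes the standard HBase client API of that era):

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

Scan scan = new Scan();
scan.addFamily(Bytes.toBytes("cf"));
// For large one-off scans, skip the block cache so the scan does not
// evict the working set that serves regular point reads:
scan.setCacheBlocks(false);
```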
Bucket seems like a rather good name for it. The method for generating it
could be a hash, a running sequence modded by the bucket count, etc. So
HashBucket, RoundRobinBucket, etc.
On Tuesday, October 22, 2013, James Taylor wrote:
> One thing I neglected to mention is that the table is pre-split at the
> "prepending-row-ke
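The two bucket flavors named above could be sketched like this (Python, purely illustrative; the prefix format and bucket count are assumptions, and in practice the bucket count should match the table's pre-split points):

```python
import hashlib

N_BUCKETS = 8  # assumption: should match the number of pre-split regions

def hash_bucket(row_key: bytes) -> bytes:
    """Prepend a stable, hash-derived prefix (the 'HashBucket' idea):
    the same key always lands in the same bucket, so reads can find it."""
    b = int.from_bytes(hashlib.md5(row_key).digest()[:4], "big") % N_BUCKETS
    return b"%02d-" % b + row_key

class RoundRobinBucket:
    """Prepend a running-sequence-mod-N prefix (the 'RoundRobinBucket'
    idea): spreads writes evenly, but reads must check every bucket."""
    def __init__(self, n: int = N_BUCKETS):
        self.n, self.i = n, 0

    def salt(self, row_key: bytes) -> bytes:
        prefix = b"%02d-" % (self.i % self.n)
        self.i += 1
        return prefix + row_key

# The same key always maps to the same hash bucket:
print(hash_bucket(b"user123") == hash_bucket(b"user123"))  # True

rr = RoundRobinBucket()
print(rr.salt(b"a"), rr.salt(b"b"))  # b'00-a' b'01-b'
```

The trade-off is the usual one: hash bucketing keeps point gets cheap, while round-robin maximizes write spread at the cost of fan-out on read.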