Here's my take on the issue.
> I monitored the process and when any node fails, it has not used all the
> heap yet. So it is not a heap space problem.
I disagree. Unless you load a region server heap with more data than there
is heap available (loading batches of humongous rows, for example), it
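In case it helps, here is a minimal sketch of what bounded batching could look
like with the 0.90-era client API that ships with CDH3. The table name
("test_table"), column family ("cf"), batch size, and row count are invented
for illustration; the only point is that each table.put() call carries a
capped number of rows rather than one giant list.

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BoundedBatchLoad {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "test_table");   // hypothetical table
    final int batchSize = 1000;                      // arbitrary cap per round trip

    List<Put> batch = new ArrayList<Put>(batchSize);
    for (long i = 0; i < 1000000L; i++) {
      Put put = new Put(Bytes.toBytes(String.format("row-%010d", i)));
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value-" + i));
      batch.add(put);

      // Ship the batch once it hits the cap instead of letting an
      // unbounded list of Puts pile up before a single flush.
      if (batch.size() >= batchSize) {
        table.put(batch);
        batch.clear();
      }
    }
    if (!batch.isEmpty()) {
      table.put(batch);                              // flush the remainder
    }
    table.close();
  }
}

Capping the list keeps any single flush from turning into the "more data than
there is heap available" situation above; truly humongous individual rows
would of course still need to be split or trimmed.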
Hi Ed,
You need to be more precise, I'm afraid. First of all, what does "some node
always dies" mean? Is the process gone? Which process is gone?
And the "error" you pasted is a WARN-level log message that *might* indicate
some trouble, but is *not* the reason the "node has died". Please elaborate.
Also

Hi,
I've had a problem that has been killing me for some days now.
I am using the CDH3 update 2 version of Hadoop and HBase.
When I do a large amount of bulk loading into HBase, some node always dies.
It's not just one particular node, but one of the many nodes eventually fails
to serve.
I set 4 gigs of heap space
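(For reference on the "monitored the process" remark quoted above: a
self-contained sketch of the kind of in-JVM heap check being described, using
only the standard java.lang.management API. In practice you would watch the
region server's web UI or jstat rather than embed something like this; it is
only meant to show what "used vs. max heap" refers to.)

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapCheck {
  public static void main(String[] args) {
    // Heap usage of the current JVM: getMax() reflects -Xmx,
    // getUsed() is the portion of the heap currently occupied.
    MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
    long usedMb = heap.getUsed() / (1024 * 1024);
    long maxMb  = heap.getMax()  / (1024 * 1024);
    System.out.printf("heap used: %d MB of %d MB (%.0f%%)%n",
        usedMb, maxMb, 100.0 * heap.getUsed() / heap.getMax());
  }
}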