Hello,
Currently HConnectionManager holds an LRU cache of HBase connections
with the max cache size set to 31 (hard-coded!). I am confused:
1) What is the reason to use an LRU cache? It is error-prone. If I
create more than 31 connections, old connections get silently discarded;
in the meantime deleteCo
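For illustration, the silent-eviction hazard described above can be reproduced with a plain `LinkedHashMap`. This is a stand-in sketch, not HBase's actual implementation; the class and key/value names are made up:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruEvictionDemo {
    static final int MAX = 31; // mirrors the hard-coded cap discussed above

    /** Builds a size-capped LRU map that drops its eldest entry silently. */
    static Map<Integer, String> buildCache() {
        return new LinkedHashMap<Integer, String>(MAX, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, String> eldest) {
                return size() > MAX; // evict silently once over the cap
            }
        };
    }

    public static void main(String[] args) {
        Map<Integer, String> cache = buildCache();
        for (int i = 0; i < 32; i++) { // one more than the cap
            cache.put(i, "conn-" + i);
        }
        // The first "connection" is gone, with no error or callback.
        System.out.println(cache.containsKey(0)); // prints false
        System.out.println(cache.size());         // prints 31
    }
}
```

A caller holding a reference to the evicted entry has no way to know the cache no longer tracks it, which is exactly the surprise being complained about.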
; your table. You then say how often to run sync using
> 'hbase.regionserver.optionallogflushinterval'. Default is sync every
> second.
>
> St.Ack
>
> On Sat, May 28, 2011 at 6:47 AM, Qing Yan wrote:
> > Well, I realized myself RS flush to HDFS is not designed t
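For reference, the interval St.Ack mentions is an hbase-site.xml property; the value is in milliseconds, and 1000 matches the once-a-second default he describes:

```xml
<property>
  <name>hbase.regionserver.optionallogflushinterval</name>
  <value>1000</value> <!-- sync deferred WAL edits every second -->
</property>
```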
t want to look into bulk loading.
>>
>> -Joey
>> On May 28, 2011 9:47 AM, "Qing Yan" wrote:
>>> Well, I realized myself RS flush to HDFS is not designed to do
>>> incremental
>>> changes. So there is no way around of WAL? man..just wish it ca
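On Joey's bulk-loading suggestion above: assuming the 0.90-era tooling, a bulk load can be driven from the hbase jar roughly as below. The jar version, table name "t1", column family "f", and all paths are made-up examples, and this needs a running cluster:

```sh
# Write HFiles from TSV input instead of going through the write path/WAL.
hadoop jar hbase-0.90.3.jar importtsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,f:q \
  -Dimporttsv.bulk.output=/tmp/hfiles t1 /tmp/input

# Hand the generated HFiles to the region servers.
hadoop jar hbase-0.90.3.jar completebulkload /tmp/hfiles t1
```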
Well, I realized myself that the RS flush to HDFS is not designed to do incremental
changes. So there is no way around the WAL? Man... just wish it could run a bit
faster :-P
On Sat, May 28, 2011 at 9:36 PM, Qing Yan wrote:
> Ok, thanks for the explaination. so data loss is normal in this case.
> Yeah ,
Ok, thanks for the explanation, so data loss is normal in this case.
Yeah, I did a "kill -9". I did wait till the RS got reassigned, and
actually let process B keep retrying over the night.
Is the WAL the only way to guarantee data safety in HBase? We want a high insert
rate though.
Is there a middle
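There was a middle ground in the 0.90-era client API: keep the WAL but let it sync lazily. The sketch below is untested and uses made-up table/family names; it only shows where the two knobs sit:

```java
// Fast but unsafe: skip the WAL for an individual write.
Put p = new Put(Bytes.toBytes("row1"));
p.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
p.setWriteToWAL(false); // lost if the RS dies before a memstore flush

// Middle ground: edits still go to the WAL, but it is synced lazily,
// every hbase.regionserver.optionallogflushinterval milliseconds.
HTableDescriptor desc = new HTableDescriptor("mytable");
desc.setDeferredLogFlush(true);
```

Deferred flush bounds the potential loss to roughly one flush interval instead of everything since the last memstore flush.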
Hello,
I found something strange; here is the test case:
1) Process A inserts data into a particular HBase region, WAL off, AutoFlush
off.
2) Process A issues htable.flushCommits(); no exception is thrown. Write down
the row key.
3) Kill the region server manually.
4) Process B queries the row key, but