A possible way to solve this problem is to run a compaction manually.
It will permanently remove data marked as deleted.
Hope this helps.
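For reference, a minimal sketch (not from this thread) of triggering a major compaction from client code with the 0.94-era HBaseAdmin API; the table name "mytable" is a placeholder. The same can be done from the HBase shell with major_compact 'mytable'.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class ManualCompaction {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    // A major compaction rewrites the store files of every region of the
    // table and drops cells that are marked deleted or have expired.
    admin.majorCompact("mytable");
    admin.close();
  }
}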
On Jan 4, 2013, at 12:01 PM, "tgh" wrote:
> Hi
> I try to use hbase and hdfs to store data, and I use 5node, and
> now , the disk is full, so we need to add a new n
Hi,
I use HBase and HDFS to store data, with 5 nodes, and now the disks are
full, so we need to add a new node to HDFS. The disks on the old nodes
are full, so the data nodes cannot split for new regions. How should we
deal with this?
Could you help me?
Thank you
---
Hi,
Thanks very much to you all.
I lean toward the network bandwidth problem, since it is the easiest to work on.
Thanks!
beatls.
On Fri, Jan 4, 2013 at 2:32 AM, Bryan Beaudreault
wrote:
> It would delay this problem at least. At the end of the day your
> application and use case is what
Hi Nicolas,
I'm done with the update. I have implemented HRegionInfo.getStartKey
to remove the loop in locateRegion(final byte[] regionName). Tests are
running and so far, so good. I will update the JIRA when the tests
are done.
JM
2013/1/3, Jean-Marc Spaggiari :
> I found HRegionInfo.getTab
I found HRegionInfo.getTableName(regionName) to get the table name
quickly. Still searching for the startKey.
HRegionInfo.getStartKey(regionName) doesn't exist. Maybe I will have
to create it...
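For context, here is a small sketch (my assumption, not JM's code) of pulling the table name and start key out of a region name with the static helpers HRegionInfo already provides in this era; the sample region name is made up.

import java.io.IOException;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.util.Bytes;

public class RegionNameParts {
  public static void main(String[] args) throws IOException {
    // Region names look like "<table>,<startKey>,<regionId>.<encodedName>."
    byte[] regionName =
        Bytes.toBytes("mytable,row-0100,1357230000000.abcdef0123456789abcdef0123456789.");
    // parseRegionName() splits the name into {tableName, startKey, id}.
    byte[][] parts = HRegionInfo.parseRegionName(regionName);
    System.out.println("table:    " + Bytes.toString(HRegionInfo.getTableName(regionName)));
    System.out.println("startKey: " + Bytes.toString(parts[1]));
  }
}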
2013/1/3, Jean-Marc Spaggiari :
> Done. You can take a look at what I pushed.
>
> https://issues.apache
Done. You can take a look at what I pushed.
https://issues.apache.org/jira/browse/HBASE-7488
Regarding locateRegion(final byte[] regionName) I don't know if there
is a faster way to get the first row and the table name from the
regionName. The region name should already contain those two pieces of
information, so I will take a look at all of that and keep you posted shortly.
JM
2013/1/3, Nicolas Liochon :
> Yep, I'm ok with that. It will need to be put in the interface (vs. the
> implementation class). Would be nice if you could implement the two missing
> methods (i.e. public HRegionLocation locateRegion
Yep, I'm ok with that. It will need to be put in the interface (vs. the
implementation class). Would be nice if you could implement the two missing
methods (i.e. public HRegionLocation locateRegion(final byte [] regionName))
On Thu, Jan 3, 2013 at 7:33 PM, Jean-Marc Spaggiari wrote:
> public L
I have created HBASE-7488 to implement it.
Also, I think we should add this to the interface and do the related
implementation:
/**
* Gets the locations of all regions in the specified table, tableName.
* @param tableName table to get regions of
* @param offlined True if we are to incl
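For illustration only, a guess (an assumption, not the actual HBASE-7488 patch) at how such an addition might be declared; the interface name RegionLocator is a stand-in here, since the real method would live on HConnection and its parameters may differ.

import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.HRegionLocation;

interface RegionLocator {
  // Returns the locations of all regions in the given table.
  List<HRegionLocation> locateRegions(byte[] tableName) throws IOException;
}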
It would delay this problem at least. At the end of the day, your
application and use case are what determine what you should do. It seems to
me you may be hotspotting this region. Perhaps you should check your
schema to see if there is something you can do to better distribute your
writes.
If y
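As one possible way to spread the writes, here is a minimal row-key salting sketch (an illustration, not something proposed in this thread); the column family "cf", qualifier "q" and bucket count are placeholders.

import java.io.IOException;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class SaltedWriter {
  private static final int BUCKETS = 16;

  // Prefix the row key with one byte derived from its hash so that
  // consecutive keys land in different regions instead of one hot region.
  static byte[] saltedKey(byte[] key) {
    byte salt = (byte) ((Bytes.hashCode(key) & 0x7fffffff) % BUCKETS);
    byte[] salted = new byte[key.length + 1];
    salted[0] = salt;
    System.arraycopy(key, 0, salted, 1, key.length);
    return salted;
  }

  public static void write(HTable table, byte[] key, byte[] value) throws IOException {
    Put put = new Put(saltedKey(key));
    put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), value);
    table.put(put);
  }
}

The trade-off is that gets and scans then have to fan out over all the salt buckets.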
Hi,
I think you have this experience, and I plan to increase the network
bandwidth.
But someone told me that increasing
'hbase.hregion.memstore.block.multiplier' from the default 2 to 4 could solve
this problem.
Which way should I go?
Thanks!
beatls
On Fri, Jan 4, 2013 at 1:37 AM, Asa
When I encountered this error it was caused by a slow network, which made
HDFS slow, which in turn made flushes take longer, and that is when the
blocking of updates occurred.
Sent from my iPhone
On Jan 3, 2013, at 18:25, ramkrishna vasudevan <
ramkrishna.s.vasude...@gmail.com> wrote:
You need to see your applica
You need to look at your application to see what your rate of puts is.
Also, how many column families do you have?
All these factors will help you determine your memstore size.
Regards
Ram
On Thu, Jan 3, 2013 at 9:51 PM, hua beatls wrote:
> HI,
>what i mean is shoud we increate the mem
Hi,
What I mean is: should we increase the memstore according to this log?
Thanks!
beatls.
On Thu, Jan 3, 2013 at 8:37 PM, Anoop Sam John wrote:
>
> I guess you mean "what is blocking memstore size"
> This you can configure using 2 properties.
> 1. hbase.hregion.memstore.flush.size ->
I guess you mean "what is the blocking memstore size".
You can configure this using 2 properties.
1. hbase.hregion.memstore.flush.size -> Using this you specify the size at which the
memstore is flushed to a file. The default value is 128MB. So when the
region memstore size reaches this value a flush will be
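To make the two properties concrete, here is a hedged hbase-site.xml sketch; the values shown are the usual defaults (updates to a region are blocked once its memstore reaches flush.size * block.multiplier), not a recommendation.

<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <value>134217728</value> <!-- 128 MB: flush the memstore to a file at this size -->
</property>
<property>
  <name>hbase.hregion.memstore.block.multiplier</name>
  <value>2</value> <!-- block updates when the memstore hits 2 x flush.size; the thread suggests trying 4 -->
</property>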
Hi,
What is the block size? How do I set it? And should I increase it for
the memstore?
Thanks!
beatls
On Thu, Jan 3, 2013 at 6:23 PM, Anoop Sam John wrote:
> The writes to your table will be written 1st to in memory data structure
> (memstore). When this memstore reaches some defi
+1
Congrats and good on you!
On Jan 2, 2013, at 9:02 PM, Stack wrote:
> Good on you lads. Thanks for all the great contribs so far.
> St.Ack
>
>
> On Wed, Jan 2, 2013 at 11:37 AM, Jonathan Hsieh wrote:
>
>> Along with bringing in the new year, we've brought in two new Apache
>> HBase Comm
Hi,
It will work, but there is some glue code to write, as one method returns one
region given a rowkey, while the not-yet-implemented one returns all the
regions.
The code written by Lyska seems fine; we could put it in locateRegions (doing
this server side is more efficient).
Nicolas
On Thu, Jan 3, 2013
The writes to your table are written first to an in-memory data structure
(the memstore). When this memstore reaches some defined size it is flushed
to the file system and the memory can be cleared.
The flush operation may take some time as an IO write is involved. During this
time also the clie
Hi,
thanks a lot!
as Nicolas wrote, I use
List<HRegionInfo> regions = MetaScanner.listAllRegions(config);
for (HRegionInfo info : regions) {
  HRegionLocation loc = connection.locateRegion(info.getTableName(),
      info.getStartKey());
}
03.01.2013 2:34, Jean-M