Take a look at the ZooKeeper session timeout. The ephemeral node of the
regionserver going down will be deleted when its session expires, and then
other regionservers will race to take ownership of the regions of the server
that went down. The default session timeout is too high, so I think it may be
related to the problem you are seeing.
it was fixed in 0.95
>
> On Thursday, November 28, 2013, Pablo Medina wrote:
>
> > Hi all,
> >
> > Knowing that replication metrics are global at the region server level in
> > HBase 0.94.13, what is the meaning of a metric like sizeOfLogQueue when
> > replicating to more than one peer/slave?
Hi all,
Knowing that replication metrics are global at the region server level in
HBase 0.94.13, what is the meaning of a metric like sizeOfLogQueue when
replicating to more than one peer/slave? Is it the queue size reported by
the "last" replication source thread ? does the last thread win ? Can
> On Tue, Aug 20, 2013 at 8:56 AM, Pablo Medina wrote:
>
> > Hi all,
> >
> > I'm using custom filters to retrieve filtered data from HBase using the
> > native API. I noticed that the full class names of those custom filters are
> > being sent as the
Hi all,
I'm using custom filters to retrieve filtered data from HBase using the
native API. I noticed that the full class names of those custom filters are
being sent as the bytes representation of the string using
Text.writeString(). This consumes a lot of network bandwidth in my case due
to using
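
For context, a rough sketch of how a custom filter ends up on the wire with the
0.94 native API; the filter class and its package name below are made up for
illustration. As noted above, the filter's fully qualified class name is
written with Text.writeString() as part of every request that carries it:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

// Hypothetical custom filter; its fully qualified name
// ("com.example.hbase.filter.MyCustomFilter") travels with each request.
import com.example.hbase.filter.MyCustomFilter;

public class CustomFilterScan {
    public static void main(String[] args) throws Exception {
        HTable table = new HTable(HBaseConfiguration.create(), "mytable"); // table name is illustrative
        Scan scan = new Scan();
        scan.setFilter(new MyCustomFilter()); // filter class name + its Writable state go over the wire
        ResultScanner scanner = table.getScanner(scan);
        try {
            for (Result result : scanner) {
                // process result
            }
        } finally {
            scanner.close();
            table.close();
        }
    }
}
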
Hi all,
What is the recommended strategy/configuration regarding connection pooling
in production? I have read the HBase Definitive Guide section about it and
some old threads in the mailing list which suggest using an HTablePool
sharing a unique HConnection. I'm wondering if this can be a bottleneck.
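
For what it's worth, a minimal sketch of the pattern being asked about (one
shared connection behind an HTablePool); the table name and pool size are
illustrative, and as far as I know tables created from the same Configuration
instance in 0.94 end up sharing the underlying HConnection through
HConnectionManager's connection cache:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Put;

public class PooledTableAccess {
    // One Configuration for the whole process; tables created from it share
    // the same underlying HConnection.
    private static final Configuration CONF = HBaseConfiguration.create();
    private static final HTablePool POOL = new HTablePool(CONF, 100); // pool size is illustrative

    public static void write(Put put) throws Exception {
        HTableInterface table = POOL.getTable("mytable"); // table name is illustrative
        try {
            table.put(put);
        } finally {
            table.close(); // in 0.94 close() returns the table to the pool
        }
    }
}
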
Lars,
when you say 'when one memstore needs to be flushed all other column
families are flushed', you are referring to other column families of the
same table, right?
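
As a side note on how this shows up in the 0.94 API (a sketch, not from the
thread): the memstore flush size is a table-level attribute rather than a
per-column-family one, which is consistent with all families of a region
flushing together; names and sizes below are illustrative:

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;

public class FlushSizeExample {
    public static HTableDescriptor describe() {
        HTableDescriptor desc = new HTableDescriptor("mytable"); // illustrative name
        desc.addFamily(new HColumnDescriptor("cf1"));
        desc.addFamily(new HColumnDescriptor("cf2"));
        // Flush size is configured per table (effectively per region), not per column family.
        desc.setMemStoreFlushSize(128L * 1024 * 1024); // 128 MB, illustrative
        return desc;
    }
}
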
2013/8/4 Rohit Kelkar
> Regarding slow scan - only fetch the columns/qualifiers that you need. It
> may be that you are fetching
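
To make that advice concrete, a small sketch of narrowing a scan to just the
needed family/qualifier; the names and tuning values are only examples:

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class NarrowScan {
    public static Scan build() {
        Scan scan = new Scan();
        scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q1")); // fetch only this qualifier
        scan.setCaching(100); // rows fetched per RPC, illustrative
        scan.setBatch(10);    // cells per Result for very wide rows, illustrative
        return scan;
    }
}
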
> to 4 bytes and prepending it to his key. This should give
> him some randomness.
>
>
>
> On Jul 31, 2013, at 1:57 PM, Pablo Medina wrote:
>
> > If you split that one hot region and then move one half to another region
> > server, then you will move half of the load
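
The hashing idea quoted above (truncating a hash to 4 bytes and prepending it
to the key) could look roughly like this; the hash function and key layout are
assumptions, not something prescribed in the thread:

import java.security.MessageDigest;

public class SaltedKey {
    // Prepend the first 4 bytes of a hash of the original key so that
    // sequential keys spread across regions instead of hitting one hot region.
    public static byte[] salt(byte[] originalKey) throws Exception {
        byte[] hash = MessageDigest.getInstance("MD5").digest(originalKey);
        byte[] salted = new byte[4 + originalKey.length];
        System.arraycopy(hash, 0, salted, 0, 4);
        System.arraycopy(originalKey, 0, salted, 4, originalKey.length);
        return salted;
    }
}
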
on key design
> >>
> >> Thanks for the responses!
> >>
> >>> why don't you use a scan
> >> I'll try that and compare it.
> >>
> >>> How much memory do you have for your region servers? Have you enabled
> >>> block caching? Is your CPU spiking on your region servers?
> > I'll try that and compare it.
> >
> > > How much memory do you have for your region servers? Have you enabled
> > > block caching? Is your CPU spiking on your region servers?
> > Block caching is enabled. CPU and memory don't seem to be a problem.
> >
> >
The scan can be an option if the cost of scanning undesired cells and
discarding them through filters is lower than the cost of accessing those keys
individually. I would say that as the number of 'undesired' cells decreases,
the overall scan performance/efficiency increases. It all depends on
how the keys
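
For comparison with the filtered scan, the point-lookup alternative could be a
single batched get, roughly as below; the table handle and key list are
placeholders:

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;

public class BatchGet {
    // Fetch the exact keys with one batched call instead of scanning and filtering.
    public static Result[] fetch(HTable table, List<byte[]> keys) throws Exception {
        List<Get> gets = new ArrayList<Get>(keys.size());
        for (byte[] key : keys) {
            gets.add(new Get(key));
        }
        return table.get(gets); // the client groups the RPCs per region server
    }
}
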
I've just created a Jira to discuss this issue:
https://issues.apache.org/jira/browse/HBASE-9087
Thanks!
2013/7/30 Elliott Clark
> On Mon, Jul 29, 2013 at 11:08 PM, lars hofhansl wrote:
> > Do you think we should change it to use a ConcurrentHashMap
>
> Yea, I think that would be great. I re
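
The sort of change being discussed (swapping a synchronized collection of
observers for one backed by a ConcurrentHashMap) might look like the sketch
below; the registry class and observer type are placeholders, not the actual
HBase internals:

import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ObserverRegistry<T> {
    // A concurrent set: registering and deregistering observers no longer
    // contend on a single monitor, which is the kind of blocking reported
    // for the changedReaderObserver registration.
    private final Set<T> observers =
            Collections.newSetFromMap(new ConcurrentHashMap<T, Boolean>());

    public void register(T observer) {
        observers.add(observer);
    }

    public void deregister(T observer) {
        observers.remove(observer);
    }
}
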
I got that stack trace using jstack
Thanks,
Pablo.
2013/7/30 hua beatls
> How do I get the 'stack trace'?
>
> Thanks!
>
>
> beatls
>
>
> On Tue, Jul 30, 2013 at 11:20 AM, Pablo Medina wrote:
>
> > Hi all,
> >
> > I'm having a lot of handlers (90 - 300 approx) being blocked when reading rows.
> What do you see in the logs around the time you saw that behavior? Is
> this happening on a single Region Server? What version of HBase are
> you running?
>
> cheers,
> Esteban.
>
> Cloudera, Inc.
>
> On Jul 29, 2013, at 20:21, Pablo Medina wrote:
>
> > Hi
Hi all,
I'm having a lot of handlers (90 - 300 approx) being blocked when reading
rows. They are blocked during changedReaderObserver registration.
Does anybody else run into the same issue?
Stack trace:
"IPC Server handler 99 on 60020" daemon prio=10 tid=0x41c84000
nid=0x2244 waiting on