Hi
This is characteristic time-series data. You should prefix the row key to
avoid sending the whole workload to only one region server.
_timestamp.
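The prefixing idea above can be sketched in Java. This is a minimal illustration, not HBase API code: the bucket count, class name, and method name are all assumptions for the example, and the salt is a one-byte hash bucket prepended to the natural key so that writes with sequential timestamps spread across region servers.

```java
import java.nio.charset.StandardCharsets;

public class SaltedKey {
    static final int BUCKETS = 16; // illustrative number of salt buckets

    // Prepend a one-byte salt derived from the asset key, so sequential
    // timestamps for different assets land on different region servers,
    // while all rows for one asset stay in one contiguous range.
    static byte[] rowKey(String assetKey, long timestamp) {
        byte salt = (byte) ((assetKey.hashCode() & 0x7fffffff) % BUCKETS);
        byte[] key = (assetKey + "_" + timestamp).getBytes(StandardCharsets.UTF_8);
        byte[] out = new byte[key.length + 1];
        out[0] = salt;
        System.arraycopy(key, 0, out, 1, key.length);
        return out;
    }
}
```

Note that scans for a single asset remain cheap (same salt, one range), but a full time-range scan across all assets must now fan out over all 16 buckets.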
On 11 Dec 2013 00:35, "Steven Wu" wrote:
> Hi
>
> I am very new to HBase, still self-learning and doing a POC for our current
> project. I have a question ab
1. Scanner caching has nothing to do with caching. It sets the number of rows
per RPC call that the server transfers to the client.
Usually, it helps to improve scanner performance. It used to be 1 by
default, therefore one must set it to something larger than 1
to achieve good scanner performance.
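The arithmetic behind that advice can be made concrete. A hedged sketch, not HBase code (the class and method names here are invented for the example): a scan of N rows costs roughly ceil(N / caching) round trips, which is why the old default of 1 made scans so slow.

```java
public class ScanRpcMath {
    // Scanner caching is the number of rows fetched per RPC,
    // so a full scan of totalRows costs this many round trips:
    static long rpcCalls(long totalRows, int caching) {
        return (totalRows + caching - 1) / caching; // ceiling division
    }

    public static void main(String[] args) {
        // With the old default of 1, a million-row scan is a million RPCs.
        System.out.println(rpcCalls(1_000_000, 1));
        // Raising caching to 1000 cuts that to a thousand RPCs.
        System.out.println(rpcCalls(1_000_000, 1000));
    }
}
```

The trade-off is memory: every cached row is buffered on both server and client, so very large caching values can cause timeouts or OOM on wide rows.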
Should we default it to 0 (off) then? (As Ted pointed out, you can turn this
off).
Are you not worried about MTTR when a RegionServer dies with 3GB of logs to
replay?
-- Lars
From: Adrien Mogenet
To: user
Sent: Tuesday, December 10, 2013 2:16 PM
Subject:
100 writes/updates per minute is a very low number and HBase, of course, is able to
sustain 1.5 updates/sec (if not GBs per update).
1000 concurrent users and minimum query latency - probably possible but we do
not have enough info:
What is the SLA? What are the requests-per-sec and latency requirements? How large is t
Hi All,
I'm trying to understand how different configurations will affect
performance for my use cases. My table has the following
schema. I'm storing event logs in a single column family. The row key is in
the format [company][timestamp][uuid].
My access pattern is fairly simple. Ev
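A [company][timestamp][uuid] key like the one described above is usually encoded fixed-width, so that a scan from [company][startTs] to [company][endTs] covers one contiguous range. This is a minimal sketch of that encoding; the class name, method name, and the 8-byte company width are assumptions for the example, not anything from the thread.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.UUID;

public class EventKey {
    static final int COMPANY_LEN = 8; // assumed fixed width for the company prefix

    // Fixed-width [company][timestamp][uuid]: because every component has a
    // fixed length, lexicographic byte order equals (company, time) order,
    // so one company's events for a time window are a single scan range.
    static byte[] rowKey(String company, long timestamp, UUID uuid) {
        ByteBuffer buf = ByteBuffer.allocate(COMPANY_LEN + 8 + 16);
        byte[] c = company.getBytes(StandardCharsets.UTF_8);
        buf.put(Arrays.copyOf(c, COMPANY_LEN)); // pad or truncate to fixed width
        buf.putLong(timestamp);                 // big-endian, sorts chronologically
        buf.putLong(uuid.getMostSignificantBits());
        buf.putLong(uuid.getLeastSignificantBits());
        return buf.array();
    }
}
```

One caveat that echoes the time-series advice elsewhere in this thread: if one company dominates the write volume, its monotonically increasing timestamps still hotspot a single region.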
Hi
I am very new to HBase, still self-learning and doing a POC for our current
project. I have a question about the row key design.
I have created a big table (called the asset table); it has more than 50M
records. Each asset has a unique key (let's call it asset_key).
This table receives continuo
I do agree that the flush interval must be configurable (I think it's configurable).
> I've upgraded to 0.94.11. Here is my "worst-case scenario" :
> - let's say each regionserver has a 3 GB memstore
> - let's say the compaction max filesize is ~200 GB, min. 2 files, max 10 files.
> - let's say the memstore is growing
From hbase-default.xml:
hbase.regionserver.optionalcacheflushinterval
3600000
Maximum amount of time an edit lives in memory before being
automatically flushed.
Default 1 hour. Set it to 0 to disable automatic flushing.
Can you adjust the above parameter to fit your workload?
Hi guys,
I've upgraded to 0.94.11. Here is my "worst-case scenario" :
- let's say each regionserver has a 3 GB memstore
- let's say the compaction max filesize is ~200 GB, min. 2 files, max 10 files.
- let's say the memstore is growing "slowly" (1 GB / hour per RS)
Then, automatically flushing every hour will l
Nope. Since I've increased the replication handlers from the default, it's
reporting with a ceiling of 1; before that, a few days ago, it reported 15.
Why are RPCs queued in spite of all handlers being WAITING?
2013/12/10 Andrew Purtell
> Aside from other issues being explored on this thread, "REPL IPC
Just to close the loop, the previous recommended steps help to get us back
up, but one of the HMasters is not happy now. I will update with a final
analysis shortly.
On Tue, Dec 10, 2013 at 1:10 PM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
> Also, might be interesting to look in th
Ok. So I will say there is nothing to worry about. There is a millisecond
spent from time to time on a call. It sounds like the time it takes to go
to the queue and get picked up. So it stayed in the queue 1 ms until a
handler picked it up. And that's why you see your metric growing very slowly.
Do
Aside from other issues being explored on this thread, "REPL IPC Server
handler N on PORT WAITING Waiting for a call (since 22 hrs, 57mins, 38sec
ago)" looks to me like ReplicationSource setting up IPC with INT_MAX for a
timeout.
On Thu, Dec 5, 2013 at 8:49 PM, Federico Gaule wrote:
> Hi,
>
> I
Followed up on the issue.
On Tue, Dec 10, 2013 at 6:13 PM, tsuna wrote:
> I filed an issue (https://issues.apache.org/jira/browse/HBASE-10119)
> and attached a proposition for a fix, which simply consists in calling
> stop() when we forcefully remove a coprocessor.
>
> On Mon, Dec 9, 2013 at 10
When activating DEBUG level for org.apache.hadoop.hbase.ipc.WritableRpcEngine
I didn't get any line in my log. Checking the WritableRpcEngine code, it looks like
*org.apache.hadoop.ipc.HBaseServer.trace* should be the right one.
Here we have some lines. In case you need/want more, just tell:
2013-12-10 13
Also, it might be interesting to look in the RS logs to see why this region
can not come back online...
JM
2013/12/10 Kevin O'dell
> Hey Raheem,
>
> You can sideline the table into tmp (mv /hbase/table /tmp/table), then
> bring HBase back online. Once HBase is back you can use HBCK to repair
>
Can you activate the debug log level on
org.apache.hadoop.hbase.ipc.WritableRpcEngine and look for something like
Call #; Served: # queueTime=
We want to see what you have for queueTime. This is what is added to your
RpcQueueTime.
Thanks,
JM
I thought we could, somehow, check the queue items. Replication handler statuses
are WAITING most of the time.
Here's a sample over 1 minute, refreshing every second. I always got the same
output while RpcQueueTime_avg_time was reporting activity (handlers 29...0
were all WAITING):
hbase@pam-hb-replica-b-00:
Hey Raheem,
You can sideline the table into tmp (mv /hbase/table /tmp/table), then
bring HBase back online. Once HBase is back you can use HBCK to repair
your META -fixMeta -fixAssignments. Once HBase is consistent again, you
can move the table back out of tmp and use HBCK to reupdate META. If
I have a distributed HBase cluster that will not start. It looks like there is
a table that is in an inconsistent state:
2013-12-10 07:40:50,447 FATAL org.apache.hadoop.hbase.master.HMaster:
Unexpected state :
ct_claims,204845|81V6SO4EF56DD1TKOIU7AS4L5D,1386050670937.6d138b97cde8bc3e49ff3463991310
You can do something like this (typed it in the email, did not test it):
for i in {1..240}; do wget "
http://MASTER_IP:6001/master-status?format=json&filter=handler"; done
That will create 240 JSON files that you can look at to see handler
status. Also, on this same page, you should see your 30
I'm using hadoop-1.2.1 with hbase-0.94.12. The problem is random, because
sometimes it appears when I insert the 50th row, sometimes at the
90th. I also think it could be caused because while HBase splits the
region, ycsb tries to insert the data into the previous split. But there's not
a met
There is no coprocessor on the slaves, nor on the master.
How can I dump the RPC queues?
Thanks!
2013/12/10 Jean-Marc Spaggiari
> Here are the properties from the code regarding the handlers.
> hbase.master.handler.count
> hbase.regionserver.handler.count
> hbase.regionserver.replication.handler.count
>
Can that be because ycsb tries to get the .META. table in a version where
this table got renamed because of namespaces?
2013/12/10 Kevin O'dell
> Hey Andrea,
>
> That is very weird. I hit this similar, extremely strange one-off issue
> that I could never reproduce. After dropping a table doing X
Hey Andrea,
That is very weird. I hit this similar, extremely strange one-off issue
that I could never reproduce. After dropping a table doing X amount of
inserts, I would receive the same error and my META table would be
corrupted. What I did to correct this issue was to sideline the META tabl
Here are the properties from the code regarding the handlers.
hbase.master.handler.count
hbase.regionserver.handler.count
hbase.regionserver.replication.handler.count
hbase.regionserver.metahandler.count
Do you have any coprocessor configured on your slave cluster? Are you able
to dump the RPC que
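The handler-count properties listed above are set in hbase-site.xml. A sketch of what that could look like; the values here are purely illustrative, not recommendations:

```xml
<!-- hbase-site.xml: handler pool sizes (illustrative values) -->
<property>
  <name>hbase.regionserver.handler.count</name>
  <value>30</value>
</property>
<property>
  <name>hbase.regionserver.replication.handler.count</name>
  <value>3</value>
</property>
<property>
  <name>hbase.regionserver.metahandler.count</name>
  <value>10</value>
</property>
```

Each property sizes a separate RPC handler pool, which is why, as discussed later in the thread, raising the replication handler count does not affect queueing in the regular or priority ("PRI IPC") pools.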
Hi Andrea,
Which version of HBase are you testing with?
JM
2013/12/10 Andrea
> Hi, I encounter this problem:
>
> 13/12/10 13:25:15 WARN client.HConnectionManager$HConnectionImplementation:
> Encountered problems when prefetch META table:
> java.io.IOException: HRegionInfo was null or empty in
Hi, I encounter this problem:
13/12/10 13:25:15 WARN client.HConnectionManager$HConnectionImplementation:
Encountered problems when prefetch META table:
java.io.IOException: HRegionInfo was null or empty in Meta for usertable,
row=usertable,user4300292263938371081,99
at
org
I'm using hbase 0.94.13.
hbase.regionserver.metahandler.count is a more intuitive name for those
handlers :)
2013/12/10 Nicolas Liochon
> It's hbase.regionserver.metahandler.count. Not sure it causes the issue
> you're facing, though. What's your HBase version?
>
>
> On Tue, Dec 10, 2013 at 1:
It was written to be generic, but limited to 'put' to maintain backward
compatibility.
Some 'Row' types do not implement 'heapSize', so we have a limitation for some
types for the moment (we need the objects to implement heapSize, as we need
to know when it's time to flush the buffer). This can be
It's hbase.regionserver.metahandler.count. Not sure it causes the issue
you're facing, though. What's your HBase version?
On Tue, Dec 10, 2013 at 1:21 PM, Federico Gaule wrote:
> There is another set of handler we haven't customized "PRI IPC" (priority
> ?). What are those handlers used for? W
There is another set of handlers we haven't customized, "PRI IPC" (priority?).
What are those handlers used for? What is the property used to increase
the number of handlers? hbase.regionserver.custom.priority.handler.count?
Thanks!
2013/12/10 Federico Gaule
> I've increased hbase.regionserver
I've increased hbase.regionserver.replication.handler.count 10x (30) but
nothing has changed. rpc.metrics.RpcQueueTime_avg_time still shows
activity :(
Mon Dec 09 14:04:10 EST 2013REPL IPC Server handler 29 on 6WAITING
(since 16hrs, 58mins, 56sec ago)Waiting for a call (since 16hrs, 58mins,
5
Thanks for confirming the issue. So, to overcome this issue we need to move
from 0.94.10 to either 0.94.12 or above.
Thanks
-OC
On Tue, Dec 10, 2013 at 4:09 PM, Matteo Bertozzi wrote:
> Hi, thank you for the follow up.
>
> The problem is related to a bug in the name resolution of a "clone" (tha
Hi, thank you for the follow up.
The problem is related to a bug in the name resolution of a "clone" (which
in your case is the restore of a snapshot of a restored table).
This problem was fixed as part of HBASE-8760, which should be integrated in
0.94.12.
Matteo
On Tue, Dec 10, 2013 at 10
I filed an issue (https://issues.apache.org/jira/browse/HBASE-10119)
and attached a proposition for a fix, which simply consists in calling
stop() when we forcefully remove a coprocessor.
On Mon, Dec 9, 2013 at 10:59 PM, lars hofhansl wrote:
> You can also try to place your "global" variables in
Hi,
Thanks for your help.
More details are added below. Please let us know if any additional logs are
required.
We have only one cluster (cluster-1) with 1 NN and 4 DNs, and the HBase
version is 0.94.10.
We are using the hbase shell to create and export snapshots.
In short, what we are trying to do is:
-C