How is your HBase performance on Ceph compared to HDFS? Were there any
special knobs you needed to turn?

I'm running a few HBase tests with the Yahoo! Cloud Serving Benchmark
(YCSB) against both Ceph and HDFS, and the results were very surprising,
especially since the plain Hadoop + Ceph results were not all that bad.
For the test setup, I have about 50 million rows distributed over two
nodes, and I ran a single YCSB client with 128 threads doing 100% reads
against that data.
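
For reference, the run looks roughly like this (the workload file name,
record/operation counts, and column family below are approximations from
memory, not the exact files I used):

  # workloads/readonly: 100% reads over the preloaded rows
  recordcount=50000000
  operationcount=20000000
  maxexecutiontime=720
  workload=com.yahoo.ycsb.workloads.CoreWorkload
  readproportion=1.0
  updateproportion=0
  scanproportion=0
  insertproportion=0
  requestdistribution=zipfian

  # single YCSB client, 128 threads, stock hbase binding
  bin/ycsb run hbase -P workloads/readonly -p columnfamily=f1 -threads 128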

The HBase + HDFS throughput was about 38x better. Looking at the system
stats I don't see anything special going on: no significant CPU
utilization, and practically no disk reads, which means it's all being
served from cache somewhere. The 10GbE network is nowhere close to
saturating, and there's plenty of free memory as well.
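
Those observations come from watching the usual counters on both nodes
during the run, along the lines of:

  iostat -x 5     # per-disk utilization and read throughput
  sar -n DEV 5    # traffic on the 10GbE interface
  free -m         # memory / page cache headroom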

Any thoughts on what might be going on?


For HBase + HDFS:
-------------------------
[OVERALL], RunTime(ms), 720023.0
[OVERALL], Throughput(ops/sec), 16373.007528926159
[READ], Operations, 11788942
[READ], AverageLatency(us), 7814.152212217178
[READ], MinLatency(us), 166
[READ], MaxLatency(us), 596026
[READ], 95thPercentileLatency(ms), 19
[READ], 99thPercentileLatency(ms), 23
[READ], Return=0, 11788942

For HBase + Ceph:
--------------------------
[OVERALL], RunTime(ms), 720681.0
[OVERALL], Throughput(ops/sec), 420.35102909609105
[READ], Operations, 302939
[READ], AverageLatency(us), 304294.1931676014
[READ], MinLatency(us), 312
[READ], MaxLatency(us), 1539757
[READ], 95thPercentileLatency(ms), 676
[READ], 99thPercentileLatency(ms), 931
[READ], Return=0, 302939

Thx



On Wed, Jul 17, 2013 at 6:27 PM, Mike Bryant <mike.bry...@ocado.com> wrote:

> Yup, that was me.
> We have hbase working here.
> You'll want to disable localized reads, as per bug #5388. That bug
> will cause your regionservers to crash fairly often when doing
> compaction.
> You'll also want to restart each of the regionservers and masters
> often (we're doing it once a day) to mitigate the effects of bug
> #5039, which causes your data pool to grow much faster than you'd
> expect, and to end up much larger than the visible file size in cephfs.
>
> With those workarounds in place we're running a stable install of
> OpenTSDB on top of HBase.
>
> Mike
>
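
For anyone else hitting this: the localized-reads workaround Mike
mentions looks to be a core-site.xml property in the cephfs Hadoop
bindings, something like the snippet below. The property name is taken
from the cephfs-hadoop docs, so double-check it against your plugin
version:

  <property>
    <name>ceph.localize.reads</name>
    <value>false</value>
  </property>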
