Hi Stefan,

September 4 2014 9:13 PM, "Stefan Priebe" <s.pri...@profihost.ag> wrote: 
> Hi Dan, hi Robert,
> 
> On 04.09.2014 at 21:09, Dan van der Ster wrote:
> 
>> Thanks again for all of your input. I agree with your assessment -- in
>> our cluster we avg <3ms for a random (hot) 4k read already, but > 40ms
>> for a 4k write. That's why we're adding the SSDs -- you just can't run a
>> performant RBD service without them.
> 
> How did you measure these latencies?

Average latency from rados bench -p test 10 write -b 4096 --no-cleanup, followed 
immediately by a rados bench seq -- every 10 minutes for the past few months. In 
my experience rados bench is not far off from what a user sees with ioping 
inside a VM.
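
For reference, roughly what the probe looks like (pool name "test" and the 10s 
runtime are just what we happen to use; the final cleanup call simply removes 
the benchmark objects):

    # 4k writes for 10 seconds; keep the objects so the read pass has data
    rados bench -p test 10 write -b 4096 --no-cleanup
    # then sequential reads of the objects we just wrote
    rados bench -p test 10 seq
    # remove the benchmark objects afterwards
    rados -p test cleanup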

40ms is not unusable... users don't complain (much). But when we lose a whole 
server (like what happened last week), the write latency can climb into the 
100-200ms range.

How does the latency change in your cluster when you have a large backfilling 
event?
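
(For anyone hitting the same thing: the usual way to blunt a backfill spike is 
to throttle recovery. The option names below are real Ceph options; the values 
are illustrative, not a recommendation:)

    # limit concurrent backfills per OSD and cap in-flight recovery ops
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'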

> 
>> I'll definitely give bcache a try in my test setup, but more reading has
>> kinda tempered my expectations -- the rate of oopses and hangs on the
>> bcache ML seems a bit high. And a 3.14 kernel would indeed still be a
>> challenge on our RHEL6 boxes.
> 
> bcache works fine with 3.10 and a bunch of patches ;-) Not sure if you
> can upgrade to RHEL7, and also not sure if RHEL already has some of them
> ready.
> 

IIRC bcache is disabled in the RHEL7 kconfig -- so there's at least one patch 
it would need.
My colleague is using elrepo -- but this would probably void our support 
contract. 
Sure would be nice if RHEL would add support, but I think they prefer dmcache 
at the moment.
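
A quick way to check whether a given kernel has it built (assuming the distro 
ships its config under /boot, as RHEL does):

    # bcache needs CONFIG_BCACHE=y or =m; RHEL7 reportedly ships neither
    grep CONFIG_BCACHE /boot/config-$(uname -r)
    # if it's built as a module, this should succeed
    modprobe bcache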

> We've been using bcache on one of our Ceph clusters for more than a year

Do you mirror the bcache devices, or just let the OSDs fail when an SSD fails? 
Can you share the exact bcache config you've settled on?
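
(For concreteness, this is the sort of setup I'm picturing, per the upstream 
bcache docs; /dev/sdb as the SSD and /dev/sdc as the backing disk are 
placeholders:)

    # format the SSD as a cache device and the spinner as a backing device
    make-bcache -C /dev/sdb
    make-bcache -B /dev/sdc
    # attach the backing device to the cache set (UUID printed by the -C step)
    echo <cset-uuid> > /sys/block/bcache0/bcache/attach
    # writeback mode is what would actually help our write latency
    echo writeback > /sys/block/bcache0/bcache/cache_mode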

> based on kernel 3.10 + 15 patches, and I never saw a crash or hang since
> applying them ;-) But yes, with a vanilla kernel it's not that stable.
> 

Maybe we want to take this off list, but I would be curious to know which 15 
patches those were :)

Cheers, Dan