[ceph-users] RBD snapshots cause disproportionate performance degradation

2015-11-17 Thread Will Bryant
Hi,

We’ve been running an all-SSD Ceph cluster for a few months now and generally are very happy with it. However, we’ve noticed that if we create a snapshot of an RBD device, then writing to the RBD goes massively slower than before we took the snapshot. Similarly, we get poor performance if
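For anyone wanting to reproduce the pattern described above (benchmark random writes on a mapped RBD image, take a snapshot, benchmark again), a minimal sketch follows. The pool name, image name, and fio parameters are placeholders and not taken from the original report.

    # Create and map a test image (pool/image names are hypothetical)
    rbd create bench/testimg --size 10240
    rbd map bench/testimg                     # e.g. maps to /dev/rbd0

    # Baseline 4k random-write IOPS before any snapshot exists
    fio --name=pre-snap --filename=/dev/rbd0 --rw=randwrite --bs=4k \
        --iodepth=32 --direct=1 --ioengine=libaio --runtime=60 --time_based

    # Take a snapshot, then re-run the identical fio job
    rbd snap create bench/testimg@snap1
    fio --name=post-snap --filename=/dev/rbd0 --rw=randwrite --bs=4k \
        --iodepth=32 --direct=1 --ioengine=libaio --runtime=60 --time_based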

Re: [ceph-users] RBD snapshots cause disproportionate performance degradation

2015-11-18 Thread Will Bryant
Hi Haomai,

Thanks for that suggestion. To test it out, I have:

1. upgraded to the 3.19 kernel
2. added filestore_fiemap = true to my ceph.conf in the [osd] section
3. wiped and rebuilt the ceph cluster
4. recreated the RBD volume

But I am still only getting around 120 IOPS after a snapshot. The lo
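For reference, the fiemap change described in step 2 would look roughly like this in ceph.conf. The restart commands depend on the distro and init system in use and are only illustrative, not from the original thread.

    [osd]
        filestore_fiemap = true

    # Restart the OSDs so the setting takes effect, e.g.:
    #   sudo systemctl restart ceph-osd@0
    # or on older init systems:
    #   sudo restart ceph-osd id=0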

Re: [ceph-users] RBD snapshots cause disproportionate performance degradation

2015-11-19 Thread Will Bryant
> On 19/11/2015, at 23:36, Haomai Wang wrote:
> Hmm, what's the actual capacity usage in this volume? Fiemap could
> help a lot to a normal workload volume like sparse data distribution.

I’m using basically the whole volume, so it’s not really sparse.

> Hmm, it's really a strange result for
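A quick way to check how much of an RBD image is actually allocated (which is what determines whether fiemap can help with sparse data) is sketched below. This is an illustrative addition rather than part of the original exchange, and the pool/image names are placeholders.

    # Provisioned vs. actual usage (if your rbd version supports 'du')
    rbd du bench/testimg

    # Older alternative: sum the allocated extents reported by rbd diff
    rbd diff bench/testimg | awk '{ used += $2 } END { print used/1024/1024 " MB used" }'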