Hi,
We’ve been running an all-SSD Ceph cluster for a few months now and are
generally very happy with it.
However, we’ve noticed that if we create a snapshot of an RBD device, writes
to that RBD become massively slower than they were before we took the
snapshot.
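(For what it’s worth, a minimal way to reproduce what we’re seeing, assuming
a kernel-mapped test image; the pool/image/snapshot names and the fio job
parameters below are just examples:

    rbd map rbd/testvol                  # shows up as e.g. /dev/rbd0
    fio --name=before --filename=/dev/rbd0 --rw=randwrite --bs=4k \
        --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based
    rbd snap create rbd/testvol@snap1
    fio --name=after --filename=/dev/rbd0 --rw=randwrite --bs=4k \
        --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based

The “after” run is where the write IOPS fall off for us.)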
Similarly, we get poor performance if
Hi Haomai,
Thanks for that suggestion. To test it out, I have:
1. upgraded to the 3.19 kernel
2. added filestore_fiemap = true to the [osd] section of my ceph.conf (see
   the snippet after this list)
3. wiped and rebuilt the Ceph cluster
4. recreated the RBD volume
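For reference, the stanza I added for step 2 looks like this (the rest of my
[osd] settings are omitted here):

    [osd]
    filestore_fiemap = true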
But I am still only getting around 120 IOPS after a snapshot. The lo
> On 19/11/2015, at 23:36, Haomai Wang wrote:
> Hmm, what's the actual capacity usage in this volume? Fiemap could
> help a lot for a normal workload volume with a sparse data distribution.
I’m using basically the whole volume, so it’s not really sparse.
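(If anyone wants to check this on their own volumes, one way, assuming an
image called rbd/testvol, is to sum the allocated extents that rbd diff
reports:

    rbd diff rbd/testvol | awk '{ used += $2 } END { print used/1024/1024 " MB" }'

For this volume that comes out close to the full provisioned size.)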
>
> Hmm, it's really a strange result for