On 29/01/2020 10:24, Samy Ascha wrote:
> I've been running CephFS for a while now and ever since setting it up, I've 
> seen unexpectedly large write I/O on the CephFS metadata pool.
>
> The filesystem is otherwise stable and I'm seeing no usage issues.
>
> I'm in a read-intensive environment from the clients' perspective, and 
> throughput for the metadata pool is consistently larger than that of the 
> data pool.
>
> [...]
>
> This might be a somewhat broad question and shallow description, so yeah, let 
> me know if there's anything you would like more details on.

No explanation, but chiming in, as I've seen something similar happen on
my single-node "cluster" at home, where I'm exposing a CephFS through
Samba using vfs_ceph, mostly for Time Machine backups. Running Ceph
14.2.6 on Debian Buster.

I can easily perform debugging operations there, no SLA in place :)
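For anyone wanting to compare notes, this is roughly how I'd start narrowing down where the metadata writes come from — a sketch using the standard Ceph CLI, with placeholder pool and MDS names (cephfs_metadata, cephfs_data, mds.0) that you'd swap for your own:

```shell
# Per-pool client I/O rates -- confirms whether the metadata pool really
# sees more write throughput than the data pool
ceph osd pool stats cephfs_metadata cephfs_data

# Cumulative per-pool usage and op counts since pool creation
rados df

# MDS perf counters -- the objecter section shows the ops the MDS itself
# issues against the metadata pool (journal flushes, dirfrag updates, ...)
ceph tell mds.0 perf dump objecter
```

Watching `ceph osd pool stats` over a few minutes while the clients are idle vs. busy should show whether the metadata writes track client activity (e.g. atime/mtime updates, cap churn) or the MDS's own journaling.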

Jasper

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
