Hi!

I've been running CephFS for a while now, and ever since setting it up I've
seen unexpectedly large write I/O on the CephFS metadata pool.

The filesystem is otherwise stable and I'm seeing no usage issues.

I'm in a read-intensive environment (from the clients' perspective), yet
throughput on the metadata pool is consistently higher than on the data
pool.

For example:

# ceph osd pool stats
pool cephfs_data id 1
  client io 7.6 MiB/s rd, 19 KiB/s wr, 404 op/s rd, 1 op/s wr

pool cephfs_metadata id 2
  client io 338 KiB/s rd, 43 MiB/s wr, 84 op/s rd, 26 op/s wr

I realise, of course, that this is only a momentary snapshot of the
statistics, but I see the same unbalanced read/write activity consistently
when monitoring it live.
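
By "monitoring it live" I just mean watching the per-pool rates refresh
every couple of seconds, roughly along these lines (the single-pool form is
from memory, so forgive me if the exact syntax is slightly off):

# watch -n 2 ceph osd pool stats          (all pools, refreshed every 2s)
# ceph osd pool stats cephfs_metadata     (a single pool)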

I would like some insight into what may be causing this large read/write
imbalance, especially since I'm in a read-intensive (web hosting)
environment.

Some of it may be expected when considering the details of my environment
and CephFS implementation specifics, so please ask away if more details are
needed.

With my experience using NFS, I would start by looking at client I/O stats
with something like `nfsstat` and then tune e.g. mount options, but I
haven't been able to find equivalent statistics for CephFS clients.
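
To be concrete about what I mean on the NFS side, I would normally reach
for something along these lines and then adjust mount options based on
what it shows:

# nfsstat -c    (client-side per-operation counters)
# nfsstat -m    (mounted filesystems and their mount options)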

Is there anything of the sort for CephFS? Are similar stats obtainable in some 
other way?

This might be a somewhat broad question with a rather shallow description,
so yeah, let me know if there's anything you would like more details on.

Thanks a lot,
Samy