Hi Dominic,

I should have mentioned that I've set noatime already.

I haven't found any other obvious mount options that would contribute to 
'write on read' behaviour.
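
For completeness, this is roughly how I verified the effective options on one 
of the kernel clients (the monitor address and mount point below are 
placeholders, and the output line is only illustrative of my setup):

# grep ceph /proc/mounts
10.0.0.1:6789:/ /mnt/cephfs ceph rw,noatime,name=cephfs,acl 0 0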

Thx

Samy

> On 29 Jan 2020, at 15:43, dhils...@performair.com wrote:
> 
> Samy;
> 
> I had a thought: since you say the FS has high read activity, but you're 
> seeing large write I/O, is it possible that this is related to atime 
> (Linux last access time)?  If I remember my Linux FS basics, atime is stored 
> in the file's inode, and I believe CephFS keeps inode and directory entry 
> (dentry) metadata in the metadata pool.
> 
> As a test, you might try mounting CephFS with the noatime flag and see 
> whether the write I/O is reduced.
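> 
> For example, something along these lines (the monitor address, mount point 
> and client name here are just placeholders; keep your usual auth options):
> 
> # mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,noatime
> 
> and then watch whether the write I/O on the metadata pool drops.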
> 
> I honestly don't know whether CephFS supports atime, but I would expect that 
> it does.
> 
> Thank you,
> 
> Dominic L. Hilsbos, MBA 
> Director - Information Technology 
> Perform Air International Inc.
> dhils...@performair.com 
> www.PerformAir.com
> 
> 
> 
> -----Original Message-----
> From: Samy Ascha [mailto:s...@xel.nl] 
> Sent: Wednesday, January 29, 2020 2:25 AM
> To: ceph-users@ceph.io
> Subject: [ceph-users] Write i/o in CephFS metadata pool
> 
> Hi!
> 
> I've been running CephFS for a while now, and ever since setting it up I've 
> seen unexpectedly large write I/O on the CephFS metadata pool.
> 
> The filesystem is otherwise stable and I'm seeing no usage issues.
> 
> I'm in a read-intensive environment from the clients' perspective, yet 
> throughput for the metadata pool is consistently higher than that of the data 
> pool.
> 
> For example:
> 
> # ceph osd pool stats
> pool cephfs_data id 1
> client io 7.6 MiB/s rd, 19 KiB/s wr, 404 op/s rd, 1 op/s wr
> 
> pool cephfs_metadata id 2
> client io 338 KiB/s rd, 43 MiB/s wr, 84 op/s rd, 26 op/s wr
> 
> I realise, of course, that this is only a momentary snapshot of the 
> statistics, but I see the same unbalanced read/write activity consistently 
> when monitoring it live.
> 
> I would like some insight into what may be causing this large imbalance in 
> r/w, especially since I'm in a read-intensive (web hosting) environment.
> 
> Some of it may be expected when considering the details of my environment and 
> CephFS implementation specifics, so please ask if more details are 
> needed.
> 
> From my experience with NFS, I would start by looking at client I/O stats, 
> like `nfsstat`, and tuning e.g. mount options, but I haven't been able to find 
> such statistics for CephFS clients.
> 
> Is there anything of the sort for CephFS? Are similar stats obtainable in 
> some other way?
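> 
> For what it's worth, the closest I've come across so far are the MDS-side 
> counters and the kernel client's debugfs files, e.g. (the MDS name is a 
> placeholder for one of mine):
> 
> # ceph daemon mds.<name> perf dump
> # ceph daemon mds.<name> session ls
> # cat /sys/kernel/debug/ceph/*/mdsc
> 
> but none of these give me a per-client I/O summary the way nfsstat does.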
> 
> This might be a somewhat broad question with a shallow description, so let 
> me know if there's anything you'd like more details on.
> 
> Thanks a lot,
> Samy
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
