On Wed, 16 Apr 2014 17:08:09 +0200 Dan van der Ster wrote:

> Dear ceph-users,
> 
> I've recently started looking through our FileStore logs to better 
> understand the VM/RBD IO patterns, and noticed something interesting. 
> Here is a snapshot of the write lengths for one OSD server (with 24 
> OSDs) -- I've listed the top 12 write lengths ordered by number of 
> writes in one day:
> 
> Writes per length:
> 4096: 2011442
> 8192: 438259
> 4194304: 207293
> 12288: 175848
> 16384: 148274
> 20480: 69050
> 24576: 58961
> 32768: 54771
> 28672: 43627
> 65536: 34208
> 49152: 31547
> 40960: 28075
> 
> There were ~4000000 writes to that server on that day, so you can see 
> that ~50% of the writes were 4096 bytes; the distribution then drops 
> off sharply before peaking again at 4MB (the object size, i.e. the max 
> write size). (For those interested, read lengths are below in the P.S.)
> 
> I'm trying to understand that distribution, and the best explanation 
> I've come up with is that these are ext4/xfs metadata updates, probably 
> atime updates. Based on that theory, I'm going to test noatime on a few 
> VMs and see if I notice a change in the distribution.
> 
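For anyone who wants to reproduce the percentages above, here is a quick sketch (counts copied from the quoted table; the ~4,000,000 daily total is Dan's approximate figure, so the fractions are approximate too):

```python
# Write-length counts quoted from the table above (top entries only).
writes = {
    4096: 2011442, 8192: 438259, 4194304: 207293, 12288: 175848,
    16384: 148274, 20480: 69050, 24576: 58961, 32768: 54771,
    28672: 43627, 65536: 34208, 49152: 31547, 40960: 28075,
}
total = 4_000_000  # approximate total writes that day, per the thread

frac_4k = writes[4096] / total
covered = sum(writes.values()) / total

print(f"4 KiB writes:        {frac_4k:.1%} of all writes")   # ~50%
print(f"top lengths cover:   {covered:.1%} of all writes")
```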
That strikes me as odd: since kernel 2.6.30 the default mount option
has been relatime, which should have an effect quite close to that of
strict noatime.
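For reference, a quick way to check which atime behaviour a filesystem is actually using inside a VM (a sketch for a Linux guest; the mount point here is just an example, not taken from the thread):

```shell
# Print the mount options in effect for the root filesystem.
# With no explicit atime option in fstab, kernels >= 2.6.30 will
# normally show "relatime" here rather than strict atime updates.
awk '$2 == "/" {print $4}' /proc/mounts

# To compare against noatime, a remount would look like this
# (hypothetical; pick a non-critical test mount in practice):
#   mount -o remount,noatime /
```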

Regards,

Christian
-- 
Christian Balzer        Network/Systems Engineer                
ch...@gol.com           Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
