Hello all,
I have recently been working on Ceph performance analysis on our cluster. Our OSD
hardware looks like:
11 SATA disks, 4TB each, 7200 RPM
48GB RAM
When breaking down the latency, we found that half of it (average latency
is around 60 milliseconds via radosgw) comes from file lookup and open (there
could be a couple of disk seeks there). Looking at the file system cache
(slabtop), we found that only around 5M dentries / inodes are cached, while the
host has around 110 million files (and directories) in total.
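For reference, this is roughly how we are inspecting the cache; the slab
object sizes in the estimate below are assumptions for a typical 64-bit
kernel, not exact measurements from our host:

    # Show the largest slab caches (dentry, xfs_inode, ...), sorted by cache size
    slabtop -o -s c | head -20

    # Dentry cache counters (number in use, number allocated, ...)
    cat /proc/sys/fs/dentry-state

    # Bias the kernel toward keeping dentries/inodes cached (default is 100);
    # this trades page cache for metadata cache, so it may hurt other workloads
    sysctl -w vm.vfs_cache_pressure=10

Back-of-the-envelope: assuming roughly 1KB per xfs_inode plus ~200 bytes per
dentry, caching all ~110 million entries would need on the order of
110M * 1.2KB, i.e. around 130GB, far more than the 48GB of RAM, so cold
lookups will always incur some disk seeks.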
I am wondering if anyone in the community has experience tuning for this kind
of workload, e.g. changing the inode size, or using the mkfs.xfs -n size=64k
option [1]? A sketch of the format I have in mind follows below.
[1] http://xfs.org/index.php/XFS_FAQ#Q:_Performance:_mkfs.xfs_-n_size.3D64k_option
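For concreteness, this is the kind of format I have in mind. Note that
-n size only takes effect at mkfs time, so each OSD would have to be
recreated and backfilled; the 2048-byte inode size is just an assumption
based on the common advice to keep Ceph's xattrs inline, not something I
have benchmarked:

    # Reformat an OSD data disk with 64KB directory blocks and 2KB inodes
    # (destroys all data; /dev/sdX is a placeholder for the OSD disk)
    mkfs.xfs -f -i size=2048 -n size=64k /dev/sdX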
Thanks,
Guang