Hi,

We experienced something similar with our OpenStack Swift setup.
You can change the sysctl "vm.vfs_cache_pressure" to make sure more inodes are
kept in the cache.
(Do not set it to 0, though: the kernel will then never reclaim the dentry/inode
cache and you will trigger the OOM killer at some point ;)
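
A minimal sketch of what that looks like (the value 10 below is only an
example to illustrate the knob, not a recommendation for your workload):

    # Lower vm.vfs_cache_pressure so the kernel prefers keeping the
    # dentry/inode cache over reclaiming it (the default is 100).
    sysctl -w vm.vfs_cache_pressure=10

    # Make the setting persistent across reboots.
    echo "vm.vfs_cache_pressure = 10" >> /etc/sysctl.conf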

We also decided to go for nodes with more memory and smaller disks.
You can read about our experiences here:
http://engineering.spilgames.com/openstack-swift-lots-small-files/

Cheers,
Robert

> From: ceph-users-boun...@lists.ceph.com [ceph-users-boun...@lists.ceph.com] 
> on behalf of Guang Yang [yguan...@yahoo.com]
> Hello all,
> Recently I have been working on Ceph performance analysis on our cluster; our
> OSD hardware looks like:
> 11 SATA disks, 4TB for each, 7200RPM
> 48GB RAM
>
> When breaking down the latency, we found that half of it (the average
> latency is around 60 milliseconds via radosgw) comes from file lookup and open
> (there could be a couple of disk seeks there). When looking at the file
> system cache (slabtop), we found that around 5M dentries / inodes are cached;
> however, the host has around 110 million files (and directories) in total.
>
> I am wondering if there is any good experience within the community tuning for
> the same workload, e.g. changing the inode size? Using the mkfs.xfs -n size=64k
> option[1]?
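
For what it's worth, a sketch of the mkfs.xfs invocation the question refers to
(the device path and the inode size are placeholders, not a tested
recommendation for this workload):

    # -n size=64k enlarges the directory block size; -i size=2048 enlarges
    # the inode size so more xattrs fit inline. /dev/sdX is a placeholder.
    mkfs.xfs -n size=64k -i size=2048 /dev/sdX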

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
