> So that seems like a large number of inodes, considering that we are using 
> just 15% of the total capacity. Another factor here is that we are using an 
> inode size of 512K and we have a large amount of memory.
> Also, one of our disks is always close to 100% utilization, which is caused 
> by the object-auditor continuously scanning all our disks.

> So we were also thinking about changing the kind of disks we are using, to 
> smaller and faster ones.
> It would be really useful to know what kind of disks you are using in your 
> old and new storage nodes, so we can compare that with our case.

Max,

Old config:
* 6 x 2TB 3.5-inch disks (I think 5400 RPM)
* 10GB flashcache per disk, on 1 SSD
* 48GB memory

Moving to:
* 10 x 1TB 2.5-inch disks (7200 RPM)
* 10GB flashcache per disk, split over 2 x SSDs
* 256GB memory

We are considering changing to 128GB of memory for the new nodes:
strangely, the 48GB is fully used, but when we look at the 256GB machines 
only about 20GB is used for inodes and about 55GB is used in total.
vm.vfs_cache_pressure is set to 1; you can't really set it to 0, because the 
machine will run out of memory (if not immediately, then after a rebalance 
in Swift).
Maybe the machines are just not under enough load yet to make them cache more 
inodes, but currently it does not look like 256GB is needed.
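For reference, the setting mentioned above is an ordinary sysctl; a minimal 
persistent config would look like this (the file path is just an example):

```
# /etc/sysctl.d/60-swift-inode-cache.conf (example path)
# Low values make the kernel strongly prefer keeping the inode/dentry caches
# in memory; 0 disables reclaim of those caches entirely, which is what can
# OOM the box after a Swift rebalance churns through many inodes.
vm.vfs_cache_pressure = 1
```

It can be applied at runtime with `sysctl -w vm.vfs_cache_pressure=1`, and 
`grep -i slab /proc/meminfo` gives a quick view of how much memory the 
kernel caches are actually holding.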

Concerning the object-auditor:
We run a cronjob on our machines that applies ionice to the Swift processes: 
preference is given to the object-server, while the replicator and auditor 
get a lower priority.
Those processes will still max out a disk, but hopefully they should not hurt 
regular performance too much.
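A sketch of what such a cronjob could look like (the standard Swift daemon 
names are assumed here; adjust the process names and classes to your setup):

```shell
#!/bin/sh
# Replicator and auditor: idle IO class, so they only get the disk when
# nothing with a higher priority wants it.
for name in swift-object-replicator swift-object-auditor; do
  for pid in $(pgrep -f "$name"); do
    ionice -c 3 -p "$pid"
  done
done

# Object-server: best-effort class, highest priority within that class,
# so client-facing IO wins over background scanning.
for pid in $(pgrep -f swift-object-server); do
  ionice -c 2 -n 0 -p "$pid"
done

status=done
```

Note that ionice classes only take effect with an IO scheduler that honours 
them (CFQ at the time), so the auditor can still saturate an otherwise idle 
disk, as described above.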

Let me know if you run swift-bench or COSBench against your setup.
I'm curious what number of PUTs you are able to process currently.
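In case it helps: a COSBench PUT-only run is described by a workload XML 
file roughly along these lines. The names, worker counts, and the storage 
config string are placeholders, and the exact config keys should be taken 
from the COSBench user guide rather than from this sketch:

```
<workload name="swift-put-test" description="PUT-only benchmark sketch">
  <storage type="swift" config="..." />
  <workflow>
    <workstage name="main">
      <work name="puts" workers="16" runtime="300">
        <operation type="write" ratio="100"
                   config="containers=r(1,4);objects=r(1,10000);sizes=c(4)KB" />
      </work>
    </workstage>
  </workflow>
</workload>
```

The throughput and response-time numbers COSBench reports for a stage like 
this would make our setups directly comparable.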

Cheers,
Robert
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack