Hi Jose,

Well, it depends on the total size of your zpool and how often these files are changed.

I was at a customer, a huge internet provider, who had 40 X4500s running standard Solaris and using ZFS. All the machines were equipped with 48x 1TB disks. The machines were used to provide the email platform, so all the user email accounts were on those systems. That also meant millions of files in one zpool. What they noticed on the X4500 systems was that when a zpool filled up to about 50-60%, the performance of the system dropped enormously. They claim this has to do with fragmentation of the ZFS filesystem.

So we tried putting in an S7410 system there with about the same disk configuration, 44x 1TB SATA, but with 4x 18GB WriteZilla (in a stripe). We were able to get many more I/Os out of that system than out of the comparable X4500. However, after they put it in production for a couple of weeks, as soon as the ZFS filesystem got into the range of about 50-60% full, they saw the same problem: performance dropped enormously.

NetApp has the same problem with their WAFL filesystem (they tested this as well), but they do provide a defragmentation tool for it. That is not a nice solution either, because you have to run it, manually or scheduled, and it takes a lot of system resources, but it helps.

I hear Sun is denying that we have this problem in ZFS, and that we therefore don't need some kind of defragmentation mechanism, but our customer experiences are different...

Maybe it would be good for the ZFS group to look at this (potential) problem. The customer I am talking about is willing to share their experiences with Sun engineering.

greetings,

Cor Beumer

Jose Martins wrote:
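P.S. Until there is a real fix, the practical workaround is to watch pool utilization and keep it under that 50-60% range. A minimal sketch of such a check (the pool name "tank" and the 60% threshold are my assumptions; here a sample `zpool list` value stands in, since no live pool is available):

```shell
#!/bin/sh
# Warn when a zpool passes the fill level where we saw performance drop.
# In real use, replace the sample value with:
#   cap=$(zpool list -H -o capacity tank | tr -d '%')
cap=$(echo "62%" | tr -d '%')   # sample capacity value, assumed for illustration
threshold=60                    # assumed threshold, based on the 50-60% range above

if [ "$cap" -gt "$threshold" ]; then
  echo "WARN: pool at ${cap}% (> ${threshold}%), expect fragmentation slowdown"
else
  echo "OK: pool at ${cap}%"
fi
```

Run from cron, this at least gives warning before the pool drifts into the problem range.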
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss