Cor Beumer - Storage Solution Architect wrote:
Hi Jose,

Well, it depends on the total size of your zpool and how often these files are changed.

...and the average size of the files. For small files, it is likely that the default
recordsize will not be optimal, for several reasons.  Are these small files?
-- richard
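For reference, recordsize is a per-dataset ZFS property that can be checked and changed with the standard commands; a minimal sketch (the pool/dataset name tank/mail and the 8K value are only illustrative, not a recommendation for this workload):

```shell
# Show the current recordsize (the default is 128K)
zfs get recordsize tank/mail

# For many small files, a smaller recordsize may reduce read-modify-write
# overhead; note this only affects files written after the change.
zfs set recordsize=8K tank/mail
```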


I was at a customer, a huge internet provider, who had 40 X4500s running standard Solaris and using ZFS. All the machines were equipped with 48x 1TB disks. The machines were used to provide the email platform, so all the user email accounts were on these systems. That also meant millions of files in one zpool.

What they noticed on the X4500 systems was that when the zpool filled up to about 50-60%, the performance of the system
dropped enormously.
They claim this has to do with fragmentation of the ZFS filesystem. So we tried putting in an S7410 system there with about the same disk configuration, 44x 1TB SATA but with 4x 18GB Writezilla devices (in a stripe), and we were able to get much, much more I/O from that system than from the comparable X4500. However, they put it into production for a couple of weeks, and as soon as the ZFS filesystem came into the range of about 50-60% full, they saw the same problem: performance dropped enormously.

NetApp has the same problem with their WAFL filesystem (they tested this as well); however, NetApp does provide a defragmentation tool for it. That is also not a nice solution, because you have to run it, manually or on a schedule, and it takes a lot of system resources, but it helps.
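For anyone trying to watch for this, the pool fill level is visible in the CAP column of zpool list (the pool name mpool is made up here):

```shell
# CAP shows how full the pool is; in the cases described above,
# performance reportedly degraded past roughly 50-60% full.
zpool list mpool
```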

I hear that Sun denies ZFS has this problem, and that we therefore don't need any kind of defragmentation mechanism;
however, our customers' experiences are different.

Maybe it would be good for the ZFS group to look at this (potential) problem.

The customer I am talking about is willing to share their experiences with Sun engineering.

greetings,

Cor Beumer


Jose Martins wrote:

Hello experts,

IHAC who wants to put more than 250 million files on a single
mountpoint (in a directory tree with no more than 100 files in each
directory).
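As a rough back-of-the-envelope for that layout (assuming internal directories also fan out about 100-wide, which the original mail does not state), a find over this tree has to walk millions of directories, so metadata performance dominates:

```shell
total_files=250000000
per_dir=100   # files per leaf directory, per the stated layout

# ceil(total/per_dir): leaf directories needed to hold all files
leaf_dirs=$(( (total_files + per_dir - 1) / per_dir ))
echo "$leaf_dirs"

# smallest depth d with per_dir^d >= leaf_dirs
# (100^3 = 1,000,000 < 2,500,000 <= 100^4)
d=0; n=1
while [ "$n" -lt "$leaf_dirs" ]; do n=$(( n * per_dir )); d=$(( d + 1 )); done
echo "$d"
```

So even a balanced tree needs around 2.5 million leaf directories, roughly four levels deep, which is why per-directory lookup and traversal cost matters so much here.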

He wants to share this filesystem over NFS and mount it on
many Debian Linux clients.

We are proposing a 7410 Open Storage appliance...

He claims that certain operations like find, even when run from
the Linux clients against such an NFS mountpoint, take significantly
more time than when the NFS share is provided by other NAS vendors
like NetApp...

Can someone confirm whether this is really a problem for ZFS filesystems?...

Is there any way to tune it?...
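Not a confirmed fix for the find slowness, but a commonly suggested first tunable for metadata-heavy traversals is disabling atime updates, so that reads don't generate writes (the dataset name tank/export is hypothetical):

```shell
# Every file visited by find would otherwise trigger an atime update,
# i.e. a write; turning it off is a common tuning for this workload.
zfs set atime=off tank/export

# Verify the property took effect
zfs get atime tank/export
```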

We would be grateful for any input.

Best regards

Jose




--
<http://www.sun.com>        *Cor Beumer *
  Data Management & Storage

  *Sun Microsystems Nederland BV*
  Saturnus 1
  3824 ME Amersfoort The Netherlands
  Phone +31 33 451 5172
  Mobile +31 6 51 603 142
  Email cor.beu...@sun.com

------------------------------------------------------------------------

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
