On 24-Feb-10, at 3:38 PM, Tomas Ögren wrote:
On 24 February, 2010 - Bob Friesenhahn sent me these 1,0K bytes:
On Wed, 24 Feb 2010, Steve wrote:
The overhead I was thinking of was more in the pointer structures... (bearing in mind this is a 128-bit file system), I would guess that memory requirements would be HUGE for all these files... otherwise the ARC is going to struggle, and the paging system is going to go mental...?
It is not reasonable to assume that ZFS has to retain everything in memory.
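
To put a rough number on the worst case (ballpark assumptions, not measured figures: ~512 bytes of on-disk dnode per file, plus a couple hundred bytes of in-core bookkeeping per cached buffer):

    # What WOULD it cost to cache metadata for every file at once?
    files = 400_000_000       # the 400 million tiny files in question
    dnode_bytes = 512         # assumed on-disk dnode size
    incore_bytes = 200        # assumed per-buffer in-core overhead

    worst_case = files * (dnode_bytes + incore_bytes)
    print(f"all metadata resident at once: ~{worst_case / 2**30:.0f} GiB")
    # ~265 GiB -- which is exactly why ZFS does not try to keep it
    # all in RAM: the ARC caches the metadata that is actually being
    # touched and evicts the rest.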
I have a directory here containing a million files and it has not caused any strain for ZFS at all, although it can cause considerable stress on applications.
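
The application-side stress usually comes from tools that slurp the whole listing into memory and sort it (a plain "ls" does roughly this) rather than streaming entries. A quick Python sketch of the two patterns; the path is made up:

    import os

    BIGDIR = "/tank/bigdir"  # hypothetical directory with ~1M entries

    # Stressful pattern: materialize every name, then sort -- memory
    # use grows with the directory size, like "ls" or a shell glob.
    names = sorted(os.listdir(BIGDIR))

    # Gentler pattern: stream entries one at a time -- memory use
    # stays flat no matter how many files the directory holds.
    count = 0
    for entry in os.scandir(BIGDIR):
        if entry.is_file(follow_symlinks=False):
            count += 1
    print(f"{count} files")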
400 million tiny files is quite a lot, and I would hate to use anything but mirrors with so many tiny files.
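
The reason mirrors win for tiny files (as I understand the raidz allocator; treat the arithmetic as a sketch): raidz pads every allocation to a multiple of (parity + 1) sectors, so a single-sector file still burns a full parity group, and a raidz vdev only delivers about one disk's worth of random-read IOPS. Assuming 4 KiB sectors (ashift=12):

    # On-disk cost of one sub-4K file block, assuming 4 KiB sectors.
    sector = 4096

    def raidz_alloc(data_sectors, parity):
        """Bytes for one raidz allocation, rounded up to a multiple
        of (parity + 1) sectors."""
        total = data_sectors + parity
        group = parity + 1
        return ((total + group - 1) // group) * group * sector

    print("2-way mirror:", 2 * sector, "bytes")          # 8192
    print("raidz1:      ", raidz_alloc(1, 1), "bytes")   # 8192
    print("raidz2:      ", raidz_alloc(1, 2), "bytes")   # 12288
    # raidz gives up its space advantage on tiny files while keeping
    # its lower random-read IOPS, so mirrors are the safer choice.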
Another thought is: "am I using the correct storage model for this data?"
You're not the only one wondering that. :)
--Toby
/Tomas
--
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss