On Wed, 10 Jan 2007, Mark Maybee wrote:

> Jason J. W. Williams wrote:
> > Hi Robert,
> >
> > Thank you! Holy mackerel! That's a lot of memory. With that type of
> > calculation, my 4GB arc_max setting is still in the danger zone on a
> > Thumper. I wonder if any of the ZFS developers could shed some light
> > on the calculation?
> >
> In a worst-case scenario, Robert's calculations are accurate to a
> certain degree:  If you have 1GB of dnode_phys data in your arc cache
> (that would be about 1,200,000 files referenced), then this will result
> in another 3GB of "related" data held in memory: vnodes/znodes/
> dnodes/etc.  This related data is the in-core data associated with
> an accessed file.  It's not quite true that this data is not evictable:
> it *is* evictable, but the space is returned from these kmem caches
> only after the arc has cleared its blocks and triggered the "free" of
> the related data structures (and even then, the kernel will need to
> do a kmem_reap to reclaim the memory from the caches).  The
> fragmentation that Robert mentions is an issue because, if we don't
> free everything, the kmem_reap may not be able to reclaim all the
> memory from these caches, as they are allocated in "slabs".
>
> We are in the process of trying to improve this situation.
.... snip .....
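
(To check my own understanding of the numbers above -- a quick
back-of-envelope sketch using only the figures from Mark's example;
the per-file cost it arrives at is my own inference, nothing official:)

GB = 1 << 30
FILES_PER_GB_DNODE_PHYS = 1200000   # "about 1,200,000 files" per 1GB
RELATED_MULTIPLIER = 3              # ~3GB related data per 1GB dnode_phys

def metadata_footprint(files_referenced):
    # Rough kernel memory tied up by keeping N files' metadata cached:
    # the dnode_phys data in the ARC plus the ~3x of related in-core
    # state (vnodes/znodes/dnodes/etc.) that Mark describes.
    dnode_bytes = files_referenced * GB / FILES_PER_GB_DNODE_PHYS
    return dnode_bytes * (1 + RELATED_MULTIPLIER)

print(metadata_footprint(1200000) / GB)   # -> 4.0 GB, i.e. ~3.5KB/file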

Understood (and many thanks).  In the meantime, is there a rule of thumb
that you could share that would allow mere humans (like me) to calculate
the best values of zfs:zfs_arc_max and ncsize, given that the machine has
n GB of RAM and is used in the following broad workload scenarios:

a) a busy NFS server
b) a general multiuser development server
c) a database server
d) an Apache/Tomcat/FTP server
e) a single-user Gnome desktop running U3 with home dirs on a ZFS
filesystem

It would seem, from reading between the lines of previous emails,
particularly the ones you (Mark M) have written, that there is a rule of
thumb that would apply given a standard or modified ncsize tunable?

I'm primarily interested in a calculation that would yield settings that
reduce the possibility of the machine descending into "swap hell".
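
To put a concrete shape on what I'm asking for, here's the kind of
calculation I have in mind (the 1GB reserve and the 4x multiplier below
are placeholder guesses of my own -- exactly the constants I'm hoping
you can confirm or correct):

GB = 1 << 30

def suggest_arc_max(ram_bytes, system_reserve=1 * GB, multiplier=4):
    # Hypothetical rule: cap the ARC so that a metadata-heavy working
    # set, with its ~4x in-core amplification, still fits in RAM.
    return (ram_bytes - system_reserve) // multiplier

arc_max = suggest_arc_max(16 * GB)              # e.g. a 16GB Thumper
print("set zfs:zfs_arc_max = 0x%x" % arc_max)   # line for /etc/system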

PS: It's interesting that no one has mentioned (the tunable) maxpgio.  I've
often found that increasing maxpgio is the only way to improve the odds of
a machine remaining usable when lots of swapping is taking place.
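
(For anyone wanting to experiment, that's a one-liner in /etc/system,
along the lines of the example below -- the value is purely
illustrative, not a recommendation:)

set maxpgio = 1024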

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
           Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
             OpenSolaris Governing Board (OGB) Member - Feb 2006