On Sun, Jun 05, 2011 at 01:26:20PM -0500, Tim Cook wrote:
> I'd go with the option of allowing both a weighted and a forced option.  I
> agree though, if you do primarycache=metadata, the system should still
> attempt to cache userdata if there is additional space remaining.

I think I disagree.  Remember that this is a per-dataset
attribute/option.  One of the reasons to set it on a particular
dataset is precisely to leave room in the cache for other datasets,
because I know something about the access pattern, desired service
level, or underlying storage capability. 

For example, for a pool on SSD, I will set secondarycache=none (since
the L2ARC offers no benefit there, only overhead and SSD wear).  I may
also set primarycache=metadata (i.e., something less than full data
caching), since a data miss from SSD is still pretty fast, and I will
get more value using my L1/L2 cache resources for other datasets on
slower media.
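As a concrete sketch of the tuning described above (the dataset name
ssdpool/data is a placeholder; primarycache and secondarycache are the
real per-dataset ZFS properties, with values all, metadata, or none):

```shell
# Pool backed by SSD: skip the L2ARC entirely, and keep only
# metadata in the ARC -- a data miss from SSD is still fast.
zfs set secondarycache=none ssdpool/data
zfs set primarycache=metadata ssdpool/data

# Confirm the settings took effect.
zfs get primarycache,secondarycache ssdpool/data
```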

This starts to show that these tunables are a blunt instrument.
Perhaps what would be useful is some kind of per-dataset service-level
priority attribute (default 0, values +/- small integers).  This could
be consulted in a number of places, including when deciding which of
two otherwise-equal buffers to evict/demote in cache.
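To make the idea concrete, a hypothetical interface might look like
the following.  Note the cachepriority property does not exist in ZFS
today; the name, range, and dataset names are invented here purely for
illustration:

```shell
# Hypothetical per-dataset cache priority (NOT a real ZFS property):
# default 0, small positive/negative integers.  A higher value would
# win ties when the ARC must pick one of two otherwise-equal buffers
# to evict or demote to L2ARC.
zfs set cachepriority=2 fastpool/latency-sensitive
zfs set cachepriority=-2 ssdpool/bulk
```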

That's effectively what happens anyway: the blocks do go into the ARC
while in use; they're just freed immediately afterward.

--
Dan.


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
