On Fri, 18 Sep 2009 12:48:34 -0400
Richard Elling <richard.ell...@gmail.com> wrote:

> The transactional nature of ZFS may work against you here.
> Until the data is committed to disk, it is unclear how much space
> it will consume. Compression clouds the crystal ball further.

...but not impossible. I'm just looking for a reasonable upper bound.
For example, if I always rounded up to the next 128k boundary and then
added one more 128k, that would always give me an upper bound (for
files <= 1M), as far as I can tell. But that is not a very tight bound;
can you suggest anything better?
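
For concreteness, here is a minimal sketch in C of the bound I mean
(the constant and function name are mine, just for illustration, and
the 128k figure assumes the default recordsize):

#include <stdint.h>

/*
 * Loose upper bound on the space a file of 'len' bytes can consume,
 * assuming a 128k recordsize: round up to the next 128k boundary,
 * then add one extra 128k record as slop for metadata.
 * As noted above, I only believe this holds for files <= 1M.
 */
#define CACHE_RECORDSIZE ((uint64_t)128 * 1024)

static uint64_t
cache_space_bound(uint64_t len)
{
    uint64_t records = (len + CACHE_RECORDSIZE - 1) / CACHE_RECORDSIZE;
    return (records + 1) * CACHE_RECORDSIZE;
}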

> > I'd also _like_ not to require a dedicated dataset for it, but
> > it's not like it's difficult for users to create one.
> 
> Use delegation.  Users can create their own datasets, set parameters,
> etc. For this case, you could consider changing recordsize, if you
> really are so worried about 1k. IMHO, it is easier and less expensive
> in process and pain to just buy more disk when needed.

Users of OpenAFS, not "unprivileged users"; everyone I am talking about
here administers their own machine. I would just like to reduce the
number of filesystem-specific steps needed to set up the cache. You
don't need to do anything special for a tmpfs cache, for instance, or
for an ext2/3 cache on Linux.

-- 
Andrew Deason
adea...@sinenomine.net