On Sep 21, 2009, at 7:11 AM, Andrew Deason wrote:
> On Sun, 20 Sep 2009 20:31:57 -0400
> Richard Elling <richard.ell...@gmail.com> wrote:
>
>> If you are just building a cache, why not just make a file system
>> and put a reservation on it? Turn off auto snapshots and set other
>> features as per best practices for your workload? In other words,
>> treat it like we treat dump space.
>>
>> I think that we are getting caught up in trying to answer the
>> question you ask rather than solving the problem you have...
>> perhaps because we don't understand the problem.
> Yes, possibly... some of these suggestions don't quite make a lot of
> sense to me. We can't just make a filesystem and put a reservation on
> it; we are just an application the administrator puts on a machine
> for it to access AFS. So I'm not sure when you are imagining we do
> that; when the client starts up? Or as part of the installation
> procedure? Requiring a separate filesystem seems unnecessarily
> restrictive.
>
> And I still don't see how that helps. Making an fs with a reservation
> would definitely limit us to the specified space, but we still can't
> get an accurate picture of the current disk usage. I already
> mentioned why using statvfs is not usable with that commit delay.
>> OK, so the problem you are trying to solve is "how much stuff can I
>> place in the remaining free space?" I don't think this is knowable
>> for a dynamic file system like ZFS, where metadata is dynamically
>> allocated.
> But solving the general problem for me isn't necessary. If I could
> just get a ballpark estimate of the max overhead for a file, I would
> be fine. I haven't paid attention to it before, so I don't even have
> an intuitive feel for what it is.
You don't know the max overhead for the file before it is allocated.
You could guess at a max of 3x size + at least three blocks. Since
you can't control this, it seems like the worst case is when copies=3.
-- richard
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss