I am bringing this up again in the hope that more eyes may be on the list
now than before the holidays.
The zfs man page describes the used column as:
used
The amount of space consumed by this dataset and all its
descendants. This is the value that is checked against
this dataset's quota and reservation. The space used
does not include this dataset's reservation, but does
take into account the reservations of any descendant
datasets. The amount of space that a dataset consumes
from its parent, as well as the amount of space that
will be freed if this dataset is recursively destroyed,
is the greater of its space used and its reservation.
When snapshots (see the "Snapshots" section) are
created, their space is initially shared between the
snapshot and the file system, and possibly with previous
snapshots. As the file system changes, space that was
previously shared becomes unique to the snapshot, and
counted in the snapshot's space used. Additionally,
deleting snapshots can increase the amount of space
unique to (and used by) other snapshots.
The amount of space used, available, or referenced does
not take into account pending changes. Pending changes
are generally accounted for within a few seconds.
Committing a change to a disk using fsync(3c) or O_SYNC
does not necessarily guarantee that the space usage
information is updated immediately.
That is not the behavior I am seeing. If I have 100 snapshots of a
filesystem with relatively low delta churn and then delete half of the
data, I would expect to see that space show up in the USED column for one
of the snapshots (in my test cases I am deleting 50 GB out of a 100 GB
filesystem and seeing no usage increase on any of the snapshots). I am
planning on keeping many, many snapshots on our filesystems and
programmatically destroying old snapshots as space is needed -- when zfs
list does not attach delta usage to snapshots, that becomes impossible
(short of blindly deleting snapshots, waiting an unspecified period until
zfs list is updated, and repeating). Is this really the expected
behavior, am I missing some more specific usage data, or is this some
sort of bug?
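
For concreteness, my test sequence looks roughly like this (dataset and
snapshot names are placeholders):

    # filesystem a/b already holds ~100 GB of data; take 100 snapshots
    i=1
    while [ $i -le 100 ]; do
        zfs snapshot a/b@snap$i
        i=$((i + 1))
    done

    # delete roughly half the data from the live filesystem
    rm -rf /a/b/old-data

    # I would expect ~50 GB to surface in USED on one of the snapshots,
    # but every snapshot still shows near-zero USED
    zfs list -t snapshot -o name,used,referenced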
Another thing that is not really specified in the documentation is where
this delta space usage should be listed. What makes sense to me would be
for the oldest snapshot that owns the blocks to take the usage hit for
them, with the hit moving up to the next snapshot as the oldest one is
deleted.
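
To make the plan concrete, here is the sort of pruning loop I have in
mind. It is only a sketch, and it assumes a snapshot's USED reflects the
space destroying it would free (which, per the above, does not seem to
hold today); the 10 GB threshold is an arbitrary example:

    #!/bin/sh
    # Destroy oldest snapshots of a/b until the pool has at least
    # MIN_FREE bytes available again.
    MIN_FREE=10737418240   # 10 GB -- an arbitrary example threshold

    # exact available space on a/b, in bytes (-H: no header, -p: parsable)
    avail() {
        zfs get -Hp -o value available a/b
    }

    # zfs list -s creation sorts snapshots oldest-first
    for snap in $(zfs list -H -r -t snapshot -o name -s creation a/b); do
        [ "$(avail)" -ge "$MIN_FREE" ] && break
        zfs destroy "$snap"
    done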
> WSfc> Hola folks,
>
> WSfc> I am new to the list, please redirect me if I am posting to the
> WSfc> wrong location. I am starting to use ZFS in production (Solaris
> WSfc> x86 10U3 -- 11/06) and I seem to be seeing unexpected behavior
> WSfc> for zfs list and snapshots. I create a filesystem (let's call it
> WSfc> a/b, where a is the pool). Now, if I store 100 GB of files on
> WSfc> a/b, then snapshot a/b@snap, then delete about 50 GB of files
> WSfc> from a/b -- I expect to see ~50 GB "USED" on both a/b and
> WSfc> a/b@snap in zfs list output -- instead I only seem to see the
> WSfc> delta block adds as "USED" (~20 MB) on a/b@snap. Is this correct
> WSfc> behavior? How do you track the total delta blocks the snap is
> WSfc> using vs. other snaps and the live fs?
>
> This is almost[1] OK. When you delete a file from a file system, you
> definitely expect to see the file system's allocated space reduced
> by about the same amount.
>
> [1] The problem is that the space reported as consumed by a snapshot
> isn't entirely accurate: once you delete a snapshot you'll actually
> get back more space than zfs list reported as used for that snapshot.
> It's not a big deal, but it still makes it harder to determine exactly
> how much space is allocated to snapshots for a given file system.
>
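
Following up on [1] above: absent better accounting, the only way I can
see to learn exactly how much space a snapshot is holding is to destroy
it and measure the difference, along these lines (the sleep is a guess at
the accounting delay the man page mentions):

    # measure what destroying a snapshot actually frees
    before=$(zfs get -Hp -o value used a)
    zfs destroy a/b@snap1
    sleep 5    # give the space accounting a few seconds to settle
    after=$(zfs get -Hp -o value used a)
    echo "freed: $((before - after)) bytes"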
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss