[EMAIL PROTECTED] wrote on 12/22/2006 04:50:25 AM:
> Hello Wade,
>
> Thursday, December 21, 2006, 10:15:56 PM, you wrote:
>
> WSfc> Hola folks,
>
> WSfc> I am new to the list, please redirect me if I am posting to the
> WSfc> wrong location. I am starting to use ZFS in production (Solaris
> WSfc> x86 10U3 -- 11/06) and I seem to be seeing unexpected behavior
> WSfc> for zfs list and snapshots. I create a filesystem (let's call it
> WSfc> a/b, where a is the pool). Now, if I store 100 gb of files on
> WSfc> a/b, then snapshot a/[EMAIL PROTECTED], then delete about 50 gb of files
> WSfc> from a/b -- I expect to see ~50 gb "USED" on both a/b and
> WSfc> a/[EMAIL PROTECTED] in zfs list output. Instead I only seem to see the
> WSfc> delta block adds as "USED" (~20mb) on a/[EMAIL PROTECTED]. Is this correct
> WSfc> behavior? How do you track the total delta blocks the snap is
> WSfc> using vs. other snaps and the live fs?
>
> This is almost[1] ok. When you delete a file from a file system, you
> should indeed see the file system's allocated space reduced by about
> the same size.
>
> [1] The problem is that the space reported as consumed by a snapshot
> isn't entirely correct: once you destroy the snapshot, you actually
> get back more space than zfs list reported as used for it. It's not a
> big deal, but it still makes it harder to determine exactly how much
> space is allocated to snapshots for a given file system.
>
Well, this is a problem for me. In the case I showed above, the snapshot
USED in zfs list is not just a little off about how much space the
snapshot is actually holding for delta blocks -- it is 50gb off out of a
52.002gb delta. Now, this is a test case where I actually know the
delta. When this goes into production and I need to snap 6+ times a day
on dynamic filesystems, how am I to programmatically determine how many
snaps need to "fall off" over time, keeping the maximum number of
snapshots while retaining enough free pool space for new live updates?
I find it hard to believe that, with all of the magic of zfs (it is a
truly great leap in filesystems), I am expected to blindly remove tail
snaps until I free enough space on the pool.
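
For lack of anything better, here is a sketch of that blind loop. The
pool/filesystem names (a, a/b) and the 3 GiB free-space floor are made
up for illustration, and -Hp on zpool list (scripted mode with exact
byte counts) is a newer option -- older releases print human-readable
sizes you would have to convert yourself:

    #!/bin/sh
    # Blind tail-expiry: destroy the oldest snapshot of $FS until the
    # pool reports at least $NEED bytes free. Names and the threshold
    # are examples only.
    POOL=a
    FS=a/b
    NEED=3221225472   # 3 GiB, arbitrary

    while [ "$(zpool list -Hp -o free $POOL)" -lt "$NEED" ]; do
        # oldest snapshot first: sort by creation time, take the head
        OLDEST=$(zfs list -H -r -t snapshot -o name -s creation $FS | head -1)
        [ -z "$OLDEST" ] && break   # nothing left to expire
        zfs destroy "$OLDEST"
    done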
I have to assume there is a more valid metric somewhere for how much of
the pool is reserved for a snapshot in time, or zfs list is reporting
buggy data...
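
And for anyone who digs this thread out of the archives later: newer
ZFS releases (not 10U3, as far as I can tell) grew exactly such a
metric -- a usedbysnapshots property on each dataset, plus a dry-run
mode on zfs destroy that prints how much space a destroy would
reclaim. The snapshot names below are placeholders:

    # total space that would be freed by destroying ALL snapshots of a/b
    zfs get -H -o value usedbysnapshots a/b

    # dry run over a range of snapshots: prints "would reclaim ..."
    # without destroying anything (snap1/snap5 are placeholders)
    zfs destroy -nv a/b@snap1%snap5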