I decided to run some more tests to try to figure out how
adding/removing snapshots changes the reported space usage.

First I set up a test area (a new zfs file system), created some test
files, and then created snapshots, removing the files one by one.

> mkfile 1m 0
> mkfile 1m 1
> mkfile 1m 2
> mkfile 1m 3
> zfs snapshot u01/foo@0
> rm 0
> zfs snapshot u01/foo@1
> rm 1
> zfs snapshot u01/foo@2
> rm 2
> zfs snapshot u01/foo@3
> rm 3
> zfs list -r u01/foo
        NAME        USED  AVAIL  REFER  MOUNTPOINT
        u01/foo    4.76M  1.13T  55.0K  /u01/foo
        u01/foo@0  1.18M      -  4.56M  -
        u01/foo@1  50.5K      -  3.43M  -
        u01/foo@2  50.5K      -  2.31M  -
        u01/foo@3  50.5K      -  1.18M  -


So the file system says 4M is used, but the snapshots between them
claim only about 1M of it.
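Adding up what zfs list reports makes the gap explicit. A quick sanity
check using the figures from the listing above (in MB):

```python
# Figures copied from the `zfs list -r u01/foo` output above, in MB.
fs_used   = 4.76
fs_refer  = 0.055                           # 55.0K of live data
snap_used = [1.18, 0.0505, 0.0505, 0.0505]  # @0 through @3

accounted = fs_refer + sum(snap_used)
print(f"{accounted:.2f}M accounted for, out of {fs_used}M used")
# -> roughly 1.39M accounted for, out of 4.76M used
```

Over 3M of the file system's "used" is charged to no snapshot at all.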

If I delete @1

> zfs destroy u01/foo@1
> zfs list -r u01/foo  
        NAME        USED  AVAIL  REFER  MOUNTPOINT
        u01/foo    4.71M  1.13T  55.0K  /u01/foo
        u01/foo@0  2.30M      -  4.56M  -
        u01/foo@2  50.5K      -  2.31M  -
        u01/foo@3  50.5K      -  1.18M  -

Now suddenly the @0 snapshot claims to use more space?

If I delete the newest of the snapshots @3

> zfs destroy u01/foo@3
> zfs list -r u01/foo
        NAME        USED  AVAIL  REFER  MOUNTPOINT
        u01/foo    4.66M  1.13T  55.0K  /u01/foo
        u01/foo@0  2.30M      -  4.56M  -
        u01/foo@2  50.5K      -  2.31M  -

No change in the used space claimed by the @0 snapshot!

Now I delete the @2 snapshot

> zfs destroy u01/foo@2
> zfs list -r u01/foo  
        NAME        USED  AVAIL  REFER  MOUNTPOINT
        u01/foo    4.61M  1.13T  55.0K  /u01/foo
        u01/foo@0  4.56M      -  4.56M  -

The @0 snapshot finally claims all the space it's really been holding
all along.
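For what it's worth, one accounting rule reproduces every number above:
a snapshot's "used" counts only the blocks unique to that snapshot,
i.e. the space that destroying just that one snapshot would free. A toy
model in Python (hypothetical; one "block" per 1MB file, ignoring the
~50K of metadata overhead):

```python
# Each snapshot froze the set of (1 MB) file blocks live at the time.
snaps = {
    "@0": {0, 1, 2, 3},  # taken before any rm
    "@1": {1, 2, 3},     # after rm 0
    "@2": {2, 3},        # after rm 1
    "@3": {3},           # after rm 2
}
live = set()             # all four files rm'ed from the live fs

def used(name):
    """Blocks held *only* by this snapshot -- what destroying it frees."""
    others = set().union(live, *(s for n, s in snaps.items() if n != name))
    return len(snaps[name] - others)

print(used("@0"))   # 1 -> matches the 1.18M reported for @0
del snaps["@1"]     # zfs destroy u01/foo@1
print(used("@0"))   # 2 -> matches the jump to 2.30M
del snaps["@3"]     # zfs destroy u01/foo@3
print(used("@0"))   # still 2 -> matches "no change"
del snaps["@2"]     # zfs destroy u01/foo@2
print(used("@0"))   # 4 -> @0 finally shows all 4 MB (4.56M)
```

Under that rule each per-snapshot figure is individually correct, but,
as the listings show, summing them tells you nothing about the total
space the snapshots hold collectively.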

Up until that point, the only way to know how much space the snapshots
were really holding was to subtract the space used as reported by df
from the "used" figure zfs reports for the file system.
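That subtraction, spelled out (the df figure here is an assumption: on
an otherwise-empty file system it should match the 55K the live data
refers to):

```python
fs_used_mb = 4.76   # USED for u01/foo from `zfs list`
df_used_mb = 0.055  # what `df` reports as used on /u01/foo (assumed = refer)

# Everything zfs charges to the dataset beyond the live data must be
# held by snapshots (plus a little metadata overhead).
print(f"held only by snapshots: ~{fs_used_mb - df_used_mb:.1f}M")
```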

Something is not right in the space accounting for snapshots.

---
Adam 

> -----Original Message-----
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of 
> [EMAIL PROTECTED]
> Sent: Friday, November 07, 2008 14:21
> To: [EMAIL PROTECTED]; zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] Disk space usage of zfs snapshots and
> filesystems - my math doesn't add up
> 
> I really think there is something wrong with how space is being
> reported by zfs list in terms of snapshots.
> 
> Stealing from the example earlier where a new file system was
> created, ten 1MB files were created, and then: snap, remove a file,
> snap, remove a file, until they are all gone and you are left with:
>  
> > bash-3.2# zfs list -r root/export/home/carton/t
> > NAME                          USED  AVAIL  REFER  MOUNTPOINT
> > root/export/home/carton/t    10.2M  26.6G    18K  /export/home/carton/t
> > root/export/home/carton/t@0  1.02M      -  10.0M  -
> > root/export/home/carton/t@1    18K      -  9.04M  -
> > root/export/home/carton/t@2    18K      -  8.04M  -
> > root/export/home/carton/t@3    18K      -  7.03M  -
> > root/export/home/carton/t@4    17K      -  6.03M  -
> > root/export/home/carton/t@5    17K      -  5.03M  -
> > root/export/home/carton/t@6    17K      -  4.03M  -
> > root/export/home/carton/t@7    17K      -  3.02M  -
> > root/export/home/carton/t@8    17K      -  2.02M  -
> > root/export/home/carton/t@9    17K      -  1.02M  -
> 
> So the file system itself is now empty of files (18K refer for
> overhead) but still using 10MB, because the snapshots are still
> holding onto all ten 1MB files.
> 
> As I understand snapshots, the oldest one actually holds the file
> after it is deleted, and any newer snapshot just points to what that
> oldest one is holding.
> 
> So because the 0 snapshot was taken first, it knows about all ten
> files, snap 1 only knows about nine, etc.
> 
> The refer numbers all match up correctly as that is how much data
> existed at the time of the snapshot.
> 
> But the used seems wrong.
> 
> The 0 snapshot should be holding onto all ten files, so I would
> expect it to show 10MB "Used" when it's only reporting 1MB used.
> Where is the other 9MB hiding?  It only exists because a snapshot is
> holding it, so that space should be charged to a snapshot.  Since
> snapshots 1-9 should only be pointing at the data held by 0, their
> numbers are correct.
> 
> To take the idea further you can delete snapshots 1-9 and snapshot 0
> will still say it has 1MB "Used", so where again is the other 9MB?
> 
> Adding up the total "used" by the snapshots plus the "refer" of the
> file system *should* equal the "used" of the file system for it all
> to make sense, right?
> 
> Another way to look at it: if you have all ten snapshots and you
> delete 0, I would expect snapshot 1 to change from 18K used
> (overhead) to 9MB used, since it would now be the oldest snapshot and
> official holder of the data, with snapshots 2-9 now pointing at the
> data it is holding.  The first 1MB file deleted would now be gone
> forever.
> 
> Am I missing something, or is the math to account for snapshot space
> just not working right in zfs list/get?
> 
> Adam
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> 