In a thread elsewhere, trying to analyse why the zfs auto-snapshot
cleanup code was cleaning up more aggressively than expected, I
discovered some interesting properties of a zvol. 

http://mail.opensolaris.org/pipermail/zfs-auto-snapshot/2010-January/000232.html

The zvol is not thin-provisioned. The entire volume has been written
to (it was dd'd off a physical disk), and:

 volsize = refreservation
 referenced = usedbydataset = (volsize + a little overhead)

This is as expected.  What is not expected is that:

 usedbyrefreservation = refreservation

I would expect this to be 0, since all the reserved space has already
been allocated.  As a result, used is more than twice the size of the
volume (plus a few small snapshots as well).
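
For reference, the breakdown above comes from something like the
following (tank/vm0 is just a stand-in name for the real zvol):

  # stand-in dataset name for the real zvol
  zfs get volsize,refreservation,referenced,usedbydataset,usedbyrefreservation tank/vm0
  # 'used' is the sum of the usedby* columns:
  #   used = usedbysnapshots + usedbydataset + usedbyrefreservation + usedbychildren
  zfs list -o space tank/vm0

With usedbyrefreservation stuck at the full refreservation, that sum
comes out at roughly 2 x volsize plus snapshots, which matches what
I'm seeing.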

I think others may have seen similar problems; it may be the root
cause behind several other complaints that time-slider-cleanup deleted
snapshots to free up space when the pool still had plenty free.

A quick follow-up test shows that usedbyrefreservation behaves as
expected for a newly created test zvol.

http://mail.opensolaris.org/pipermail/zfs-auto-snapshot/2010-January/000233.html
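
For anyone wanting to reproduce the control case, it was roughly along
these lines (the size and names here are only illustrative; the actual
test is in the link above):

  # create a normal (non-sparse) test zvol; refreservation is set automatically
  zfs create -V 1g rpool/testvol
  # fill the whole volume so every block is allocated
  dd if=/dev/urandom of=/dev/zvol/rdsk/rpool/testvol bs=1024k count=1024
  # usedbyrefreservation should now have dropped to (near) 0
  zfs get volsize,refreservation,referenced,usedbyrefreservation,used rpool/testvol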

So it may be a problem picked up somewhere along the upgrade path
through many zpool version upgrades.  The pool, and the zvol, would
first have been created on b111 or shortly after.  In that time it has
been used with both xvm kernels and native kernels running VirtualBox.

Who can help me figure out what's going on with the older zvol?  Is
there any useful zdb info I can dump out?  I could "fix" it by copying
and replacing the zvol, gaining compression and dedup in the process,
but I don't want to destroy potentially useful debug info before doing so.
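
If it helps, I can run something like the following against the
suspect zvol and post the output (dataset name again a stand-in; I'm
guessing at which parts of zdb would actually be useful here):

  # object-level dump of the zvol's dataset, including space accounting
  zdb -dddd tank/vm0
  # pool-wide block and space statistics
  zdb -bb tank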

I'll check later whether the send|recv snapshots of this zvol on my
backup server show similar problems, but I doubt they will.
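
When I do, a quick property comparison on the received copy should be
enough to tell (names are placeholders again):

  # on the backup server, check the received zvol's accounting
  zfs get volsize,refreservation,usedbyrefreservation,used backup/vm0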

--
Dan.
