On 12/18/06, Darren J Moffat <[EMAIL PROTECTED]> wrote:

[ This is for discussion; it doesn't mean I'm actively working on this
   functionality at this time, or that I will in the future. ]

When we get crypto support, one way to do "secure delete" is to destroy
the key.  That is usually a much simpler and faster task than erasing
and overwriting all the data and metadata blocks that have been used,
and it also makes time/policy-based "deletion" possible.

However for some use cases this will not be deemed sufficient [3]. It
also doesn't cover the case where crypto wasn't always used on that
physical storage.

There are userland applications like GNU shred that attempt to cover
this for a single file by opening the file and overwriting its contents
before finally calling unlink(2) on it.  However, shred and anything
like it that works outside of ZFS will not work with ZFS because of
copy-on-write (COW).
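
For illustration, this is roughly how shred is used on a traditional
overwrite-in-place filesystem; on ZFS the rewritten data simply lands
in newly allocated blocks, so the original blocks survive untouched
(the path below is just an example):

    # three random passes, a final zero pass, then unlink the file
    shred -n 3 -z -u /export/home/alice/secret.dat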

I'm going to use the term "bleach" here because ZFS already uses
"scrub" (the term some other people use for this), and because "bleach"
doesn't overload any of the specific terms used in this area.

I believe that ZFS should provide a method of bleaching a disk or part
of it that works without crypto having ever been involved.

Currently OpenSolaris only provides what format(1M) gives with the
analyze/purge command [1].
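
For reference, that is driven through format's interactive menus along
these lines (prompts abbreviated; it operates on the entire disk):

    # format
    ... select the disk ...
    format> analyze
    analyze> purge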

This doesn't help in some cases because it requires that it be run on
the whole disk.  It also does not implement the current recommendation
from NIST as documented in NIST-800-88 [2].

I think with ZFS we need to provide a method of bleaching that is
compatible with COW and doesn't require that it be done on the whole
disk, because that is very time consuming.  Ideally it should also be
compatible with hot sparing.

I think we need 5 distinct places to set the policy (a hypothetical
CLI sketch follows the list):

1) On file delete
        This would be a per dataset policy.
        The bleaching would happen in a new transaction group
        created by the one that did the normal deletion, and would
        run only if the original one completed.  It needs to be done in
        such a way that the file blocks aren't on the free list until
        after the bleaching txg is completed.

2) On ZFS data set destroy
        A per pool policy and possibly per dataset with inheritance.
        As above for the txg and the free blocks.

3) On demand for a pool without destroying active data.
        This is similar to today's scrub, it is a background task
        that we start off periodically and view the status of it
        via zpool status.

4) On pool destroy (purposely breaks zpool import -D)

5) On hotsparing, bleach the outgoing disk.
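
To make those five cases concrete, here is one possible administrative
syntax.  This is purely hypothetical; none of these properties or
subcommands exist today, and the names are only placeholders:

    # cases 1/2: per-dataset / inheritable policy via a new property
    zfs set bleach=zero tank/secure
    zfs set bleach=nist800-88 tank

    # case 3: on-demand background bleach of free space, like scrub
    zpool bleach tank
    zpool status tank        # reports bleach progress

    # case 4: bleach all blocks as part of destroying the pool
    zpool destroy -B tank

    # case 5 would be implicit: bleach the outgoing disk after the
    # hot spare has been resilvered in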

When doing all of these (but particularly 2 through 5) we need a way to
tell the admin that this has been completed.  I think zpool status is
probably sufficient initially, but we might want to do more.

For case 4, zpool destroy would not return until all the data has been
bleached; this might take some time, so we should provide progress
reporting.

Option 3 is needed for the case where we have no policy, i.e. 1 and 2
aren't in effect, and we know that we will soon need to do a disk
replacement.

With option 5 we would spare in the new disk and start doing a full
media bleaching on the outgoing disk.  In this case zpool status would
show that a bleaching is in progress on that disk and that the admin
should wait until it completes before physical removal.
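
As an illustration only (this output format is invented, not something
zpool produces today), the status report for case 5 might look like:

    # zpool status tank
      pool: tank
     state: ONLINE
    bleach: bleach in progress on c2t3d0 (outgoing disk),
            41.52% done, 1h08m to go
    config: ...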

Instead of just implementing the current NIST algorithm it would be much
more helpful if we made this extensible, though not necessarily fully
pluggable without modifying the ZFS source.  The CMRR project at UCSD
has a good paper on the security/speed tradeoffs [4].  Initially we
would provide at least the following bleaching methods (a rough
illustration of the passes follows the list):

1) N passes of zeros, default being 1.
2) Same algorithm that format(1M) uses today.
3) NIST-800-88 compliant method.
4) others to be discussed.
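
To show what these passes amount to at the media level, here is a
manual approximation using dd on a scratch slice.  This is only meant
to illustrate the write patterns; ZFS would instead do the equivalent
per block, inside its own transaction groups (the device name is just
an example, and running this destroys the data on that slice):

    # one pass of zeros over the slice (method 1 with N=1)
    dd if=/dev/zero of=/dev/rdsk/c1t0d0s0 bs=1024k

    # a random-data pass, as used by multi-pass schemes
    dd if=/dev/urandom of=/dev/rdsk/c1t0d0s0 bs=1024k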

One of the tools from the CMRR project specifically takes advantage of
features of ATA drives.  I don't think we should do that in ZFS because
we could be dealing with a pool created from a mix of different drive
types, or we could be dealing with things like iSCSI targets which are
really ZVOLs "on the other side".


Thoughts ?


One idea that would seem easy, for people who aren't totally paranoid,
would be to move the group of blocks that contained the deleted file to
the top of the list of available blocks to be used for new writes, so
that no matter what else happens, within a few minutes the data will be
overwritten at least once.  Perhaps this should happen at all levels of
secure delete, so that even after the blocks have been overwritten six
times with random data, they would be the next to receive a new file.
This might even help performance, since the hard disk heads should
still be near the blocks of overwritten data.

Of course the converse might also be nice: if I'm a home user and have
been known to make errors, it might be nice to have recently deleted
blocks not written over for a while.  Someone would still need to write
a file rescue utility to benefit from this, but it could be a tunable
option, either per pool or per file system.

James Dickens
uadmin.blogspot.com




References:

[1] http://www.sun.com/software/solaris/trustedsolaris/ts_tech_faq/faqs/purge.xml

[2] http://csrc.nist.gov/publications/nistpubs/800-88/NISTSP800-88_rev1.pdf

[3] 09-11-06 update to [2] on page 7.

[4] http://cmrr.ucsd.edu/hughes/subpgset.htm


--
Darren J Moffat