Darren J Moffat wrote:
> Darren Reed wrote:
>> Hmmm, well, I suppose the same problem might apply to
>> encrypting data too...so maybe what I need is a zfs command
>> that will walk the filesystem's data tree, read in data and
>> write it back out according to the current data policy.
>
> And if that file system is multiple terabytes would you be okay with
> there being a read and write lock while this runs?

I thought about this and my answer might surprise you:
Yes - so long as it is interruptible.
Granted it could take a very long time to complete, but won't it be
quicker than doing anything else to achieve the same result?
i.e. how long would it take to back up those multiple terabytes to tape
and then restore them?
Or even to transfer them to a new filesystem?
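
For what it's worth, the closest userland approximation of that "walk and
rewrite" command I can think of today is just copying every file over
itself so the data gets written out again under the current policy, roughly
like the sketch below (paths are made up, and it ignores open files, hard
links, ACLs/extended attributes and the temporary extra space, so it is only
a rough stand-in for what an in-kernel pass could do properly):

    # rewrite every file so the newly written blocks pick up the
    # filesystem's current compression/encryption policy
    find /tank/data -type f | while IFS= read -r f; do
        cp -p "$f" "$f.rewrite.$$" && mv "$f.rewrite.$$" "$f"
    done
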
But...couldn't you also snapshot/clone a (large) filesystem, turn on
encryption, start encrypting those 10TB of data (the read/write lock
won't impact people using the parent) and then use a combination of zfs
send/receive and swapping snapshots/clones around with the parent to
achieve an in-place encryption without locking any "live" filesystems?
I suppose this sounds complex, but I don't think any live migration to
using encryption (while aiming for zero downtime) will be easy.
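
To make that concrete, here's roughly what I have in mind (dataset names
are made up, the encrypted destination is hand-waved because the zfs-crypto
syntax isn't settled, and I'm assuming a copy received into an encrypted
target actually ends up encrypted, which is a guess on my part):

    zfs snapshot tank/data@mig1
    # bulk copy into the new, encrypted filesystem; users keep using tank/data
    zfs send tank/data@mig1 | zfs receive tank/data.enc
    # later: catch up with whatever changed in the meantime
    zfs snapshot tank/data@mig2
    zfs send -i tank/data@mig1 tank/data@mig2 | zfs receive tank/data.enc
    # brief outage at the end: one last incremental like the above,
    # then swap the names so users land on the encrypted copy
    zfs rename tank/data tank/data.old
    zfs rename tank/data.enc tank/data
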
>> This may not work so well for encrypted data if encryption
>> is disabled, but I'm not sure that is such a good idea.
>
> The current plan is that encryption must be turned on when the file
> system is created and can't be turned on later. This means that the
> zfs-crypto work depends on the RFE to set properties at file system
> creation time.
>
> You also won't be able to turn crypto off for a given filesystem later
> (because you won't know when all the data is back in the clear again
> and you can safely destroy the key).

Ok, I can understand that, both from a security perspective and a "let's
not let them shoot themselves in the foot" one.
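
For my own use that just means planning ahead at create time; I'd guess it
ends up looking something like this once the create-time property RFE is
done (the property name and value here are pure guesses on my part):

    zfs create -o encryption=on tank/secure
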
Darren