Looking for help regaining access to
encrypted ZFS file systems that
stopped accepting the encryption key.

I have a file server with a setup
as follows:

Solaris 11 Express 2010.11/snv_151a
8 x 2-TB disks, each divided into
three equal-size partitions, and
three raidz3 pools, each built from
a "slice" of matching partitions
across all eight disks:


 Disk 1  Disk 8  zpools
 +--+    +--+
 |p1| .. |p1| <- slice_0
 +--+    +--+
 |p2| .. |p2| <- slice_1
 +--+    +--+
 |p3| .. |p3| <- slice_2
 +--+    +--+

zpool status shows:

 ...
 NAME          STATE
 slice_0       ONLINE
   raidz3-0    ONLINE
     c7t0d0s0  ONLINE
     c7t1d0s0  ONLINE
     c7t2d0s0  ONLINE
     c7t3d0s0  ONLINE
     c7t4d0s0  ONLINE
     c7t5d0s0  ONLINE
     c7t6d0s0  ONLINE
     c7t7d0s0  ONLINE
 ...
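
For reference, each pool was created
with something like this (exact options
from memory, device names as in the
status output above):

  zpool create slice_0 raidz3 \
      c7t0d0s0 c7t1d0s0 c7t2d0s0 c7t3d0s0 \
      c7t4d0s0 c7t5d0s0 c7t6d0s0 c7t7d0s0

and similarly for slice_1 and slice_2
on the s1 and s2 partitions.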

There are several file systems on each
pool. zfs list shows:

 rpool
 ...
 rpool/export
 rpool/export/home
 rpool/export/home/user1
 ...
 slice_0
 slice_0/base
 slice_0/base/fsys_0_1
 ...
 slice_0/base/fsys_0_last
 slice_1
 slice_1/base
 slice_1/base/fsys_1_1
 ...
 slice_1/base/fsys_1_last
 ...
etc.

The intermediate "base" file systems
are there only to set attributes
to be inherited by all other file
systems in the same pool.

They were created with encryption
enabled, which forces all file systems
created under them to be encrypted as
well.

The keysource for slice_?/base
was set to
  "passphrase,prompt"
while creating the file systems.
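
Roughly, for each pool:

  zfs create -o encryption=on \
      -o keysource=passphrase,prompt \
      slice_0/base

The data file systems were then created
as plain children under "base", so they
inherit encryption and keysource.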

Then I stored the keys (one key per
pool) in files in a subdirectory of
user1's home directory, and set the
keysource for slice_0/base to
  "passphrase,file:///export/home/user1/keys/key_0"
(and similarly for the other two pools).
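
I.e., something like:

  zfs set keysource=passphrase,file:///export/home/user1/keys/key_0 \
      slice_0/base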

So far so good.
Several weeks and several terabytes
of data later, I decided to relocate
the files with the encryption keys
from a subdirectory of user1's home to
a subdirectory of root's home (/root).
I copied the files and set the
slice_0/base keysource to
  "passphrase,file:///root/keys/key_0", etc.

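That is, approximately:

  mkdir /root/keys
  cp /export/home/user1/keys/key_0 /root/keys/key_0
  zfs set keysource=passphrase,file:///root/keys/key_0 \
      slice_0/base

and the matching commands for the
other two pools.
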
That broke it. After doing that, the
base file systems (which contain no
data files) can still be mounted, but
trying to mount any other file system
fails with the message:
"cannot load key for 'slice_?/base/fsys_?_?': incorrect key".

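For example (from memory):

  # zfs mount slice_0/base/fsys_0_1
  cannot load key for 'slice_0/base/fsys_0_1': incorrect key
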
Using "zfs set" I can switch the
keysource back and forth between the
original location, the new one, prompt,
etc. I can change the "canmount"
attribute and so on, but I cannot
actually mount anything.

I tried changing the key files'
permissions to world-readable and to
owner-only. I also tried setting the
keysource locally on each file system,
with no success (other than then not
being able to set it back to inherit
from "base").
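
For what it is worth, a check along the
lines of

  zfs get -r encryption,keysource,keystatus slice_0

shows the keysource values I set, but
the keys for the child file systems
remain unavailable.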

Is there anything else I can try? Most
of the data is either old junk or things
I can rip or download again, but there
are some files I cannot recover from
anywhere else.

Thanks,

--
Roberto Waltman
