I recently lost all of the data on my single-parity raidz array.  Each drive 
was encrypted, with the ZFS pool built on top of the encrypted volumes.

I am not exactly sure what happened.  The files were there and accessible, and 
then they were all gone.  The server apparently crashed and rebooted, and 
everything was lost.  After the crash I remounted the encrypted drives, and 
zpool was still reporting that roughly 3TB of the 7TB array were in use, but I 
could not see any of the files through the pool's mount point.  I unmounted 
the zpool and remounted it, and suddenly zpool was reporting 0TB in use.  I 
did not remap the virtual device.  The only thing of note that I saw was that 
the name of the storage pool had changed: originally it was "Movies", and then 
it became "Movita".  I am guessing that the file system somehow became 
corrupted.  (zpool status did not report any errors.)

So, my questions are these... 

Is there any way to undelete data from a lost raidz array?  If I build a new 
virtual device on top of the old one and the drive topology remains the same, 
can the drives be scanned for files from the old array?
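In case it helps anyone answer, this is roughly the kind of thing I mean; a 
sketch only, and the device path is made up for illustration:

```shell
# Scan attached devices for importable pools (read-only, modifies nothing)
zpool import

# Also list pools that were explicitly destroyed, in case that happened
zpool import -D

# Dump the on-disk ZFS labels from one of the decrypted volumes to see
# whether the old pool's name and vdev configuration are still intact
# (c1t0d0s0 is a hypothetical device name)
zdb -l /dev/rdsk/c1t0d0s0
```

My understanding is that if the labels still show the old pool and vdev 
layout, the data may still be reachable, whereas writing a new pool over the 
same devices would make recovery much harder.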

Also, is there any way to repair a corrupted storage pool?  Is it possible to 
back up the file table, or whatever partition index ZFS maintains?
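For example, something along these lines, if any of it is actually supported 
on my build (again just a sketch, and the paths are assumptions):

```shell
# Import the pool read-only so nothing further is written to the disks
# (I believe read-only import only exists on newer ZFS builds)
zpool import -o readonly=on Movies

# Examine an exported/unimported pool's metadata without importing it
zdb -e Movies

# The cached pool configuration lives here on Solaris; copying it off
# is one crude way to preserve the pool's vdev config
cp /etc/zfs/zpool.cache /backup/zpool.cache
```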


I imagine that you are all going to suggest that I scrub the array, but that 
is not an option at this point.  I had a backup of all of the data that was 
lost, as I am in the middle of moving between file servers, so at a certain 
point I gave up and decided to start fresh.  This doesn't give me a warm fuzzy 
feeling about zfs, though.

Thanks,
-Mike
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
