Heya,

> SL> 1) Doing a zpool destroy on the volume
> SL> 2) Doing a zpool import -D on the volume
> SL> It would appear to me that primarily what has occurred is one or all of
> SL> the metadata stores ZFS has created have become corrupt? Will a zpool
> SL> import -D ignore metadata and rebuild using some magic foo?
> It won't help. zpool import -D doesn't do anything special compared to
> a standard import - it just ignores the flag indicating the pool was destroyed.

I suspected this but thought it was worth putting out there. The other
thought I had was to do a zpool destroy followed by a zpool create with
half of the segments (i.e. just one of the arrays) in the same order.
Given that these are basically a concat of mirrors, I wondered if that
would leave the data at least partially accessible.

I suspect, however, that the pool metadata would not recognise any of the
files on the unit (i.e. it would simply treat it as 'null' data ready to
be overwritten)? I wonder whether there's an equivalent of ext3's 'use
alternate superblock'? I do assume, though, that if the pool is already
in the FAULTED state, ZFS has checked and concluded that all redundant
copies of the metadata are corrupt?
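For what it's worth, the nearest ZFS analogue to an alternate superblock is
the redundant vdev label: ZFS keeps four copies per device (two at the front,
two at the back), each carrying a ring of recent uberblocks. zdb can dump
them so you can see whether any labels survive. A minimal sketch - the device
path is a placeholder for one of your mirror halves:

```shell
# Dump all four redundant vdev labels from one device; each label
# includes the pool config and an array of recent uberblocks.
# /dev/dsk/c1t0d0s0 is a placeholder - substitute your actual device.
zdb -l /dev/dsk/c1t0d0s0
```

If zdb prints a sane config and uberblocks from some of the labels, the
on-disk state may not be as dead as the FAULTED status suggests.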

Is it not at all possible to search for all defined files and regenerate
the pool metadata? Utopian, perhaps, but I'm still completely baffled as
to how on earth ZFS managed to sync corruption to our mirror. :-/

Realistically, I wouldn't be too concerned if I could at least 'mark'
all parts of one of the arrays as clean, to the point where I could
perform an import and sync as much data as possible off the array.
Possible?
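One avenue worth checking, assuming your build is recent enough: later ZFS
versions grew recovery options on zpool import that roll the pool back to an
older, intact uberblock, and a read-only import property to avoid writing
anything while you copy data off. A hedged sketch - 'tank' is a placeholder
pool name, and both -F and readonly=on only exist in newer builds:

```shell
# Dry run: report whether discarding the newest transactions would
# make the pool importable, without changing anything on disk.
zpool import -f -nF tank

# If that looks promising, import read-only while salvaging data,
# rewinding to the last consistent transaction group.
zpool import -f -F -o readonly=on tank
```

Read-only matters here: it stops ZFS from syncing anything further to the
mirror while you rescue what you can.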

Cheers,

Stuart
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
