On Wed, 20 May 2009, Darren J Moffat wrote:

> How Sun Services reports the status of escalations to customers under
> contract is not a discussion for a public alias like this so I won't
> comment on this.

Heh, but maybe it should be a discussion for some internal forum; more
information = less anxious customers :)...

> If the engineers that are working on this wish to comment I'm sure they
> will, but I know it really isn't that simple.

I hope they do, as an information vacuum tends to result in false
assumptions.

> I do; because I've done it to my own personal data pool.  However it is
> not a procedure I'm willing to tell anyone how to do - so please don't
[...]
> implementing this for a generic solution in a way that is safe to do and
> that works for all types of pool and slog config (mine was a very simple
> configuration: mirror + slog).

Hmm, well, it just seems horribly wrong for the failure of a slog to result
in complete data loss, *particularly* when all of the data is perfectly
valid and just sitting there beyond your reach.

One suggestion I received off-list was to dump your virgin slog right
after creation (I did a dd if=/dev/zero of=/dev/dsk/<slogtobe> to zero
the device, a zpool add <pool> log <slog>, then a dd if=/dev/<slog>
of=slog.dd count=<blocks until everything is zeros>) so you could
restore it later if you lost your slog. I tested this, and sure enough,
writing the slog dump back onto the corrupted device made the pool happy
again. This only seemed to work if I restored it to the exact same
device it was on before; restoring it to a different device didn't work
(I thought zfs was device name agnostic?).
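In case it helps anyone else experiment, here's roughly the sequence I
used -- a sketch only, the pool name "tank", device "c1t2d0s0", and the
dd count are just examples; the count needs to cover however much of the
freshly labeled slog is non-zero on your particular device:

  # zero out the device that will become the slog
  dd if=/dev/zero of=/dev/dsk/c1t2d0s0 bs=1024k

  # add it to the pool as a separate intent log device
  zpool add tank log c1t2d0s0

  # immediately dump the start of the virgin slog; the count here is a
  # guess -- keep going until everything you read back is zeros
  dd if=/dev/dsk/c1t2d0s0 of=/root/tank-slog.dd bs=512 count=2048

  # later, if the slog dies, write the dump back to the *same* device
  # and then try the pool again
  dd if=/root/tank-slog.dd of=/dev/dsk/c1t2d0s0 bs=512

As noted above, in my testing this only worked when the dump went back
onto the original device, not a replacement.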


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  hen...@csupomona.edu
California State Polytechnic University  |  Pomona CA 91768
