> I think I'll try booting from a b134 Live CD and see
> if that will let me fix things.
Sadly it appears not - at least not straight away.
Running "zpool import" now gives:

   pool: storage2
     id: 14701046672203578408
  state: FAULTED
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
	The pool may be active on another system, but can be imported using
	the '-f' flag.
    see: http://www.sun.com/msg/ZFS-8000-EY
 config:

	storage2           FAULTED  corrupted data
	  raidz1-0         FAULTED  corrupted data
	    c6t4d2         ONLINE
	    c6t4d3         ONLINE
	    c7t4d2         ONLINE
	    c7t4d3         ONLINE
	  raidz1-1         FAULTED  corrupted data
	    c7t4d0         ONLINE
	    replacing-1    UNAVAIL  insufficient replicas
	      c6t4d0       FAULTED  corrupted data
	      c9t4d4       UNAVAIL  cannot open
	    c7t4d1         ONLINE
	    c6t4d1         ONLINE
If I do "zpool import -f storage2" it complains about devices being faulted and
suggests destroying the pool.
If I do "zpool clear storage2" or "zpool clear storage2 c9t4d4", these say that
storage2 does not exist.
If I do "zpool import -nF storage2" this says that the pool was last run on
another system and prompts for "-f".
If I do "zpool import -fnF storage2", this appears to quit silently.
I don't really understand why the installed system is so specific about the
problem being with the intent log (and suggests it just needs clearing), while
booting from the b134 CD doesn't pick up on that - unless it's being masked by
the hostid mismatch error. Because of that, I'm thinking I should change the
hostid when booted from the CD to match the one on the previously installed
system, to see if that helps - unless that's likely to confuse it further...?
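For what it's worth, the hostid the Live CD environment is currently using can
be checked with the standard hostid utility, and zdb can dump a device's ZFS
label, which records the hostid of the last system to import the pool. This is
only a sketch for comparing the two values; the device path is just one of the
disks from the config above, and the s0 slice is an assumption about the layout:

```shell
# Print this environment's hostid (hex) so it can be compared against the
# hostid recorded in the pool's on-disk label.
hostid

# The label on one of the pool's devices can be dumped with zdb, e.g.
# (device path and slice are assumptions for this setup):
#   zdb -l /dev/dsk/c6t4d2s0 | grep -i hostid
```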
--
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss