comment below...

Stefan Olsson wrote:
IMPORTANT: This message is private and confidential. If you have received this 
message in error, please notify us and remove it from your system.

please notify your lawyers that this message is now on the internet and
publicly archived forever :-)

Hello,

I have two USB drives connected to my PC with a zpool on each, one
called TANK, the other IOMEGA. After some problems this morning I
managed to get the IOMEGA pool to work but have had less luck with the
TANK pool. When I run "zpool import", where I would expect to see some
state for "TANK", I instead get:
"  pool: IOMEGA
    id: 9922963935057378355
 state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-72
config:

        IOMEGA      FAULTED  corrupted data
          c4t0d0    ONLINE"
---------------
When running zpool status I get this:
"  pool: IOMEGA
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        IOMEGA      ONLINE       0     0     0
          c8t0d0    ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c6t0d0s0  ONLINE       0     0     0
            c6t2d0s0  ONLINE       0     0     0"

In other words, the actual IOMEGA pool appears on the drive c8t0d0
and is marked as OK, but the USB drive at c4t0d0 appears to have a
zpool called IOMEGA as well, although it really contains the
TANK pool!

ZFS maintains a cache of what pools were imported so that at boot time,
it will automatically try to re-import the pool. The file is /etc/zfs/zpool.cache
and you can view its contents by using "zdb -C"

If the current state of affairs does not match the cache, then you can
export the pool, which will clear its entry in the cache.  Then retry the
import.
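To illustrate the sequence (a minimal sketch; the pool names and device
names are taken from the output above, and the commands are the standard
zdb/zpool invocations):

```shell
# Inspect the cached pool configurations ZFS consults at boot
zdb -C

# If the cache entry for IOMEGA is stale, exporting the pool
# clears it from /etc/zfs/zpool.cache ...
zpool export IOMEGA

# ... then rescan the attached devices and retry the import
zpool import          # lists importable pools found on the devices
zpool import TANK     # import the desired pool by name once it appears
```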
-- richard

What really worries me is that ZFS for some reason has started to treat
a drive which belonged to one pool as if it belonged to another
pool. Could this happen with other, non-USB drives in other configuration
scenarios such as mirrors or raidz? I suppose anything can happen on Friday the 13th...
Cheers,

   Stefan Olsson

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
