Jeff Bacon wrote:
> I have a bunch of sol10U8 boxes with ZFS pools, mostly raidz2 8-disk
> stripes. They're all Supermicro-based with retail LSI cards.
> 
> I've noticed a tendency for things to go a little bonkers during the
> weekly scrub (they all scrub over the weekend), and that's when I'll
> lose a disk here and there. OK, fine, that's sort of the point, and
> they're SATA drives so things happen. 
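
(Scheduling aside: Solaris 10 has no built-in scrub scheduler, so cron is
the usual mechanism. A minimal sketch, assuming a pool named srv and a
Saturday 02:00 start; adjust to taste:

    # root crontab entry: weekly scrub, Saturday at 02:00
    0 2 * * 6 /usr/sbin/zpool scrub srv

zpool scrub returns immediately; the scrub itself runs in the background.)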
> 
> I've never lost a pool though, until now. This is Not Fun. 
> 
>> ::status
> debugging crash dump vmcore.0 (64-bit) from ny-fs4
> operating system: 5.10 Generic_142901-10 (i86pc)
> panic message:
> BAD TRAP: type=e (#pf Page fault) rp=fffffe80007cb850 addr=28 occurred
> in module "zfs" due to a NULL pointer dereference
> dump content: kernel pages only
>> $C
> fffffe80007cb960 vdev_is_dead+2()
> fffffe80007cb9a0 vdev_mirror_child_select+0x65()
> fffffe80007cba00 vdev_mirror_io_start+0x44()
> fffffe80007cba30 zio_vdev_io_start+0x159()
> fffffe80007cba60 zio_execute+0x6f()
> fffffe80007cba90 zio_wait+0x2d()
> fffffe80007cbb40 arc_read_nolock+0x668()
> fffffe80007cbbd0 dmu_objset_open_impl+0xcf()
> fffffe80007cbc20 dsl_pool_open+0x4e()
> fffffe80007cbcc0 spa_load+0x307()
> fffffe80007cbd00 spa_open_common+0xf7()
> fffffe80007cbd10 spa_open+0xb()
> fffffe80007cbd30 pool_status_check+0x19()
> fffffe80007cbd80 zfsdev_ioctl+0x1b1()
> fffffe80007cbd90 cdev_ioctl+0x1d()
> fffffe80007cbdb0 spec_ioctl+0x50()
> fffffe80007cbde0 fop_ioctl+0x25()
> fffffe80007cbec0 ioctl+0xac()
> fffffe80007cbf10 _sys_sysenter_post_swapgs+0x14b()
> 
>   pool: srv
>     id: 9515618289022845993
>  state: UNAVAIL
> status: One or more devices are missing from the system.
> action: The pool cannot be imported. Attach the missing
>         devices and try again.
>    see: http://www.sun.com/msg/ZFS-8000-6X
> config:
> 
>         srv                        UNAVAIL  missing device
>           raidz2                   ONLINE
>             c2t5000C5001F2CCE1Fd0  ONLINE
>             c2t5000C5001F34F5FAd0  ONLINE
>             c2t5000C5001F48D399d0  ONLINE
>             c2t5000C5001F485EC3d0  ONLINE
>             c2t5000C5001F492E42d0  ONLINE
>             c2t5000C5001F48549Bd0  ONLINE
>             c2t5000C5001F370919d0  ONLINE
>             c2t5000C5001F484245d0  ONLINE
>           raidz2                   ONLINE
>             c2t50000F000B5C8187d0  ONLINE
>             c2t50000F000B5C8157d0  ONLINE
>             c2t50000F000B5C9101d0  ONLINE
>             c2t50000F000B5C8167d0  ONLINE
>             c2t50000F000B5C9120d0  ONLINE
>             c2t50000F000B5C9151d0  ONLINE
>             c2t50000F000B5C9170d0  ONLINE
>             c2t50000F000B5C9180d0  ONLINE
>           raidz2                   ONLINE
>             c2t5000C50010A88E76d0  ONLINE
>             c2t5000C5000DCD308Cd0  ONLINE
>             c2t5000C5001F1F456Dd0  ONLINE
>             c2t5000C50010920E06d0  ONLINE
>             c2t5000C5001F20C81Fd0  ONLINE
>             c2t5000C5001F3C7735d0  ONLINE
>             c2t5000C500113BC008d0  ONLINE
>             c2t5000C50014CD416Ad0  ONLINE
> 
>         Additional devices are known to be part of this pool, though
> their
>         exact configuration cannot be determined.
> 
> 
> All of this would be ok... except THOSE ARE THE ONLY DEVICES THAT WERE
> PART OF THE POOL. How can it be missing a device that didn't exist? 
> 
> A "zpool import -fF" results in the above kernel panic. This also
> creates /etc/zfs/zpool.cache.tmp, which then results in the pool being
> imported, which leads to a continuous reboot/panic cycle. 
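
One way to break that loop (a sketch, assuming standard Solaris paths and
your pool name srv): boot in a way that skips the cache file, move it
aside, and then probe recovery with a dry run before committing:

    # from GRUB, append: -m milestone=none    (or use failsafe boot)
    mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad
    rm -f /etc/zfs/zpool.cache.tmp
    svcadm milestone all      # resume normal boot

    # -n with -F only reports whether discarding the last few
    # transactions would make the pool importable; it modifies nothing
    zpool import -nF srv

No promises the dry run itself won't trip the same panic, but at least it
leaves no cache file behind to re-arm the boot loop.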
> 
> Obviously I can't use b134 to import the pool without logs, since that
> would imply upgrading the pool first, which is hard to do if it's not
> imported.
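
(A side note on that chicken-and-egg: a plain import on b134 never bumps
the on-disk version by itself; zpool upgrade is a separate, explicit
step. What would need the upgrade is removing the log device, since log
device removal only arrived with pool version 19. So trying, from a b134
live environment, something like

    zpool import -f -R /mnt srv

wouldn't strand the pool on a newer version even if it succeeds, and the
-R altroot also keeps the import out of the cache file.)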
> 
> My zdb skills are lacking - zdb -l only gets you so far and that's it.
> (Where the heck are the other options to zdb even written down, besides
> in the code?)
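
The zdb(1M) man page lists some of them, but the source really is the
main reference. A few invocations that should exist on s10u8 and might
help here (-e operates on an exported pool, which is where you are):

    zdb -l /dev/rdsk/c2t5000C5001F2CCE1Fd0s0    # dump the vdev labels
    zdb -e -C srv     # pool configuration as derived from the labels
    zdb -e -u srv     # active uberblock
    zdb -e -d srv     # dataset summary

Treat those as things to try rather than gospel; zdb is explicitly not a
stable interface and its behavior varies by build.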
> 
> OK, so this isn't the end of the world, but it's 15TB of data I'd really
> rather not have to re-copy across a 100Mbit line. What concerns me more
> is that ZFS would do this in the first place - it's not supposed to
> corrupt itself!!
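
(For scale: 15 TB at 100 Mbit/s is about 15e12 bytes * 8 / 1e8 bit/s =
1.2 million seconds, call it two weeks of continuous transfer at full
link speed, so the reluctance is well founded.)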


Hi Jeff,

This looks similar to a crash we had at our site a few months ago: same
symptoms, no actual solution. We had to recover from an rsync backup
server.

We had the log on a mirrored pair of SSDs, plus an additional SSD as a
cache device.

The machine (a Sun 4270 with Sun J4400 JBODs and Sun SAS disks) crashed
in the same manner (core dump while trying to import the pool). After
booting into single-user mode we found the log pool mirror corrupted
(one disk unavailable). Even after replacing the disk and resilvering
the log mirror, we were not able to import the pool.
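
(For anyone hitting this later: newer builds grew a zpool import -m
option that allows importing with a missing log device. I don't know
offhand which build it first appeared in, so check whether your
zpool(1M) lists -m before counting on it; without it the pool stays
stuck the way ours did.)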

I suspect it may have been related to memory (perhaps a shortage of
memory).


All the best,


Carsten

--
Max Planck Institut fuer marine Mikrobiologie
- Network Administration -
Celsiustr. 1
D-28359 Bremen
Tel.: +49 421 2028568
Fax.: +49 421 2028565
PGP public key: http://www.mpi-bremen.de/Carsten_John.html
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
