Hi,

Yesterday all the disks in my two zpools got disconnected.
They are not real disks but LUNs from a StorageTek 2530 array.
What could cause that - a failing LSI card, or the mpt driver in 2009.06? After a reboot I had four disks in FAILED state; zpool clear
fixed things, followed by a resilver.
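For the record, the recovery was roughly the following sequence (the pool name is the one from the ereports below; yours will differ):

```shell
# Show only pools with problems - after the reboot this listed
# the four disks as faulted
zpool status -x

# Clear the error state on the affected pool; ZFS kicks off the
# resilver on its own once the devices are reachable again
zpool clear pool2530-2

# Watch resilver progress and any remaining per-device errors
zpool status -v pool2530-2
```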

Here is how it started (/var/adm/messages):

Feb 23 12:39:03 nexus scsi: [ID 365881 kern.info] /p...@0,0/pci10de,5...@e/pci1000,3...@0 (mpt0):
Feb 23 12:39:03 nexus   Log info 0x31140000 received for target 2.
...
Feb 23 12:39:06 nexus scsi: [ID 107833 kern.warning] WARNING: /p...@0,0/pci10de,5...@e/pci1000,3...@0/s...@2,9 (sd58):
Feb 23 12:39:06 nexus   Command failed to complete...Device is gone

Feb 23 12:39:06 nexus scsi: [ID 107833 kern.warning] WARNING: /p...@0,0/pci10de,5...@e/pci1000,3...@0/s...@2,7 (sd56):
Feb 23 12:39:06 nexus   Command failed to complete...Device is gone
...


# fmdump -eV -t "23Feb10 12:00"

TIME                           CLASS
Feb 23 2010 12:39:03.856423656 ereport.io.scsi.cmd.disk.tran
nvlist version: 0
        class = ereport.io.scsi.cmd.disk.tran
        ena = 0x37f293365c100801
        detector = (embedded nvlist)
        nvlist version: 0
                version = 0x0
                scheme = dev
                device-path = /p...@0,0/pci10de,5...@e/pci1000,3...@0/s...@2,2
        (end detector)

        driver-assessment = retry
        op-code = 0x2a
        cdb = 0x2a 0x0 0x22 0x14 0x51 0xab 0x0 0x0 0x4 0x0
        pkt-reason = 0x4
        pkt-state = 0x0
        pkt-stats = 0x8
        __ttl = 0x1
        __tod = 0x4b8412b7 0x330bfce8
...

Feb 23 2010 12:39:06.840406312 ereport.fs.zfs.io
nvlist version: 0
        class = ereport.fs.zfs.io
        ena = 0x37fdb0f5dc000401
        detector = (embedded nvlist)
        nvlist version: 0
                version = 0x0
                scheme = zfs
                pool = 0x26b9a51f199f72bf
                vdev = 0xaf3ea54be8e5909c
        (end detector)

        pool = pool2530-2
        pool_guid = 0x26b9a51f199f72bf
        pool_context = 0
        pool_failmode = wait
        vdev_guid = 0xaf3ea54be8e5909c
        vdev_type = disk
        vdev_path = /dev/dsk/c8t2d9s0
        vdev_devid = id1,s...@n600a0b800036a8ba000007484adc4dec/a
        parent_guid = 0xff4853b09cdcb0bb
        parent_type = raidz
        zio_err = 6
        zio_offset = 0x42000
        zio_size = 0x2000
        __ttl = 0x1
        __tod = 0x4b8412ba 0x32179528




The system configuration:
SunFire X4200, LSI_1068E - 1.18.00.00
StorageTek 2530 with 1TB WD3 SATA drives, not JBOD:

Port Name         Chip Vendor/Type/Rev    MPT Rev  Firmware Rev  IOC
 1.  mpt0              LSI Logic SAS1068E B1     105      01120000     0

Current active firmware version is 01120000 (1.18.00)
Firmware image's version is MPTFW-01.18.00.00-IT
LSI Logic x86 BIOS image's version is MPTBIOS-6.12.00.00 (2006.10.31)
FCode image's version is MPT SAS FCode Version 1.00.40 (2006.03.02)

# uname -a
SunOS nexus 5.11 snv_111b i86pc i386 i86pc Solaris
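In case more data helps, the FMA fault list and per-device error counters can be pulled with:

```shell
# Summarize any faults FMA has actually diagnosed (as opposed to
# the raw ereports from fmdump above)
fmadm faulty

# Extended per-device error statistics: soft/hard/transport error
# counts for each sd instance, including the 2530 LUNs
iostat -En
```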

Has anyone else had problems with these LSI cards under NexentaStor?

Thanks
Evgueni
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
