For years I have been running a zpool on a Fibre Channel array with no
problems. I would scrub every so often and move huge amounts of data
(tens to hundreds of GB) around, and it never had a problem beyond one
disk failure, confirmed by the array itself.

I upgraded to Solaris 10 x86 05/09 last year, and since then any
sufficiently heavy I/O from ZFS starts causing command timeouts and
off-lining disks. Long term this leads to failure, because you can no
longer scrub reliably (once rebooted and cleaned, all is well again).
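
By "cleaned" I mean nothing more clever than the obvious, using the
current pool name:

# zpool clear share
# zpool status -x

and then the next scrub or bulk copy starts the cycle over again.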

ATA, SATA and SAS do not seem to suffer this problem.

I tried upgrading to U8, and then a fresh install of U8; the problem persists.

My FC hardware is:
Sun A5100 (14-disk) array.
Hitachi 146 GB FC disks (started with 9 GB Sun disks, moved to 36 GB
disks from a variety of manufacturers, and then to 72 GB IBM disks
before this last capacity upgrade).
Sun-branded QLogic 2310 FC cards (375-3102), Sun qlc driver, with
MPxIO enabled (sanity checks below).

The rest of the system: dual-CPU Opteron board (>2 GHz), 8 GB RAM.
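
For the record, the multipathing sanity checks I run (commands only,
output omitted):

# stmsboot -L
# mpathadm list lu
# fcinfo hba-port -l

Every disk shows up as a single scsi_vhci device with two paths, one
per HBA.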

When a hard drive fails in the enclosure, it bypasses the bad drive
and turns on a fault light to let me know a disk failure has happened.
That never happens during these events, which points to a software
problem.

Once it goes off the rails and starts off-lining disks, the whole
system suffers. A user login takes forever (40 minutes minimum to get
past the last-login message), and any command that touches storage or
zfs/zpool hangs for just as long.

I can reliably reproduce the issue by either copying a large amount of
data into the pool or running a scrub.
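
Concretely, either of these does it (the mount point and source path
here are just illustrative):

# zpool scrub share
# cp -rp /some/big/directory /share/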

All disks test fine via destructive tests in format.
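
(By "destructive tests" I mean the analyze menu in format; roughly,
per disk, with the menu names from memory:

# format
  ... select the disk ...
  format> analyze
  analyze> purge

and they all pass.)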

I just reproduced it by clearing out the old pool and creating a new
pool called share:

# zpool status share
  pool: share
 state: ONLINE
 scrub: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        share                      ONLINE       0     0     0
          raidz2                   ONLINE       0     0     0
            c0t50050767190B6C76d0  ONLINE       0     0     0
            c0t500507671908E72Bd0  ONLINE       0     0     0
            c0t500507671907A32Ad0  ONLINE       0     0     0
            c0t50050767190C4CFDd0  ONLINE       0     0     0
            c0t500507671906704Dd0  ONLINE       0     0     0
            c0t500507671918892Ad0  ONLINE       0     0     0
          raidz2                   ONLINE       0     0     0
            c0t50050767190D11E4d0  ONLINE       0     0     0
            c0t500507671915CABEd0  ONLINE       0     0     0
            c0t50050767191371C7d0  ONLINE       0     0     0
            c0t5005076719125EDBd0  ONLINE       0     0     0
            c0t50050767190E4DABd0  ONLINE       0     0     0
            c0t5005076719147ECAd0  ONLINE       0     0     0

errors: No known data errors
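
For reference, the create that produced the layout above was along
these lines (reconstructed from the layout, not a paste of my shell
history):

# zpool create share \
    raidz2 c0t50050767190B6C76d0 c0t500507671908E72Bd0 \
           c0t500507671907A32Ad0 c0t50050767190C4CFDd0 \
           c0t500507671906704Dd0 c0t500507671918892Ad0 \
    raidz2 c0t50050767190D11E4d0 c0t500507671915CABEd0 \
           c0t50050767191371C7d0 c0t5005076719125EDBd0 \
           c0t50050767190E4DABd0 c0t5005076719147ECAd0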


/var/adm/messages logs something like the following:


May 21 15:27:54 solarisfc scsi: [ID 243001 kern.warning] WARNING:
/scsi_vhci (scsi_vhci0):
May 21 15:27:54 solarisfc       /scsi_vhci/d...@g50050767191371c7
(sd2): Command Timeout on path
/p...@0,0/pci1022,7...@a/pci1077,1...@3/f...@0,0 (fp1)
May 21 15:27:54 solarisfc scsi: [ID 107833 kern.warning] WARNING:
/scsi_vhci/d...@g50050767191371c7 (sd2):
May 21 15:27:54 solarisfc       SCSI transport failed: reason
'timeout': retrying command
May 21 15:27:54 solarisfc scsi: [ID 243001 kern.warning] WARNING:
/scsi_vhci (scsi_vhci0):
May 21 15:27:54 solarisfc       /scsi_vhci/d...@g50050767191371c7
(sd2): Command Timeout on path
/p...@0,0/pci1022,7...@a/pci1077,1...@2/f...@0,0 (fp0)
May 21 15:28:54 solarisfc scsi: [ID 107833 kern.warning] WARNING:
/scsi_vhci/d...@g50050767191371c7 (sd2):
May 21 15:28:54 solarisfc       SCSI transport failed: reason
'timeout': giving up
May 21 15:32:54 solarisfc scsi: [ID 107833 kern.warning] WARNING:
/scsi_vhci/d...@g50050767191371c7 (sd2):
May 21 15:32:54 solarisfc       SYNCHRONIZE CACHE command failed (5)
May 21 15:40:54 solarisfc scsi: [ID 107833 kern.warning] WARNING:
/scsi_vhci/d...@g50050767191371c7 (sd2):
May 21 15:40:54 solarisfc       drive offline
May 21 15:48:55 solarisfc scsi: [ID 107833 kern.warning] WARNING:
/scsi_vhci/d...@g50050767191371c7 (sd2):
May 21 15:48:55 solarisfc       drive offline
May 21 15:56:55 solarisfc scsi: [ID 107833 kern.warning] WARNING:
/scsi_vhci/d...@g50050767191371c7 (sd2):
May 21 15:56:55 solarisfc       drive offline
May 21 16:04:55 solarisfc scsi: [ID 107833 kern.warning] WARNING:
/scsi_vhci/d...@g50050767191371c7 (sd2):
May 21 16:04:55 solarisfc       drive offline
May 21 16:04:56 solarisfc fmd: [ID 441519 daemon.error] SUNW-MSG-ID:
ZFS-8000-FD, TYPE: Fault, VER: 1, SEVERITY: Major
May 21 16:04:56 solarisfc EVENT-TIME: Fri May 21 16:04:56 EDT 2010
May 21 16:04:56 solarisfc PLATFORM: To Be Filled By O.E.M., CSN: To Be
Filled By O.E.M., HOSTNAME: solarisfc
May 21 16:04:56 solarisfc SOURCE: zfs-diagnosis, REV: 1.0
May 21 16:04:56 solarisfc EVENT-ID: 295d7729-9a93-47f1-de9d-ba3a08b2d477
May 21 16:04:56 solarisfc DESC: The number of I/O errors associated
with a ZFS device exceeded
May 21 16:04:56 solarisfc            acceptable levels.  Refer to
http://sun.com/msg/ZFS-8000-FD for more information.
May 21 16:04:56 solarisfc AUTO-RESPONSE: The device has been offlined
and marked as faulted.  An attempt
May 21 16:04:56 solarisfc            will be made to activate a hot
spare if available.
May 21 16:04:56 solarisfc IMPACT: Fault tolerance of the pool may be
compromised.
May 21 16:04:56 solarisfc REC-ACTION: Run 'zpool status -x' and
replace the bad device.

fmdump reports only ZFS errors:
# fmdump -e
TIME                 CLASS
May 21 15:28:54.1367 ereport.fs.zfs.io
May 21 15:28:54.1367 ereport.fs.zfs.io
May 21 15:28:54.1367 ereport.fs.zfs.io
May 21 15:28:54.1369 ereport.fs.zfs.probe_failure
May 21 16:04:55.8976 ereport.fs.zfs.vdev.open_failed
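
The full ereport payloads are there too; the fault itself can be
pulled up by its EVENT-ID:

# fmdump -v -u 295d7729-9a93-47f1-de9d-ba3a08b2d477
# fmdump -eV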

After the event the pool looks like this:
# zpool status share
  pool: share
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
 scrub: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        share                      DEGRADED     0     0     0
          raidz2                   ONLINE       0     0     0
            c0t50050767190B6C76d0  ONLINE       0     0     0
            c0t500507671908E72Bd0  ONLINE       0     0     0
            c0t500507671907A32Ad0  ONLINE       0     0     0
            c0t50050767190C4CFDd0  ONLINE       0     0     0
            c0t500507671906704Dd0  ONLINE       0     0     0
            c0t500507671918892Ad0  ONLINE       0     0     0
          raidz2                   DEGRADED     0     0     0
            c0t50050767190D11E4d0  ONLINE       0     0     0
            c0t500507671915CABEd0  ONLINE       0     0     0
            c0t50050767191371C7d0  FAULTED      3    50     0  too many errors
            c0t5005076719125EDBd0  ONLINE       0     0     0
            c0t50050767190E4DABd0  ONLINE       0     0     0
            c0t5005076719147ECAd0  ONLINE       0     0     0

If I had not broken out of the large data copy (roughly 340 GB of the
700 GB completed), it would have kept going and off-lined more disks.
If I had hot spares in the pool (I have two disks set aside for that,
but they are not in the pool right now), it would have tried to bring
them in; the last time that happened it pulled one in, overloaded the
I/O, faulted it, and then did the same to my second hot spare.
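
When I do attach them it is just the standard spare add (the names
below are placeholders for my two spare disks, not real device names):

# zpool add share spare c0t<spare1>d0 c0t<spare2>d0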



fcinfo hba-port -l reports the following for each controller:
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 1
                Loss of Signal Count: 1
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 0
                Invalid CRC Count: 0

Here is the detailed FC information:
# fcinfo remote-port -slp 210000e08b80c45c
Remote Port WWN: 50050767198e4dab
        Active FC4 Types:
        SCSI Target: yes
        Node WWN: 50050767190e4dab
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 11
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 1617
                Invalid CRC Count: 0
        LUN: 0
          Vendor: IBM
          Product: IC35L146F2DY10-0
          OS Device Name: /dev/rdsk/c0t50050767190E4DABd0s2
Remote Port WWN: 50050767199371c7
        Active FC4 Types:
        SCSI Target: yes
        Node WWN: 50050767191371c7
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 11
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 1112
                Invalid CRC Count: 0
        LUN: 0
          Vendor: IBM
          Product: IC35L146F2DY10-0
          OS Device Name: /dev/rdsk/c0t50050767191371C7d0s2
Remote Port WWN: 500507671998892a
        Active FC4 Types:
        SCSI Target: yes
        Node WWN: 500507671918892a
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 1
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 155
                Invalid CRC Count: 0
        LUN: 0
          Vendor: IBM
          Product: IC35L146F2DY10-0
          OS Device Name: /dev/rdsk/c0t500507671918892Ad0s2
Remote Port WWN: 5005076719925edb
        Active FC4 Types:
        SCSI Target: yes
        Node WWN: 5005076719125edb
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 1677
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 6735
                Invalid CRC Count: 0
        LUN: 0
          Vendor: IBM
          Product: IC35L146F2DY10-0
          OS Device Name: /dev/rdsk/c0t5005076719125EDBd0s2
Remote Port WWN: 50050767198b6c76
        Active FC4 Types:
        SCSI Target: yes
        Node WWN: 50050767190b6c76
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 1
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 161
                Invalid CRC Count: 0
        LUN: 0
          Vendor: IBM
          Product: IC35L146F2DY10-0
          OS Device Name: /dev/rdsk/c0t50050767190B6C76d0s2
Remote Port WWN: 50050767198d60c6
        Active FC4 Types:
        SCSI Target: yes
        Node WWN: 50050767190d60c6
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 1
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 424
                Invalid CRC Count: 0
        LUN: 0
          Vendor: IBM
          Product: IC35L146F2DY10-0
          OS Device Name: /dev/rdsk/c0t50050767190D60C6d0s2
Remote Port WWN: 500507671987a32a
        Active FC4 Types:
        SCSI Target: yes
        Node WWN: 500507671907a32a
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 1
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 157
                Invalid CRC Count: 0
        LUN: 0
          Vendor: IBM
          Product: IC35L146F2DY10-0
          OS Device Name: /dev/rdsk/c0t500507671907A32Ad0s2
Remote Port WWN: 50050767198c4cfd
        Active FC4 Types:
        SCSI Target: yes
        Node WWN: 50050767190c4cfd
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 1
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 157
                Invalid CRC Count: 0
        LUN: 0
          Vendor: IBM
          Product: IC35L146F2DY10-0
          OS Device Name: /dev/rdsk/c0t50050767190C4CFDd0s2
Remote Port WWN: 508002000000d073
        Active FC4 Types:
        SCSI Target: unknown
        Node WWN: 508002000000d070
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 0
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 0
                Invalid CRC Count: 0
Error has occured. HBA_ScsiReportLUNsV2 failed.  reason SCSI CHECK CONDITION
Remote Port WWN: 508002000000d074
        Active FC4 Types:
        SCSI Target: unknown
        Node WWN: 508002000000d070
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 0
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 0
                Invalid CRC Count: 0
Error has occured. HBA_ScsiReportLUNsV2 failed.  reason SCSI CHECK CONDITION
Remote Port WWN: 500507671988e72b
        Active FC4 Types:
        SCSI Target: yes
        Node WWN: 500507671908e72b
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 1
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 154
                Invalid CRC Count: 0
        LUN: 0
          Vendor: IBM
          Product: IC35L146F2DY10-0
          OS Device Name: /dev/rdsk/c0t500507671908E72Bd0s2
Remote Port WWN: 5005076719947eca
        Active FC4 Types:
        SCSI Target: yes
        Node WWN: 5005076719147eca
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 11
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 853
                Invalid CRC Count: 0
        LUN: 0
          Vendor: IBM
          Product: IC35L146F2DY10-0
          OS Device Name: /dev/rdsk/c0t5005076719147ECAd0s2
Remote Port WWN: 500507671995cabe
        Active FC4 Types:
        SCSI Target: yes
        Node WWN: 500507671915cabe
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 71
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 464
                Invalid CRC Count: 0
        LUN: 0
          Vendor: IBM
          Product: IC35L146F2DY10-0
          OS Device Name: /dev/rdsk/c0t500507671915CABEd0s2
Remote Port WWN: 500507671997f832
        Active FC4 Types:
        SCSI Target: yes
        Node WWN: 500507671917f832
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 14
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 837
                Invalid CRC Count: 0
        LUN: 0
          Vendor: IBM
          Product: IC35L146F2DY10-0
          OS Device Name: /dev/rdsk/c0t500507671917F832d0s2
Remote Port WWN: 50050767198d11e4
        Active FC4 Types:
        SCSI Target: yes
        Node WWN: 50050767190d11e4
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 20
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 862
                Invalid CRC Count: 0
        LUN: 0
          Vendor: IBM
          Product: IC35L146F2DY10-0
          OS Device Name: /dev/rdsk/c0t50050767190D11E4d0s2
Remote Port WWN: 500507671986704d
        Active FC4 Types:
        SCSI Target: yes
        Node WWN: 500507671906704d
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 1
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 157
                Invalid CRC Count: 0
        LUN: 0
          Vendor: IBM
          Product: IC35L146F2DY10-0
          OS Device Name: /dev/rdsk/c0t500507671906704Dd0s2


Unless there is a way to tell ZFS or the FC stack to back off and not
overload things, I don't see how I can keep using this hardware with
Solaris 10.
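
The only knob I can think of to even try is queue depth, e.g.
something like this in /etc/system (the values are guesses on my part,
and I have no idea yet whether it actually helps):

set zfs:zfs_vdev_max_pending=10
set sd:sd_max_throttle=32

or, to experiment without a reboot:

# echo zfs_vdev_max_pending/W0t10 | mdb -kw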

I've seen this happening on other Solaris 10 boxes (x86 only) in a
data center environment as well, not just in my basement.

It's starting to look like I need to move the FC gear to AIX, Linux,
Windows, or Solaris 10 circa 2006 for reliability, which is sad,
because then I will no longer be able to use ZFS, which has served me
well for years.