On 11/6/10 1:35 PM, "Khushil Dep" <khushil....@gmail.com> wrote:

> Is this an E2 chassis? Are you using interposers?

No, it's an SC846A chassis. There are no interposers or expanders; six
SFF-8087 "iPass" cables go from ports on the HBA to ports on the backplane.

> Can you send output of iostat -xCzn as well as fmadm faulty please?

(please pardon my line wrap)


# iostat -xCzn
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  255.0   15.9 20667.5 1424.4  0.0  3.0    0.0   11.2   0  35 c9
   34.4    2.3 2837.7  198.5  0.0  0.4    0.0   11.1   0   5 c9t0d0
   34.3    2.3 2837.6  198.5  0.0  0.4    0.0   11.3   0   5 c9t1d0
   34.4    2.3 2837.7  198.5  0.0  0.4    0.0   11.1   0   5 c9t2d0
   35.9    1.9 2918.2  162.1  0.0  0.4    0.0   11.9   0   5 c9t3d0
   35.8    1.9 2918.3  162.1  0.0  0.5    0.0   12.1   0   5 c9t4d0
   35.8    1.9 2918.2  162.1  0.0  0.5    0.0   11.9   0   5 c9t5d0
   22.2    1.7 1703.0  171.3  0.0  0.2    0.0    9.5   0   3 c9t6d0
   22.1    1.7 1696.8  171.2  0.0  0.2    0.0    9.5   0   3 c9t7d0
  239.2   15.8 19217.1 1433.5  0.0  2.8    0.0   10.8   0  32 c10
   34.6    2.3 2837.8  198.5  0.0  0.4    0.0   10.9   0   5 c10t0d0
   34.5    2.3 2837.7  198.5  0.0  0.4    0.0   11.0   0   5 c10t1d0
   34.4    2.3 2837.6  198.5  0.0  0.4    0.0   11.3   0   5 c10t2d0
   34.5    1.9 2800.5  162.1  0.0  0.4    0.0   12.0   0   5 c10t3d0
   34.5    1.9 2800.4  162.1  0.0  0.4    0.0   12.0   0   5 c10t4d0
   22.2    1.7 1703.1  171.3  0.0  0.2    0.0    9.5   0   3 c10t5d0
   22.2    1.7 1697.0  171.2  0.0  0.2    0.0    9.3   0   3 c10t6d0
   22.3    1.7 1703.1  171.3  0.0  0.2    0.0    9.2   0   3 c10t7d0
  243.5   15.5 19527.7 1397.1  0.0  2.8    0.0   10.9   0  32 c11
   34.5    2.3 2837.8  198.5  0.0  0.4    0.0   11.1   0   5 c11t1d0
   34.5    2.3 2837.9  198.5  0.0  0.4    0.0   11.0   0   5 c11t2d0
   35.8    1.9 2918.3  162.1  0.0  0.5    0.0   12.1   0   5 c11t3d0
   35.9    1.9 2918.2  162.1  0.0  0.5    0.0   11.9   0   5 c11t4d0
   36.2    1.9 2918.5  162.1  0.0  0.4    0.0   11.2   0   5 c11t5d0
   22.1    1.7 1696.8  171.2  0.0  0.2    0.0    9.5   0   3 c11t6d0
   22.2    1.7 1703.1  171.3  0.0  0.2    0.0    9.5   0   3 c11t7d0
   22.3    1.7 1697.1  171.2  0.0  0.2    0.0    9.2   0   3 c11t8d0
    0.0    0.0    1.0    0.3  0.0  0.0    0.5    1.4   0   0 c8d0


# fmadm faulty
--------------- ------------------------------------  -------------- ---------
TIME            EVENT-ID                              MSG-ID         SEVERITY
--------------- ------------------------------------  -------------- ---------
Nov 06 06:33:53 89ea2588-6dd8-4d72-e3fd-c2a4c4a8dda2  ZFS-8000-FD    Major

Fault class : fault.fs.zfs.vdev.io
Affects     : zfs://pool=uberdisk3/vdev=6cdf461a5ecbe703
                  faulted but still in service
Problem in  : zfs://pool=uberdisk3/vdev=6cdf461a5ecbe703
                  faulty

Description : The number of I/O errors associated with a ZFS device exceeded
              acceptable levels.  Refer to http://sun.com/msg/ZFS-8000-FD for
              more information.

Response    : The device has been offlined and marked as faulted.  An attempt
              will be made to activate a hot spare if available.

Impact      : Fault tolerance of the pool may be compromised.

Action      : Run 'zpool status -x' and replace the bad device.

--------------- ------------------------------------  -------------- ---------
TIME            EVENT-ID                              MSG-ID         SEVERITY
--------------- ------------------------------------  -------------- ---------
Nov 06 06:33:25 6ff5d64e-cf64-c2e3-864f-cc59c267c0e8  ZFS-8000-FD    Major

Fault class : fault.fs.zfs.vdev.io
Affects     : zfs://pool=uberdisk1/vdev=655593d0bc77a83d
                  faulted but still in service
Problem in  : zfs://pool=uberdisk1/vdev=655593d0bc77a83d
                  faulty

Description : The number of I/O errors associated with a ZFS device exceeded
              acceptable levels.  Refer to http://sun.com/msg/ZFS-8000-FD for
              more information.

Response    : The device has been offlined and marked as faulted.  An attempt
              will be made to activate a hot spare if available.

Impact      : Fault tolerance of the pool may be compromised.

Action      : Run 'zpool status -x' and replace the bad device.

--------------- ------------------------------------  -------------- ---------
TIME            EVENT-ID                              MSG-ID         SEVERITY
--------------- ------------------------------------  -------------- ---------
Nov 06 06:33:20 2c0236bb-53e2-e271-d6af-a21c2f0976aa  ZFS-8000-FD    Major

Fault class : fault.fs.zfs.vdev.io
Affects     : zfs://pool=uberdisk1/vdev=3b0c0e48668e3bf2
                  faulted and taken out of service
Problem in  : zfs://pool=uberdisk1/vdev=3b0c0e48668e3bf2
                  faulty

Description : The number of I/O errors associated with a ZFS device exceeded
              acceptable levels.  Refer to http://sun.com/msg/ZFS-8000-FD for
              more information.

Response    : The device has been offlined and marked as faulted.  An attempt
              will be made to activate a hot spare if available.

Impact      : Fault tolerance of the pool may be compromised.

Action      : Run 'zpool status -x' and replace the bad device.

--------------- ------------------------------------  -------------- ---------
TIME            EVENT-ID                              MSG-ID         SEVERITY
--------------- ------------------------------------  -------------- ---------
Nov 06 06:33:23 896d10f1-fa11-69bb-ae78-d18a56fd3288  ZFS-8000-HC    Major

Fault class : fault.fs.zfs.io_failure_wait
Affects     : zfs://pool=uberdisk1
                  faulted but still in service
Problem in  : zfs://pool=uberdisk1
                  faulty

Description : The ZFS pool has experienced currently unrecoverable I/O
              failures.  Refer to http://sun.com/msg/ZFS-8000-HC for more
              information.

Response    : No automated response will be taken.

Impact      : Read and write I/Os cannot be serviced.

Action      : Make sure the affected devices are connected, then run
              'zpool clear'.

--------------- ------------------------------------  -------------- ---------
TIME            EVENT-ID                              MSG-ID         SEVERITY
--------------- ------------------------------------  -------------- ---------
Nov 06 06:33:30 989d0590-9e27-cd11-cba5-d7dbf7127ce1  ZFS-8000-FD    Major

Fault class : fault.fs.zfs.vdev.io
Affects     : zfs://pool=uberdisk3/vdev=e0209de35309a6f8
                  faulted but still in service
Problem in  : zfs://pool=uberdisk3/vdev=e0209de35309a6f8
                  faulty

Description : The number of I/O errors associated with a ZFS device exceeded
              acceptable levels.  Refer to http://sun.com/msg/ZFS-8000-FD for
              more information.

Response    : The device has been offlined and marked as faulted.  An attempt
              will be made to activate a hot spare if available.

Impact      : Fault tolerance of the pool may be compromised.

Action      : Run 'zpool status -x' and replace the bad device.

--------------- ------------------------------------  -------------- ---------
TIME            EVENT-ID                              MSG-ID         SEVERITY
--------------- ------------------------------------  -------------- ---------
Nov 06 06:33:51 a2d736ac-14e9-cbf7-db28-84e25bfd4a3e  ZFS-8000-HC    Major

Fault class : fault.fs.zfs.io_failure_wait
Affects     : zfs://pool=uberdisk3
                  faulted but still in service
Problem in  : zfs://pool=uberdisk3
                  faulty

Description : The ZFS pool has experienced currently unrecoverable I/O
              failures.  Refer to http://sun.com/msg/ZFS-8000-HC for more
              information.

Response    : No automated response will be taken.

Impact      : Read and write I/Os cannot be serviced.

Action      : Make sure the affected devices are connected, then run
              'zpool clear'.
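
For reference, the recovery sequence those FMA messages call for would look
roughly like the following. Note that fmadm identifies the vdevs only by
GUID, so the actual failing disk has to be read out of 'zpool status'; the
device name c9t3d0 below is just a placeholder, not taken from this system.

# zpool status -x
    (identify the degraded pools and the devices marked FAULTED)
# zpool replace uberdisk1 c9t3d0
    (c9t3d0 is a placeholder; substitute the disk zpool status actually
     names, after physically replacing it)
# zpool clear uberdisk1
    (once the devices and cabling check out, per the ZFS-8000-HC action)
# fmadm repair 896d10f1-fa11-69bb-ae78-d18a56fd3288
    (optionally close the FMA case by its EVENT-ID; this one is the
     uberdisk1 ZFS-8000-HC case above)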

-- 
Dave Pooser, ACSA
Manager of Information Services
Alford Media  http://www.alfordmedia.com

