Hi Giovanni,

I've seen these while testing the mpt timeout issue, and on other systems
during resilvers of failed disks and while running scrubs.

I've seen one so far during this test scrub, and several during yesterday's.

I checked the iostat error counts, and they weren't especially high on that
device compared to the other disks:

NAME     STATE     READ WRITE CKSUM
c2t34d0  ONLINE       0     0     1  25.5K repaired

 ---- errors ---
  s/w h/w trn tot device
  0   8  61  69 c2t30d0
  0   2  17  19 c2t31d0
  0   5  41  46 c2t32d0
  0   5  33  38 c2t33d0
  0   3  31  34 c2t34d0 <<<<<<
  0  10  81  91 c2t35d0
  0   4  22  26 c2t36d0
  0   6  44  50 c2t37d0
  0   3  21  24 c2t38d0
  0   5  49  54 c2t39d0
  0   9  77  86 c2t40d0
  0   6  58  64 c2t41d0
  0   5  50  55 c2t42d0
  0   4  34  38 c2t43d0
  0   6  37  43 c2t44d0
  0   9  75  84 c2t45d0
  0  13  82  95 c2t46d0
  0   7  57  64 c2t47d0
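
In case it helps with the comparison, here's roughly how I'm collecting
these (a sketch from memory; "tank" just stands in for the actual pool
name here):

  # per-vdev READ/WRITE/CKSUM counts and repair status (excerpt above)
  zpool status -v tank

  # cumulative per-device error summary: the s/w h/w trn tot table above
  iostat -e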