Hi Ben,

 

> The drive (c7t2d0) is bad and should be replaced. 

> The second drive (c7t5d0) is either bad or going bad. 


Dagnabbit. I'm glad you told me this, but I would have thought that running a 
scrub would have alerted me to some fault?
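
In the meantime I'll kick off another scrub and keep watching the status, 
though I gather a scrub only verifies checksums and won't flag a drive that is 
merely slow. Roughly this (with "tank" standing in for my pool name):

   zpool scrub tank          # re-read and verify every block in the pool
   zpool status -v tank      # watch progress and any read/write/cksum errors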

 

> and as soon as you drop the bad disks things magically return to
> normal.

 

Since the pool is a raidz, is it OK for me to actually run a zpool offline on 
one drive without degrading the entire pool?
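
If it is safe to do, I imagine the sequence would be roughly this (a sketch 
only; "tank" again stands in for my pool name):

   zpool offline tank c7t2d0    # take the suspect drive out of service
   zpool status tank            # pool reports DEGRADED but data stays available
   zpool online tank c7t2d0     # bring it back after testing/replacement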

 

I'm wondering whether I should keep using the WD10EADS drives or ask the 
business to invest in the Black versions. I was thinking of the WD1002FAEX 
(which is SATA-III, though my cards only do SATA-II), which seems better suited 
to NAS use. What are other people's thoughts on this?

 

Here's my current layout - disks 1, 2 & 3 are 320GB drives.


       0. c0t1d0 <ATA-WDC WD10EADS-00P-0A01-931.51GB>
          /p...@0,0/pci1002,5...@4/pci1458,b...@0/d...@1,0
       4. c7t1d0 <ATA-WDC WD10EADS-00L-1A01-931.51GB>
          /p...@0,0/pci1458,b...@11/d...@1,0
       5. c7t2d0 <ATA-WDC WD10EADS-00P-0A01-931.51GB>
          /p...@0,0/pci1458,b...@11/d...@2,0
       6. c7t3d0 <ATA-WDC WD10EADS-00P-0A01-931.51GB>
          /p...@0,0/pci1458,b...@11/d...@3,0
       7. c7t4d0 <ATA-WDC WD10EADS-00P-0A01-931.51GB>
          /p...@0,0/pci1458,b...@11/d...@4,0
       8. c7t5d0 <ATA-WDC WD10EADS-00P-0A01-931.51GB>
          /p...@0,0/pci1458,b...@11/d...@5,0


The other thing I was thinking of is redoing the way the pool is set up: 
instead of a straight raidz layout, adopting a stripe-and-mirror arrangement, 
i.e. three disks striped (RAID-0) and then mirrored to the other three?
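
If I go that route, I gather the ZFS way is a stripe across mirror vdevs rather 
than mirroring a RAID-0 set; something like this sketch (pool name and disk 
pairings below are only illustrative, and the two suspect drives would be 
replaced first):

   zpool create newpool \
       mirror c7t1d0 c7t2d0 \
       mirror c7t3d0 c7t4d0 \
       mirror c7t5d0 c0t1d0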

 

> http://www.cuddletech.com/blog/pivot/entry.php?id=993

 

Great blog entry! Unfortunately the SUNWhd package isn't available in the repo 
and I haven't been able to locate a similar SMART reader :( But your 
explanations are very valuable.
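
The only other avenue I can think of is building smartmontools; if it works 
behind this controller, I believe the query would be roughly the following (the 
exact /dev/rdsk path and the -d option may well need adjusting):

   smartctl -a -d sat /dev/rdsk/c7t2d0s0   # SMART health + attributes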

 

> In my experience the only other reason you'll legitimately see really
> weird "bottoming out" of IO like this is if you hit the max concurrent
> IO limits in ZFS (until recently that limit was 35), so you'd see
> actv=35, and then when the device finally processed the IOs the thing
> would snap back to life. But even in those cases you shouldn't see
> request times (asvc_t) rise above 200ms.
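
Good to know - I'll keep an eye on those columns. Roughly what I'll be watching 
(standard Solaris iostat, 5-second samples, idle devices skipped):

   iostat -xnz 5     # per-device actv and asvc_t in the extended columns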


Hmmm, I do remember another admin tweaking the ZFS configuration. Are these to 
blame, by chance:

 

/etc/system

set pcplusmp:apic_intr_policy=1
set zfs:zfs_txg_synctime=1
set zfs:zfs_vdev_max_pending=10

 

I've tried to avoid tweaking anything in the ZFS configuration for fear it may 
make performance worse.
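
Before reverting any of them I'd at least like to confirm what the running 
kernel actually has; I believe the live values can be read with mdb (assuming 
the symbol names match the tunables above):

   echo "zfs_vdev_max_pending/D" | mdb -k
   echo "zfs_txg_synctime/D" | mdb -k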

 

> All that to say, replace those disks or at least test it. SSD's won't
> help, one or more drives are toast.
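
Will do. For my own reference, I believe the swap goes roughly like this once a 
new drive is in the same slot (sketch only; "tank" stands in for my pool name):

   zpool replace tank c7t2d0    # resilver onto the replacement disk
   zpool status tank            # watch the resilver progress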

 

Thanks mate, I really appreciate some backing about this :-)

 

Cheers,

Em