Under Solaris 10 u6, no matter how I configured my Areca 1261ML RAID card,
I got errors on all of the drives caused by SCSI timeouts.

yoda:~ # tail -f /var/adm/messages
Jan  9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]      Requested Block: 239683776                 Error Block: 239683776
Jan  9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]      Vendor: Seagate                            Serial Number:
Jan  9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]      Sense Key: Not Ready
Jan  9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]      ASC: 0x4 (LUN is becoming ready), ASCQ: 0x1, FRU: 0x0
Jan  9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.warning] WARNING: /p...@0,0/pci10de,3...@17/pci17d3,1...@0/s...@c,0 (sd14):
Jan  9 11:03:47 yoda.asc.edu    Error for Command: write(10)               Error Level: Retryable
Jan  9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]      Requested Block: 239683776                 Error Block: 239683776
Jan  9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]      Vendor: Seagate                            Serial Number:
Jan  9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]      Sense Key: Not Ready
Jan  9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice]      ASC: 0x4 (LUN is becoming ready), ASCQ: 0x1, FRU: 0x0

ZFS would eventually degrade the drives because of these errors. I'm positive
there is nothing wrong with my hardware.
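
For what it's worth, the drive-side health can be cross-checked against the per-device error counters and the FMA error log; genuinely bad disks should show up there as media errors rather than as transport timeouts:

yoda:~ # iostat -En
yoda:~ # fmdump -eV | more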

Here is the driver I used under Solaris 10 u6:
ftp://ftp.areca.com.tw/RaidCards/AP_Drivers/Solaris/DRIVER/1.20.00.16-80731/readme.txt
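
To rule out a stale binding, you can confirm which driver actually attached to the card; prtconf -D prints the bound driver for each device node:

yoda:~ # grep arcmsr /etc/driver_aliases
yoda:~ # prtconf -D | grep -i arcmsr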

I got these errors whether I configured the drives as JBOD or as
pass-through.

I turned off NCQ and tagged queuing and still got the errors.

yoda:~/bin # zpool status
  pool: backup
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: resilver completed after 1h7m with 0 errors on Fri Jan  9 09:57:46 2009
config:

       NAME         STATE     READ WRITE CKSUM
       backup       ONLINE       0     0     0
         raidz1     ONLINE       0     0     0
           c2t2d0   ONLINE       0     5     0
           c2t3d0   ONLINE       0     1     0
           c2t4d0   ONLINE       0     1     0
           c2t5d0   ONLINE       0     2     0
           c2t6d0   ONLINE       0     2     0
           c2t7d0   ONLINE       0     2     0
           c2t8d0   ONLINE       0     3     0
         raidz1     ONLINE       0     0     0
           c2t9d0   ONLINE       0     2     0
           c2t10d0  ONLINE       0     2     0
           c2t11d0  ONLINE       0     3     0
           c2t12d0  ONLINE       0     3     0
           c2t13d0  ONLINE       0     3     0
           c2t14d0  ONLINE       0     2     0
           c2t15d0  ONLINE       0    51     0

errors: No known data errors

  pool: rpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: none requested
config:

       NAME          STATE     READ WRITE CKSUM
       rpool         ONLINE       0     0     0
         mirror      ONLINE       0     0     0
           c2t0d0s0  ONLINE       0     5     0
           c2t1d0s0  ONLINE       3     2     0

errors: No known data errors
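
The action block above amounts to something like this between experiments (clear the per-device counters, then scrub to make sure nothing real is lurking):

yoda:~/bin # zpool clear backup
yoda:~/bin # zpool scrub backup
yoda:~/bin # zpool status backup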

Under OpenSolaris I don't get the SCSI timeout errors, but I do get error
messages like this:

Jan 13 09:30:39 yoda last message repeated 5745 times
Jan 13 09:30:39 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many outstanding commands (255 > 256)
Jan 13 09:30:39 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many outstanding commands (256 > 256)
Jan 13 09:30:49 yoda last message repeated 2938 times
Jan 13 09:30:49 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many outstanding commands (254 > 256)
Jan 13 09:30:49 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many outstanding commands (256 > 256)
Jan 13 09:30:53 yoda last message repeated 231 times
Jan 13 09:30:53 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many outstanding commands (257 > 256)
Jan 13 09:30:53 yoda last message repeated 2 times
Jan 13 09:30:53 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many outstanding commands (256 > 256)
Jan 13 09:31:11 yoda last message repeated 1191 times
Jan 13 09:31:11 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many outstanding commands (255 > 256)
Jan 13 09:31:11 yoda arcmsr: [ID 659062 kern.notice] arcmsr0: too many outstanding commands (256 > 256)
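
That 256 ceiling matches sd's default per-target queue depth (sd_max_throttle defaults to 256), so it looks like sd is allowed to keep at least as many commands queued as arcmsr will accept outstanding. If I understand the tunable correctly, capping it in /etc/system and rebooting should quiet these messages; 32 is just a guess, not a value I have verified for this card:

set sd:sd_max_throttle = 32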

Fortunately, it looks like zpool status is not affected under OpenSolaris:

root@yoda:~/bin# zpool status
  pool: backup
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        backup       ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c4t2d0   ONLINE       0     0     0
            c4t3d0   ONLINE       0     0     0
            c4t4d0   ONLINE       0     0     0
            c4t5d0   ONLINE       0     0     0
            c4t6d0   ONLINE       0     0     0
            c4t7d0   ONLINE       0     0     0
            c4t8d0   ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c4t9d0   ONLINE       0     0     0
            c4t10d0  ONLINE       0     0     0
            c4t11d0  ONLINE       0     0     0
            c4t12d0  ONLINE       0     0     0
            c4t13d0  ONLINE       0     0     0
            c4t14d0  ONLINE       0     0     0
            c4t15d0  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c4t0d0s0  ONLINE       0     0     0
            c4t1d0s0  ONLINE       0     0     2

errors: No known data errors

This is the version of the Areca driver that ships with OpenSolaris:

root@yoda:~/bin# pkginfo -l SUNWarcmsr
   PKGINST:  SUNWarcmsr
      NAME:  Areca SAS/SATA RAID driver
  CATEGORY:  system
      ARCH:  i386
   VERSION:  11.11,REV=2008.10.30.20.37
    VENDOR:  Sun Microsystems, Inc.
      DESC:  SAS/SATA RAID driver
   HOTLINE:  Please contact your local service provider
    STATUS:  completely installed

How do I find out who maintains that package, and how it compares with the
driver I downloaded directly from Areca?
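
In the meantime, I assume something like this would show what is actually loaded and whatever version string Areca embeds in the binary (the 64-bit module lives under /kernel/drv/amd64), but I'd still like to know who owns the package:

root@yoda:~/bin# modinfo | grep -i arcmsr
root@yoda:~/bin# strings /kernel/drv/arcmsr | grep -i version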

Thanks