> On 24.1.2007 14:59, Dennis Clarke wrote:
>
>>> Jan 23 17:25:26 newponit genunix: [ID 408822 kern.info] NOTICE: glm0:
>>> fault detected in device; service still available
>>> Jan 23 17:25:26 newponit genunix: [ID 611667 kern.info] NOTICE: glm0:
>>> Disconnected tagged cmd(s) (1) timeout for Target 0.0
>>
>>   NCR SCSI controllers ... what OS revision is this?  Solaris 10 U3?
>>
>>   Solaris Nevada snv_55b ?
>
> [EMAIL PROTECTED] # cat /etc/release
>                        Solaris 10 11/06 s10s_u3wos_10 SPARC
>            Copyright 2006 Sun Microsystems, Inc.  All Rights Reserved.
>                         Use is subject to license terms.
>                            Assembled 14 November 2006
> [EMAIL PROTECTED] # uname -a
> SunOS newponit 5.10 Generic_118833-33 sun4u sparc SUNW,Ultra-60
>

   Oh dear.

   That's not Solaris Nevada at all.  That is production Solaris 10.

>>> SVM and ZFS disks are on a separate SCSI bus, so theoretically there
>>> should not be any impact on the SVM disks when I pull out a ZFS disk.
>>
>>   I still feel that you hit a bug in ZFS somewhere.  Under no
>> circumstances should a Solaris server panic and crash simply because you
>> pulled out a single disk that was fully mirrored.  In fact ... I will
>> reproduce those conditions here and then see what happens for me.
>
> And Solaris should not hang at all.

  I agree.  We both know this.  You recently patched a Blastwave server
that had been running for over 700 days in production, and *this* sort of
behavior just does not happen in Solaris.

  Let me see if I can reproduce your config here:

bash-3.2# metastat -p
d0 -m /dev/md/rdsk/d10 /dev/md/rdsk/d20 1
d10 1 1 /dev/rdsk/c0t1d0s0
d20 1 1 /dev/rdsk/c0t0d0s0
d1 -m /dev/md/rdsk/d11 1
d11 1 1 /dev/rdsk/c0t1d0s1
d4 -m /dev/md/rdsk/d14 1
d14 1 1 /dev/rdsk/c0t1d0s7
d5 -m /dev/md/rdsk/d15 1
d15 1 1 /dev/rdsk/c0t1d0s5
d21 1 1 /dev/rdsk/c0t0d0s1
d24 1 1 /dev/rdsk/c0t0d0s7
d25 1 1 /dev/rdsk/c0t0d0s5
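
For reference, a mirror like d0 above gets built along these lines (just a
sketch using the device names from the output; the other one-way mirrors
follow the same pattern):

bash-3.2# metainit d10 1 1 c0t1d0s0
bash-3.2# metainit d20 1 1 c0t0d0s0
bash-3.2# metainit d0 -m d10
bash-3.2# metattach d0 d20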

bash-3.2# metadb
        flags           first blk       block count
     a m  p  luo        16              8192            /dev/dsk/c0t0d0s4
     a    p  luo        8208            8192            /dev/dsk/c0t0d0s4
     a    p  luo        16              8192            /dev/dsk/c0t1d0s4
     a    p  luo        8208            8192            /dev/dsk/c0t1d0s4
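
Those four replicas line up with something like this at setup time (a
sketch; -c 2 puts two copies of the state database on each slice):

bash-3.2# metadb -a -f -c 2 c0t0d0s4 c0t1d0s4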

bash-3.2# zpool status -v zfs0
  pool: zfs0
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zfs0        ONLINE       0     0     0
          c1t9d0    ONLINE       0     0     0
          c1t10d0   ONLINE       0     0     0
          c1t11d0   ONLINE       0     0     0
          c1t12d0   ONLINE       0     0     0
          c1t13d0   ONLINE       0     0     0
          c1t14d0   ONLINE       0     0     0

errors: No known data errors
bash-3.2#

I will add mirrors to that zpool from another array on another controller
and then yank a disk.  However, this machine is on snv_52 at the moment.
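
Since zfs0 as shown is a plain six-disk stripe with no redundancy, the way
to do that is to attach a second disk to each top-level vdev, turning each
one into a two-way mirror.  Roughly like this, where the c2 targets stand
in for whatever the disks on the second controller actually enumerate as:

bash-3.2# zpool attach zfs0 c1t9d0 c2t9d0
bash-3.2# zpool attach zfs0 c1t10d0 c2t10d0
   ... and so on for the remaining four disks ...

zpool status should then show each pair as a mirror vdev while the new
sides resilver.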

Dennis
