Hi all. 

I've just encountered a SunFire V240 which panics whenever a zpool 
scrub is run, or whenever two of the filesystems are accessed.

After some rummaging around I came across bug report 6537415 from
July this year, which looks like an exact match for the panic msgbuf I see.

I'm wondering if a patch was ever released for this, or was it put
down to cosmic radiation? We have a good many systems here on Solaris
10 6/06 and ZFS, all of which are running nicely except this one,
which seems to have gotten itself into a right old state.
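
(For what it's worth, here's how I've been checking what's already
installed -- a sketch of the check; the patch ID for a 6537415 fix,
if one exists, is exactly what I don't know, so I've only been able
to grep for the kernel patch from uname:)

# showrev -p | grep 118833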

Thanks for any tips.


Some info: 


# uname -a 
SunOS cashel 5.10 Generic_118833-36 sun4u sparc SUNW,Sun-Fire-V240 
# cat /etc/release 
                       Solaris 10 6/06 s10s_u2wos_09a SPARC 
           Copyright 2006 Sun Microsystems, Inc.  All Rights Reserved. 
                        Use is subject to license terms. 
                             Assembled 09 June 2006 
# zpool status 
  pool: apps-storage 
 state: ONLINE 
status: One or more devices has experienced an error resulting in data 
        corruption.  Applications may be affected. 
action: Restore the file in question if possible.  Otherwise restore the 
        entire pool from backup. 
   see: http://www.sun.com/msg/ZFS-8000-8A 
 scrub: none requested 
config: 

        NAME        STATE     READ WRITE CKSUM 
        apps-storage  ONLINE       0     0     0 
          c0t0d0s4  ONLINE       0     0     0 
          c0t1d0    ONLINE       0     0     0 
          c0t2d0    ONLINE       0     0     0 
          c0t3d0    ONLINE       0     0     0 

errors: 0 data errors, use '-v' for a list 
# zpool list 
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT 
apps-storage            254G   5.69G    248G     2%  ONLINE     - 
# zfs list 
NAME                          USED  AVAIL  REFER  MOUNTPOINT 
apps-storage                 5.69G   244G  24.5K  /apps-storage 
apps-storage/appl            5.66G   244G  5.66G  /appl 
apps-storage/cache           24.5K   244G  24.5K  /data/cache 
apps-storage/data            30.5K   244G  30.5K  /data 
apps-storage/download1       24.5K   244G  24.5K  /data/download1 
apps-storage/download2       24.5K   244G  24.5K  /data/download2 
apps-storage/home            27.5M   244G  27.5M  /export/home 
apps-storage/oradata01       24.5K   244G  24.5K  /oradata01 
apps-storage/oradata02       24.5K   244G  24.5K  /oradata02 
apps-storage/oradata03       24.5K   244G  24.5K  /oradata03 
apps-storage/oradata04       24.5K   244G  24.5K  /oradata04 
apps-storage/oradump         24.5K   244G  24.5K  /oradump 
apps-storage/oralogs1        24.5K   244G  24.5K  /oralogs1 
apps-storage/oralogs2        24.5K   244G  24.5K  /oralogs2 
apps-storage/trace_archive1  24.5K   244G  24.5K  /data/trace_archive1 
apps-storage/trace_log1      24.5K   244G  24.5K  /data/trace_log1 
# 


<Some highlights from a rather lengthy zpool status -v:> 


errors: The following persistent errors have been detected: 

          DATASET            OBJECT  RANGE 
          mos                116     4096-8192 
          17                 20      lvl=0 blkid=0 
          17                 23      lvl=0 blkid=0 
          17                 36      lvl=0 blkid=0 
          .. 
          .. 
          apps-storage/appl  846     0-512 
          apps-storage/appl  848     0-512 
          apps-storage/appl  850     0-512 
          apps-storage/appl  866     0-131072 
          .. 
          .. 
          apps-storage/home  216     131072-262144 
          apps-storage/home  216     262144-393216 
          apps-storage/home  217     0-131072 
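
(If it helps, I believe these OBJECT numbers can be mapped back to
path names with zdb -- a sketch, untested on this box, using object
846 from apps-storage/appl above as the example; a verbose enough
dnode dump should include the file's path:)

# zdb -ddddd apps-storage/appl 846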


<stack traceback and registers:> 


# pwd 
/var/crash/cashel 
# ls 
bounds    unix.0    vmcore.0 
# adb -P "adb: " -k ./unix.0 ./vmcore.0 
physmem fe547 
adb: $C 
000002a100a0e521 vpanic(11eb430, 7bb701a0, 5, 7bb701e0, 0, 7bb701e8) 
000002a100a0e5d1 assfail3+0x94(7bb701a0, 5, 7bb701e0, 0, 7bb701e8, 133) 
000002a100a0e691 space_map_load+0x1a4(600034903b8, 6000b356000, 1000, 60003490088, 40000000, 1) 
000002a100a0e761 metaslab_activate+0x3c(60003490080, 8000000000000000, c000000000000000, 7f0eafc4, 60003490080, c0000000) 
000002a100a0e811 metaslab_group_alloc+0x1c0(3fffffffffffffff, 600, 8000000000000000, 222d50000, 60003459240, ffffffffffffffff) 
000002a100a0e8f1 metaslab_alloc_dva+0x114(0, 222d50000, 60003459240, 600, 60001238b00, 24cbaf) 
000002a100a0e9c1 metaslab_alloc+0x2c(0, 600, 60003459240, 3, 24cbaf, 0) 
000002a100a0ea71 zio_dva_allocate+0x4c(6000b119d40, 7bb537ac, 60003459240, 703584a0, 70358400, 20001) 
000002a100a0eb21 zio_write_compress+0x1ec(6000b119d40, 23e20b, 23e000, 1f001f, 3, 60003459240) 
000002a100a0ebf1 arc_write+0xe4(6000b119d40, 6000131ad80, 7, 3, 3, 24cbaf) 
000002a100a0ed01 dbuf_sync+0x6d8(6000393f630, 6000afb2ac0, 119, 3, 7, 24cbaf) 
000002a100a0ee21 dnode_sync+0x35c(1, 1, 6000afb2ac0, 60001349c40, 0, 2) 
000002a100a0eee1 dmu_objset_sync_dnodes+0x6c(60001a86f80, 60001a870c0, 60001349c40, 600035c4310, 600032b5be0, 0) 
000002a100a0ef91 dmu_objset_sync+0x54(60001a86f80, 60001349c40, 3, 3, 60004d3ef38, 24cbaf) 
000002a100a0f0a1 dsl_pool_sync+0xc4(300000ad540, 60001a87060, 60001a870e0, 3, 60001a86f80, 60001a86fa8) 
000002a100a0f151 spa_sync+0xe4(6000131ad80, 24cbaf, 60001a86fa8, 60001349c40, 6000131aef8, 2a100a0fcbc) 
000002a100a0f201 txg_sync_thread+0x134(300000ad540, 24cbaf, 0, 2a100a0fab0, 300000ad650, 300000ad652) 
000002a100a0f2d1 thread_start+4(300000ad540, 0, 0, 0, 0, 0) 
adb: $r 
%g0 = 0x0000000000000000                 %l0 = 0x000000007bb68000 dmu_ot+0x198 
%g1 = 0x000000007bb70000                 %l1 = 0x000006000122b008 
%g2 = 0x000000000000000b                 %l2 = 0x0000000000000001 
%g3 = 0x000000000000000b                 %l3 = 0x0000000000000001 
%g4 = 0x0000030001b34d40                 %l4 = 0x000006000b687280 
%g5 = 0x000000000000000c                 %l5 = 0x0000000000000012 
%g6 = 0x0000000000000016                 %l6 = 0x0000000000000001 
%g7 = 0x000002a100a0fcc0                 %l7 = 0x0000000000000001 

%o0 = 0x00000000011eb430                 %i0 = 0x00000000011eb430 
%o1 = 0x000002a100a0ee58                 %i1 = 0x000000007bb701a0 
%o2 = 0x0000000000000000                 %i2 = 0x0000000000000005 
%o3 = 0x0000030001b34d48                 %i3 = 0x000000007bb701e0 
%o4 = 0x000000000180c000            cpu0 %i4 = 0x0000000000000000 
%o5 = 0x0000000000000001                 %i5 = 0x000000007bb701e8 
%o6 = 0x000002a100a0e521                 %i6 = 0x000002a100a0e5d1 
%o7 = 0x000000000105e788      panic+0x1c %i7 = 0x000000000112d9c0 assfail3+0x94 

 %ccr = 0x44 xcc=nZvc icc=nZvc 
%fprs = 0x00 fef=0 du=0 dl=0 
 %asi = 0x80 
   %y = 0x0000000000000000 
  %pc = 0x000000000104244c vpanic 
 %npc = 0x0000000001042450 vpanic+4 
  %sp = 0x000002a100a0e521 unbiased=0x000002a100a0ed20 
  %fp = 0x000002a100a0e5d1 

  %tick = 0x0000000000000000 
   %tba = 0x0000000000000000 
    %tt = 0x0 
    %tl = 0x0 
   %pil = 0x0 
%pstate = 0x016 cle=0 tle=0 mm=TSO red=0 pef=1 am=0 priv=1 ie=1 ag=0 

       %cwp = 0x03  %cansave = 0x00 
%canrestore = 0x00 %otherwin = 0x00 
    %wstate = 0x00 %cleanwin = 0x00 
adb: $q 

# 
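
(The same dump can also be read back with mdb, if that's handier for
anyone following along -- a sketch; ::status and ::msgbuf are the
stock dcmds for the panic string and message buffer:)

# mdb unix.0 vmcore.0
> ::status
> ::msgbuf
> $q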



-- 
Noel N.

