Hi Ceri,

I just saw your mail today. I'm replying in case you haven't found a solution yet.

This is:

6475304 zfs core dumps when trying to create new spool using "did" device

The suggested workaround is to set the environment variable

NOINUSE_CHECK=1

and the problem goes away.
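For example (a sketch only; the pool name and did device are taken from your transcript below, and the zpool command itself of course needs to run on the cluster node):

```shell
# Export the workaround variable in the shell that will run zpool.
# NOINUSE_CHECK=1 makes libdiskmgt skip the in-use check that is
# crashing on the did device (per bug 6475304's workaround).
export NOINUSE_CHECK=1

# Re-run the failing command in the same shell:
# zpool create wibble /dev/did/dsk/d12

echo "NOINUSE_CHECK=$NOINUSE_CHECK"
```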

Thanks,
Zoram


Ceri Davies wrote:
On an up to date Solaris 10 11/06 with Sun Cluster 3.2 and iSCSI backed
did devices, zpool dumps core on creation if I try to use a did device.

Using the underlying device works, and this might not be supported
(though I don't know), but I thought you would probably prefer to see
the error than not (this is just a test set up and therefore we don't
have support for it).

  bash-3.00# scdidadm -l
  1  peon:/dev/rdsk/c0t1d0                                /dev/did/rdsk/d1
  2  peon:/dev/rdsk/c0t0d0                                /dev/did/rdsk/d2
  3  peon:/dev/rdsk/c0t2d0                                /dev/did/rdsk/d3
  6  peon:/dev/rdsk/c1t0100000CF1F459EE00002A0045AF6B69d0 /dev/did/rdsk/d6
  7  peon:/dev/rdsk/c1t0100000CF1F459EE00002A0045AF6B6Ed0 /dev/did/rdsk/d7
  8  peon:/dev/rdsk/c1t0100000CF1F459EE00002A0045AF6B88d0 /dev/did/rdsk/d8
  9  peon:/dev/rdsk/c1t0100000CF1F459EE00002A0045AF6B85d0 /dev/did/rdsk/d9
  10 peon:/dev/rdsk/c1t0100000CF1F459EE00002A0045AF6B83d0 /dev/did/rdsk/d10
  11 peon:/dev/rdsk/c1t0100000CF1F459EE00002A0045AF6B86d0 /dev/did/rdsk/d11
  12 peon:/dev/rdsk/c1t0100000CF1F459EE00002A0045AF6B87d0 /dev/did/rdsk/d12
  13 peon:/dev/rdsk/c1t0100000CF1F459EE00002A0045AF6B84d0 /dev/did/rdsk/d13
  bash-3.00# zpool create wibble /dev/did/dsk/d12
  free(fe726420): invalid or corrupted buffer
  stack trace:
  libumem.so.1'?? (0xff24b460)
  libCrun.so.1'__1c2k6Fpv_v_+0x4
  libCstd_isa.so.1'__1cDstdMbasic_string4Ccn0ALchar_traits4Cc__n0AJallocator4Cc___2G6Mrk1_r1_+0xb8
  libCstd.so.1'__1cH__rwstdNlocale_vector4nDstdMbasic_string4Ccn0BLchar_traits4Cc__n0BJallocator4Cc_____Gresize6MIn0E__p3_+0xc4
  libCstd.so.1'__1cH__rwstdKlocale_imp2t5B6MII_v_+0xc4
  libCstd.so.1'__1cDstdGlocaleEinit6F_v_+0x44
  libCstd.so.1'__1cDstdNbasic_istream4Cwn0ALchar_traits4Cw___2t6Mn0AIios_baseJEmptyCtor__v_+0x84
  libCstd.so.1'?? (0xfe57b2b8)
  libCstd.so.1'?? (0xfe57b994)
  libCstd.so.1'_init+0x1e0
  ld.so.1'?? (0xff3bfea8)
  ld.so.1'?? (0xff3cca04)
  ld.so.1'_elf_rtbndr+0x10
  libCrun.so.1'?? (0xfe46a93c)
  libCrun.so.1'__1cH__CimplKcplus_init6F_v_+0x48
  libCstd_isa.so.1'_init+0xc8
  ld.so.1'?? (0xff3bfea8)
  ld.so.1'?? (0xff3c5318)
  ld.so.1'?? (0xff3c5474)
  ld.so.1'dlopen+0x64
  libmeta.so.1'sdssc_bind_library+0x88
  libdiskmgt.so.1'?? (0xff2b092c)
  libdiskmgt.so.1'?? (0xff2aa6b4)
  libdiskmgt.so.1'?? (0xff2aa42c)
  libdiskmgt.so.1'dm_get_stats+0x12c
  libdiskmgt.so.1'dm_get_slice_stats+0x44
  libdiskmgt.so.1'dm_inuse+0x74
  zpool'check_slice+0x20
  zpool'check_disk+0x144
  zpool'check_device+0x4c
  zpool'check_in_use+0x108
  zpool'check_in_use+0x174
  zpool'make_root_vdev+0x3c
  zpool'?? (0x1321c)
  zpool'main+0x130
  zpool'_start+0x108
  Abort (core dumped)

Ceri


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Zoram Thanga::Sun Cluster Development::http://blogs.sun.com/zoram