Hi Folks,

On 10/12/2007, at 12:22 AM, Edward Irvine wrote:
> Hi Folks,
>
> I've got a 3.9 TB zpool, and it is causing kernel panics on my
> Solaris 10 280R (SPARC) server.
>
> The message I get on panic is this:
>
> panic[cpu1]/thread=2a100a95cc0: zfs: freeing free segment
> (offset=423713792 size=1024)
>
> This seems to come about when the zpool is being used or being
> scrubbed - about twice a day at the moment. After the reboot, the
> scrub seems to have been forgotten about - I can't get a zpool
> scrub to complete.
>
> Any suggestions very much appreciated...
>
> --- snip ---
>
> $ zpool status zpool1
>   pool: zpool1
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME                                          STATE     READ WRITE CKSUM
>         zpool1                                        ONLINE       0     0     0
>           c7t600C0FFxxxx0000000000xxxxB44BCE6BB00d0s2 ONLINE       0     0     0
>           c7t600C0FF0000000000xxxxB44BCE6BB01d0s2     ONLINE       0     0     0
>           c7t600C0FF0000000000xxxxB44BCE6BB02d0s0     ONLINE       0     0     0
>           c7t600C0FF0000000000xxxxB0BD10ACD00d0s3     ONLINE       0     0     0
>           c7t600C0FF0000000000xxxxB03D27D7100d0s0     ONLINE       0     0     0
>
> errors: No known data errors
>
> $ uname -a
> SunOS servername 5.10 Generic_120011-14 sun4u sparc SUNW,Sun-Fire-280R
>
> ---- snip ----
>
> Eddie

Each time the system crashes, it crashes with the same error message. This suggests to me that it is zpool corruption, rather than faulty RAM, that is to blame. So - is this particular zpool a lost cause? :\

A number of folks have pointed out that this bug may have been fixed in a very recent build (nv-77?) of OpenSolaris. As a last-ditch approach, I'm thinking that I could put the current system disks (sol10u4) aside, do a quick install of the latest OpenSolaris, import the zpool, run a zpool scrub, export the zpool, shut down, swap the sol10u4 disks back in, reboot, and import again (rough command sequence in the P.S. below). Sigh.

Does this approach sound plausible?

Eddie
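P.S. For the record, here is roughly the command sequence I have in mind once the box is booted from the fresh OpenSolaris install. This is only an untested sketch: zpool1 is the pool from the status output above, -f is there because the pool was last in use under the sol10u4 install, and the polling loop is just one way to wait out the scrub. One thing I plan to be careful about is NOT running zpool upgrade while on the newer bits, since sol10u4 would then refuse to import the upgraded pool.

--- snip ---

$ zpool import -f zpool1     # -f: pool was last accessed by another system image
$ zpool scrub zpool1
$ while zpool status zpool1 | grep 'scrub in progress' >/dev/null; do sleep 300; done
$ zpool status -v zpool1     # check the scrub result and any reported errors
$ zpool export zpool1        # detach cleanly before swapping the disks back

... power down, swap the sol10u4 disks back in, boot sol10u4 ...

$ zpool import zpool1

--- snip ---

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss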