Hi,

We were hit by this bug as well on Solaris 11 (2+ months ago). Our only options were to import the pool read-only and transfer the data off to another system, or to restore from backups.
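(For anyone else who hits this: the read-only import is just the standard import option, e.g.

    zpool import -o readonly=on tank

where "tank" stands in for your pool name.)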

Oracle told us that the bug is caused by a race condition between read and re-write operations on the same block. There is a small window in which the in-memory copy (in the ARC) of an individual data block can become corrupt: the block is re-written while a read request for the same block is still in flight, and the write pass is greater than 1 (i.e., the block is being re-written within the same transaction group).

Assuming the re-write completes first, the read then overwrites the in-memory copy with the older/stale on-disk data (corruption). If the read completes before the re-write, no corruption is seen. A very specific set of circumstances is needed to reproduce the issue. Metaslabs are the most commonly affected objects because they are re-written within the same birth time more often than anything else.
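To make the losing interleaving concrete, here is a toy C program based on Oracle's description. This is my own illustration, NOT actual ZFS code, and every name in it is made up; it just forces the bad ordering, with the re-write landing in memory first and the completing read then clobbering it with the stale on-disk copy:

    /* Toy simulation of the race described above; not ZFS code.
     * "disk" holds the older on-disk copy of a block, "arc" the
     * in-memory copy. The losing interleaving is forced on purpose. */
    #include <stdio.h>
    #include <string.h>
    #include <pthread.h>

    static char disk[16] = "stale";   /* older data still on disk      */
    static char arc[16]  = "stale";   /* cached copy of the same block */

    static void *rewrite_block(void *arg)
    {
        strcpy(arc, "fresh");         /* re-write updates the ARC copy */
        return NULL;                  /* (its disk write still pending) */
    }

    static void *read_completes(void *arg)
    {
        strcpy(arc, disk);            /* in-flight read finishes and   */
        return NULL;                  /* clobbers "fresh" with "stale" */
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, rewrite_block, NULL);
        pthread_join(t, NULL);        /* re-write completes first...   */
        pthread_create(&t, NULL, read_completes, NULL);
        pthread_join(t, NULL);        /* ...then the stale read lands  */
        printf("arc now holds \"%s\" -- corruption\n", arc);
        return 0;
    }

Compile with cc -lpthread if you want to watch it go wrong.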

Solaris 11.1 has a new feature (ZIO Join) that lets multiple read requests for the same data block share a single IO instead of each issuing its own. The bug still exists in S11.1, but the new code shrinks the window of opportunity to almost zero. The complete fix has already been implemented in Solaris 12 and is currently being tested in Solaris 11.2 and S10u11. From there it will be put into an SRU for S11.1 (and, I assume, S11.0 as well).
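Oracle hasn't shared the ZIO Join code itself, so take this with a grain of salt, but as described it sounds like ordinary read coalescing: a second reader of a block attaches to the IO already in flight instead of issuing its own. A rough C sketch of the idea (every name here is invented):

    /* Sketch of read coalescing, my guess at the idea behind ZIO
     * Join; not Oracle's code. A second reader of a block joins the
     * in-flight IO, so only one disk IO is issued per block. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct pending_read {
        uint64_t blkid;                  /* block being read          */
        int waiters;                     /* readers sharing this IO   */
        struct pending_read *next;
    };

    static struct pending_read *inflight;    /* in-flight read list  */

    /* Issue a read, or join an existing one for the same block. */
    static void issue_read(uint64_t blkid)
    {
        for (struct pending_read *p = inflight; p != NULL; p = p->next) {
            if (p->blkid == blkid) {
                p->waiters++;            /* piggyback; no second IO  */
                printf("blk %llu: joined in-flight IO (%d waiters)\n",
                    (unsigned long long)blkid, p->waiters);
                return;
            }
        }
        struct pending_read *p = malloc(sizeof (*p));
        p->blkid = blkid;
        p->waiters = 1;
        p->next = inflight;
        inflight = p;
        printf("blk %llu: issued new disk IO\n",
            (unsigned long long)blkid);
    }

    int main(void)
    {
        issue_read(7);   /* starts the disk IO          */
        issue_read(7);   /* same block: joins, one IO   */
        issue_read(9);   /* different block: new IO     */
        return 0;
    }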

I followed up with Oracle today and was told that their investigation uncovered that a re-write may inherit a previous copy of a metadata block cached in the L2ARC. As soon as the rewritten block is evicted from the ARC, the next read fetches the stale inherited copy from the L2ARC. So not using L2ARC/cache devices sounds like a good idea to me!
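If you already have cache devices attached, they can be removed from a live pool, e.g.

    zpool remove tank c2t0d0

(pool and device names are placeholders, obviously).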

Hopefully this nasty bug is fixed soon :(

Thanks,

Josh Simon

On 12/12/2012 1:21 PM, Jamie Krier wrote:
I've hit this bug on four of my Solaris 11 servers. I'm looking for anyone
else who has seen it, as well as comments/speculation on the cause.

This bug is pretty bad.  If you are lucky you can import the pool
read-only and migrate it elsewhere.

I've also tried setting zfs:zfs_recover=1,aok=1 with varying results.
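For reference, I set those in /etc/system and rebooted:

    set zfs:zfs_recover=1
    set aok=1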


http://docs.oracle.com/cd/E26502_01/html/E28978/gmkgj.html#scrolltoc


Hardware platform:

Supermicro X8DAH

144GB RAM

Supermicro SAS2 JBODs

LSI 9200-8e controllers (Phase 13 firmware)

ZeusRAM log

ZeusIOPS SAS L2ARC

Seagate ST33000650SS SAS drives


All four servers are running the same hardware, so at first I suspected
a problem there. I opened a ticket with Oracle, which ended with this email:

---------------------------------------------------------------------------------------------------------------------------------

We strongly expect that this is a software issue because this problem does not happen on Solaris 10. On Solaris 11, it happens with both the SPARC and the X64 versions of Solaris.

We have quite a few customers who have seen this issue and we are in the process of working on a fix. Because we do not know the source of the problem yet, I cannot speculate on the time to fix. This particular portion of Solaris 11 (the virtual memory sub-system) is quite different than in Solaris 10. We re-wrote the memory management in order to get ready for systems with much more memory than Solaris 10 was designed to handle.

Because this is the memory management system, there is not expected to be any work-around.

Depending on your company's requirements, one possibility is to use Solaris 10 until this issue is resolved.

I apologize for any inconvenience that this bug may cause. We are working on it as a Sev 1 Priority 1 in sustaining engineering.

---------------------------------------------------------------------------------------------------------------------------------


I am thinking about switching to an Illumos distro, but wondering if
this problem may be present there as well.


Thanks


- Jamie



_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
