>>>>> "th" == Tim Haley <tim.ha...@sun.com> writes:

    th> The second is marked as a duplicate of 6784395, fixed in
    th> snv_107, 20 weeks ago.

Yeah nice sleuthing. :/

I understood Bogdan's post as a trap: ``provide bug numbers.  Oh,
they're fixed?  Nothing to see here, then.  No bugs?  Nothing to see
here, then.''  But think about it: does this mean ZFS was not broken
before those bugs were filed?  It does not.  Now extrapolate: imagine
looking back on this day from the future.

In the next line of that post, right below where I give the bug
numbers, I provide context explaining why I still think there's a
problem.

Also, as I said elsewhere, there's a barrier, controlled by Sun, to
getting bugs accepted.  It's a useful barrier: the bug database drives
improvement better when it isn't cluttered.  But it also means that
sometimes the mailing list is the more useful place for information.

HTH.

I think a better question would be: what kinds of tests would be most
promising for turning some subclass of these lost pools reported on
the mailing list into an actionable bug?

My first bet would be to write tools that test for ignored sync-cache
commands leading to lost writes, and apply them to the case where an
iSCSI target is rebooted but the initiator isn't.
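
To make that concrete, here's the sort of thing I have in mind, only a
rough sketch in the spirit of the usual write-barrier checkers: the
device path, block size, and record layout below are made up, and a
real tool would want O_DIRECT, concurrent writers, and so on.  Write
numbered, checksummed records to the raw device, fsync after each one,
and note the last sequence number the OS claims is stable; reboot the
target mid-run; then verify that every acknowledged record survived.
If an acked record is gone, a cache flush was dropped somewhere in the
stack.

#!/usr/bin/env python
# Sketch of a lost-write detector for a block device backed by an
# iSCSI target.  Device path, BLOCK size, and record layout are
# assumptions for illustration, not anything from a real tool.
import hashlib
import os
import struct
import sys

BLOCK = 512            # one record per 512-byte block (assumption)
MAGIC = b"SYNCTEST"

def record(seq):
    # Fixed-size record: magic + sequence number + truncated SHA-256.
    payload = MAGIC + struct.pack(">Q", seq)
    digest = hashlib.sha256(payload).digest()[:16]
    return (payload + digest).ljust(BLOCK, b"\0")

def write_phase(dev, count):
    fd = os.open(dev, os.O_WRONLY)
    try:
        for seq in range(count):
            os.pwrite(fd, record(seq), seq * BLOCK)
            os.fsync(fd)                # should force it to stable storage
            print("acked %d" % seq)     # capture this on another host
    finally:
        os.close(fd)

def verify_phase(dev, last_acked):
    fd = os.open(dev, os.O_RDONLY)
    try:
        for seq in range(last_acked + 1):
            if os.pread(fd, BLOCK, seq * BLOCK) != record(seq):
                print("LOST WRITE: record %d was acked but is gone" % seq)
                return 1
    finally:
        os.close(fd)
    print("all %d acked records intact" % (last_acked + 1))
    return 0

if __name__ == "__main__":
    if sys.argv[1] == "write":
        write_phase(sys.argv[2], int(sys.argv[3]))
    else:
        sys.exit(verify_phase(sys.argv[2], int(sys.argv[3])))

Run the write phase against the device, reboot the target while it's
going, note the last "acked" line printed, then run the verify phase
with that number once the device comes back.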

I think in the process of writing the tool you'll bump into a defect
immediately, because you'll realize there is no equivalent of a 'hard'
iSCSI mount like there is in NFS.  And there cannot be a strict
equivalent of 'hard' mounts in iSCSI, because we want zpool redundancy
to preserve availability when an iSCSI target goes away.  I think the
whole model is wrong somehow.
