On 25/11/2017 19:57, Warner Losh wrote:
> Let's walk through this. You see that it takes a long time to fail an I/O.
> Perfectly reasonable observation. There's two reasons for this. One is that
> the disks take a while to make an attempt to get the data. The second is
> that the system has a global policy that's biased towards 'recover the data'
> over 'fail fast'. These can be fixed by reducing the timeouts, or lowering
> the read-retry count for a given drive or globally as a policy decision made
> by the system administrator.
>
> It may be perfectly reasonable to ask the lower layers to 'fail fast' and
> have either a hard or a soft deadline on the I/O for a subset of I/O. A hard
> deadline would return ETIMEDOUT or something when it's passed and cancel the
> I/O. This gives better determinism in the system, but some systems can't
> cancel just 1 I/O (like SATA drives), so we have to flush the whole queue.
> If we get a lot of these, performance suffers. However, for some class of
> drives, you know that if it doesn't succeed in 1s after you submit it to the
> drive, it's unlikely to complete successfully and it's worth the performance
> hit on a drive that's already acting up.
>
> You could have a soft timeout, which says 'don't do any additional action
> after X time has elapsed and you get word about this I/O'. This is similar
> to the hard timeout, but just stops retrying after the deadline has passed.
> This scenario is better on the other users of the drive, assuming that the
> read-recovery operations aren't starving them. It's also easier to
> implement, but has worse worst-case performance characteristics.
>
> You aren't asking to limit retries. You're really asking the I/O subsystem
> to limit, where it can, the amount of time spent on an I/O so you can try
> another one. Your means of doing this is to tell it not to retry. That's the
> wrong means. It shouldn't be listed in the API that it's a 'NO RETRY'
> request. It should be a QoS request flag: fail fast.
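For concreteness, here is roughly how I picture such a QoS flag being used by
a GEOM consumer. This is only a sketch of the idea: BIO_QOS_FAILFAST is a
made-up name and value, not an existing flag, and the routines around it are
equally illustrative.

#include <sys/param.h>
#include <sys/bio.h>
#include <geom/geom.h>

/* Purely hypothetical: whatever QoS bit we would end up agreeing on. */
#define BIO_QOS_FAILFAST        0x8000

static void example_done(struct bio *bp);

/* Issue a read that the lower layers are allowed to give up on early. */
static void
example_read_failfast(struct g_consumer *cp, off_t offset, void *data,
    off_t length)
{
        struct bio *bp;

        bp = g_alloc_bio();
        bp->bio_cmd = BIO_READ;
        bp->bio_offset = offset;
        bp->bio_length = length;
        bp->bio_data = data;
        bp->bio_done = example_done;
        /* A policy hint ("don't spend extra time on this"), not "never retry". */
        bp->bio_flags |= BIO_QOS_FAILFAST;
        g_io_request(bp, cp);
}

static void
example_done(struct bio *bp)
{
        /*
         * The error still comes back in bio_error; the flag we set on the
         * way down is what tells the caller that this EIO may only mean
         * "gave up early", so it can immediately try another mirror leg
         * instead of treating the device as dead.
         */
        if (bp->bio_error != 0 && (bp->bio_flags & BIO_QOS_FAILFAST) != 0) {
                /* retry the request via a different provider here */
        }
        g_destroy_bio(bp);
}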
I completely agree. 'NO RETRY' was a bad name and now I see it with painful
clarity. Just to clarify, I agree not only on the name, but also on everything
else you said above.

> Part of why I'm being so difficult is that you don't understand this and
> are proposing a horrible API. It should have a different name.

I completely agree.

> The other reason is that I absolutely do not want to overload EIO. You must
> return a different error back up the stack. You've shown no interest in this
> in the past, which is also a needless argument. We've given good reasons,
> and you've poopooed them with bad arguments.

I still honestly don't understand this. I think that bio_error and bio_flags
together are sufficient to properly interpret a "fail-fast EIO" (as in the
sketch above). And I never intended for that error to be propagated by any
means other than bio_error.

> Also, this isn't the data I asked for. I know things can fail slowly. I was
> asking for how it would improve systems running like this. As in "I
> implemented it, and was able to fail over to this other drive faster" or
> something like that. Actual drive failure scenarios vary widely, and
> optimizing for this one failure is unwise. It may be the right optimization,
> but it may not. There's lots of tricky edges in this space.

Well, I implemented my quick hack (as you absolutely correctly characterized
it) in response to something that I observed happening in the past and that
hasn't happened to me since then. But, realistically, I do not expect myself
to be able to reproduce and test every tricky failure scenario.

--
Andriy Gapon