Trond Myklebust <[EMAIL PROTECTED]> wrote:

> No. If the write fails, then NFS will mark the mapping as invalid and
> attempt to call invalidate_inode_pages2() at the earliest possible
> moment.
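If I've followed that correctly, you mean something along these lines.
This is only a rough sketch of my understanding, not the real NFS code;
the myfs_* helpers and the invalid-data flag are made up:

#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/bitops.h>

#define MYFS_INO_INVALID_DATA	0	/* bit in myfs_inode::flags */

struct myfs_inode {
	unsigned long	flags;
	struct inode	vfs_inode;
};

static inline struct myfs_inode *MYFS_I(struct inode *inode)
{
	return container_of(inode, struct myfs_inode, vfs_inode);
}

/* A server write failed: note that the cached data is now suspect. */
static void myfs_mark_mapping_invalid(struct inode *inode)
{
	set_bit(MYFS_INO_INVALID_DATA, &MYFS_I(inode)->flags);
}

/* At the earliest safe moment (eg. the next open or read), throw the
 * cached pages away.  invalidate_inode_pages2() gives ->launder_page()
 * a chance at each dirty page before it is dropped. */
static int myfs_revalidate_mapping(struct inode *inode)
{
	if (test_and_clear_bit(MYFS_INO_INVALID_DATA, &MYFS_I(inode)->flags))
		return invalidate_inode_pages2(inode->i_mapping);
	return 0;
}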
Will it erase *all* unwritten writes?  Or is that what launder_page() is
for?

How do you deal with pages that were in the process of being written out
when that particular write was rejected?  Do you just summarily clear
PG_writeback and hope no-one else looks at the page until
invalidate_inode_pages2() gets around to excising it?  Or do you have a
better way?

> I'm adding in a patch to defer marking the page as uptodate until the
> write is successful in cases where NFS is writing a pristine page.

That sounds reasonable, though it doesn't help in the case I'm looking at.

Do you also munge i_size if the write fails?

> As for pages that are already marked as uptodate, well you already have
> a race: you have deferred the page write, and so other processes may
> already have read the rejected data before you tried to write it out.

Yeah, I know, and that's very difficult to deal with without some formal
transaction rollback mechanism.  I think that the best I can do is to
discard the dodgy data that I've got lurking in the pagecache, but I still
have to deal with writes made by other users to that file after the
rejected write.  There isn't a perfect way of dealing with it, given the
circumstances.

David
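P.S. Just so we're talking about the same thing, this is roughly how I read
your "pristine page" change.  It's a sketch of the idea only, not your
patch, and the myfs_* naming is made up:

#include <linux/pagemap.h>
#include <linux/page-flags.h>

/* Write-completion handler for a "pristine" page, ie. one whose entire
 * contents came from the write that is being pushed to the server. */
static void myfs_pristine_write_done(struct page *page, int error)
{
	if (error == 0) {
		/* The server accepted the data; only now is the cached
		 * copy safe for readers to trust. */
		SetPageUptodate(page);
	} else {
		/* Rejected: keep the page !uptodate so the bogus data is
		 * never handed back to readers, and let invalidation
		 * excise it later. */
		ClearPageUptodate(page);
	}
	end_page_writeback(page);
}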