On 7/31/07, Eric Sandeen <[EMAIL PROTECTED]> wrote:
> No, what I had did only that, so it was still a matter of probabilities...
How expensive would it be to allocate two pages, then use the MMU to mark the second page unwritable? Hardware-wise it should be possible (for constant 4k page sizes; I have not worked with variable-page-size MMUs), and since it's a per-context-switch constant operation, it would be a special case in the fault handler rather than adding another entry to the VM for every process. Using large hardware pages to cover the kernel mapping could be worked around by leaving the area where the current process stack resides mapped via 4k pages. Of course, I haven't touched a modern PC MMU in ages, so I could be missing something fundamentally difficult. (A small userspace sketch of the guard-page idea is at the end of this mail.)

The other issue is with the layered IO design: no matter what we configure the stack size to, it is still possible to create a set of translation layers that will cause it to crash regularly: XFS on dm_crypt on loop on XFS on dm_crypt on loop, ad infinitum.

That said, I'm missing something here - why is the stack growing? Filesystems should be issuing bios with callbacks, so they should be back off the stack, same with dm, loop, etc. Am I missing a step where they use a wrapper function that pretends to be synchronous?
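To be concrete about what I mean by "bios with callbacks": the sketch below is roughly how I picture a layer issuing IO, assuming the ~2.6.2x-era bi_end_io signature (it has changed between kernel versions), with demo_end_io and demo_read_page_sync being made-up names. submit_bio() queues the request and returns, so the submitter's frames unwind; the callback later runs from the block layer's completion path, and a "synchronous" wrapper just sleeps on a completion rather than keeping the lower layers on its own stack.

	/*
	 * Rough sketch only; end_io signature assumed from the 2.6.2x era.
	 */
	#include <linux/bio.h>
	#include <linux/blkdev.h>
	#include <linux/completion.h>
	#include <linux/fs.h>

	static int demo_end_io(struct bio *bio, unsigned int bytes_done, int err)
	{
		if (bio->bi_size)		/* partial completion, more to come */
			return 1;

		complete(bio->bi_private);	/* wake whoever is waiting */
		bio_put(bio);
		return 0;
	}

	/* Hypothetical helper: read one page and wait for it to finish. */
	static int demo_read_page_sync(struct block_device *bdev, sector_t sector,
				       struct page *page)
	{
		DECLARE_COMPLETION_ONSTACK(done);
		struct bio *bio = bio_alloc(GFP_NOIO, 1);

		bio->bi_bdev	= bdev;
		bio->bi_sector	= sector;
		bio->bi_end_io	= demo_end_io;
		bio->bi_private	= &done;
		bio_add_page(bio, page, PAGE_SIZE, 0);

		submit_bio(READ, bio);		/* queues and returns immediately */
		wait_for_completion(&done);	/* callback runs off our stack */
		return 0;
	}

If every layer works like that, the submitting path should unwind before the next layer does its work, which is why I don't see where the depth comes from.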
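And going back to the guard-page idea from the top of the mail, here is a minimal userspace illustration using mmap()/mprotect(): allocate one extra page next to the "stack" region and revoke access to it, so an overflow faults immediately instead of silently corrupting adjacent memory. This is only an analogy; for kernel stacks the same effect would have to come from the page tables / fault handler, not from mmap().

	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		long page = sysconf(_SC_PAGESIZE);
		size_t stack_bytes = 2 * page;		/* the usable "stack" */

		/* Reserve the stack plus one extra page to act as the guard. */
		char *base = mmap(NULL, stack_bytes + page,
				  PROT_READ | PROT_WRITE,
				  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (base == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		/* Make the lowest page inaccessible; the stack grows down into it. */
		if (mprotect(base, page, PROT_NONE)) {
			perror("mprotect");
			return 1;
		}

		char *stack_bottom = base + page;
		memset(stack_bottom, 0, stack_bytes);	/* fine: inside the stack */
		printf("writing into the guard page now...\n");
		stack_bottom[-1] = 0;			/* faults: SIGSEGV here */
		return 0;
	}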