> On Thu, 14 Jun 2007 14:37:33 -0700 (PDT) Christoph Lameter <[EMAIL PROTECTED]> wrote:
>
> On Thu, 14 Jun 2007, Andrew Morton wrote:
>
> > We want the 100% case.
>
> Yes that is what we intend to do. Universal support for larger blocksize.
> I.e. your desktop filesystem will use 64k page size and server platforms
> likely much larger.
With 64k pagesize the amount of memory required to hold a kernel tree (say)
will go from 270MB to 1400MB. This is not an optimisation.

Several 64k-pagesize people have already spent time looking at various
tail-packing schemes to get around this serious problem. And that's on
_server_ class machines. Large ones. I don't think laptop/desktop/small-server
machines would want to go anywhere near this.

> fsck times etc etc are becoming an issue for desktop
> systems

I don't see what fsck has to do with it. fsck is single-threaded (hence no
locking issues) and operates against the blockdev pagecache, and it does a
_lot_ of small reads (indirect blocks, especially). If the memory consumption
for each 4k read jumps to 64k, fsck is likely to slow down due to performing a
lot more IO and due to entering page reclaim much earlier.
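The memory blow-up being argued about comes from per-file rounding: every file occupies a whole number of pages in the page cache, so a tree full of small files wastes most of each large page. A minimal sketch of that arithmetic (the file-size mix below is hypothetical, chosen to resemble a source tree; the 270MB/1400MB figures in the mail are from a real kernel tree, not this model):

```python
def cache_footprint(file_sizes, page_size):
    """Bytes of page cache needed to hold the given files:
    each file is rounded up to a whole number of pages."""
    pages = sum((size + page_size - 1) // page_size for size in file_sizes)
    return pages * page_size

# Hypothetical mix: many small files, a few larger ones (illustrative only).
sizes = [1200] * 20000 + [9000] * 5000 + [60000] * 1000

footprint_4k = cache_footprint(sizes, 4096)    # ~205 MB
footprint_64k = cache_footprint(sizes, 65536)  # ~1.7 GB
```

With this mix the 64k footprint is roughly 8x the 4k one, because a 1200-byte file consumes a full 64k page. The same rounding explains the fsck point: each small indirect-block read pins an entire page, so a 16x page-size increase inflates fsck's working set and pushes it into page reclaim far sooner.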