On Fri, 2 Mar 2007, Nick Piggin wrote:

> > If there are billions of pages in the system and we are allocating
> > and deallocating, then pages need to be aged. If there are just a
> > few freeable pages, then we run into issues.
>
> Page writeout and vmscan don't work too badly. What are the issues?
Slowdowns up to livelocks on large memory configurations.

> So what problems do you commonly see now? Some of us here don't
> have 4TB of memory, so you actually have to tell us ;)

Just run a 32GB SMP system with sparsely freeable pages and lots of
allocs and frees and you will see it too. F.e. take Linus' tree, mlock
a large portion of the memory, and watch the fun start (a minimal
repro sketch is appended below). See also Rik's list of pathological
cases on this.

> How did you come up with that 2MB number?

Huge page size. It is the only basic choice on x86_64.

> Anyway, we have hugetlbfs for things like that.

Good to know that direct I/O works (a usage sketch is appended below).

> > I am not the first one.... See Rik's posts regarding the reasons
> > for his new page replacement algorithms.
>
> Different issue, isn't it? Rik wants to be smarter in figuring out
> which pages to throw away. More work per page == worse for you.

Rik is trying to solve the same issue in a different way: he is trying
to manage a gazillion entries better instead of reducing the number of
entries to be managed. That can only work in a limited way. Drastic
reductions in the number of entries to be managed help in multiple
ways: less management overhead, better I/O throughput, etc. (some
back-of-the-envelope numbers are appended below).
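For the curious, a minimal sketch of the kind of repro I mean. Not
the exact test case; GB_TO_LOCK is a made-up knob picked for a 32GB
box, and you need CAP_IPC_LOCK or a raised RLIMIT_MEMLOCK to pin
that much:

/* mlock-most.c: pin most of RAM so reclaim is left with only
 * sparsely scattered freeable pages.  Illustrative sketch only. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

#define GB_TO_LOCK 28UL		/* assumption: 28 of 32GB, adjust */

int main(void)
{
	size_t len = GB_TO_LOCK << 30;
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	if (mlock(p, len)) {		/* needs CAP_IPC_LOCK */
		perror("mlock");
		return 1;
	}
	memset(p, 0xaa, len);		/* dirty every locked page */
	pause();	/* hold it; now run your alloc/free load */
	return 0;
}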
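And since hugetlbfs came up, a sketch of how userspace gets at the
2MB pages through it today. The /mnt/huge mount point and the file
name are assumptions; you need a hugetlbfs mount (mount -t hugetlbfs
none /mnt/huge) and a populated pool (echo N >
/proc/sys/vm/nr_hugepages) first:

/* huge-map.c: map a single 2MB huge page via hugetlbfs.  Sketch. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

#define HPAGE_SIZE (2UL << 20)	/* the x86_64 huge page size */

int main(void)
{
	int fd = open("/mnt/huge/testfile", O_CREAT | O_RDWR, 0600);
	char *p;

	if (fd < 0) {
		perror("open");
		return 1;
	}
	p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
		 MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	p[0] = 1;	/* touch it: one fault covers the whole 2MB */
	munmap(p, HPAGE_SIZE);
	close(fd);
	unlink("/mnt/huge/testfile");
	return 0;
}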
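Back-of-the-envelope numbers for that 32GB box, just to quantify
what "drastic reductions" means:

	32GB / 4KB pages = 8,388,608 struct pages on the LRU lists
	32GB / 2MB pages =    16,384 struct pages on the LRU lists

That is a factor of 512 fewer entries for vmscan to age, scan and
write out.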