> before the traditional kernel load address, which is 0x10, so
> those pages are before, not in, the kernel?
I believe the memory below the kernel load address on x86 is returned to the
free memory pool at some point during boot, which would explain those
addresses.
Dave McCracken
during writeout.
You're assuming the system is static and won't allocate new pages behind your
back. We could be back to critically low memory before the write happens.
More broadly, we need to be proactive about getting dirty pages cleaned before
they consume the system. Deferring t
much time. You still need to do the page-to-virtual-address translation,
which kmap_atomic does for you.
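Something like the following is the usual shape of it; this is a rough generic
sketch rather than code from this thread, and on 2.6-era kernels both calls
also take a KM_USER0 slot argument:

#include <linux/highmem.h>
#include <linux/string.h>

/*
 * Copy data into a page that may live in highmem.  kmap_atomic()
 * provides the page-to-virtual translation; kunmap_atomic() must be
 * called before the code can sleep again.
 */
static void copy_into_page(struct page *page, const void *src, size_t len)
{
	void *vaddr = kmap_atomic(page);

	memcpy(vaddr, src, len);
	kunmap_atomic(vaddr);
}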
Dave McCracken
On Tuesday 10 July 2007, Hugh Dickins wrote:
> On Tue, 10 Jul 2007, Dave McCracken wrote:
> > Given that RLIMIT_DATA is pretty much meaningless in current kernels, I
> > would put forward the argument that this change is extremely unlikely to
> > break anything because no one
On Tuesday 10 July 2007, Hugh Dickins wrote:
> On Mon, 9 Jul 2007, Dave McCracken wrote:
> > On Monday 09 July 2007, Herbert van den Bergh wrote:
> > > With this patch, not only memory in the data segment of a process, but
> > > also private data mappings, both fi
On Tuesday 10 July 2007, Nick Piggin wrote:
> On Tue, Jul 10, 2007 at 09:29:45AM -0500, Dave McCracken wrote:
> > I find myself wondering what "sufficiently convincing noises" are. I
> > think we can all agree that in the current kernel order>0 allocations are
>
's
patches raise the success rate of order>0 to within a few percent of
order==0. All this means is that callers will need to decide how to handle the
infrequent failure. This should be true no matter what the order.
I strongly vote for merging these patches. Let's get them in mainline
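The fallback a caller needs is small. A hypothetical sketch, with the helper
name and flags chosen only for illustration:

#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Prefer an order-2 block, but be prepared for the (now infrequent)
 * high-order failure and fall back to a single page.
 */
static struct page *grab_buffer_pages(void)
{
	struct page *page;

	/* __GFP_NOWARN: an occasional failed order-2 attempt is expected */
	page = alloc_pages(GFP_KERNEL | __GFP_NOWARN, 2);
	if (page)
		return page;

	/* Fall back to order 0, which rarely fails under GFP_KERNEL */
	return alloc_pages(GFP_KERNEL, 0);
}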
f changes necessary to make these pages viable has yet been
accepted, i.e. antifrag, defrag, and variable page cache. While these changes
may yet all go in and work wonderfully, I applaud Nick's alternative solution
that does not include a dependency on them.
Dave McCracken
page for setrlimit(3p).
I believe this patch is a simple and obvious fix to a hole introduced when
libc malloc() began using mmap() instead of brk(). We took away the ability
to control how much data space processes could soak up. This patch returns
that control to the user.
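For illustration (not part of the patch itself), the hole looks like this from
userspace: with glibc satisfying large allocations through mmap(), a process
sails straight past its RLIMIT_DATA setting on unpatched kernels.

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
	/* Limit the data segment to 1 MB */
	struct rlimit rl = { .rlim_cur = 1 << 20, .rlim_max = 1 << 20 };

	if (setrlimit(RLIMIT_DATA, &rl) != 0) {
		perror("setrlimit");
		return 1;
	}

	/*
	 * Where RLIMIT_DATA only covers brk(), this 64 MB allocation
	 * still succeeds because glibc services it with mmap().  With
	 * the patch it is counted against the limit and fails.
	 */
	void *p = malloc(64 << 20);
	printf("malloc(64MB) %s\n", p ? "succeeded" : "failed");
	free(p);
	return 0;
}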
Dave McCracken
CACHE_SIZE. Too many places have gotten it wrong too many
times. Let's go ahead with them even if we never implement variable cache
page size.
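The classic trap, sketched from memory rather than taken from the patches
under discussion, is converting a file offset into a page cache index:

#include <linux/pagemap.h>

/*
 * Page cache indices must be computed with the PAGE_CACHE_* constants.
 * Using PAGE_SHIFT here only happens to work because the two are equal
 * today, which is exactly the latent bug this cleanup targets.
 */
static pgoff_t offset_to_index(loff_t offset)
{
	return offset >> PAGE_CACHE_SHIFT;	/* not PAGE_SHIFT */
}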
Dave McCracken
This patch enables the full functionality of truncate for hugetlbfs
files. Truncate was originally limited to reducing the file size
because page faults were not supported for hugetlbfs. Now that page
faults have been implemented, it is possible to fully support
truncate.
Signed-off-by: Dave
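From userspace the new capability looks roughly like this; the mount point and
the 2 MB huge page size below are assumptions for the sake of the example, and
huge pages are presumed to have been reserved beforehand:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE (2UL * 1024 * 1024)	/* assumed huge page size */

int main(void)
{
	int fd = open("/dev/hugepages/demo", O_CREAT | O_RDWR, 0600);
	if (fd < 0) { perror("open"); return 1; }

	/* Growing the file was not possible when truncate could only shrink */
	if (ftruncate(fd, 4 * HPAGE) != 0) { perror("ftruncate"); return 1; }

	char *p = mmap(NULL, 4 * HPAGE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) { perror("mmap"); return 1; }

	p[0] = 1;	/* the huge page backing this spot is instantiated on fault */

	munmap(p, 4 * HPAGE);
	close(fd);
	unlink("/dev/hugepages/demo");
	return 0;
}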
> Can't you just break
> out of the while loop on the first successful match and populate the pmd?
> I would think you will find them to be the same pte page. Or did I miss
> some thing?
Man, I spaced that whole search code. I was sure I'd tested to make sure
it was finding matches. I
> of weeks and I should have sent all the per-page-table-page locking
> in to -mm (to replace the pte xchging currently there): that should
> give what you need for locking pts independent of the mm.
I'll look things over in more detail. I thought I had the locking issues
settled, but you
potential benefit.
This version of the patch supports i386 and x86_64. I have additional
patches to support ppc64, but they are not quite ready for public
consumption.
The patch is against 2.6.13.
Dave McCracken
--- 2.6.13/./arch/i386/Kconfig 2005-08-28 18:41:01.0 -0500
+++ 2.6.13-shpt