On Wed, 20 Mar 2013, Paul E. McKenney wrote:
> > > Another approach is to offload RCU callback processing to "rcuo" kthreads
> > > using the CONFIG_RCU_NOCB_CPU=y. The specific CPUs to offload may be
> > > selected via several methods:
Why are there multiple rcuo threads? Would a single thread t
On Mon, 4 Feb 2013, James Hogan wrote:
> I've hit boot problems in next-20130204 on Meta:
Meta is an arch that is not in the tree yet? How would I build for meta?
What are the values of
MAX_ORDER
PAGE_SHIFT
ARCH_DMA_MINALIGN
CONFIG_ZONE_DMA
?
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Mon, 4 Feb 2013, Stephen Warren wrote:
> Here, if defined(ARCH_DMA_MINALIGN), then KMALLOC_MIN_SIZE isn't
> relative-to/derived-from KMALLOC_SHIFT_LOW, so the two may become
> inconsistent.
Right. And kmalloc_index() will therefore return an index below
KMALLOC_SHIFT_LOW, for which no cache was created, so the allocation will
dereference a NULL pointer.
On Tue, 5 Feb 2013, Steven Rostedt wrote:
> Ping?
Obviously correct.
Acked-by: Christoph Lameter
Signed-off-by: Christoph Lameter
Index: linux/include/linux/slab.h
===================================================================
--- linux.orig/include/linux/slab.h 2013-02-05 10:30:53.917724146 -0600
+++ linux/include/linux/slab.h 2013-02-05 10:31:01.181836707 -0600
@@ -133,6 +133,19 @@
On Tue, 5 Feb 2013, James Hogan wrote:
> On 05/02/13 16:36, Christoph Lameter wrote:
> > OK I was able to reproduce it by setting ARCH_DMA_MINALIGN in slab.h. This
> > patch fixes it here:
> >
> >
> > Subject: slab: Handle ARCH_DMA_MINALIGN correctly
> >
On Tue, 5 Feb 2013, Stephen Warren wrote:
> > +/*
> > + * Some archs want to perform DMA into kmalloc caches and need a guaranteed
> > + * alignment larger than the alignment of a 64-bit integer.
> > + * Setting ARCH_KMALLOC_MINALIGN in arch headers allows that.
> > + */
> > +#if defined(ARCH_DMA_
On Thu, 7 Feb 2013, Ingo Molnar wrote:
> Agreed?
Yes and please also change the texts in Kconfig to accurately describe
what happens to the timer tick.
On Thu, 7 Feb 2013, Frederic Weisbecker wrote:
> Not with hrtick.
hrtick? Did we not already try that a couple of years back and it turned
out that the overhead of constantly reprogramming a timer via the PCI bus
was causing too much of a performance regression?
On Fri, 8 Feb 2013, Steven Rostedt wrote:
> On Fri, 2013-02-08 at 16:53 +0100, Frederic Weisbecker wrote:
> > 2013/2/7 Christoph Lameter :
> > > On Thu, 7 Feb 2013, Frederic Weisbecker wrote:
> > >
> > >> Not with hrtick.
> > >
> > > hrtic
On Fri, 8 Feb 2013, Clark Williams wrote:
> I was a little apprehensive when you started talking about multiple
> tasks in Adaptive NOHZ mode on a core but the more I started thinking
> about it, I realized that we might end up in a cooperative multitasking
> mode with no tick at all going. Multip
On Tue, 2 Apr 2013, Pekka Enberg wrote:
> On Tue, Mar 19, 2013 at 7:10 AM, Joonsoo Kim wrote:
> > Could you pick up 1/3, 3/3?
> > These are already acked by Christoph.
> > 2/3 is same effect as Glauber's "slub: correctly bootstrap boot caches",
> > so should skip it.
>
> Applied, thanks!
Could y
On Tue, 2 Apr 2013, Joonsoo Kim wrote:
> We need one more fix for correctness.
> When available is assigned by put_cpu_partial, it doesn't count cpu slab's
> objects.
> Please reference my old patch.
>
> https://lkml.org/lkml/2013/1/21/64
Could you update your patch and submit it again?
On Tue, 2 Apr 2013, Hugh Dickins wrote:
> I am strongly in favour of removing that limitation from
> __isolate_lru_page() (and the thread you pointed - thank you - shows Mel
> and Christoph were both in favour too); and note that there is no such
> restriction in the confusingly similar but differ
On Thu, 4 Apr 2013, Joonsoo Kim wrote:
> Pekka already applied it.
> Do we need update?
Well I thought the passing of the count via lru.next would be something
worthwhile to pick up.
It seems that nohz still has no effect.
3.9-rc5 + patches. Affinity of init set to 0,1 so no
tasks are running on 9. The "latencytest" used here is part of my
lldiag-0.15 toolkit.
First test without any special kernel parameters. nohz off right?
$ nice -5 taskset -c 9 latencytest
CPUs: Freq=2.9
On Thu, 4 Apr 2013, Gilad Ben-Yossef wrote:
> Here is the last version I posted over a year ago. You were CCed and
> provided very useful feedback:
>
> http://lkml.indiana.edu/hypermail/linux/kernel/1205.0/01291.html
Ah, yes, I remember now.
> Based on your feedback I re-spun them but never gotte
On Fri, 5 Apr 2013, Minchan Kim wrote:
> > >> How about add a knob?
> > >
> > >Maybe, volunteering?
> >
> > Hi Minchan,
> >
> > I can be the volunteer, what I care is if add a knob make sense?
>
> Frankly speaking, I'd like to avoid a new knob but there might be
> some workloads suffered from mlocke
On Fri, 5 Apr 2013, Joonsoo Kim wrote:
> Here goes a patch implementing Christoph's idea.
> Instead of updating my previous patch, I re-write this patch on top of
> your slab/next tree.
Acked-by: Christoph Lameter
On Mon, 28 Jan 2013, Kent Overstreet wrote:
> > It goes down to how we allocate page tables. percpu depends on
> > vmalloc space allocation which in turn depends on page table
> > allocation which unfortunately assumes GFP_KERNEL and is spread all
> > across different architectures. Adding @gfp
On Mon, 28 Jan 2013, Frederic Weisbecker wrote:
> My last concern is the dependency on CONFIG_64BIT. We rely on cputime_t
> being u64 for reasonable nanosec granularity implementation. And therefore
> we need a single instruction fetch to read kernel cpustat for atomicity
> requirement against con
On Mon, 28 Jan 2013, Frederic Weisbecker wrote:
> 2013/1/28 Christoph Lameter :
> > On Mon, 28 Jan 2013, Frederic Weisbecker wrote:
> >
> >> My last concern is the dependency on CONFIG_64BIT. We rely on cputime_t
> >> being u64 for reasonable nanosec granular
On Sat, 23 Feb 2013, JoonSoo Kim wrote:
> With flushing, deactivate_slab() occur and it has some overhead to
> deactivate objects.
> If my patch properly fix this situation, it is better to use mine
> which has no overhead.
Well this occurs during boot and it's not that performance critical.
On Mon, 25 Feb 2013, Rik van Riel wrote:
> On 02/25/2013 12:18 PM, Aaron Tomlin wrote:
>
> > mm: slab: Verify the nodeid passed to cache_alloc_node
> >
> > If the nodeid is > num_online_nodes() this can cause an
> > Oops and a panic(). The purpose of this patch is to assert
> > if this conditi
On Wed, 27 Feb 2013, Glauber Costa wrote:
> You can apply this one as-is with Christoph's ACK.
Right.
The problem is that the subsystem attempted to call kfree with a pointer
that was not obtained via a slab allocation.
On Sat, 16 Feb 2013, Denys Fedoryshchenko wrote:
> Hi
>
> Worked for a while on 3.8.0-rc7, generally it is fine, then suddenly laptop
> stopped responding to keyboard and mouse.
>
Maybe the result of free pointer corruption due to writing to an object
after free. Please run again with slub_debug specified on the commandline
to get detailed reports on how this came about.
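For reference, a typical way to do that (a sketch; which debug options to pick depends on the suspected corruption) is to append slub_debug to the kernel command line:

```shell
# Full debugging on all slab caches:
#   F = sanity checks, Z = red zoning, P = poisoning, U = user tracking
slub_debug=FZPU

# Or restrict it to one suspect cache to keep the overhead down:
slub_debug=FZPU,kmalloc-64
```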
On Sun, 17 Feb 2013, Sasha Levin wrote:
> Hi all,
>
> I was fuzzing with trinity inside a KVM tools gue
On Fri, 22 Feb 2013, Glauber Costa wrote:
> Although not verified in practice, I also point out that it is not safe to
> scan
> the full list only when debugging is on in this case. As unlikely as it is, it
> is theoretically possible for the pages to be full. If they are, they will
> become unre
On Fri, 22 Feb 2013, Glauber Costa wrote:
> As I've mentioned in the description, the real bug is from partial slabs
> being temporarily in the cpu_slab during a recent allocation and
> therefore unreachable through the partial list.
The bootstrap code does not use cpu slabs but goes directly to
On Fri, 22 Feb 2013, Glauber Costa wrote:
> At this point, we are already slab_state == PARTIAL, while
> init_kmem_cache_nodes will only differentiate against slab_state == DOWN.
kmem_cache_node creation runs before PARTIAL and kmem_cache runs
after. So there would be 2 kmem_cache_node structures
On Fri, 22 Feb 2013, Glauber Costa wrote:
> On 02/22/2013 08:10 PM, Christoph Lameter wrote:
> > kmem_cache_node creation runs before PARTIAL and kmem_cache runs
> > after. So there would be 2 kmem_cache_node structures allocated. Ok so
> > that would use cpu slabs and theref
On Fri, 22 Feb 2013, Glauber Costa wrote:
> After we create a boot cache, we may allocate from it until it is bootstrapped.
> This will move the page from the partial list to the cpu slab list. If this
> happens, the loop:
Acked-by: Christoph Lameter
An earlier fix to this is available here:
https://patchwork.kernel.org/patch/1975301/
and
https://lkml.org/lkml/2013/1/15/55
Argh. This one was the final version:
https://patchwork.kernel.org/patch/2009521/
On Fri, 22 Feb 2013, Glauber Costa wrote:
> On 02/22/2013 09:01 PM, Christoph Lameter wrote:
> > Argh. This one was the final version:
> >
> > https://patchwork.kernel.org/patch/2009521/
> >
>
> It seems it would work. It is all the same to me.
> Which one
945cf2b6199b ("mm/sl[aou]b: Extract a common function
> for kmem_cache_destroy"). All uses of slab_error() are now guarded by DEBUG.
Subject: Slab: Only define slab_error for DEBUG
There is no use case left for slab_error() in builds without DEBUG.
Signed-off-by: Christoph Lameter
On Wed, 30 Jan 2008, Jack Steiner wrote:
> > Seems that we cannot rely on the invalidate_ranges for correctness at all?
> > We need to have invalidate_page() always. invalidate_range() is only an
> > optimization.
> >
>
> I don't understand your point "an optimization". How would invalidate_ran
On Thu, 31 Jan 2008, Andrea Arcangeli wrote:
> > I think Andrea's original concept of the lock in the mmu_notifier_head
> > structure was the best. I agree with him that it should be a spinlock
> > instead of the rw_lock.
>
> BTW, I don't see the scalability concern with huge number of tasks:
>
On Thu, 31 Jan 2008, Andrea Arcangeli wrote:
> > - void (*invalidate_range)(struct mmu_notifier *mn,
> > + void (*invalidate_range_begin)(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > -unsigned long start, unsigned long end,
> >
On Thu, 31 Jan 2008, Andrea Arcangeli wrote:
> > H.. exit_mmap is only called when the last reference is removed
> > against the mm right? So no tasks are running anymore. No pages are left.
> > Do we need to serialize at all for mmu_notifier_release?
>
> KVM sure doesn't need any locking t
On Thu, 31 Jan 2008, Andrea Arcangeli wrote:
> On Wed, Jan 30, 2008 at 04:01:31PM -0800, Christoph Lameter wrote:
> > How we offload that? Before the scan of the rmaps we do not have the
> > mmstruct. So we'd need another notifier_rmap_callback.
>
> My assumption is t
Patch to
1. Remove sync on notifier_release. Must be called when only a
   single process remains.
2. Add invalidate_range_start/end. This should allow safe removal
of ranges of external ptes without having to resort to a callback
for every individual page.
This must be able to nest so t
On Wed, 30 Jan 2008, Robin Holt wrote:
> > Well the GRU uses follow_page() instead of get_user_pages. Performance is
> > a major issue for the GRU.
>
> Worse, the GRU takes its TLB faults from within an interrupt so we
> use follow_page to prevent going to sleep. That said, I think we
> could
On Thu, 31 Jan 2008, Andrea Arcangeli wrote:
> On Wed, Jan 30, 2008 at 06:08:14PM -0800, Christoph Lameter wrote:
> > hlist_for_each_entry_safe_rcu(mn, n, t,
>
>
> > &mm-
One possible way that XPmem could deal with a call of
invalidate_range_start with the lock flag set:
Scan through the rmaps you have for ptes. If you find one then elevate the
refcount of the corresponding page and mark in the maps that you have done
so. Also make them readonly. The increased r
On Wed, 30 Jan 2008, Harvey Harrison wrote:
> Signed-off-by: Harvey Harrison <[EMAIL PROTECTED]>
> ---
> mm/slub.c | 15 ++-
> 1 files changed, 6 insertions(+), 9 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 5cc4b7d..f9a20bf 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
x86 supports booting from a node without RAM?
sufficient.
If we do not care about page referenced status then callback #3 can also
be omitted.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
Signed-off-by: Robin Holt <[EMAIL PROTECTED]>
---
mm/rmap.c | 22 +++---
1 file changed, 19 insertions(+), 3 deletio
PROTECTED]>
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
mm/filemap_xip.c |4
mm/fremap.c |3 +++
mm/hugetlb.c |3 +++
mm/memory.c | 15 +--
mm/mmap.c|2 ++
5 files changed, 25 insertions(+), 2 deletions(-)
Index: lin
I hope this is finally a release that covers all the requirements. Locking
description is at the top of the core patch.
This is a patchset implementing MMU notifier callbacks based on Andrea's
earlier work. These are needed if Linux pages are referenced from something
else than tracked by the rmap
of new references.
invalidate_range_end() reenables the establishment of references.
atomic indicates that the function is called in an atomic context.
We can sleep if atomic == 0.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
Signed-off-by: Andrea Arcangeli <[EMAIL
On Thu, 31 Jan 2008, Andrea Arcangeli wrote:
> On Wed, Jan 30, 2008 at 08:57:52PM -0800, Christoph Lameter wrote:
> > @@ -211,7 +212,9 @@ asmlinkage long sys_remap_file_pages(uns
> > spin_unlock(&mapping->i_mmap_lock);
> > }
> >
> > +
On Thu, 31 Jan 2008, Andrea Arcangeli wrote:
> My suggestion is to add the invalidate_range_start/end incrementally
> with this, and to keep all the xpmem mmu notifiers in a separate
> incremental patch (those are going to require many more changes to
> perfect). They've very different things. GRU
fork
On fork we change ptes in cow mappings to readonly. This means we must
invalidate the ptes so that they are reestablished later with proper
permission.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
mm/memory.c |6 ++
1 file changed, 6 insertions(+)
Index: linux-
On Thu, 31 Jan 2008, WANG Cong wrote:
> index 6ce9f3a..4ebbe15 100644
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -16,6 +16,7 @@
> #include
> #include
> #include
> +#include
Please also remove the #include . It should have been part of a
patch reversal.
get rid
of the invalidate_all() callback.
During the final teardown no mmu_notifier calls are registered anymore which
will speed up exit processing.
Is this okay for KVM too?
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
include/linux/mmu_notifier.h |4
mm/
!CONFIG_MMU_NOTIFIER)
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
include/linux/mm_types.h |2 ++
1 file changed, 2 insertions(+)
Index: linux-2.6/include/linux/mm_types.h
===================================================================
--- linux-2.6.orig/include/linux/mm_t
On Thu, 31 Jan 2008, Christoph Lameter wrote:
> > pagefault against the main linux page fault, given we already have all
> > needed serialization out of the PT lock. XPMEM is forced to do that
>
> pt lock cannot serialize with invalidate_range since it is split. A range
> r
On Fri, 1 Feb 2008, Andrea Arcangeli wrote:
> I appreciate the review! I hope my entirely bug-free and
> straightforward #v5 will strongly increase the probability of getting
> this in sooner than later. If something else it shows the approach I
> prefer to cover GRU/KVM 100%, leaving the overkill
On Fri, 1 Feb 2008, Andrea Arcangeli wrote:
> GRU. Thanks to the PT lock this remains a totally obviously safe
> design and it requires zero additional locking anywhere (nor linux VM,
> nor in the mmu notifier methods, nor in the KVM/GRU page fault).
Na. I would not be so sure about having caught
On Fri, 1 Feb 2008, Andrea Arcangeli wrote:
> Good catch! This was missing also in my #v5 (KVM doesn't need that
> because the only possible cows on sptes can be generated by ksm, but
> it would have been a problem for GRU). The more I think about it, the
How do you think the GRU should know when
On Fri, 1 Feb 2008, Andrea Arcangeli wrote:
> On Thu, Jan 31, 2008 at 02:21:58PM -0800, Christoph Lameter wrote:
> > Is this okay for KVM too?
>
> ->release isn't implemented at all in KVM, only the list_del generates
> complications.
Why would the list_del generate pr
subject to
the mmap_sem locking.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
mm/mremap.c |4
1 file changed, 4 insertions(+)
Index: linux-2.6/mm/mremap.c
===================================================================
--- linux-2.6.orig/mm/mremap.c 2008-01-25
On Thu, 31 Jan 2008, Robin Holt wrote:
> Jack has repeatedly pointed out needing an unregister outside the
> mmap_sem. I still don't see the benefit to not having the lock in the mm.
I never understood why this would be needed. ->release removes the
mmu_notifier right now.
On Thu, 31 Jan 2008, Jack Steiner wrote:
> Christoph, is it time to post a new series of patches? I've got
> as many fixup patches as I have patches in the original posting.
Maybe wait another day? This is getting a bit too frequent and so far we
have only minor changes.
On Thu, 31 Jan 2008, Robin Holt wrote:
> On Thu, Jan 31, 2008 at 05:57:25PM -0800, Christoph Lameter wrote:
> > Move page tables also needs to invalidate the external references
> > and hold new references off while moving page table entries.
>
> I must admit to not
On Thu, 31 Jan 2008, Robin Holt wrote:
> > Mutex locking? Could you be more specific?
>
> I think he is talking about the external locking that xpmem will need
> to do to ensure we are not able to refault pages inside of regions that
> are undergoing recall/page table clearing. At least that has
On Thu, 31 Jan 2008, Robin Holt wrote:
> Both xpmem and GRU have means of removing their context separate from
> process termination. XPMEM's is by closing the fd, I believe GRU is
> the same. In the case of XPMEM, we are able to acquire the mmap_sem.
> For GRU, I don't think it is possible, but
On Thu, 31 Jan 2008, Jack Steiner wrote:
> I currently unlink the mmu_notifier when the last GRU mapping is closed. For
> example, if a user does a:
>
> gru_create_context();
> ...
> gru_destroy_context();
>
> the mmu_notifier is unlinked and all task tables allocated
> b
On Thu, 31 Jan 2008, Robin Holt wrote:
> > + void (*invalidate_range_end)(struct mmu_notifier *mn,
> > +struct mm_struct *mm, int atomic);
>
> I think we need to pass in the same start-end here as well. Without it,
> the first invalidate_range would have to block fa
On Thu, 31 Jan 2008, Robin Holt wrote:
> > Index: linux-2.6/mm/memory.c
> ...
> > @@ -1668,6 +1678,7 @@ gotten:
> > page_cache_release(old_page);
> > unlock:
> > pte_unmap_unlock(page_table, ptl);
> > + mmu_notifier(invalidate_range_end, mm, 0);
>
> I think we can get an _end c
This is a patchset implementing MMU notifier callbacks based on Andrea's
earlier work. These are needed if Linux pages are referenced from something
else than tracked by the rmaps of the kernel (an external MMU).
The known immediate users are
KVM
- Establishes a refcount to the page via get_user_
ge_xx indicates that the function is called in an atomic context.
We can sleep if atomic == 0.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
Signed-off-by: Andrea Arcangeli <[EMAIL PROTECTED]>
---
include/linux/mm_types.h |8 +
include/linux/mmu_
callback
may be omitted.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
Signed-off-by: Robin Holt <[EMAIL PROTECTED]>
---
mm/rmap.c | 13 ++---
1 file changed, 10 insertions(+), 3 deletions(-)
Index: linux-2
PROTECTED]>
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
mm/filemap_xip.c |5 +
mm/fremap.c |3 +++
mm/hugetlb.c |3 +++
mm/memory.c | 24 ++--
mm/mmap.c|2 ++
mm/mremap.c |7 ++-
6 files changed, 4
bit!
A notifier that uses the reverse maps callbacks does not need to provide
the invalidate_page() methods that are called when locks are held.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
include/linux/mmu_notifier.h | 70 +--
include
On Fri, 1 Feb 2008, Robin Holt wrote:
> Maybe I haven't looked closely enough, but let's start with some common
> assumptions. Looking at do_wp_page from 2.6.24 (I believe that is what
> my work area is based upon). On line 1559, the function begins being
> declared.
Aah I looked at the wrong f
On Fri, 1 Feb 2008, Robin Holt wrote:
> OK. Now that release has been moved, I think I agree with you that the
> down_write(mmap_sem) can be used as our lock again and still work for
> Jack. I would like a ruling from Jack as well.
Talked to Jack last night and he said its okay.
Argh. Did not see this soon enough. Maybe this one is better since it
avoids the additional unlocks?
On Fri, 1 Feb 2008, Robin Holt wrote:
> do_wp_page can reach the _end callout without passing the _begin
> callout. This prevents making the _end unless the _begin has also
> been made.
>
> Inde
On Fri, 1 Feb 2008, Robin Holt wrote:
> Currently, it is calling mmu_notifier _begin and _end under the
> i_mmap_lock. I _THINK_ the following will make it so we could support
> __xip_unmap (although I don't recall ever seeing that done on ia64 and
> don't even know what the circumstances are for
On Fri, 1 Feb 2008, Andrea Arcangeli wrote:
> Note that my #v5 doesn't require to increase the page count all the
> time, so GRU will work fine with #v5.
But that comes with the cost of firing invalidate_page for every page
being evicted. In order to make your single invalidate_range work withou
On Fri, 1 Feb 2008, Robin Holt wrote:
> We are getting this callout when we transition the pte from a read-only
> to read-write. Jack and I can not see a reason we would need that
> callout. It is causing problems for xpmem in that a write fault goes
> to get_user_pages which gets back to do_wp_
On Fri, 1 Feb 2008, Robin Holt wrote:
> On Fri, Feb 01, 2008 at 03:19:32PM -0800, Christoph Lameter wrote:
> > On Fri, 1 Feb 2008, Robin Holt wrote:
> >
> > > We are getting this callout when we transition the pte from a read-only
> > > to read-write. Jack an
NO! Wrong fix. Was dropped from mainline.
On Fri, 1 Feb 2008, Justin M. Forbes wrote:
>
> On Fri, 2008-02-01 at 16:39 -0800, Christoph Lameter wrote:
> > NO! Wrong fix. Was dropped from mainline.
>
> What is the right fix for the OOM issues with 2.6.22? Perhaps
> http://marc.info/?l=linux-mm&m=1199736538034
On Sun, 3 Feb 2008, Andrea Arcangeli wrote:
> On Thu, Jan 31, 2008 at 07:58:40PM -0800, Christoph Lameter wrote:
> > Ok. Andrea wanted the same because then he can avoid the begin callouts.
>
> Exactly. I hope the page-pin will avoid me having to serialize the KVM
> page fault
On Sun, 3 Feb 2008, Andrea Arcangeli wrote:
> > Right but that pin requires taking a refcount which we cannot do.
>
> GRU can use my patch without the pin. XPMEM obviously can't use my
> patch as my invalidate_page[s] are under the PT lock (a feature to fit
> GRU/KVM in the simplest way), this is
On Sat, 2 Feb 2008, Andi Kleen wrote:
> To be honest I've never tried seriously to make 32bit NUMA policy
> (with highmem) work well; just kept it at a "should not break"
> level. That is because with highmem the kernel's choices at
> placing memory are seriously limited anyways so I doubt 32bit
Updates for slub are available in the git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/christoph/vm.git slub-linus
Christoph Lameter (5):
SLUB: Fix sysfs refcounting
Move count_partial before kmem_cache_shrink
SLUB: rename defrag to remote_node_defrag_ratio
Hope to have the slub-mm repository set up tonight which will simplify
things for the future. Hope you still remember
On Tue, 5 Feb 2008, Nick Piggin wrote:
> I'm sure it could have an effect. But why is the common case in SLUB
> for the cacheline to be bouncing? What's the benchmark? What does SLAB
> do in that benchmark, is it faster than SLUB there? What does the
> non-atomic bit unlock do to Willy's database
On Tue, 5 Feb 2008, Nick Piggin wrote:
> > erk, sorry, I misremembered. I was about to merge all the patches we
> > weren't going to merge. oops.
>
> While you're there, can you drop the patch(es?) I commented on
> and didn't get an answer to. Like the ones that open code their
> own locking p
On Tue, 5 Feb 2008, Nick Piggin wrote:
> Ok. But the approach is just not so good. If you _really_ need something
> like that and it is a win over the regular non-atomic unlock, then you
> just have to implement it as a generic locking / atomic operation and
> allow all architectures to implement
On Tue, 5 Feb 2008, Nick Piggin wrote:
> Anyway, not saying the operations are useless, but they should be
> made available to core kernel and implemented per-arch. (if they are
> found to be useful)
The problem is to establish the usefulness. These measures may bring 1-2%
in a pretty unstable o
On Tue, 5 Feb 2008, Andrea Arcangeli wrote:
> On Mon, Feb 04, 2008 at 11:09:01AM -0800, Christoph Lameter wrote:
> > On Sun, 3 Feb 2008, Andrea Arcangeli wrote:
> >
> > > > Right but that pin requires taking a refcount which we cannot do.
> > >
> > >
how they were
put onto the partial list.
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
---
Documentation/vm/slabinfo.c | 149
include/linux/slub_def.h| 23 ++
lib/Kconfig.debug | 11 +++
mm/slub.c
On Tue, 5 Feb 2008, Pekka J Enberg wrote:
> Hi Christoph,
>
> On Mon, 4 Feb 2008, Christoph Lameter wrote:
> > The statistics provided here allow the monitoring of allocator behavior
> > at the cost of some (minimal) loss of performance. Counters are placed in
> > SL
On Tue, 5 Feb 2008, Pekka J Enberg wrote:
> Heh, sure, but it's not exported to userspace which is required for
> slabinfo to display the statistics.
Well we could do the same as for numa stats. Output the global count and
then add
c=count
?
Could we focus on the problem instead of discussion of new patches under
development? Can we confirm that what Kosaki sees is a bug?
On Tue, 5 Feb 2008, Andrea Arcangeli wrote:
> given I never allow a coherency-loss between two threads that will
> read/write to two different physical pages for the same virtual
> address in remap_file_pages).
The other approach will not have any remote ptes at that point. Why would
there be a