On 02/06/2019 02:03 PM, Christopher Lameter wrote:
> On Thu, 31 Jan 2019, Aneesh Kumar K.V wrote:
>
>> I would be interested in this topic too. I would like to
>> understand the API and how it can help exploit the different type of
>> devices we have on OpenCAPI.
Same here, we/Red Hat have quite a b
On 08/01/2012 08:32 AM, Michal Hocko wrote:
I am really lame :/. The previous patch is wrong for the goto out branch
as well. The updated patch follows:
This patch worked fine, Michal! You and Mel can duke it out over whose
is best. :)
Larry
On 07/31/2012 04:06 PM, Michal Hocko wrote:
On Tue 31-07-12 13:49:21, Larry Woodman wrote:
On 07/31/2012 08:46 AM, Mel Gorman wrote:
Fundamentally I think the problem is that we are not correctly detecting
that page table sharing took place during huge_pte_alloc(). This patch is
longer and
ting is wrong, it causes
bugs like this one reported by Larry Woodman
[ 1106.156569] ------------[ cut here ]------------
[ 1106.161731] kernel BUG at mm/filemap.c:135!
[ 1106.166395] invalid opcode: [#1] SMP
[ 1106.170975] CPU 22
[ 1106.173115] Modules linked in: bridge stp llc sunrpc binfm
On 07/31/2012 08:46 AM, Mel Gorman wrote:
On Mon, Jul 30, 2012 at 03:11:27PM -0400, Larry Woodman wrote:
That is a surprise. Can you try your test case on 3.4 and tell us if the
patch fixes the problem there? I would like to rule out the possibility
that the locking rules are slightly
On 07/27/2012 06:23 AM, Mel Gorman wrote:
On Thu, Jul 26, 2012 at 11:48:56PM -0400, Larry Woodman wrote:
On 07/26/2012 02:37 PM, Rik van Riel wrote:
On 07/23/2012 12:04 AM, Hugh Dickins wrote:
I spent hours trying to dream up a better patch, trying various
approaches. I think I have a nice
On 07/26/2012 11:48 PM, Larry Woodman wrote:
Mel, did you see this???
Larry
This patch looks good to me.
Larry, does Hugh's patch survive your testing?
Like I said earlier, no. However, I finally set up a reproducer that
only takes a few seconds on a large system and this to
On 07/26/2012 02:37 PM, Rik van Riel wrote:
On 07/23/2012 12:04 AM, Hugh Dickins wrote:
I spent hours trying to dream up a better patch, trying various
approaches. I think I have a nice one now, what do you think? And
more importantly, does it work? I have not tried to test it at all,
that I
On 07/26/2012 01:42 PM, Rik van Riel wrote:
On 07/23/2012 12:04 AM, Hugh Dickins wrote:
Please don't be upset if I say that I don't like either of your patches.
Mainly for obvious reasons - I don't like Mel's because anything with
trylock retries and nested spinlocks worries me before I can eve
On 07/20/2012 09:49 AM, Mel Gorman wrote:
+retry:
 	mutex_lock(&mapping->i_mmap_mutex);
 	vma_prio_tree_foreach(svma, &iter, &mapping->i_mmap, idx, idx) {
 		if (svma == vma)
 			continue;
+		if (svma->vm_mm == vma->vm_mm)
+
balance_pgdat() calls zone_watermark_ok() three times; the first call
passes a zero (0) as the 4th argument. This 4th argument is the
classzone_idx, which is used as the index into the
zone->lowmem_reserve[] array. Since setup_per_zone_lowmem_reserve()
always sets the zone->lowmem_reserve[0]
The attached patch skips copying the PTEs and the get_page() calls if
the hugetlb page table is shared.
Signed-off-by: Larry Woodman <[EMAIL PROTECTED]>
--- linux-2.6.23/mm/hugetlb.c.orig 2008-01-16 12:05:41.496448000 -0500
+++ linux-2.6.23/mm/hugetlb.c 2008-01-17 10:27:21.740
The show_mem() output does not include the total number of
pagecache pages. This would be helpful when analyzing the
debug information in the /var/log/messages file after OOM kills
occur.
This patch includes the total pagecache pages in that output:
Signed-off-by: Larry Woodman <[EM
Rik van Riel wrote:
On Fri, 04 Jan 2008 17:34:00 +0100
Andi Kleen <[EMAIL PROTECTED]> wrote:
Lee Schermerhorn <[EMAIL PROTECTED]> writes:
We can easily [he says, glibly] reproduce the hang on the anon_vma lock
Is that a NUMA platform? On non-x86? Perhaps you just need queued s
On Tue, 2007-04-17 at 14:39 -0700, Christoph Lameter wrote:
>
> It recreates the old problem that we OOM while we still have memory
> in other parts of the system.
How? By the time we get here we have already decided we are going to
OOM-kill or panic. This change just obeys sysctl_panic_on_oom:
Signed-off-by: Larry Woodman <[EMAIL PROTECTED]>
--- linux-2.6.18.noarch/mm/oom_kill.c.orig
+++ linux-2.6.18.noarch/mm/oom_kill.c
@@ -431,6 +437,9 @@ void out_of_memory(struct zonelist *zone
cpuset_lock();
read_lock(&tasklist_lock);
+ /* check if we are going to pa