congestion, and vice versa, which is both counterintuitive and counterproductive.
On Wed, Oct 15, 2014 at 1:05 PM, Andrew Morton wrote:
> On Wed, 15 Oct 2014 12:58:35 -0700 Jamie Liu wrote:
>
>> shrink_page_list() counts all pages with a mapping, including clean
>> pages, toward nr_congested; this patch changes it to count pages toward
>> nr_congested only if they count for nr_dirty.
Signed-off-by: Jamie Liu
---
mm/vmscan.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index dcb4707..ad9cd9f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -875,7 +875,8 @@ static unsigned long shrink_page_list(struct list
In the second half of scan_swap_map()'s scan loop, offset is set to
si->lowest_bit and then incremented before entering the loop for the
first time, causing si->swap_map[si->lowest_bit] to be skipped.
Signed-off-by: Jamie Liu
---
mm/swapfile.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
Hi Andreas,
Just calling cond_resched() does appear to be the more general
solution, and is already on tj/wq/for-next as
b22ce2785d97423846206cceec4efee0c4afd980 "workqueue: cond_resched()
after processing each work item".
Thanks,
Jamie
On Thu, Aug 29, 2013 at 1:45 PM, Andreas Mohr wrote:
>> Fix it by invoking cond_resched() after executing each work item.
>
> Signed-off-by: Tejun Heo
> Reported-by: Jamie Liu
> References: http://thread.gmane.org/gmane.linux.kernel/1552567
> Cc: sta...@vger.kernel.org
> ---
> kernel/workqueue.c | 9 +
> 1 file changed,
Signed-off-by: Jamie Liu
---
include/linux/stop_machine.h | 13 +
kernel/stop_machine.c        | 16
kernel/workqueue.c           |  4 +++-
3 files changed, 32 insertions(+), 1 deletion(-)
diff --git a/include/linux/stop_machine.h b/include/linux/stop_machine.h
index 3b5e910..a315f92