On a two-CPU architecture, the page distribution theoretically looks like
ABABAB
so every readahead done by process A will create 4 unused readahead pages unless you
are sure B will resume soon.
Have you ever compared the results among UP, 2-CPU and 4-CPU?
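As a back-of-the-envelope check of that claim, here is a toy user-space C model. It assumes the default swap readahead window of 1 << page_cluster = 8 pages and a strictly alternating ABAB ownership pattern; both are assumptions for illustration, not measurements from the patch.

#include <stdio.h>

int main(void)
{
	const int window = 8;	/* 1 << page_cluster, with the default page_cluster = 3 */
	int useful = 0, i;

	for (i = 0; i < window; i++)
		if (i % 2 == 0)	/* even slots belong to the faulting process A, odd to B */
			useful++;

	printf("useful for A: %d, wasted on B: %d\n", useful, window - useful);
	return 0;
}

It prints "useful for A: 4, wasted on B: 4", matching the four unused readahead pages mentioned above.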
pages at all as you expected, due to Linux scheduling. Thank you!
2007/2/22, Rik van Riel <[EMAIL PROTECTED]>:
yunfeng zhang wrote:
> Any comments or suggestions are always welcomed.
Same question as always: what problem are you trying to solve?
Any comments or suggestions are always welcomed.
The following arithmetic is based on the SwapSpace bitmap management discussed
in the postscript section of my patch. Two purposes are implemented: one is
allocating a group of fake contiguous swap entries, the other is re-allocating
swap entries in stage 3 when, for example, a series is too short.
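As a rough illustration of the bitmap arithmetic described above, here is a minimal user-space C sketch that finds a run of `count` contiguous free swap slots in a bitmap and marks them allocated. The names and layout are hypothetical and only approximate the idea; this is not the patch's actual SwapSpace code.

#include <stdint.h>
#include <stddef.h>

static int slot_used(const uint8_t *map, size_t i)
{
	return map[i / 8] & (1u << (i % 8));
}

static void mark_used(uint8_t *map, size_t i)
{
	map[i / 8] |= (uint8_t)(1u << (i % 8));
}

/* Return the first index of a run of `count` free slots, or -1 if none. */
long alloc_contiguous_slots(uint8_t *map, size_t nslots, size_t count)
{
	size_t i, run = 0;

	for (i = 0; i < nslots; i++) {
		run = slot_used(map, i) ? 0 : run + 1;
		if (run == count) {
			size_t start = i - count + 1, j;

			for (j = start; j <= i; j++)
				mark_used(map, j);
			return (long)start;
		}
	}
	return -1;	/* no run of that length is free */
}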
The major changelog entries are:
1) pte_unmap now pairs with pte_offset_map in shrink_pvma_scan_ptes and
pps_swapoff_scan_ptes (see the sketch after this changelog).
2) kppsd can now be woken up by kswapd.
3) A new global variable, accelerate_kppsd, is added to accelerate the
reclamation process when a memory inode is low.
Signed-off-by: Yunfeng Zhang <[EMAIL PROTECTED]>
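A minimal sketch of the pte_offset_map()/pte_unmap() pairing that item 1) refers to, in the classic kernel walk-the-PTEs idiom. The function name and body are illustrative only; the real shrink_pvma_scan_ptes() and pps_swapoff_scan_ptes() live in the patch itself.

#include <linux/mm.h>
#include <asm/pgtable.h>

static void scan_ptes_sketch(pmd_t *pmd, unsigned long addr, unsigned long end)
{
	pte_t *pte = pte_offset_map(pmd, addr);

	do {
		/* ... examine *pte and collect private pages here ... */
	} while (pte++, addr += PAGE_SIZE, addr != end);

	pte_unmap(pte - 1);	/* every pte_offset_map() needs this matching call */
}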
You have an interesting idea of "simplifies", given
16 files changed, 997 insertions(+), 25 deletions(-)
(omitting your Documentation), and over 7k more code.
You'll have to be much more persuasive (with good performance
results) to get us to welcome your added layer of complexity.
If the whole
The current test is based on the fact below, from my previous mail:
current Linux page allocation fairly provides pages to every process; since the
swap daemon is only started when memory is low, by the time it starts to scan
active_list, the private pages of different processes are mixed up with each other,
vmscan.c:shri
I have re-coded my patch with tab = 8. Sorry!
Signed-off-by: Yunfeng Zhang <[EMAIL PROTECTED]>
Index: linux-2.6.19/Documentation/vm_pps.txt
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6.19/Documentation/vm_pps.txt
Patching it against 2.6.19 leads to:
mm/vmscan.c: In function `shrink_pvma_scan_ptes':
mm/vmscan.c:1340: too many arguments to function `page_remove_rmap'
So I changed
page_remove_rmap(series.pages[i], vma);
to
page_remove_rmap(series.pages[i]);
I've worked on 2.6.19, but when updating to 2.6.20-r
it's compliant with the Linux lock order
defined in mm/rmap.c.
2) When a memory inode is low, you can set scan_control::reclaim_node to let
kppsd reclaim pages from that memory inode (a reduced sketch follows below).
Signed-off-by: Yunfeng Zhang <[EMAIL PROTECTED]>
Index: linux-2.6.1
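A reduced, illustrative sketch of what item 2) describes. struct scan_control is local to mm/vmscan.c and reclaim_node is the patch's own addition, so the fields and helper shown here are a cut-down assumption, not the real definition.

#include <linux/mm.h>
#include <linux/gfp.h>

struct scan_control {			/* reduced subset for illustration only */
	unsigned long nr_scanned;
	gfp_t gfp_mask;
	int reclaim_node;		/* node ("memory inode") to drain, -1 = any */
};

static int page_matches_reclaim_node(struct page *page, struct scan_control *sc)
{
	return sc->reclaim_node == -1 || page_to_nid(page) == sc->reclaim_node;
}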
2007/1/11, yunfeng zhang <[EMAIL PROTECTED]>:
2007/1/11, Rik van Riel <[EMAIL PROTECTED]>:
Have you actually measured this?
If your measurements saw any performance gains, with what kind
of workload did they happen, how big were they and how do you
explain those performance gains?
How do you balance scanning the private memory with taking
ition of the current page-fault page.
3) It conforms to the POSIX madvise API family.
Signed-off-by: Yunfeng Zhang <[EMAIL PROTECTED]>
Index: linux-2.6.16.29/Documentation/vm_pps.txt
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
Sorry, I can't be online regularly, that is, I can't synchronize with the Linux CVS, so I
only work on a fixed kernel version. Documentation/vm_pps.txt isn't only a patch
overview but also a changelog.
Great!
Do you have a patch against 2.6.19?
Thanks!
--
Al
Maybe there should be a memory maintainer in the Linux kernel group.
Here, I show some content from my patch (Documentation/vm_pps.txt). In brief, I
am proposing a fundamental rework of the Linux swap subsystem; the idea is that
the SwapDaemon should scan and reclaim pages on UserSpace::vmalist rather than on
curr
I have made a new patch, based on the previous quilt patch
(2.6.16.29). Here is the
changelog
--
NEW
A new kernel thread, kppsd, is added to execute the background scanning task
periodically (mm/vmscan.c); a sketch of such a loop follows after this changelog.
A PPS statistic is added to /proc/meminfo; its prototype is in
inclu
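A minimal sketch of what such a periodic background thread usually looks like with the kthread API. Only the name kppsd comes from the changelog; the body, the stop condition, and the one-second interval are assumptions.

#include <linux/kthread.h>
#include <linux/sched.h>

static int kppsd(void *unused)
{
	while (!kthread_should_stop()) {
		/* ... background scan of registered private VMAs goes here ... */
		schedule_timeout_interruptible(HZ);	/* roughly once a second */
	}
	return 0;
}

/* started once at boot, e.g.: kthread_run(kppsd, NULL, "kppsd"); */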
No, it is a new idea to rewrite the swap subsystem entirely. In fact, that is an
impossible task for me alone, so I provide a compromise solution -- pps
(pure private page system).
2006/12/30, Zhou Yingchao <[EMAIL PROTECTED]>:
2006/12/27, yunfeng zhang <[EMAIL PROTECTED]>:
> To multiple add
+0800
@@ -0,0 +1,192 @@
+ Pure Private Page System (pps)
+ Copyright by Yunfeng Zhang on GFDL 1.2
+ [EMAIL PROTECTED]
+ December 24-26, 2006
+
+// Purpose <([{
+The file is used to document
The job listed in Documentation/vm_pps.txt of my patch is too heavy for me, so
I would appreciate it if the Linux kernel group could arrange a schedule to help me.
For a multiple address space, multiple memory inode architecture, we can introduce
a new core object -- the section -- which has several features (a sketch follows
after this list):
1) A section is used as the atomic unit to contain the pages of a VMA residing in
the memory inode of the section.
2) When page migration occurs among different m
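Purely illustrative: one possible C shape for the section object sketched above. None of these names exist in the kernel; the struct only restates features 1) and 2) as data.

#include <linux/mm.h>
#include <linux/list.h>

struct pps_section {
	struct vm_area_struct *vma;	/* the VMA whose private pages this section holds */
	int node;			/* the memory node ("memory inode") the pages reside on */
	unsigned long start, end;	/* virtual address range covered by the section */
	struct list_head pages;		/* resident pages, e.g. linked through page->lru */
	struct list_head vma_link;	/* entry in a per-VMA list of sections */
};

/* Migrating a section to another node would walk `pages`, move each page,
 * and then update `node`, keeping the section the atomic unit of feature 1). */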
ate_flag)
- migrate_back_legacy_linux(mm, vma);
- }
-}
--- patch-linux/kernel/timer.c 2006-12-26 15:20:02.688545256 +0800
+++ linux-2.6.16.29/kernel/timer.c 2006-09-13 02:02:10.0 +0800
@@ -845,2 +844,0 @@
-
- timer_flush_tlb_tasks(NULL);
--- patch-linux/ke