On Thu, Oct 13, 2011 at 01:37:24AM +0200, Riccardo Murri wrote:
> Thus:
>
> - batch system schedulers do righteously consider each UML "thread" as
>   a separate process;
>
> - however, UML "threads" do share a large portion of the memory, as
>   can be seen from this "ps" output:
>
>     PID USER  PR NI  VIRT RES SHR S %CPU %MEM    TIME+ COMMAND
>    6467 admin 15  0 32.0g 13g 13g S  0.0 27.7  0:00.00 kernel64-3.0.4
>    6466 admin 16  0 32.0g 13g 13g S  0.0 27.7  0:00.15 kernel64-3.0.4
>    6465 admin 22  0 32.0g 13g 13g S  0.0 27.7  0:00.00 kernel64-3.0.4
>    6458 admin 15  0 32.0g 13g 13g S 39.2 27.7 37:00.04 kernel64-3.0.4
>    7437 admin 15  0 12.0g 12g 12g T 52.9 25.6 70:54.39 kernel64-3.0.4
>
> - so the problem lies in the algorithm that SGE and TORQUE apply for
>   computing the amount of memory used, which apparently just sums up
>   the total VSZ for each process (fast), instead of counting the
>   number of pages while ensuring that each shared page is counted only
>   once (slow)?
>
> Thanks for any clarification!
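[The double-counting described in the last bullet can be shown with a toy
model. The page sets and process names below are made up purely for
illustration; a real accounting tool would have to walk /proc/<pid>/pagemap
to learn which physical pages each process maps.]

```python
# Toy model: three "processes" that map mostly the same pages,
# like UML threads sharing the UML kernel image. Page numbers
# are hypothetical stand-ins for physical page frame numbers.
procs = {
    "uml-1": set(range(1000)),           # 1000 shared pages
    "uml-2": set(range(1000)) | {2000},  # same 1000 shared + 1 private
    "uml-3": set(range(1000)) | {3000},  # same 1000 shared + 1 private
}

# Fast but wrong: sum each process's size independently,
# which is what SGE/TORQUE appear to do with VSZ.
naive = sum(len(pages) for pages in procs.values())

# Slow but right: count each physical page exactly once.
actual = len(set().union(*procs.values()))

print(naive, actual)  # the naive sum is nearly 3x the true footprint
```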
Correct on all counts (the first two anyway, and I bet you're right on
the third).

UML uses separate address spaces for its processes, so they don't look
like threads to anything else, but the bulk of the memory in those
address spaces (the UML kernel) is shared. If you look at
/proc/<pid>/smaps for a couple of UML processes, you should see the
sharing.

				Jeff

_______________________________________________
User-mode-linux-user mailing list
User-mode-linux-user@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/user-mode-linux-user
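[To make the smaps suggestion above concrete, here is a minimal sketch
that totals the Shared_*, Private_*, and Pss fields from smaps-formatted
text. The field names are the real keys the kernel emits in
/proc/<pid>/smaps; the sample mapping below is invented for illustration.]

```python
import re

def smaps_totals(text):
    """Sum Shared_*, Private_*, and Pss fields (in kB) over all mappings."""
    totals = {"Shared": 0, "Private": 0, "Pss": 0}
    for line in text.splitlines():
        m = re.match(r"(Shared_\w+|Private_\w+|Pss):\s+(\d+) kB", line)
        if m:
            field = m.group(1)
            key = ("Shared" if field.startswith("Shared")
                   else "Private" if field.startswith("Private")
                   else "Pss")
            totals[key] += int(m.group(2))
    return totals

# Invented sample of one smaps mapping entry, for demonstration only.
sample = """\
00400000-08000000 r-xp 00000000 08:01 123  /path/to/kernel64-3.0.4
Pss:                 100 kB
Shared_Clean:       1200 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:        64 kB
"""

print(smaps_totals(sample))
# On a live system: smaps_totals(open("/proc/%d/smaps" % pid).read())
```

A large Shared total relative to Private (or a Pss much smaller than RES,
since Pss divides each shared page among the processes mapping it) is
exactly the signature of UML's shared kernel text that the batch
schedulers' naive VSZ sum misses.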