On 08/13/2013 03:06 PM, Stackpole, Chris wrote:
From: Kevin E. Thorpe [mailto:kevin.tho...@utoronto.ca]
Sent: Monday, August 12, 2013 11:00 AM
Subject: Re: [R] Memory limit on Linux?
What does "ulimit -a" report on both of these machines?
Greetings,
Sorry for the delay. Other fires demanded more attention...
For the system in which memory seems to allocate as needed:
$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 386251
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 386251
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
For the system on which memory usage seems to top out around 5-7 GB:
$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 2066497
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1024
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
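One thing I still want to rule out is whether the R process itself inherits the same limits as my login shell (it might not if R is started through a wrapper script or a batch scheduler). A quick sketch for checking from inside R:

## Limits as seen by the running R process itself (Linux):
system("ulimit -a")                          # child shell inherits R's limits
writeLines(readLines("/proc/self/limits"))   # this R process's own limits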
I can also confirm the same behavior on a Scientific Linux system. The only "difference" there, besides it being Scientific Linux rather than CentOS/RHEL, is that it is on an earlier release of 6 (6.2, to be exact). That system has the same ulimit configuration as the problem box.
I could be mistaken, but here are the differences I see in the ulimits:
- pending signals: shouldn't matter.
- max locked memory: the Scientific/CentOS system is higher, so I don't think this is it.
- stack size: again, higher on Scientific/CentOS.
- max user processes: seems high to me, but I don't see how this would cap memory.
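To make the comparison more concrete, here is a rough sketch of the kind of allocation probe I can run on both boxes (the sizes below are only illustrative):

sizes_gb <- c(2L, 4L, 6L, 8L, 12L)
for (gb in sizes_gb) {
  n <- gb * 1024^3 / 8   # number of doubles for roughly 'gb' GB
  ok <- tryCatch({ x <- numeric(n); rm(x); invisible(gc()); TRUE },
                 error = function(e) { message(conditionMessage(e)); FALSE })
  cat(sprintf("%2d GB allocation: %s\n", gb, if (ok) "ok" else "FAILED"))
  if (!ok) break
}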
Am I missing something? Any help is greatly appreciated.
Thank you!
Chris Stackpole
It appears that at the shell level, the differences are not to blame.
It has been a long time, but years ago in HP-UX, we needed to change an
actual kernel parameter (this was for S-Plus 5 rather than R back then).
Despite the ulimits being acceptable, there was a hard limit in the
kernel. I don't know whether such a thing has been (or could be) built into
your "problem" machine. If it is a multiuser box, it could be that
limits have been set to prevent a user from gobbling up all the memory.
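If you want to rule that out, a few places worth looking at on the problem box (these are the usual RHEL/CentOS locations, so adjust as needed; run from inside R or directly in a shell):

system("cat /etc/security/limits.conf")       # per-user PAM limits
system("cat /etc/security/limits.d/*.conf")   # drop-in per-user limits, if any
system("cat /proc/self/cgroup")               # whether the session sits in a cgroup
system("cat /proc/sys/vm/overcommit_memory /proc/sys/vm/overcommit_ratio")  # kernel overcommit policy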
The other thing to check is whether R has been (or can be) built or started with memory limits.
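For that last point, a rough check from a running session (as far as I recall, gc() only shows the extra "limit" column when a maximum was actually set at startup):

commandArgs()   # shows any --max-vsize / --max-nsize flags R was started with
gc()            # look for a "limit (Mb)" column alongside the usual output
## For comparison, a cap can be imposed explicitly when launching R, e.g.:
##   R --max-vsize=6G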
Sorry I can't be of more help.
--
Kevin E. Thorpe
Head of Biostatistics, Applied Health Research Centre (AHRC)
Li Ka Shing Knowledge Institute of St. Michael's
Assistant Professor, Dalla Lana School of Public Health
University of Toronto
email: kevin.tho...@utoronto.ca Tel: 416.864.5776 Fax: 416.864.3016
______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.