On Fri, Jul 08, 2011 at 02:43:11PM +0100, Cal Leeming [Simplicity Media Ltd] wrote:
> Sorry, I should have given a bit more detail.
>
> Using ulimit is going to be an issue as it relies on the host allowing
> users to modify their ulimit (some aren't allowed). It also then
> applies that rule to any other processes within that user, which is
> bad as different processes may need different limits.
My bash(1) man page says the following about ulimit: "Provides control
over the resources available to the shell and to processes started by
it, on systems that allow such control." Also, setrlimit(3) states
this: "A child process created via fork(2) inherits its parent's
resource limits. Resource limits are preserved across execve(2)."

This means that if you set a ulimit in a process, only that process and
its children are affected. At least that's how I understand it. My
understanding is backed by the fact that the online judges used in
ACM ICPC-style contests use setrlimit in their sandboxes to limit the
memory available to submitted solutions. (-;

> I have seen other examples in the past of binaries being wrapped,
> which then applied a memory limit, but I can't for the life of me
> remember what it was called. :(

Just running the command as

    bash -c 'ulimit -S <set your limits here>; command'

should be enough. Or, if you want, you can write your own wrapper in C
which calls setrlimit and then execs the binary.
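For instance, bash -c 'ulimit -S -v 524288; ./yourprog' would cap the
process at 512 MB of virtual memory (bash's ulimit -v counts in
1024-byte blocks; the program name is of course just an example).

A minimal sketch of such a C wrapper might look like the following.
The name "limitmem", the megabyte argument, and the choice of
RLIMIT_AS are my own illustration, not a reference to any existing
tool:

    /* limitmem.c - cap a command's address space, then exec it.
     * Usage: ./limitmem <megabytes> <command> [args...]
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc < 3) {
            fprintf(stderr, "usage: %s <megabytes> <command> [args...]\n",
                    argv[0]);
            return 1;
        }

        /* Cap the address space. Per setrlimit(3), this limit is
         * inherited across the exec below. */
        struct rlimit rl;
        rl.rlim_cur = rl.rlim_max = (rlim_t)atol(argv[1]) * 1024 * 1024;
        if (setrlimit(RLIMIT_AS, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }

        /* Replace this process with the target command; the new image
         * keeps the resource limit set above. */
        execvp(argv[2], &argv[2]);
        perror("execvp");   /* only reached if exec fails */
        return 1;
    }

Compile it with something like cc -o limitmem limitmem.c and run
./limitmem 512 ./yourprog; once the child exceeds the cap, allocations
such as malloc() simply start failing.

Michal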