On Thu, 20 Jan 2000, Matthew Reimer wrote:
> My question is, should setuid() fail if the target user's maximum number
> of processes (RLIMIT_NPROC) would be exceeded?
>
> Background: in an attempt to manage our webserver to keep too many CGIs
> from taking down the machine, I've been experimenting with RLIMIT_NPROC.
> This appears to work fine when forking new processes, causing the fork
> to fail with error EAGAIN.
>
> However, this didn't solve our problem. We're using Apache with suexec,
> and still CGIs would multiply far beyond the specified resource limit.
>
> Apache forks suexec, which is suid root; fork1() increments the number
> of processes for root, unless RLIMIT_NPROC has been exceeded, in which
> case the fork fails with EAGAIN.
>
> suexec then calls setuid() (before it calls execv), which
> decrements root's process count and increments the target user's process
> count, but RLIMIT_NPROC is not consulted, and voila, we've just exceeded
> the target user's maximum process count.
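To make sure we're describing the same sequence, here's a rough sketch of
the fork/setuid/execv path you outline (illustrative only, not the actual
suexec source; the uid and CGI path below are made up):

    #include <sys/types.h>
    #include <err.h>
    #include <unistd.h>

    int
    main(void)
    {
            uid_t target_uid = 1001;                 /* made-up CGI user */
            char *cgi_argv[] = { "script.cgi", NULL };
            pid_t pid;

            /*
             * fork() happens while the process is still charged to root
             * (as described above), so root's RLIMIT_NPROC is what gets
             * checked here, and it essentially never fails with EAGAIN.
             */
            pid = fork();
            if (pid == -1)
                    err(1, "fork");

            if (pid == 0) {
                    /*
                     * setuid() re-charges the process to the target user
                     * but, per the description above, does not consult
                     * that user's RLIMIT_NPROC.
                     */
                    if (setuid(target_uid) == -1)
                            err(1, "setuid");
                    execv("/usr/local/www/cgi-bin/script.cgi", cgi_argv);  /* made-up path */
                    err(1, "execv");
            }
            return (0);
    }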
Apache is a bizarre environment, so you have to be careful:
1. Generally root starts Apache, which then drops privileges to the User
specified in the config file.
Problem: Limits are inherited from the process parent, which in this
case is root. I believe there is a specific function call to request
enforcement of the new process limits for the user's login class (see
the sketch after this list).
2. Apache has config-file overrides for various resource limits (the
RLimit* directives, e.g. RLimitNPROC).
Problem: Given #1, you may end up with a soft limit higher than the
hard limit you intended for the user.
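For #1, one workaround is to have the process set the limits it actually
wants before the exec, instead of trusting whatever it inherited from root.
A rough sketch (the numbers are made up; I believe setusercontext(3) with
LOGIN_SETRESOURCES can pull the values from the user's login class in
login.conf instead of hard-coding them):

    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/resource.h>
    #include <err.h>

    /*
     * Rough sketch: explicitly lower RLIMIT_NPROC before exec'ing the
     * CGI, rather than inheriting root's (typically much higher) limits.
     * The values below are made up for illustration.
     */
    static void
    apply_cgi_limits(void)
    {
            struct rlimit rl;

            rl.rlim_cur = 32;       /* soft: processes the user may run */
            rl.rlim_max = 48;       /* hard: ceiling the user may raise to */

            /*
             * A non-root process may lower its limits but not raise the
             * hard limit, so either do this while still root or accept
             * that the limits can only go down after setuid().
             */
            if (setrlimit(RLIMIT_NPROC, &rl) == -1)
                    err(1, "setrlimit(RLIMIT_NPROC)");
    }

Called in the child after the setuid() and before the execv(), that keeps
the CGI from running with limits it inherited from root.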
I hope this illuminates some of the issues.
Doug White | FreeBSD: The Power to Serve
[EMAIL PROTECTED] | www.FreeBSD.org