On Thu, Mar 29, 2012 at 10:38 AM, Paul de Weerd <we...@weirdnet.nl> wrote:
> On Thu, Mar 29, 2012 at 10:24:27AM -0430, Andres Perera wrote:
> | > Instead, you'll crank your file limits to... let me guess, unlimited?
> | >
> | > And when you hit the system-wide limit, then what happens?
> | >
> | > Then it is our system's problem, isn't it.
> | >
> |
> | i am not sure if you're suggesting that each program do getrlimit
> | and acquire resources based on that, because it's a pita
>
> Gee whiz, writing programs is hard!  Let's go shopping!
>
> | what they could do is offer a reliable estimate (e.g. 5 open files per
> | tab required)
>
> Or just try to open a file, *CHECK THE RETURNED ERROR CODE* and (if
> any) *DEAL WITH IT*
but we're only talking about one resource and one error condition. write
wrappers for open, malloc, etc.; avoiding errors regarding stack limits
is not as easy.

obviously there's no reason for: a. every application replicating these
wrappers (how many xmallocs have you seen, honestly?), and b. the system
not providing a consistent api.

after you're done writing all the wrappers for your crappy browser, what
do you do? notify the user that no resources can be allocated, try
pushing the soft limit first, whatever. they still have to re-exec with
higher limits, so why even bother?

> Note that on a busy system, the ulimit is not the only thing holding
> you back.  You may actually run into the maximum number of files the
> system can have open at any given time (sure, that's also tweakable).
> Just doing getrlimit isn't going to be sufficient...

doesn't matter

> Paul 'WEiRD' de Weerd
>
> --
> >++++++++[<++++++++++>-]<+++++++.>+++[<------>-]<.>+++[<+
> +++++++++++>-]<.>++[<------------>-]<+.--------------.[-]
>                  http://www.weirdnet.nl/