On Thu, Mar 29, 2012 at 4:30 PM, Otto Moerbeek <o...@drijf.net> wrote:
> On Thu, Mar 29, 2012 at 01:31:17PM -0430, Andres Perera wrote:
>
>> On Thu, Mar 29, 2012 at 11:29 AM, Otto Moerbeek <o...@drijf.net> wrote:
>> > On Thu, Mar 29, 2012 at 10:54:48AM -0430, Andres Perera wrote:
>> >
>> >> On Thu, Mar 29, 2012 at 10:38 AM, Paul de Weerd <we...@weirdnet.nl>
>> >> wrote:
>> >> > On Thu, Mar 29, 2012 at 10:24:27AM -0430, Andres Perera wrote:
>> >> > | > Instead, you'll crank your file limits to... let me guess,
>> >> > | > unlimited?
>> >> > | >
>> >> > | > And when you hit the system-wide limit, then what happens?
>> >> > | >
>> >> > | > Then it is our systems problem, isn't it.
>> >> > | >
>> >> > |
>> >> > | i am not sure if you're suggesting that each program do getrlimit
>> >> > | and acquire resources based on that, because it's a pita
>> >> >
>> >> > Gee whiz, writing programs is hard!  Let's go shopping!
>> >> >
>> >> > | what they could do is offer a reliable estimate (e.g. 5 open
>> >> > | files per tab required)
>> >> > | tab required)
>> >> >
>> >> > Or just try to open a file, *CHECK THE RETURNED ERROR CODE* and (if
>> >> > any) *DEAL WITH IT*
>> >>
>> >> but we're only talking about one resource and one error condition
>> >>
>> >> write wrappers for open, malloc, etc
>> >>
>> >> avoiding errors regarding stack limits is not as easy
>> >
>> > There are very few programs that actually hit stack limits. In most
>> > cases it's unbounded recursion, signalling an error.
>>
>> doesn't change the fact that preempting it takes modifying your
>> compiler's typical function prologue (and slowing down each call)
>>
>> additionally, anticipating FSIZE would greatly slow down each write
>>
>> so no, you can't just "be correct" all the time and pat yourself on
>> the back
>>
>> >
>> >>
>> >> obviously there's no reason for: a. every application replicating
>> >> these wrappers (how many xmallocs have you seen, honestly?) and b.
>> >> the system not providing a consistent api
>> >
>> > Nah, you cannot create an api for this stuff; proper error handling
>> > and adaptation to resource limits is a program-specific thing.
>>
>> well, if including logic that gracefully handles the stack limit is
>> not important on the basis of most applications' needs, then i don't
>> see how the reverse relation couldn't justify a library with xmalloc
>> and similar. *most* applications that implement this function
>> copy-paste the same fatal version. see also `#define MIN/MAX`
>
> You just seem to argue for the sake of it. Anyway....
>
> A lot of programs have a *static* limit on stack depth, so those
> programs do not have that problem.
>
> For programs where the stack depth is a function of the input (e.g.
> parsers and expression evaluation), there are well-known techniques to
> control the maximum depth. Most of these programs actually have their
> own parse stack management and do not use the function stack for
> that.
>
> In my experience, I have only seen programs hitting the stack limit
> when the stack limit was very low, like 64k or so. Hitting the stack
> limit is not a real-world problem. Our default stack limit is 4M: big
> enough for virtually any program, and small enough to catch unbounded
> recursion before it eats all vm.
>
> Hitting the mem or fd limit *is* a real-world problem, because both
> memory and fd usage can build up even in a well-written program, in
> contrast to stack usage.

on my system, hitting the fd limit is a completely artificial problem. i
have 8 gigs of memory and struct file is 120 bytes on amd64. the
default low limit is as silly as a 64k stack limit would be. if i were
designing a browser for machines like these, i wouldn't waste time
optimizing fd usage

even if i had access to the same browser you guys use, which magically
multiplexes a single socket over all connections, including ipc with
child processes that house tabs and plugins the way google chrome does,
i could afford not to give a shit when tiny fds go to waste whenever i
tried the bloated alternatives

>
> And just using xmalloc or similar for those cases is often not a
> solution, especially not for daemon programs. Handling resource
> exhaustion is a difficult problem that cannot be "solved" by just
> quitting your program, even if a lot of programs do so.
>
>         -Otto
