Kyle Moffett writes:

> 1)  Linear FD allocation makes it IMPOSSIBLE for libraries to  
> reliably use persistent FDs behind an application's back.  For  

That's not completely true; for example, openlog() opens a file
descriptor for the library's own use, as does sethostent().  I agree
that it creates difficulties if the library implementor wants to use a
file descriptor in a set of functions that didn't previously use one,
but with a bit of assistance from the kernel, that can be solved
without breaking the ABI.
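
To make the pattern concrete, here is a rough sketch (with made-up names and a hypothetical file, not the real openlog()/sethostent() code) of a library keeping a persistent fd behind the application's back:

/* Sketch only: made-up names, not real libc internals. */
#include <fcntl.h>
#include <unistd.h>

static int lib_fd = -1;    /* private to the library, kept open across calls */

ssize_t lib_read_config(char *buf, size_t len)
{
        if (lib_fd == -1)
                lib_fd = open("/etc/mylib.conf", O_RDONLY);  /* hypothetical file */
        if (lib_fd == -1)
                return -1;
        /*
         * If the application has meanwhile close()d every fd it didn't
         * open itself, lib_fd now refers to nothing, or worse, to
         * something else that was later opened into the same slot.
         */
        return pread(lib_fd, buf, len, 0);
}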

> for (i = 0; i < NR_OPEN; i++)
>       if (!fd_is_special_to_us(i))
>               close(i);
> 
> Note that this is conceptually buggy, but occurs in several major C  
> programming books, most of the major shells, and a lot of other  
> software to boot.

Buggy in what way?  In the use of the NR_OPEN constant?
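
If the objection is to NR_OPEN itself, the usual fix is to ask for the limit at run time instead of baking in a compile-time constant; a sketch, keeping Kyle's fd_is_special_to_us() placeholder:

/* Sketch: the same close-everything loop, but using the run-time
 * limit rather than the compile-time NR_OPEN constant. */
#include <sys/resource.h>
#include <unistd.h>

extern int fd_is_special_to_us(int fd);  /* Kyle's placeholder, kept as-is */

void close_unwanted_fds(void)
{
        struct rlimit rl;
        long i, max = 1024;      /* fallback if the limit can't be read */

        if (getrlimit(RLIMIT_NOFILE, &rl) == 0 && rl.rlim_cur != RLIM_INFINITY)
                max = (long)rl.rlim_cur;
        for (i = 0; i < max; i++)
                if (!fd_is_special_to_us((int)i))
                        close((int)i);
}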

> 3) In order to allocate new FDs, the kernel has to scan over a  
> (potentially very large) bitmap.  A process with 100,000 fds (not  
> terribly uncommon) would have 12.5kbyte of FD bitmap and would trash  
> the cache every time it tried to allocate an FD.

For specialized programs like that, we can offer alternative fd
allocation strategies if necessary (although I expect that with
100,000 fds, other things will limit performance more than fd
allocation does).
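
For reference, the scan Kyle describes amounts to something like the
following simplified user-space sketch (not the actual kernel code);
with 100,000 fds in use, that is roughly 12.5 kbytes of bitmap walked
on each allocation:

/* Simplified user-space sketch of lowest-free-fd allocation over a
 * bitmap; not the actual kernel code. */
#define BITS_PER_LONG   (8 * sizeof(unsigned long))

/* Return the lowest clear bit in 'map' (nbits bits), or -1 if full. */
long find_first_free_fd(const unsigned long *map, long nbits)
{
        long i;

        for (i = 0; i < nbits; i++)
                if (!(map[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG))))
                        return i;
        return -1;
}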

None of those things is an excuse for breaking the ABI, though.
As I said to Davide, I was really protesting against the attitude
that we can break the ABI however and whenever we like and force
programs to adapt.

Paul.