On Thu, Aug 11, 2005 at 01:28:08PM -0700, Dan Price wrote:
> On Thu 11 Aug 2005 at 11:56AM, Eric Saxe wrote:
> > > This might sound dumb, but are we sure that processes are the resource
> > > which is temporarily unavailable?
> >
> > I think what you are saying, Dan, is that "this might be so obvious that
> > it has escaped you", and you very well might be right. :)
> 
> The reason I ask is IIRC these were test cases about file locks:
> 
> > ...
> > Running:          c_lockf_10 for      0.50919 seconds
> > Running:         c_lockf_200fork: Resource temporarily unavailable
> > Running:             c_flockfork: Resource temporarily unavailable
> > Running:          c_flock_10fork: Resource temporarily unavailable
> > Running:         c_flock_200fork: Resource temporarily unavailable
> > Running:           c_fcntl_1fork: Resource temporarily unavailable
> > Running:          c_fcntl_10fork: Resource temporarily unavailable
> > Running:         c_fcntl_200fork: Resource temporarily unavailable
> 
> 
> I recently had a discussion with Devon O'Dell about how MacOS has limited
> the number of these that the OS is willing to create.  Based on that
> conversation I opened an RFE:
> 
> 6293764 RFE: resource control governing file range locking
> 
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6293764
> 
> Thanks,
> 
>         -dp
> 
> --
> Daniel Price - Solaris Kernel Engineering - [EMAIL PROTECTED] - 
> blogs.sun.com/dp

Well, Mac OS X (Darwin, really) has checks in kmem_alloc that refuse to
allocate more than N objects of certain allocation types — in this
case, POSIX lock descriptors. DragonFly BSD actually has a limit on the
number of these a single user can allocate; FreeBSD should soon as well.

In Solaris, the practical barrier to abusing this unbounded kernel
memory structure seems to be that a feasible attack would need more
file descriptors than are available by default. Or root access, but
then you've got bigger problems anyway. I once calculated that on a
machine with a gig of RAM and dual P3 866MHz processors, it'd take
something like a week to push this far enough to exhaust kernel memory.

It's definitely something that can be limited. In DragonFly, we cut
the number down to something like 4096 (this isn't hard-coded; it's
scaled to the amount of RAM in the machine, and 4096 is the figure I
recall for a box with 768MB) and haven't run into any problems with
our software yet. Since enterprise databases and other potentially
byte-range-lock-intensive software are designed for Solaris, I'd
really suggest more people look into how many of these are actually
in use at any given moment.

Touching on whether it's actually processes that are the unavailable
resource, that seems like the most reasonable explanation, since:

a) Though I'm not familiar with libmicro, I assume it isn't holding
tons of file descriptors (it shouldn't need more than 255 open at any
given time to run this benchmark), and

b) there's currently no limit on the number of byte-range locks you
can hold.

--Devon
_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org
