Hi!

On Mon, May 19, 2008 at 03:00:08PM +0200, Otto Moerbeek wrote:
>On Mon, May 19, 2008 at 02:38:35PM +0200, Hannah Schroeter wrote:
>> On Mon, May 12, 2008 at 05:49:57PM +0200, Otto Moerbeek wrote:
>> >[...]

>> Any chance of getting rid of that 1G limit, which seems more and more
>> arbitrary nowadays? I remember reading that just raising that define in
>> /usr/src/sys/arch/i386/include/vmparam.h doesn't help, i.e. that
>> something else interacts with that parameter too. I know that on
>> processors with neither PAE nor non-PAE NX support one might not be
>> able to protect all writable data from execution in every case, if a
>> program actually allocates more than 1G (once the kernel has to
>> allocate it at lower virtual addresses). However, the kernel could be
>> made to prefer high addresses for writable, non-executable data
>> (mmap without PROT_EXEC), and the super-user decides how the data
>> size resource limits are set up, so as long as those are <= 1G the
>> protection would remain intact.

>Protection bits are only one of the things. There are more issues to
>consider when enlarging MAXDSIZE. For example, how do you divide the
>address space between sbrk() and mmap()?

How does Linux do it (where you can allocate about 3G of memory; IIRC
their kernel is mapped at about 0xc0000000, which sets the boundary)?
Who still uses sbrk() now that OpenBSD's malloc uses mmap() only?
Where does the break for sbrk() start? If mmap() tends to allocate far
away from the break first, it impairs sbrk() as little and as late as
possible. Of course, one can always construct pathological scenarios
in which fragmentation prevents allocating the full amount of memory,
but even then more will be available than the current 1G, and failures
due to fragmentation can still be signalled (ENOMEM). There is no
reason to refuse allocations that *would* succeed.

>       -Otto

Kind regards,

Hannah.
