Joe Buck <[EMAIL PROTECTED]> writes:

> So the basic issue is this: why the hell does the linker need so much
> memory?  Sure, if you have tons available, it pays to trade memory for
> time, mmap everything, then build all the hashes you want to look up
> relationships in every direction.  But if it doesn't really fit, it's
> a big lose.  Ideally ld, ar and the like could detect and adapt if there
> isn't enough physical memory to hold everything.

At present the linker provides command line options --no-keep-memory
and --reduce-memory-overheads to significantly reduce the amount of
memory required during the link.
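For example (the output and object file names here are just placeholders):

  ld --no-keep-memory --reduce-memory-overheads -o prog a.o b.o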

It should be possible in principle to partially adapt to available
memory based on, e.g., physmem_total.  The linker could keep track of
how much memory it has allocated via bfd_alloc and bfd_malloc.  If
that total gets to be 75% of physmem_total, or something like that,
the linker could switch to --no-keep-memory.
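A rough sketch of that idea, not actual linker code: the counter, the
wrapper and the switch-over function are all hypothetical, and
physmem_total (which in gnulib comes from physmem.h) is stubbed out so
the example compiles on its own.

  #include <stdio.h>
  #include <stdlib.h>

  /* Stub for gnulib's physmem_total; pretend the machine has 64 MB.  */
  static double
  physmem_total (void)
  {
    return 64.0 * 1024 * 1024;
  }

  static size_t total_allocated;   /* bytes handed out so far */
  static int no_keep_memory;       /* analogue of --no-keep-memory */

  /* Compare the running total against 75% of physical memory and flip
     the flag once we cross that threshold.  */
  static void
  maybe_reduce_memory_use (void)
  {
    if (!no_keep_memory
        && (double) total_allocated > 0.75 * physmem_total ())
      {
        no_keep_memory = 1;
        fprintf (stderr, "switching to --no-keep-memory behaviour\n");
      }
  }

  /* Counting wrapper in the spirit of bfd_malloc; bfd_alloc could be
     wrapped the same way.  */
  static void *
  counted_malloc (size_t size)
  {
    void *p = malloc (size);
    if (p != NULL)
      {
        total_allocated += size;
        maybe_reduce_memory_use ();
      }
    return p;
  }

  int
  main (void)
  {
    /* Simulate a link that keeps allocating 1 MB chunks.  */
    for (int i = 0; i < 100; i++)
      counted_malloc (1024 * 1024);
    return 0;
  }

Note that a switch like this only changes the behaviour of allocations
made after the threshold is crossed, which is why it maps naturally
onto --no-keep-memory but not onto decisions taken up front.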

Unfortunately the decisions made by --reduce-memory-overheads apply at
the start of the link, and at that point it is difficult to tell how
much memory will be needed.

Ian
