On Wed, 2003-06-18 at 16:56, Sascha Schumann wrote:
> > That's good, but that's only one step.  Some system mallocs are very
> > inefficient on small blocks so it might be very worthwhile to grab a
> > "chunk" of, say, 64k, instead of many small mallocs of 20 bytes.
>     How do you know that this is a significant problem?

Because I have written my own operating system, including its allocator, and
I know that it can be hard to get right.  I have also read that at
least earlier versions of SunOS and BSD had this problem
(over-consumption of memory with many small mallocs), and I would
think it applies to Linux to some degree as well.  Prove me wrong, but I am
fairly certain that doing 100 000 allocations of 20 bytes is a whole lot
1) slower and 2) more memory-consuming than allocating 2 megs at once.
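
To make the point concrete, here is a rough sketch of the kind of
chunking suggested above: grab 64k at a time and carve the 20-byte
pieces out of it, so the system allocator only sees a few dozen big
requests instead of 100 000 tiny ones.  (All names here are made up
for illustration; this is not PHP's actual allocator.)

#include <stdlib.h>
#include <string.h>

/* Hand out small pieces from 64 KiB chunks obtained with malloc(),
 * instead of calling malloc() once per small object. */
#define CHUNK_SIZE (64 * 1024)

typedef struct chunk {
    struct chunk *next;   /* chunks are chained so we can free them all */
    size_t        used;   /* bytes already handed out from this chunk */
    char          data[]; /* the 64 KiB payload follows this header */
} chunk;

typedef struct {
    chunk *head;          /* most recently allocated chunk */
} arena;

/* Only handles requests smaller than CHUNK_SIZE; enough for a sketch. */
static void *arena_alloc(arena *a, size_t n)
{
    n = (n + 7) & ~(size_t)7;   /* round up so pointers stay aligned */

    if (a->head == NULL || a->head->used + n > CHUNK_SIZE) {
        chunk *c = malloc(sizeof(chunk) + CHUNK_SIZE);
        if (c == NULL)
            return NULL;
        c->next = a->head;
        c->used = 0;
        a->head = c;
    }
    void *p = a->head->data + a->head->used;
    a->head->used += n;
    return p;
}

static void arena_free(arena *a)
{
    chunk *c = a->head;
    while (c != NULL) {          /* one free() per 64 KiB chunk */
        chunk *next = c->next;
        free(c);
        c = next;
    }
    a->head = NULL;
}

int main(void)
{
    arena a = { NULL };
    for (int i = 0; i < 100000; i++) {
        void *p = arena_alloc(&a, 20);
        if (p != NULL)
            memset(p, 0, 20);
    }
    arena_free(&a);
    return 0;
}

The win is twofold: far fewer trips through malloc's bookkeeping, and
far less per-block overhead.  A typical malloc adds a header of 8-16
bytes to every block and rounds the size up, so 100 000 separate
20-byte allocations can easily cost around twice the memory the
payload actually needs, while the chunked version wastes only one
header per 64k.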

This is what all the research I have read comes down to: the bigger the
blocks you allocate from the system, the better your performance.
You are very welcome to show me research that says the opposite, but I
doubt anyone would even bother researching it, since it is a fairly
well-established result in computer science.
--
Best regards,

Per Lundberg / Capio ApS
Phone: +46-18-4186040
Fax: +46-18-4186049
Web: http://www.nobolt.com

