On Fri, 20 May 2016, LacaK wrote:
As new nodes are appended, TFPList.Expand is called, and there is:
if FCapacity > 127 then Inc(IncSize, FCapacity shr 2);
So if the list already holds 1,000,000 items, it is expanded by 250,000 at
once, which can cause an out-of-memory error in a single step.
My question is: can I somehow control the increment size? I think that at
the moment I cannot.
My second question is: could TFPList.Expand be modified so that a smaller
increment is used for large FCapacity? :-)
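For context, the growth policy around the quoted line can be simulated as below. This is my own illustrative sketch of how the capacity evolves, with thresholds mirroring the quoted line; it is not the actual FPC source, which may differ between versions.

```pascal
program GrowthDemo;
{ Illustrative sketch of the TFPList.Expand growth policy discussed above.
  Only the "> 127" branch is taken from the quoted line; the smaller
  thresholds are assumptions for illustration. }
var
  Capacity, IncSize: SizeInt;
begin
  Capacity := 0;
  while Capacity < 1300000 do
  begin
    IncSize := 4;
    if Capacity > 3 then Inc(IncSize, 4);
    if Capacity > 8 then Inc(IncSize, 8);
    if Capacity > 127 then
      Inc(IncSize, Capacity shr 2);  { +25% once past 128 items }
    Inc(Capacity, IncSize);
    WriteLn('capacity now ', Capacity);
  end;
  { Around a capacity of 1,000,000 a single step allocates ~250,000
    extra slots in one block. }
end.
```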
TMemoryStream also has this behaviour, and indeed I override it in my own
applications to let the growth taper off after 128 MB. If you have a lot of
mid-sized memory streams, you otherwise end up with 25%/2 = 12.5% of your
memory lying around unused.
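Such an override can be sketched as below. This is a hypothetical descendant, not library code; it assumes the FPC 3.x Realloc signature (var NewCapacity: PtrInt) — older releases use Longint — and relies on SetCapacity honouring whatever pointer and NewCapacity the override returns.

```pascal
uses
  Classes;

const
  TaperThreshold = 128 * 1024 * 1024; { 128 MB, as mentioned above }

type
  { Hypothetical descendant that stops the proportional over-allocation
    once the stream is larger than TaperThreshold. }
  TTaperedMemoryStream = class(TMemoryStream)
  protected
    function Realloc(var NewCapacity: PtrInt): Pointer; override;
  end;

function TTaperedMemoryStream.Realloc(var NewCapacity: PtrInt): Pointer;
begin
  if NewCapacity > TaperThreshold then
  begin
    { Round the requested capacity up to 64 KB and reallocate directly,
      bypassing the inherited proportional growth. }
    NewCapacity := (NewCapacity + $FFFF) and not PtrInt($FFFF);
    Result := Memory;
    ReallocMem(Result, NewCapacity);
  end
  else
    Result := inherited Realloc(NewCapacity);
end;
```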
But of course, all data structures based on a single array (and thus
inviting very large block allocations) are fundamentally flawed in this
respect in the first place.
What about simply adjusting the Expand procedure like this:
if FCapacity > 8*1024*1024 then IncSize := FCapacity shr 3
else if FCapacity > 128 then IncSize := FCapacity shr 2
else if FCapacity > 8 then IncSize := 16
else IncSize := 4;
SetCapacity(FCapacity + IncSize);
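To quantify the proposed branches (my own arithmetic, not tested against a patched RTL), note that the shr 3 case only takes effect above 8M entries, so a 1,000,000-entry list would still grow by 25%:

```pascal
program ProposedIncrement;
{ Evaluates the increment the proposed Expand would pick at a few sizes. }
function IncFor(FCapacity: SizeInt): SizeInt;
begin
  if FCapacity > 8*1024*1024 then Result := FCapacity shr 3   { +12.5% }
  else if FCapacity > 128 then Result := FCapacity shr 2      { +25% }
  else if FCapacity > 8 then Result := 16
  else Result := 4;
end;

begin
  WriteLn(IncFor(1000000));        { 250000: still in the shr 2 branch }
  WriteLn(IncFor(16*1024*1024));   { 2097152: shr 3 branch applies }
end.
```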
It does not solve your problem at a fundamental level.
Maybe it does today, but in a year, if your file grows, you will bump into
this problem again. Then what?
As Marco said: all data structures based on a single array are flawed in
this regard.
Michael.
_______________________________________________
fpc-pascal maillist - fpc-pascal@lists.freepascal.org
http://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal