Fair enough, but the problem with the cell allocations is that they alone
represent 37% of the run time. The clone calls (of which the cell
initialisation is a part) account for 70%. It's clear that reducing the
number of clone calls is key, and the way to do that is to make sure that
the copy-on-write semantics are stable and working.
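To make that concrete, here is roughly the shape I have in mind (a sketch
only; CowValue and ensure_unique() are made-up names, not the actual GNU APL
classes): the clone happens exactly once, and only when a shared value is
about to be written to.

    #include <cstddef>
    #include <memory>
    #include <vector>

    class CowValue
    {
    public:
        explicit CowValue(std::vector<double> cells)
            : data_(std::make_shared<std::vector<double>>(std::move(cells))) {}

        // Reading never clones, no matter how many owners share the data.
        double read(std::size_t i) const { return (*data_)[i]; }

        // Writing clones, but only if the data is actually shared.
        void write(std::size_t i, double v)
        {
            ensure_unique();
            (*data_)[i] = v;
        }

    private:
        void ensure_unique()
        {
            if (data_.use_count() > 1)   // shared: make a private copy first
                data_ = std::make_shared<std::vector<double>>(*data_);
        }

        std::shared_ptr<std::vector<double>> data_;
    };

With semantics like these, an assignment never pays for a copy unless one of
the two values is later modified, which is where most of the clone calls
ought to disappear.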

Then there is the issue of the reference counter: 26% of the time is spent
manipulating it. Perhaps moving to the Böhm collector is something to
investigate...
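If we were to try the Böhm collector, the allocation side at least is
trivial to prototype (a sketch only, using the documented C API from libgc;
the ravel-of-doubles here is just a stand-in, not how Value actually lays
out its cells):

    #include <gc.h>      // Boehm-Demers-Weiser collector (libgc)
    #include <cstdio>

    int main()
    {
        GC_INIT();       // initialise the collector once at startup

        // Allocate a block for 1000 cells; there is no matching free and no
        // reference counting -- the collector reclaims the block once it is
        // no longer reachable.
        double *ravel = static_cast<double *>(GC_MALLOC(1000 * sizeof(double)));
        ravel[0] = 42.0;
        std::printf("%f\n", ravel[0]);
        return 0;
    }

The potential win is that the 26% spent bumping the reference counter goes
away entirely; the open question is what the collector's own pauses would
cost during large APL computations.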

Regards,
Elias


On 25 April 2014 18:36, Juergen Sauermann <juergen.sauerm...@t-online.de> wrote:

>  Hi,
>
> just to mention it, cells are not allocated by their constructor because
> for cells "placement new" is always used. The allocation of all ravel
> cells is
> done by the Value constructor.
>
> So the 2.2 billion "allocations" are actually 2.2 billion ravel cell
> initializations
> (without involving memory allocation for each cell).
>
> I will nevertheless look into this; I was earlier thinking of a new
> FILE_IO function
> that returns an entire file.
>
> /// Jürgen
>
>
>
> On 04/25/2014 08:01 AM, Elias Mårtenson wrote:
>
> Actually, no. I don't actually do that. I only resize the array once every
> 1000 lines (configurable). Also, the time is not spent there.
>
>  As I mentioned, I ran it under Callgrind, and the time spent allocating
> arrays is actually minimal. What does take time is the 2.2 *billion* cell
> allocations and the 50 *million* calls to Value::clone(). Most of these
> calls clone a value that is immediately discarded afterwards.
>
>  The solution is to avoid cloning values that are not stored (that's the
> core of the "temp" idea). Right now the temp system is only used in some
> very specific cases; once it can be used for Value::clone(), we'll see the
> big performance boosts.
>
>  Regards,
> Elias
