Dan Sugalski <[EMAIL PROTECTED]> writes:
>At 07:39 PM 4/19/2001 +0000, [EMAIL PROTECTED] wrote:
>>Depends what they are. The scheme effectively makes the part "mandatory"
>>as we will have allocated space whether used or not.
>
>Well, we were talking about all PMCs having an int, float, and pointer 
>part, so it's not like we'd be adding anything. Segregating them out might 
>make things faster for those cases where we don't actually care about the 
>data. OTOH that might be a trivially small percentage of the times the 
>PMC's accessed, so...

What is the plan for arrays these days? If the float parts of the
N*100 entries in a perl5-oid AV were collected together, you might
get "packed" arrays by the back door.

>
>>So it depends if access pattern means that the part is seldom used,
>>or used in a different way.
>>As you say works well for GC of PMCs - and also possibly for compile-time
>>or debug parts of ops but is not obviously useful otherwise.
>
>That's what I was thinking, but my intuition's rather dodgy at this level. 
>The cache win might outweigh other losses.
>
>> >I'm thinking that passing around an
>> >arena address and offset and going in as a set of arrays is probably
>> >suboptimal in general,
>>
>>You don't, you pass PMC * and have offset embedded within the PMC
>>then arena base is (pmc - pmc->offset) iff you need it.
>
>I was trying to avoid embedding the offset in the PMC itself. Since it was 
>calculatable, it seemed a waste of space.

But passing extra args around is fairly expensive when they are
seldom going to be used. Passing an extra arg through N levels of
calls is going to consume instructions and roughly N * 32 bits of
memory.
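
For concreteness, the embedded-offset scheme looks something like
this (the field name is invented):

    typedef struct PMC {
        /* ... int/float/pointer parts ... */
        unsigned short offset;  /* this PMC's index within its arena */
    } PMC;

    /* The arena base is recoverable from the PMC alone, so only
       the PMC * ever gets passed around; no extra argument has
       to thread through N levels of calls. */
    PMC *arena_base(PMC *pmc)
    {
        return pmc - pmc->offset;  /* pointer arithmetic in PMC units */
    }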

>
>If we made sure the arenas were on some power-of-two boundary we could just 
>mask the low bits off the pointer for the base arena address. Evil, but 
>potentially worth it at this low a level.

That would work ;-)
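i.e. something like this (64K alignment is just an example):

    #include <stdint.h>

    #define ARENA_ALIGN 0x10000UL  /* arenas on 64K boundaries, say */

    /* With power-of-two-aligned arenas the base address is just
       the pointer with its low bits masked off; no offset field
       is needed in the PMC at all. */
    void *arena_base_masked(void *pmc)
    {
        return (void *)((uintptr_t)pmc & ~(uintptr_t)(ARENA_ALIGN - 1));
    }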

-- 
Nick Ing-Simmons <[EMAIL PROTECTED]>
Via, but not speaking for: Texas Instruments Ltd.
