On 13.01.20 at 15:49, Daniel Sun wrote:
Hi Jochen,

     The changed example will run faster than the original one because the cache
will always be hit for each argument. Actually, we can make it run even faster by
looping more than 100_000 times, e.g. 1_000_000 times: once the optimization
threshold is reached, the related MethodHandle (MH) will be linked to the call
site (cs).

       As you said, we should test it and not just guess ;-)
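
Just so we mean the same thing: the threshold-and-link scheme you describe could
look roughly like the sketch below. Class name, field names and the threshold
value are made up here, this is only an illustration, not the actual
implementation, and a real version would keep a type guard and fallback after
linking.

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.invoke.MutableCallSite;
import java.util.HashMap;
import java.util.Map;

public class CachingCallSiteSketch {

    static final int OPTIMIZE_THRESHOLD = 100_000; // invented value

    final MutableCallSite callSite;
    final Map<Class<?>, MethodHandle> cache = new HashMap<>();
    int hits;

    CachingCallSiteSketch() throws ReflectiveOperationException {
        callSite = new MutableCallSite(MethodType.methodType(Object.class, Object.class));
        // start on the slow path that consults the cache on every call
        MethodHandle slowPath = MethodHandles.lookup()
                .findVirtual(CachingCallSiteSketch.class, "dispatch",
                        MethodType.methodType(Object.class, Object.class))
                .bindTo(this);
        callSite.setTarget(slowPath);
    }

    // slow path: answer from the per-argument-class cache; once the threshold
    // is reached, link the cached handle directly into the call site
    // (a real version would guard this with a type test and a fallback)
    Object dispatch(Object arg) throws Throwable {
        MethodHandle target = cache.computeIfAbsent(arg.getClass(), CachingCallSiteSketch::resolve);
        if (++hits >= OPTIMIZE_THRESHOLD) {
            callSite.setTarget(target.asType(callSite.type()));
        }
        return target.invoke(arg);
    }

    // stand-in for the real method selection; here every type maps to identity
    static MethodHandle resolve(Class<?> argClass) {
        return MethodHandles.identity(Object.class);
    }
}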

[...]
def same(String obj) { return obj }
def same(int obj) { return obj }
def same(float obj) { return obj }
for (r in [1, 1.0f, '1.0']) {
   for (int i = 0; i < 100_000; i++) {
     r.each { same(it) }
   }
}

This will first run 100k times with integer as the receiver, then 100k times
with float and finally 100k times with String. The old implementation would
reset the target 3 times here; your implementation will run into 3 cache
misses. The point is that 100k iterations are supposed to be enough for the
call site to be optimized, followed by a setTarget in the old implementation,
causing a deopt, maybe optimization again, and so on. This has to be compared
against the cost of the cache misses.
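
To make the trade-off concrete: "resetting the target" means a
MutableCallSite.setTarget(...), which invalidates whatever the JIT has compiled
against the old target, while a cached target built with something like
guardWithTest stays stable and only pays a type test plus a slow-path call on a
miss. Roughly (invented names, just a sketch, not claiming this matches either
patch):

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.invoke.MutableCallSite;

public class RelinkVsGuardSketch {

    // "Resetting the target": install a different handle when the type changes.
    // Code the JIT compiled against the old target has to be thrown away.
    static void relink(MutableCallSite cs, MethodHandle newTarget) {
        cs.setTarget(newTarget.asType(cs.type()));
    }

    // Cache-style: keep one stable target that tests the argument class and
    // only falls back to the generic path on a miss; no setTarget involved.
    // fast and slow must share the type (Object)Object here.
    static MethodHandle guarded(Class<?> expected, MethodHandle fast, MethodHandle slow) {
        MethodHandle test = IS_INSTANCE.bindTo(expected); // (Object)boolean
        return MethodHandles.guardWithTest(test, fast, slow);
    }

    static final MethodHandle IS_INSTANCE;
    static {
        try {
            IS_INSTANCE = MethodHandles.lookup().findVirtual(
                    Class.class, "isInstance",
                    MethodType.methodType(boolean.class, Object.class));
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }
}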

If there really is a penalty of the form you expect, then my proposed version
can be changed to also use invokeExact, and then we could compare again.
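
For reference, the difference between the two call styles in question (again a
generic sketch, not taken from either patch): invoke() adapts argument and
return types on the fly, an extra asType step, while invokeExact() requires the
call's static signature to match the handle's type exactly and skips that
adaptation.

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class InvokeExactSketch {
    public static void main(String[] args) throws Throwable {
        MethodHandle concat = MethodHandles.lookup().findVirtual(
                String.class, "concat",
                MethodType.methodType(String.class, String.class));

        // invoke(): the (Object, Object) -> Object call shape is adapted
        // to (String, String) -> String behind the scenes
        Object viaInvoke = concat.invoke((Object) "foo", (Object) "bar");

        // invokeExact(): the static shape must be exactly
        // (String, String) -> String, including the cast on the return value
        String viaExact = (String) concat.invokeExact("foo", "bar");

        System.out.println(viaInvoke + " " + viaExact);
    }
}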

bye Jochen
