Hello,

I just wanted to optimize a byte -> index lookup by using a 256-element table instead of [1, 2, 3].countUntil(x), and I was amazed by what I found. My lookup[x] solution was not faster at all, because LDC2 had already optimized the linear search into a jump table.
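For context, here is a minimal sketch of the two approaches I am comparing (the names and the array contents are just illustrative, not my real code):

import std.algorithm.searching : countUntil;

immutable ubyte[] codes = [1, 2, 3];   // compile-time-known array

// Linear search version -- this is what LDC2 turns into a jump table.
ptrdiff_t indexOfLinear(ubyte x)
{
    return codes.countUntil(x);
}

// 256-entry lookup table version -- the "optimization" I tried.
ptrdiff_t[256] buildLookup()
{
    ptrdiff_t[256] t = -1;             // -1 means "not found"
    foreach (i, c; codes)
        t[c] = cast(ptrdiff_t) i;
    return t;
}

immutable ptrdiff_t[256] lookup = buildLookup();  // filled via CTFE

ptrdiff_t indexOfTable(ubyte x)
{
    return lookup[x];
}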

Can anyone tell me how this is possible?

I looked inside the countUntil() template, and I see no static ifs doing any compile-time optimization.

Is it true that the compiler notices the while loop runs over a static, compile-time-known array, is clever enough to do all the calculations at compile time, and generates a completely unrolled loop? And then the optimizer transforms that into a jump table and dispatches with "jmp rax"?
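Conceptually, I imagine the fully specialized search ends up equivalent to a switch, which the backend can then lower to a jump table with an indirect "jmp rax" dispatch. This is only my mental model of the transformation, not LDC2's actual output:

ptrdiff_t indexOfSwitch(ubyte x)
{
    switch (x)
    {
        case 1:  return 0;   // index of 1 in [1, 2, 3]
        case 2:  return 1;
        case 3:  return 2;
        default: return -1;  // not found
    }
}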

If I'm right, this is not just std library magic; it works even with my own template functions. I'm also getting crazy long compile times, so this has got to be the case :D

Thank you in advance.
