On Sun, Nov 12, 2017 at 9:15 PM Petar Maymounkov <pet...@gmail.com> wrote:
> This is an interesting suggestion. But it makes me wonder how this
> compares against generating directly a bag of LLVM statically-typed
> functions and trying the intelligence of the LLVM SSA optimizer.
> Do you know if this one resorts to global analysis?

My major concern with LLVM is the same as with the Go compiler: does it do global optimizations well? A whole-program compiler, on the other hand, is forced by construction to be efficient at this problem: either there is no need for whole-program compilation, or you run out of memory, whichever comes first :)

Another concern, if you are not doing tail calls but rather static calls that really do have a valid return, is that the inliner almost has to be polyvariant (it needs to decide inlining at each call site, not at a global enable/disable level). Some compilers simply disable inlining for any function that is called too often. Sometimes this is the correct *optimization*, as it alleviates instruction-cache pressure. Other times, however, inlining triggers vast simplifications, so the overhead in instruction expansion (and thus insn cache pressure) is worth the faster execution.

OTOH: modern hardware is excellent at making funcalls fast. Hardware engineers tend to optimize for C-like programs and fix the obvious mistakes compiler engineers make :)

In short: make some small experiments. Try to scale them up. Who is fastest on a carry-lookahead adder?
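A small Go sketch (hypothetical, not from the thread; `mask` is an invented helper) of why inlining decisions want to be per-call-site rather than per-function:

```go
package main

import "fmt"

// mask keeps the low n bits of x: a shift, a subtraction, and an AND.
func mask(x uint64, n uint) uint64 {
	return x & ((1 << n) - 1)
}

func main() {
	x := uint64(0xDEADBEEF)

	// Call site 1: n is a constant. If the compiler inlines here,
	// constant folding collapses the whole call to `x & 0xFF` --
	// the inlining is what enables the simplification.
	fmt.Println(mask(x, 8))

	// Call site 2: n is only known at run time. Inlining here saves
	// only the call overhead. A single enable/disable decision for
	// all of `mask` cannot tell these two sites apart, which is why
	// a polyvariant (per-call-site) inliner helps.
	var n uint = 16
	fmt.Println(mask(x, n))
}
```

The same function is a big win to inline at one site and a marginal one at the other, so a global "inline mask: yes/no" flag is the wrong granularity.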