On Mar 30, 2006, at 7:55 AM, Camm Maguire wrote:
Longer term, it would be nice to have someone from your camp lay out where the time is spent and what changes might be worthwhile in gcc to make it more suitable for that type of work.

This would be interesting; how does one benchmark gcc performance in this way?

Well, the types of things I had in mind include: re-laying out the compiler so that the startup costs are reduced, if those costs impact you to a greater extent than other users; deciding which optimizations are too expensive for the type of code you put through the compiler, and turning them off (or throttling them down, or moving them out to -O3), or even pulling some in from -O2; and maybe building a special compiler that excludes some of the unneeded optimizations to reduce the memory footprint. One way to think of this would be a -Ojit level, tuned to turn the various passes on and off. Many possibilities come to mind, and I wouldn't presume to know just which tweaks would be best.

As to how to go about tuning it, that doesn't have a one-paragraph answer. Startup costs are easy enough: gcov or Shark an empty program, then examine all the large numbers and see if there is a way to move those items to compiler build time. For the other costs, profile gcc against an existing good JIT system and try to figure out where the time is going and how to reduce it. Having a JIT expert examine gcc would probably be the way to go.
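For a first cut at both measurements, gcc itself already helps: compiling an empty translation unit isolates the fixed startup overhead, and the real -ftime-report flag prints a per-pass time breakdown. A minimal sketch:

```shell
# Fixed startup cost: compile an empty translation unit and time it.
echo '' > empty.c
time gcc -c empty.c -o empty.o

# Per-pass breakdown on real input; -ftime-report writes the
# timing table to stderr.
echo 'int f(int x) { return x + 1; }' > real.c
gcc -O2 -ftime-report -c real.c -o real.o
```

gcov or Shark would then refine the startup numbers down to individual functions.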

This was the most promising. If I could run gcc as a pipe with assembler-only output, all I would need is a 'flush' instruction on stdin to get the assembly of the function(s) input thus far.

I suspect we could add an fflush after each function... I don't think we presently do.
