Regarding exploring processor instructions: let's say you compile a C program targeting the x86 architecture, with optimizations turned on for speed, and let the compiler automatically select MMX and SSE instructions for the numeric code.
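To make the scenario a bit more concrete, here is a minimal sketch; the file name, flags and function are only illustrative, and what the compiler actually emits depends on the flags and the code:

    /* sum.c -- the kind of numeric loop the compiler may vectorize
     * with MMX/SSE when built with something like:
     *
     *     gcc -O2 -msse2 -c sum.c
     *
     * Whatever instructions it selects are frozen into the binary. */
    #include <stddef.h>

    float sum(const float *v, size_t n)
    {
        float s = 0.0f;
        for (size_t i = 0; i < n; i++)
            s += v[i];
        return s;
    }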
I now have a program that executes very fast and does what I want very well. But when I run it on an x86 processor with the newer SSE4 instructions, that makes no difference, because the binary cannot take advantage of them. With a JIT it is different: assuming the JIT is also aware of the SSE4 instructions, it may take advantage of the new set whenever, for a given instruction sequence, it is better to do so.

As for profile-guided optimizations, here are a few examples. The JIT might find out that in a given section the vector indexes are always in range, so no bounds checking is needed. Or, if the language is an OOP one, it might conclude that the same virtual method is always called at a given call site, so there is no need for a VMT lookup before calling the method, and it replaces the already generated code with a direct call. Or it might notice that a small method is called often enough that it would be better to inline it. (A rough C sketch of the devirtualization case is in the P.S. below.)

Here are a few papers about profile-guided optimizations:

http://rogue.colorado.edu/EPIC6/EPIC6-ember.pdf
http://www.cs.princeton.edu/picasso/mats/HotspotOverview.pdf

Of course, most of these optimizations are only visible in applications that you use for longer than 5 minutes.

--
Paulo
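P.S. Here is a rough sketch of the devirtualization/inlining example, written out by hand in C with a function pointer standing in for a VMT slot. The type and function names are made up for illustration; a real JIT does this rewriting on the generated machine code, not on the source.

    #include <stdio.h>

    struct shape;
    typedef double (*area_fn)(const struct shape *);

    /* A "class" whose area slot plays the role of a VMT entry. */
    struct shape {
        area_fn area;
        double  w, h;
    };

    static double rect_area(const struct shape *s)
    {
        return s->w * s->h;
    }

    int main(void)
    {
        struct shape r = { rect_area, 3.0, 4.0 };

        /* What the generic code must do: call through the slot,
         * because the target is not known ahead of time. */
        double before = r.area(&r);

        /* What the JIT can emit once profiling shows the slot always
         * holds rect_area: a direct call, and since the body is tiny
         * it can also be inlined, leaving just the multiply. */
        double after = rect_area(&r);

        printf("%f %f\n", before, after);
        return 0;
    }

The bounds-checking case is analogous: once the profile shows an index never leaves the valid range, the check guarding each access can be dropped from the regenerated code.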