On Fri, Oct 11, 2002 at 05:01:49PM -0400, Dan Sugalski wrote:
> At 9:02 PM +0100 10/11/02, Nicholas Clark wrote:

> >I would like to kill all generated variants of all the 3 argument opcodes
> >where both input arguments are constants. They truly are superfluous.
> 
> Where both operands are ints or nums, I think it's a good idea. I'm 
> less sure in the case where there's a PMC or string involved, as 
> there may be some assumption of runtime behaviour (in the case of 
> constant PMCs that might have some methods overloaded) or strings 
> where the compiler is expecting runtime conversion to happen before 
> whatever gets done.

I think I agree with this reasoning. I was really thinking of the ints
as being easiest to bump off, providing we can be sure that things will
behave consistently for bytecode compiled on a 32 bit parrot but run by a
64 bit parrot (or likewise for different length floating point).

IIRC C99 states that the pre-processor must do all its calculations in its
longest integer type, which is a related problem.
I think we'd need to state that constant folding will be done at compile
time, and will be done at the precision of the compiling parrot.
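
(A contrived illustration of that precision concern - not parrot code, and
the 32/64 bit INTVAL widths are just assumed for the example - folding at
the narrower width bakes the wrap-around into the bytecode:)

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* something like "mul I0, 70000, 70000" folded by a parrot
           whose INTVAL happens to be 32 bits: the product wraps */
        int32_t folded = (int32_t)((uint32_t)70000 * (uint32_t)70000);

        /* the same op evaluated at runtime by a 64 bit parrot */
        int64_t runtime = (int64_t)70000 * (int64_t)70000;

        printf("folded at 32 bits:    %d\n",   (int)folded);       /* 605032704 */
        printf("evaluated at 64 bits: %lld\n", (long long)runtime); /* 4900000000 */
        return 0;
    }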

> >It should make the computed goto core compile more rapidly.
> 
> True, though I'm not hugely worried about this, as it happens only once.

Per user who compiles parrot. The current computed goto code hurts my
desktop at work (128M RAM, x86 linux), and with more ops it will get worse.
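
(For anyone who hasn't looked at it: the computed goto core is essentially
one huge function using gcc's labels-as-values extension, one label per
opcode, which is why it's such a beast to compile. A toy sketch of the
dispatch pattern - nothing like the real generated core, op names invented:)

    /* toy computed-goto dispatch loop (gcc extension: labels as values) */
    #include <stdio.h>

    enum { OP_INC, OP_PRINT, OP_END };

    int main(void) {
        static void *dispatch[] = { &&op_inc, &&op_print, &&op_end };
        int program[] = { OP_INC, OP_INC, OP_PRINT, OP_END };
        int *pc = program;
        int acc = 0;

        goto *dispatch[*pc];        /* jump straight to the first op */

    op_inc:
        acc++;
        goto *dispatch[*++pc];      /* fetch next op, jump to its label */
    op_print:
        printf("acc = %d\n", acc);
        goto *dispatch[*++pc];
    op_end:
        return 0;
    }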

It may turn out that gcc improves to the point that it can build
measurably better code for specific CPUs, but distributions/*BSDs require
a lowest common denominator build (typically 486 in the x86 family, isn't
it?).

In which case many people may gain quite a lot by building their own custom
parrot, and they're going to notice the compile time.

I admit this is low down on any list of priorities, but it ought to be
somewhere. I find with my work code (the C, not perl related) that gcc 3.2
with all the stops out generates code that is about 5% faster than
deadrat's (default 2.96 heresy non-)gcc. And at YAPC::EU someone reported
that (IIRC) he'd seen a 12% speedup from a newer gcc and option tweaking.

And getting even 5% without changing your perl6 code or parrot's code is
nice.


However, the more interesting thing about getting compile times down is that
you get more smoke in finite time. (And also developers get more done if they
spend less time waiting for the compiler. BUT EVERYONE SHOULD ALREADY BE
USING ccache AS THAT MAKES REBUILDS AND EDITING COMMENTS DAMN FAST
(unless they can think of a good reason not to))

> True. I think reordering is a bigger win, honestly. Lightly used 
> opcode functions won't get swapped in until they're really needed.

More free speedup. I had a crazier idea - experiment with splitting
every parrot function out into its own object file, and see what happens
when you permute the order of all of them.

But I think I need a lot of tuits, and a decent way of doing permutations
with genetic algorithms. (I've got access to a fast machine, but it will
have to stop smoking perl5 for the duration). Although potentially I'll end
up with an order optimised for x86 FreeBSD, which should keep the
Linux vs FreeBSD performance arms race going nicely.
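
(A back-of-the-envelope sketch of what I mean - plain swap-mutation hill
climbing rather than a real GA, and benchmark() is just a stub standing in
for "relink parrot with this object order, run something, time it"; none of
this is existing build code:)

    #include <stdlib.h>
    #include <string.h>

    #define NOBJS 512   /* pretend parrot has been split into 512 .o files */

    /* placeholder fitness function: in reality, relink parrot with the
       given object order, run a benchmark, return the wall time */
    static double benchmark(const int *order, int n) {
        (void)order; (void)n;
        return 0.0;     /* stub */
    }

    /* keep swapping pairs of objects, keep any order that benchmarks faster */
    static void search(int *best, int n, int iterations) {
        double best_time;
        int i;

        for (i = 0; i < n; i++)     /* start from the link order we have now */
            best[i] = i;
        best_time = benchmark(best, n);

        while (iterations--) {
            int trial[NOBJS];
            int a = rand() % n, b = rand() % n, tmp;
            double t;

            memcpy(trial, best, n * sizeof(int));
            tmp = trial[a]; trial[a] = trial[b]; trial[b] = tmp;

            t = benchmark(trial, n);
            if (t < best_time) {
                memcpy(best, trial, n * sizeof(int));
                best_time = t;
            }
        }
    }

    int main(void) {
        int order[NOBJS];
        search(order, NOBJS, 1000);
        return 0;
    }

A real GA would want order-preserving crossover on the permutations, but
even a naive loop like this would tell us whether link order matters at all.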

Nicholas Clark
-- 
Even better than the real thing:        http://nms-cgi.sourceforge.net/
