IronPython doesn't have an interpreter loop and therefore has no POP / TOP / 
etc...   Instead, IronPython has a method called Int32Ops.Add which looks 
like:

        public static object Add(Int32 x, Int32 y) {
            long result = (long)x + y;
            if (Int32.MinValue <= result && result <= Int32.MaxValue) {
                return Microsoft.Scripting.Runtime.RuntimeHelpers.Int32ToObject((Int32)result);
            }
            return BigIntegerOps.Add((BigInteger)x, (BigInteger)y);
        }
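In Python terms, the overflow check amounts to the following sketch.  The tags stand in for the boxed Int32 vs. BigInteger results, and Python's arbitrary-precision ints play the role of the wider C# long, so the widening step can't itself overflow:

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def int32_add(x, y):
    # Do the add in a type wide enough that it can't overflow
    # (the C# version uses a 64-bit long for this).
    result = x + y
    if INT32_MIN <= result <= INT32_MAX:
        return ("Int32", result)       # fast path: stays a fixed-width int
    return ("BigInteger", result)      # overflow: promote to bignum
```

So 2 + 2 stays on the Int32 path, while Int32.MaxValue + 1 promotes.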

This is the implementation of int.__add__.  Note that calling int.__add__ can 
actually return NotImplemented; that case is handled by the method binder, 
which looks at the strong typing defined on Add's signature here and 
automatically generates the NotImplemented result when the arguments aren't 
ints.  That's why you don't see it here even though this is the full 
implementation of int.__add__.
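Conceptually, the binder's generated wrapper looks something like this Python sketch (binder_int_add is a made-up name; the real binder emits IL around the strongly-typed Add, not Python):

```python
def binder_int_add(x, y):
    # The binder reads the (Int32, Int32) signature and produces the
    # NotImplemented result itself when the arguments don't match,
    # so the strongly-typed Add body never has to.
    if type(x) is int and type(y) is int:
        return x + y
    return NotImplemented
```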

OK, next, if you define a function like:

def adder(a, b):
        return a + b

this turns into a .NET method, which will get JITed, and which in C# would 
look something like:

static object adder(object a, object b) {
    return $addSite.Invoke(a, b);
}

where $addSite is a dynamically updated call site.

$addSite knows that it's performing addition, but initially the only thing it 
knows how to do is update itself the first time it's invoked.  $addSite is 
local to the function, so if you define another function doing addition it'll 
have its own site instance.
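A Python sketch of that call-site mechanism (CallSite and add_binder are invented names for illustration; the real sites are generated delegates):

```python
class CallSite:
    """Toy dynamic call site.  It starts out knowing only how to update
    itself; the first invoke asks the binder (the runtime) for a stub
    specialized to the arguments, caches it, and later invokes hit the
    cached stub directly."""

    def __init__(self, binder):
        self.binder = binder
        self.target = self._update_binding_and_invoke

    def invoke(self, a, b):
        return self.target(a, b)

    def _update_binding_and_invoke(self, a, b):
        self.target = self.binder(a, b)   # generate + cache a stub
        return self.target(a, b)

# A toy binder: specialize on the argument types seen at the first call.
def add_binder(a, b):
    if type(a) is int and type(b) is int:
        return lambda x, y: x + y          # the "fast path" stub
    return lambda x, y: NotImplemented     # placeholder for other cases
```

The second invoke goes straight to the cached stub without consulting the binder again.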

So the first thing the call site does is call back into the IronPython 
runtime, which starts looking at a & b to figure out what to do.  Python 
defines the protocol as: try __add__, maybe try __radd__, handle coercion, 
etc...  So we go looking through and find the __add__ method; if that can 
return NotImplemented then we also find the __radd__ method, etc...  In this 
case we're just adding two integers and we know that the implementation of 
Add() won't return NotImplemented, so there's no need to call __radd__.  We 
know we don't have to worry about NotImplemented because the Add method 
doesn't have the .NET attribute indicating it can return NotImplemented.
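The lookup the runtime performs is essentially Python's binary-operator protocol; here's a very simplified sketch (it ignores the subclass-dispatches-first rule and old-style coercion):

```python
def binary_add(a, b):
    # Try a.__add__(b) first.
    add = getattr(type(a), "__add__", None)
    if add is not None:
        result = add(a, b)
        if result is not NotImplemented:
            return result
    # Fall back to b.__radd__(a).
    radd = getattr(type(b), "__radd__", None)
    if radd is not None:
        result = radd(b, a)
        if result is not NotImplemented:
            return result
    raise TypeError("unsupported operand types: %s + %s"
                    % (type(a).__name__, type(b).__name__))
```

The point of the .NET attribute is that when Add() is known never to return NotImplemented, the whole fallback half of this protocol can be skipped.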

At this point we need to do two things.  We need to generate the test which is 
going to see if future arguments are applicable to what we just figured out and 
then we need to generate the code which is actually going to handle this.  That 
gets combined together into the new call site delegate and it'll look something 
like:

static object CallSiteStub(CallSite site, object a, object b) {
        if (a != null && a.GetType() == typeof(int) &&
            b != null && b.GetType() == typeof(int)) {
            return IntOps.Add((int)a, (int)b);
        }
        return site.UpdateBindingAndInvoke(a, b);
}
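The same guard-then-dispatch shape in Python (a sketch; DummySite stands in for the real call site's rebinding machinery):

```python
class DummySite:
    """Stands in for the call site; the real thing would regenerate
    a new stub here and cache it."""
    def update_binding_and_invoke(self, a, b):
        return ("rebind", a, b)

def call_site_stub(site, a, b):
    # Guard: are the arguments still exactly two ints, the case we
    # specialized for when this stub was generated?
    if type(a) is int and type(b) is int:
        return a + b  # fast path -- plays the role of IntOps.Add
    # Guard failed: fall back into the runtime to respecialize.
    return site.update_binding_and_invoke(a, b)
```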

That gets compiled down as a lightweight dynamic method which also gets JITed.  
The next time through, the call site's Invoke body will be this method, and 
things will go really fast if we have ints again.  Also notice this looks an 
awful lot like the inlined fast-path code dealing with ints that you quoted.  
If everything was awesome (currently it's not, for a couple of reasons) the 
JIT would even inline the IntOps.Add call and it'd probably be near identical.  
And everything would be running native on the CPU.

So that's how 2 + 2 works...  Finally, if it's a user type then we'd generate 
a more complicated test like this (getting more and more into pseudo-code to 
keep things simple):

if (PythonOps.CheckTypeVersion(a, 42) && PythonOps.CheckTypeVersion(b, 42)) {
    return $callSite.Invoke(__cachedAddSlot__.__get__(a), b);
}

Here $callSite is another stub which will handle doing optimal dispatch to 
whatever __add__.__get__ will return.  It could be a Python type, it could be a 
user defined function, it could be the Python built-in sum function, etc...  so 
that's the reason for the extra dynamic dispatch.
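A Python sketch of the version-tag idea behind PythonOps.CheckTypeVersion (PyType and the version numbers are invented for illustration):

```python
class PyType:
    """Toy version-tagged type: any mutation of the type bumps its
    version, which invalidates rules cached against the old number."""
    def __init__(self, attrs):
        self.attrs = dict(attrs)
        self.version = 0

    def set_attr(self, name, value):
        self.attrs[name] = value
        self.version += 1

def check_type_version(t, expected):
    # A cached rule stays valid only while the type is unchanged,
    # so the guard is a single cheap integer compare instead of a
    # full attribute lookup.
    return t.version == expected
```

If someone assigns a new __add__ onto the class, the version bumps, the guard fails, and the site rebinds.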

So in summary: everything is compiled to IL.  At runtime we have lots of stubs 
all over the place which do the work to figure out the dynamic operation and 
then cache the result of that calculation.

Also, what I've just described is how IronPython 2.0 works.  IronPython 1.0 is 
basically the same, but mostly without the stubs, and where we do use stub 
methods they're much less sophisticated.

Also, IronPython is open source - www.codeplex.com/IronPython

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of castironpi
Sent: Tuesday, July 29, 2008 9:20 PM
To: python-list@python.org
Subject: Re: interpreter vs. compiled

On Jul 29, 7:39 am, alex23 <[EMAIL PROTECTED]> wrote:
> On Jul 29, 2:21 pm, castironpi <[EMAIL PROTECTED]> wrote:
>
> > On Jul 28, 5:58 pm, Fuzzyman <[EMAIL PROTECTED]> wrote:
> > > Well - in IronPython user code gets compiled to in memory assemblies
> > > which can be JIT'ed.
>
> > I don't believe so.
>
> Uh, you're questioning someone who is not only co-author of a book on
> IronPython, but also a developer on one of the first IronPython-based
> commercial applications.
>
> I know authorship isn't always a guarantee of correctness, but what
> experience do you have with IronPython that makes you so unwilling to
> accept the opinion of someone with substantial knowledge of the
> subject?

None, no experience, no authority, only the stated premises &
classifications, which I am generally tending to misinterpret.  I'm
overstepping my bounds and trying to do it politely.  (Some might call
it learning, which yes, though uncustomary, *requires questioning
authorities*, or reinventing.)

Evidently, I have a "fundamental misunderstanding of the compilation
process", which I'm trying to correct by stating what I believe.  I'm
trying to elaborate, and I'm meeting with increasingly much detail.
So, perhaps I'll learn something out of this.  Until then...

What I know I have is two conflicting, contradictory, inconsistent
beliefs.  Maybe I've spent too much time in Python to imagine how a
dynamic language can compile.

This is from 7/22/08, same author:
> I wouldn't say "can't".  The current CPython VM does not compile
> code.  It COULD.  The C#/.NET VM does.

Three big claims here that I breezed right over and didn't believe.

> It COULD.

I'm evidently assuming that if it could, it would.

> The current CPython VM does not compile code.

Therefore it couldn't, or the assumption is wrong.  Tim says it is.
And the glaring one--

WHY NOT?  Why doesn't CPython do it?

From 7/18/08, own author:
>>
#define TOP()           (stack_pointer[-1])
#define BASIC_POP()     (*--stack_pointer)

...(line 1159)...
w = POP();
v = TOP();
if (PyInt_CheckExact(v) && PyInt_CheckExact(w)) {
        /* INLINE: int + int */
        register long a, b, i;
        a = PyInt_AS_LONG(v);
        b = PyInt_AS_LONG(w);
        i = a + b;
<<

I am imagining that every Python implementation has something like
it.  If IronPython does not, in particular, have the 'POP();
TOP();' sequence, then it isn't running on a stack machine.  Is the
IronPython code open source, and can someone link to it?  I'm not
wading through it from scratch.  What does it have instead?  Does
dynamic typing still work?

<closing hostile remark>
If you're bluffing, bluff harder; I call.  If you're not, I apologize;
teach me something.  If you can ask better, teach me that too.
</hostile>
--
http://mail.python.org/mailman/listinfo/python-list
