Thanks for your response, will have a look.
Ok, dis() is all that is needed to disassemble.

Very cool!

A long-term goal could indeed be to have
a Prolog interpreter produce 20 MLIPS, like
SWI-Prolog, but tightly integrated into
Python, so that it directly makes use of
Python objects and the Python garbage
collection, like Dogelog Runtime does.

Although Dogelog Runtime has its own
garbage collection, it's only used to help
the native Python garbage collection.
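
Roughly, that pattern can be pictured like this (a hypothetical
sketch with made-up names, not Dogelog's actual code): the runtime
just drops references it no longer needs, and CPython's reference
counting then does the real freeing.

class Var:
    def __init__(self):
        self.ref = None        # bound term, or None if unbound

trail = []                     # bindings recorded while solving a goal

def undo_to(mark):
    # Unbind variables back to a choice point and drop the references;
    # once nothing points at the old terms any more, Python reclaims
    # them by itself.
    while len(trail) > mark:
        var = trail.pop()
        var.ref = None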

The result is that you can enjoy bidirectional
calling between Prolog and Python. For example, the Prolog
addition of two numbers is realized as:

###
# +(A, B, C): [ISO 9.1.7]
# The predicate succeeds in C with the sum of A and B.
##
def eval_add(alpha, beta):
    check_number(alpha)
    check_number(beta)
    try:
        return alpha + beta
    except OverflowError:
        raise make_error(Compound("evaluation_error", ["float_overflow"]))

And then register it:

    add("+", 3, make_dispatch(eval_add, MASK_MACH_FUNC))

Could also map the exception to a Prolog term later.
That's not so much an issue for speed. The sunshine
(happy-path) case is straightforward.
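
For instance, the sunshine case can be exercised directly on the
Python side, bypassing the dispatch machinery (assuming check_number
accepts plain Python ints and floats):

    >>> eval_add(1, 2)
    3
    >>> eval_add(1.5, 2.25)
    3.75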

But I might try dis() on eval_add(). Are exception
blocks in Python cheap or expensive? Are they like
in Java, just some code annotation, or like in the Go
programming language, pushing some panic handler?
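
A quick way to check is to disassemble a small stand-in with a
try/except (a minimal sketch, independent of the runtime helpers):

from dis import dis

def guarded_add(a, b):
    # same shape as eval_add, but without check_number/make_error
    try:
        return a + b
    except OverflowError:
        return None

dis(guarded_add)

On CPython before 3.11 the try shows up as a SETUP_FINALLY opcode
that pushes a handler block on entry; from CPython 3.11 on exceptions
are "zero-cost", so there is no setup opcode at all and the handler
only costs something when an exception is actually raised.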

Greg Ewing wrote:
On 16/09/21 4:23 am, Mostowski Collapse wrote:
I really wonder why my Python implementation
is a factor of 40 slower than my JavaScript implementation.

There are Javascript implementations around nowadays that are
blazingly fast. Partly that's because a lot of effort has been
put into them, but it's also because Javascript is a different
language. There are many dynamic aspects to Python that make
fast implementations difficult.

I use in Python:

   temp = [NotImplemented] * code[pos]
   pos += 1

is the idiom [_] * _ slow?

No, on the contrary, it's probably the fastest way to do it
in Python. You could improve it a bit by precomputing
[NotImplemented]:

# once at the module level
NotImplementedList = [NotImplemented]

# whenever you want a new list
temp = NotImplementedList * code[pos]

That's probably at least as fast as a built-in function for
creating lists would be.
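
If in doubt, both variants are easy to measure with timeit from the
stdlib (a quick sketch; the absolute numbers depend on the machine
and on the list length):

from timeit import timeit

NotImplementedList = [NotImplemented]
n = 100

print(timeit(lambda: [NotImplemented] * n))    # build the one-element list each time
print(timeit(lambda: NotImplementedList * n))  # reuse the precomputed one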

does it really first create an
array of size 1 and then enlarge it?

It does:

>>> def f(code, pos):
...     return [NotImplemented] * code[pos]
...
>>> from dis import dis
>>> dis(f)
   2           0 LOAD_GLOBAL              0 (NotImplemented)
               2 BUILD_LIST               1
               4 LOAD_FAST                0 (code)
               6 LOAD_FAST                1 (pos)
               8 BINARY_SUBSCR
              10 BINARY_MULTIPLY
              12 RETURN_VALUE

BTW, the Python terminology is "list", not "array".
(There *is* something in the stdlib called an array, but
it's rarely used or needed.)

