On 10/29/2012 10:53 PM, Michael Torrie wrote:
> On 10/29/2012 01:34 PM, Andrew Robinson wrote:
>> No, I don't think it big and complicated. I do think it has timing
>> implications which are undesirable because of how *much* slices are used.
>> In an embedded target -- I have to optimize; and I will have to reject
>> certain parts of Python to make it fit and run fast enough to be useful.
> Since you can't port the full Python system to your embedded machine
> anyway, why not just port a subset of python and modify it to suit your
> needs right there in the C code. It would be a fork, yes,
You're exactly right; that's what I *know* I am faced with.
> Without a libc, an MMU on the CPU, and a kernel, it's not going to just
> compile and run.
I have libc. The MMU is a problem; but the compiler does implement the
standard "C" math library -- with floats, though, instead of doubles.
That's the only problem there.
> What you want with slicing behavior changes has no
> place in the normal cPython implementation, for a lot of reasons. The
> main one is that it is already possible to implement what you are
> talking about in your own python class, which is a fine solution for a
> normal computer with memory and CPU power available.
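The per-class approach described above can be sketched in plain Python:
`__getitem__` receives a `slice` object whose meaning a class is free to
reinterpret, with no interpreter changes. The `ClampedList` name and its
policy here are illustrative only, not something proposed in this thread:

```python
class ClampedList:
    """Illustrative wrapper: custom slice handling in a plain
    Python class, without modifying CPython itself."""

    def __init__(self, data):
        self._data = list(data)

    def __getitem__(self, key):
        if isinstance(key, slice):
            # slice.indices() resolves None/negative/out-of-range
            # endpoints against the sequence length, so the class
            # can apply whatever slicing policy it wants here.
            start, stop, step = key.indices(len(self._data))
            return [self._data[i] for i in range(start, stop, step)]
        # Plain integer indexing falls through to the list.
        return self._data[key]
```

For example, `ClampedList([1, 2, 3, 4])[1:3]` returns `[2, 3]`, and an
out-of-range stop such as `[0:100]` is clamped rather than raising.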
If the tests I outlined in the previous post fail to show a major
performance improvement and at least a modest code-size reduction, or if
the change would *often* introduce bugs -- then I *AGREE* with you.
Otherwise, I don't. I don't think wasting extra CPU power is a good
thing -- extra CPU power can always be used by something else....
I won't belabor the point further. I'd love to see a counterexample to
the specific criteria I just provided to Ian -- it would end my quest,
and be a good reference to point others to.
--
http://mail.python.org/mailman/listinfo/python-list