On 10/29/2012 01:34 PM, Andrew Robinson wrote:
> No, I don't think it's big and complicated. I do think it has timing
> implications which are undesirable because of how *much* slices are used.
> On an embedded target I have to optimize, and I will have to reject
> certain parts of Python to make it fit and run fast enough to be useful.
Since you can't port the full Python system to your embedded machine anyway, why not port a subset of Python and modify it to suit your needs directly in the C code? It would be a fork, yes, but any port to this target will be a fork. However you cut it, it won't be easy, and it won't be easy to maintain. You'll essentially be writing your own implementation of Python (that's what python-on-a-chip is, and that's why it's closer to Python 2.x than Python 3). That's totally fine, though.

I get the impression you think you'll be able to port CPython as-is to your target. Without a libc, an MMU on the CPU, and a kernel, it's not going to just compile and run.

So the only solution, given your constraints, is to implement your own Python interpreter that handles a subset of Python, and modify it to suit your tastes. The slicing behavior changes you want have no place in the normal CPython implementation, for many reasons. The main one is that what you're describing can already be implemented in your own Python class, which is a fine solution on a normal computer with memory and CPU power to spare.

--
http://mail.python.org/mailman/listinfo/python-list
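To illustrate the last point: custom slice semantics can be implemented in an ordinary Python class by intercepting slice objects in __getitem__, with no interpreter changes at all. The sketch below is hypothetical (the "strict bounds" behavior is just an illustrative choice, not necessarily the change discussed in the thread): a list wrapper that raises on out-of-range slice bounds instead of silently clamping them the way built-in lists do.

```python
# Hypothetical sketch: customizing slice behavior in a plain Python
# class, without modifying CPython. StrictList raises IndexError when
# a slice bound falls outside the sequence, instead of clamping it as
# built-in lists do.

class StrictList:
    def __init__(self, data):
        self._data = list(data)

    def __getitem__(self, key):
        if isinstance(key, slice):
            n = len(self._data)
            # Reject bounds outside [-n, n] instead of clamping them.
            for bound in (key.start, key.stop):
                if bound is not None and not -n <= bound <= n:
                    raise IndexError("slice bound %r out of range" % bound)
            return StrictList(self._data[key])
        return self._data[key]  # plain integer indexing is unchanged

    def __len__(self):
        return len(self._data)

    def __repr__(self):
        return "StrictList(%r)" % self._data

xs = StrictList([1, 2, 3, 4])
print(xs[1:3])   # in-range slice works as usual
# xs[0:99] would raise IndexError instead of clamping to the end
```

Whether this is affordable is exactly the trade-off above: on a normal machine the extra method dispatch is negligible, while on a constrained embedded target it may be the kind of per-slice overhead worth forking the interpreter to avoid.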