On Mon, 14 Oct 2013 12:18:59 -0700, John Nagle wrote:

> No, Python went through the usual design screwups. Look at how
> painful the slow transition to Unicode was, from just "str" to Unicode
> strings, ASCII strings, byte strings, byte arrays, 16 and 31 bit
> character builds, and finally automatic switching between rune widths.
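For anyone who hasn't followed that saga to its end: the "automatic switching between rune widths" is the flexible string representation from PEP 393, which landed in CPython 3.3, and you can watch it at work from the interpreter. A minimal sketch, assuming a 3.3 or later interpreter (the exact byte counts are implementation details and will vary):

import sys

# Under PEP 393, a str is stored with 1, 2 or 4 bytes per character,
# chosen automatically from the widest code point it contains.
narrow = "a" * 1000           # pure ASCII: 1 byte per character
wide = "\u0100" * 1000        # needs the BMP: 2 bytes per character
widest = "\U00010000" * 1000  # astral plane: 4 bytes per character

print(sys.getsizeof(narrow))  # smallest of the three
print(sys.getsizeof(wide))    # roughly twice the size
print(sys.getsizeof(widest))  # roughly four times the size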
Are you suggesting that Guido van Rossum wasn't omniscient back in 1991 when he first released Python??? OH MY GOD!!! You ought to blog about this, let the world know!!!!

But seriously... although work on the Unicode standard began as early as 1987, the first official release of the standard wasn't until nine months after the first public release of Python. Do you really consider it a "design screwup" that Guido didn't build support for Unicode into Python from the beginning? Given the constraints of backwards compatibility, and that Unicode didn't even exist when Python was first created, I don't think the history of Unicode support in Python is a screw-up in the least. And if it is a screw-up, it's *Unicode's* screw-up, because they're the ones who thought that 16-bit chars would be enough in the first place.

While it would have been nice if Python had invented the idea of using different rune widths back in Python 2.2, I don't think we can hold it against GvR or the other Python devs that they didn't. They're only human. As far as I know, only one other language does such a thing, namely Pike, which is not exactly high-profile.

> Old-style classes vs. new-style classes. Adding a boolean type as an
> afterthought (that was avoidable; C went through that painful
> transition before Python was created). Operator "+" as concatenation
> for built-in arrays but addition for NumPy arrays.
>
> Each of those reflects a design error in the type system which
> had to be corrected.

Perhaps the first one -- had GvR not decided in the first place that built-in types should be separate from user-defined classes, the old- vs. new-style class transition would have been unnecessary.

But bools are not an example. The decision not to include bools as a separate type was, and remains, a perfectly legitimate decision. Perhaps one might argue that Python-with-bools is *better* than Python-without-bools, but we would be foolish to argue that Python-without-bools was a screw-up. Bools are a nice-to-have, not a must-have.

And as for numpy arrays, well, if a long-standing Python developer such as yourself doesn't yet understand that this is a feature, not a mistake, there's a serious problem, and it's not with Python. Operator overloading exists precisely so that custom classes aren't limited to the exact same behaviour as built-ins. The fact that the numpy devs made a different decision about what "+" should mean than the Python devs did is not a sign that the design was screwed up; it is a sign that the system works, as the short demonstration below shows.

It is true that numpy has a problem with Python operators: there aren't enough of them. There have been various attempts to work out a good syntax for adding arbitrary extra operators, so that numpy can have *both* element-wise and array-wise operators at the same time. But the lack of such a syntax is not a design screw-up. It's a hard problem to solve, and sometimes it is better to do without a feature than to add it poorly.
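Here is that "different decision" in action. A minimal sketch, assuming numpy is installed; the Money class is a made-up example of mine, not anything from the stdlib:

import numpy as np

a = [1, 2, 3]
b = [4, 5, 6]
print(a + b)    # list "+" is concatenation: [1, 2, 3, 4, 5, 6]

x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
print(x + y)    # ndarray "+" is element-wise addition: [5 7 9]

# Any class can make the same choice, via the same protocol numpy uses:
class Money:
    def __init__(self, cents):
        self.cents = cents
    def __add__(self, other):
        # "+" means whatever makes sense for the type. That's the feature.
        return Money(self.cents + other.cents)

print((Money(150) + Money(250)).cents)    # 400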
> The type system is now in good shape. The next step is to
> make Python fast.

Whenever I see somebody describing a *language* as "fast" or "slow", especially when the next few sentences reveal that they are aware of the existence of multiple *implementations*:

> Python objects have dynamic operations suited to a
> naive interpreter like CPython. [...] That's
> part of why Unladen Swallow failed and why PyPy development is so slow.

as if "fast" and "slow" were objective, concrete and, most importantly, *fixed* standards that are the same for everybody, I suspect trolling.

Or to put it another way: Python is already fast. Using PyPy, you can write pure-Python code that is faster than the equivalent optimized C code compiled using gcc. Even using vanilla CPython, you can write pure Python code that (for example) checks over 12,000 nine-digit integers for primality per second, on a relatively old and slow computer. (A sketch of that sort of test follows at the end of this message.) If that's not *fast*, nothing is.

Whether it is *fast enough* is a completely different question, and one which leads to the question "fast enough for what?". But people who like to complain about "Python being slow" don't like that question.
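For the curious, here is the sort of primality test I mean. This is a sketch, not the original benchmark, and the rate will vary with hardware and Python version. The bases 2, 3, 5 and 7 are known to make Miller-Rabin deterministic for all n below 3,215,031,751, which comfortably covers nine-digit integers.

import random
import time

def is_prime(n):
    # Deterministic Miller-Rabin for n < 3,215,031,751.
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True

numbers = [random.randrange(10**8, 10**9) for _ in range(100000)]
start = time.time()
count = sum(is_prime(n) for n in numbers)
elapsed = time.time() - start
print(count, "primes found;", int(len(numbers) / elapsed), "checks per second")

-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list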