On Tue, Oct 15, 2013 at 6:18 AM, John Nagle <na...@animats.com> wrote:
> No, Python went through the usual design screwups.
> Each of [the below] reflects a design error in the type system which
> had to be corrected.
I'll pick up each one here, as I think some of them need further discussion.

> Look at how painful the slow transition to Unicode was,
> from just "str" to Unicode strings, ASCII strings, byte strings, byte
> arrays, 16 and 31 bit character builds, and finally automatic
> switching between rune widths.

I'm not sure what you mean by all of these - I've known Python for only a (relatively) short time, and wasn't there in the 1.x days (much less the <1.0 days). But according to its history page, the early 1.x versions of Python predate the widespread adoption of Unicode, so it's a little unfair to look with 2013 eyes and say that full Unicode support should have been there from the start. If anyone invented a language today that didn't handle Unicode properly, I would be very much disappointed; but changing the meaning of quoted string literals is a pretty major change, so I'm just glad it got sorted out for 3.0.

As to the 16/32 bit builds, there aren't actually very many languages that get this right; Python's now a blazing torch, showing the way for others to follow. (Pike's had something very similar to PEP 393 for years, but nobody looks to obscurities.) I hope we'll see other languages start to follow suit.

> Old-style classes vs. new-style classes.

By the time I started using Python, new-style classes existed and were the recommended way to do things, so I never got the "feel" for old-style classes. I assume there was a simplicity to them, since new-style classes were described as having a performance cost, but one worth paying. My guess is it comes under the category of "would have to be omniscient to recognize what would happen"; Steven, maybe you can fill us in?

> Adding a
> boolean type as an afterthought (that was avoidable; C went through
> that painful transition before Python was created).

I don't know about that. Some languages get by just fine without a dedicated boolean type.
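(For what it's worth, the bools Python eventually grew are still integers underneath - bool is a subclass of int - which is a big part of why the transition was survivable for old 0/1-flag code. A quick illustration:)

```python
# bool is a subclass of int, so code written for 0/1 flags keeps working
print(isinstance(True, int))     # True
print(True + True)               # 2 - bools act as 1 and 0 in arithmetic
print(True == 1, False == 0)     # True True
print(sum([True, False, True]))  # 2 - handy for counting matches
```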
Python didn't have them, then it had them as integers, now it has them as bools. Is it a major problem? (Apart from adding them in a point release. That's an admitted mistake.) Python doesn't have a 'vector' type either; you just use a tuple. Some things don't need to be in the language, they can be pushed off to the standard library. And speaking of which...

> Operator "+" as concatenation for built-in arrays but addition
> for NumPy arrays.

... NumPy definitely isn't part of the language. It's not even part of the standard library, it's fully third-party. The language decrees that [1,2] + [3,4] = [1,2,3,4], and that custom_object1 + custom_object2 = custom_object1.__add__(custom_object2), more or less, and then leaves the implementation of __add__ up to you. Maybe you'll make an "Entropy" class, where entropy += int blocks until it's acquired that much more entropy (maybe from /dev/random), and entropy - int returns a random number based on its current state. It makes a measure of sense, even if it's not what you'd normally want. You can shoot yourself in the foot in any language; and if you write something as big and popular as NumPy, you get to shoot other people in the foot too! :)

ChrisA
--
https://mail.python.org/mailman/listinfo/python-list
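(A rough sketch of that hypothetical Entropy class, purely for illustration of how __add__/__sub__ get left up to you - the name and behaviour are invented, and os.urandom stands in for actually blocking on /dev/random:)

```python
import os

class Entropy:
    """Hypothetical class showing operator overloading via __add__/__sub__."""
    def __init__(self):
        self.pool = b""

    def __add__(self, nbytes):
        # entropy + n "acquires" n more bytes of entropy.
        # (A real version might block reading /dev/random; os.urandom won't.)
        self.pool += os.urandom(nbytes)
        return self  # returning self is what makes 'entropy += n' work here

    def __sub__(self, nbytes):
        # entropy - n hands back n bytes' worth, consuming them from the pool
        if len(self.pool) < nbytes:
            raise ValueError("not enough entropy in the pool")
        out, self.pool = self.pool[:nbytes], self.pool[nbytes:]
        return int.from_bytes(out, "big")

entropy = Entropy()
entropy += 16      # __add__: gather 16 bytes of entropy
print(entropy - 4) # __sub__: a random number built from 4 of those bytes
```

Whether that's good taste is another question, but the language happily lets you do it.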