On Tue, 25 Mar 2014 14:57:02 +1100, Chris Angelico wrote:

> On Tue, Mar 25, 2014 at 2:43 PM, Rustom Mody <rustompm...@gmail.com>
> wrote:
>> What you are missing is that programmers spend 90% of their time
>> reading code and 10% writing code.
>>
>> You may well be in the super-whiz category (not being sarcastic here).
>> All that will change is up to 70-30 (because you rarely make a
>> mistake). You still have to read oodles of others' code.
>
> No, I'm not missing that. But the human brain is a tokenizer, just as
> Python is. Once you know what a token means, you comprehend it as that
> token, and it takes up space in your mind as a single unit. There's
> not a lot of readability difference between a one-symbol token and a
> one-word token.
Hmmm, I don't know about that. Mathematicians are heavy users of
symbols. Why do they write ∀ instead of "for all", or ⊂ instead of
"subset"? Why do we write "40" instead of "forty"?

> Also, since the human brain works largely with words,

I think that's a fairly controversial opinion. The Chinese might have
something to say about that.

I think that heavy use of symbols is a form of Huffman coding -- common
things should be short, and uncommon things longer. Mathematicians tend
to be *extremely* specialised, so they're all inventing their own
Huffman codings, and the end result is a huge number of (often
ambiguous) symbols.

Personally, I think that it would be good to start accepting, but not
requiring, Unicode in programming languages. We can already write:

    from math import pi as π

Perhaps we should be able to write:

    setA ⊂ setB


-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list
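To make the Huffman analogy concrete, here is a toy sketch (my own
illustration, nothing official; the function name huffman_code_lengths
and the sample text are invented) that builds a Huffman code from
symbol frequencies and prints how many bits each symbol gets:

    import heapq
    from collections import Counter

    def huffman_code_lengths(text):
        """Return a {symbol: code length in bits} dict for a Huffman
        code built from the symbol frequencies in text."""
        freq = Counter(text)
        # Heap entries are (weight, tiebreak, {symbol: depth so far}).
        # The unique tiebreak keeps Python from comparing the dicts.
        heap = [(w, i, {sym: 0}) for i, (sym, w) in enumerate(freq.items())]
        heapq.heapify(heap)
        counter = len(heap)
        while len(heap) > 1:
            w1, _, d1 = heapq.heappop(heap)
            w2, _, d2 = heapq.heappop(heap)
            # Merging two subtrees pushes every symbol one level deeper.
            merged = {s: d + 1 for s, d in {**d1, **d2}.items()}
            heapq.heappush(heap, (w1 + w2, counter, merged))
            counter += 1
        return heap[0][2]

    text = "the theory of the theorem"
    for sym, bits in sorted(huffman_code_lengths(text).items(),
                            key=lambda kv: kv[1]):
        print(repr(sym), bits)

Run it on any sample text and the common symbols come out with the
shortest codes, which is exactly the trade-off that ∀ and ⊂ buy in
mathematical writing.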
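As for the Unicode examples: the π import already works in Python 3
(PEP 3131 allows non-ASCII letters in identifiers), while the subset
test has to be spelled with the existing ASCII operators for now. A
quick demonstration (setA and setB are made-up sample sets):

    from math import pi as π   # legal today: PEP 3131 non-ASCII identifiers

    print(π * 2)                # 6.283185307179586

    setA = {1, 2}
    setB = {1, 2, 3}

    # What "setA ⊂ setB" would mean, in today's spelling:
    print(setA < setB)          # True: proper subset
    print(setA <= setB)         # True: subset (allows equality)
    print(setA.issubset(setB))  # True: method form of <=

Note the lurking ambiguity even here: should ⊂ mean < (proper subset)
or <= (subset)? That is the same kind of ambiguity the specialised
mathematical notations run into.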