On Tue, Mar 25, 2014 at 2:43 PM, Rustom Mody <rustompm...@gmail.com> wrote:
> What you are missing is that programmers spend
> 90% of their time reading code
> 10% writing code
>
> You may well be in the super-whiz category (not being sarcastic here)
> All that will change is up to 70-30. (because you rarely make a mistake)
> You still have to read oodles of others' code
No, I'm not missing that. But the human brain is a tokenizer, just as
Python is. Once you know what a token means, you comprehend it as that
token, and it takes up space in your mind as a single unit. There's not
a lot of readability difference between a one-symbol token and a
one-word token.

Also, since the human brain works largely with words, you're usually
going to grok things based on how you would read them aloud:

    x = y + 1

    eggs equals why plus one

They take up roughly the same amount of storage space. One of them,
being a more compact notation, lends itself well to a superstructure of
notation; compare:

    x += 1

    eggs plus-equals one

    inc eggs

You can eyeball the first version and read it as the third, which is a
space saving in your brain. But it's not fundamentally different from
the second.

So the saving from using a one-letter symbol that's read "lambda"
rather than the actual word "lambda" is extremely minimal. Unless you
can use it in a higher-level construct, which seems unlikely in Python
(maybe it's different in Haskell? Maybe you use lambda more and
actually do have those supernotations?), you won't really gain
anything.
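To put that concretely, here's a minimal sketch (the names and data are
made up for illustration) of a typical lambda in Python. The whole
lambda reads as one token-sized unit in context, so spelling the
keyword as a single symbol instead of the word wouldn't change how you
parse the line:

    # Made-up example data, purely for illustration.
    pairs = [("spam", 3), ("eggs", 1), ("ham", 2)]
    # The lambda is grokked as one unit, however the keyword is spelled.
    by_count = sorted(pairs, key=lambda p: p[1])
    print(by_count)  # -> [('eggs', 1), ('ham', 2), ('spam', 3)]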
ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list