On Thursday, February 12, 2015 at 11:59:55 PM UTC+5:30, John Ladasky wrote:
> On Thursday, February 12, 2015 at 3:08:10 AM UTC-8, Fabien wrote:
> 
> > ... what a coincidence then that a huge majority of scientists 
> > (including me) dont care AT ALL about unicode. But since scientists are 
> > not paid to rewrite old code, the scientific world is still stuck to 
> > python 2.
> 
> I'm a scientist.  I'm a happy Python 3 user who migrated from Python 2 about 
> two years ago.
> 
> And I use Unicode in my Python.  In implementing some mathematical models 
> which have variables like delta, gamma, and theta, I decided that I didn't 
> like the line lengths I was getting with such variable names.  I'm using δ, 
> γ, and θ instead.  It works fine, at least on my Ubuntu Linux system (and 
> what scientist doesn't use Linux?).  I also have special mathematical 
> symbols, superscripted numbers, etc. in my program comments.  It's easier to 
> read 2x³ + 3x² than 2*x**3 + 3*x**2.
> 
> I am teaching someone Python who is having a few problems with Unicode on his 
> Windows 7 machine.  It would appear that Windows shipped with a 
> less-than-complete Unicode font for its command shell.  But that's not 
> Python's fault.

Haskell is a bit ahead of Python in this respect — it accepts subscripted identifiers like x₁:

Prelude> let (x₁ , x₂) = (1,2)
Prelude> (x₁ , x₂)
(1,2)

Python 3, by contrast, rejects the same names:

>>> (x₁ , x₂) = (1,2)
  File "<stdin>", line 1
    (x₁ , x₂) = (1,2)
      ^
SyntaxError: invalid character in identifier
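A minimal Python 3 check shows why that SyntaxError happens: subscript digits like ₁ (U+2081) fall in Unicode category "No", which PEP 3131 excludes from identifier characters, while Greek letters are perfectly legal:

```python
import unicodedata

# Greek letters are XID characters, so they are legal identifiers...
print("δ".isidentifier())               # True
# ...but subscript digits are not XID_Continue characters,
# hence the SyntaxError above.
print("x\u2081".isidentifier())         # False
print(unicodedata.category("\u2081"))   # "No" (Number, other)
```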

But Python is ahead in another (arguably more important) aspect:
Haskell gets confused by ligatures in identifiers; Python gets them right.

>>> ﬂag = 1
>>> flag
1

Prelude> let ﬂag = 1
Prelude> flag

<interactive>:4:1: Not in scope: `flag'
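The difference comes from PEP 3131: Python NFKC-normalizes identifiers, so the ligature spelling ﬂag (U+FB02 + "ag") and plain flag are the same name, while GHC treats them as distinct. A small sketch of this behavior (using exec so both spellings appear in one script):

```python
import unicodedata

# NFKC normalization folds the "fl" ligature into plain "fl"
assert unicodedata.normalize("NFKC", "\ufb02ag") == "flag"

ns = {}
exec("\ufb02ag = 1", ns)   # bind the name using the ligature spelling
print(ns["flag"])          # → 1: the binding is stored under the normalized name
```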

Hopefully Python will widen its identifier-character repertoire as well.



-- 
https://mail.python.org/mailman/listinfo/python-list
