On 4/1/2018 5:24 PM, David Foster wrote:
My understanding is that the Python interpreter already has enough information 
when bytecode-compiling a .py file to determine which names correspond to local 
variables in functions. That suggests it has enough information to identify all 
valid names in a .py file and in particular to identify which names are not 
valid.
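(Not from the original message, but a quick way to see this: CPython's symbol-table pass already classifies every name in a function. The function and typo below are made up for illustration.)

```python
import symtable

# CPython's symbol-table pass classifies each name in a function as
# local, global, or free before bytecode is emitted.
src = """
def greet(name):
    message = "Hello, " + name
    return mesage  # typo: never assigned, so it is compiled as a global load
"""

mod = symtable.symtable(src, "<example>", "exec")
greet = mod.lookup("greet").get_namespaces()[0]
for sym in greet.get_symbols():
    print(sym.get_name(), "local" if sym.is_local() else "not local")
```

Note that the misspelled `mesage` is not flagged; it is silently classified as a global, on the theory that it might be bound elsewhere at runtime.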

If broken name references were detected at compile time, it would eliminate a 
huge class of errors before running the program: missing imports, calls to 
misspelled top-level functions, references to misspelled local variables.
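(A hypothetical example of the current behavior: the misspelled helper below is invented, but it shows that the file compiles and imports fine, and the error only surfaces when the line executes.)

```python
def compute_total():
    # 'acumulate' is misspelled and bound nowhere in this module;
    # compiling and importing this file raises no error at all.
    return acumulate([1, 2, 3])

# The failure appears only at call time, as a runtime NameError:
try:
    compute_total()
except NameError as exc:
    print("only caught at runtime:", exc)
```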

Of course running a full typechecker like mypy would eliminate more errors like 
misspelled method calls, type mismatch errors, etc. But if it is cheap to 
detect a wide variety of name errors at compile time, is there any particular 
reason it is not done?

- David

P.S. Here are some uncommon language features that interfere with identifying 
all valid names. In their absence, one might expect an invalid name to be a 
syntax error:

* import *
* manipulating locals() or globals()
* manipulating a frame object
* eval
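(A minimal sketch of why two of the items above defeat a static scan; the names and values here are made up for illustration.)

```python
# Manipulating globals(): a binding that no static scan of the source sees.
globals()["answer"] = 42
print(answer)  # resolves fine at runtime, though 'answer' is never assigned directly

# import *: the set of names it binds is only known once the module is imported.
from os.path import *
print(join("a", "b"))  # 'join' appears out of nowhere as far as the source text shows
```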

The CPython parser and compiler are autogenerated from an LL(1) context-free grammar and other files. Context-dependent rules like the ones above are a job for linters and other whole-program analyses. A linter that makes occasional mistakes in its warnings can still be useful; a compiler should be perfect.
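(To make that division of labor concrete, a sketch not from the original message: the compiler rejects only what the grammar and a few purely syntactic checks forbid, while an undefined name compiles cleanly.)

```python
# A genuine syntax error is rejected at compile time:
try:
    compile("return 1", "<s>", "exec")
except SyntaxError as exc:
    print("compile-time:", exc.msg)  # 'return' outside function

# An undefined name compiles without complaint; only execution raises NameError:
code = compile("totally_undefined_name", "<s>", "exec")
try:
    exec(code)
except NameError as exc:
    print("run-time:", exc)
```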


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list