Neil Schemenauer <nas-pyt...@arctrix.com> added the comment:

Just a comment on what I guess is the intended use of literal_eval(), i.e. 
taking a potentially untrusted string and turning it into a Python object.  
Exposing the whole of the Python parser to potential attackers would make me 
very worried.  Parsing code for all of Python syntax is necessarily 
complicated, and there can easily be bugs there.  Generating an AST and then 
walking over it to see if it is safe is also scary: the "attack surface" is 
too large.  This is similar to the Shellshock bug.  If you can trust the 
supplier of the string, then okay, but I would guess that literal_eval() is 
going to get used for untrusted inputs.
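To illustrate the current behaviour: ast.literal_eval() does reject anything
that is not a literal structure, but it only does so *after* the full CPython
parser has already processed the input string, which is the attack-surface
concern above.

```python
import ast

# Literal structures are accepted and evaluated.
data = ast.literal_eval("{'key': [1, 2, 3]}")
assert data == {'key': [1, 2, 3]}

# Anything beyond plain literals (names, calls, attribute access) is
# rejected with ValueError -- but note the whole string has already been
# run through the full Python parser to build the AST before the check.
try:
    ast.literal_eval("__import__('os').system('echo pwned')")
except ValueError:
    pass  # rejected, as intended
```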

It would be really nice to have something like ast.literal_eval() that could be 
used safely on untrusted strings.  I would implement it by writing a restricted 
parser.  Keep it extremely simple.  Validate it with heavy code review and 
extensive testing (e.g. fuzzing).
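A minimal sketch of what such a restricted parser might look like (entirely
hypothetical -- the grammar here covers only integers, simple single-quoted
strings, and lists of those; a real version would need dicts, floats, etc.).
The point is that the whole parser is a few dozen lines that can be audited
by eye, with an explicit nesting bound instead of relying on the full
language grammar:

```python
import re

class ParseError(ValueError):
    """Raised for any input outside the tiny accepted grammar."""

def parse_literal(text, max_depth=20):
    """Parse a restricted literal: int, 'string' (no escapes), or list."""
    pos = 0

    def skip_ws():
        nonlocal pos
        while pos < len(text) and text[pos] in " \t":
            pos += 1

    def parse_value(depth):
        nonlocal pos
        if depth > max_depth:  # explicit nesting bound, no deep recursion
            raise ParseError("too deeply nested")
        skip_ws()
        if pos >= len(text):
            raise ParseError("unexpected end of input")
        ch = text[pos]
        if ch == "[":
            pos += 1
            items = []
            skip_ws()
            if pos < len(text) and text[pos] == "]":
                pos += 1
                return items
            while True:
                items.append(parse_value(depth + 1))
                skip_ws()
                if pos < len(text) and text[pos] == ",":
                    pos += 1
                    continue
                if pos < len(text) and text[pos] == "]":
                    pos += 1
                    return items
                raise ParseError("expected ',' or ']'")
        if ch == "'":
            end = text.find("'", pos + 1)
            if end == -1:
                raise ParseError("unterminated string")
            value = text[pos + 1:end]
            pos = end + 1
            return value
        m = re.match(r"-?\d+", text[pos:])
        if m:
            pos += m.end()
            return int(m.group())
        # Everything else -- names, calls, operators -- is rejected here.
        raise ParseError("unexpected character %r" % ch)

    result = parse_value(0)
    skip_ws()
    if pos != len(text):
        raise ParseError("trailing garbage")
    return result
```

So parse_literal("[1, 'two', [3]]") returns [1, 'two', [3]], while
anything like "__import__('os')" raises ParseError before any Python
machinery touches the string.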

----------
nosy: +nascheme

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue31778>
_______________________________________