New submission from Peter Ludemann :
As far as I can tell, the lib2to3/Grammar.txt file in the Python 3.8 release is
the same as that of the Python 3.7 release, which means it doesn't support the
"walrus" operator (:=) or the "/" positional-only parameter syntax added in 3.8.
--
components: 2to3
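A minimal reproduction sketch (not part of the original report) using the
lib2to3 driver API; with the 3.7-era Grammar.txt both snippets should fail
with a ParseError, though the exact message may vary by version:

    from lib2to3 import pygram, pytree
    from lib2to3.pgen2 import driver, parse

    d = driver.Driver(pygram.python_grammar_no_print_statement,
                      convert=pytree.convert)

    for src in ("if (n := 10) > 5:\n    pass\n",   # walrus operator
                "def f(a, /, b):\n    pass\n"):    # positional-only params
        try:
            d.parse_string(src)
            print("parsed OK: ", src.splitlines()[0])
        except parse.ParseError as exc:
            print("ParseError:", src.splitlines()[0], "->", exc)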
Peter Ludemann added the comment:
Re: breakage due to changes in structure
(https://bugs.python.org/issue36541#msg339669) ... this has already happened in
the past (e.g., type annotations and async).
It's probably a good idea to add some documentation that structure changes can happen.
Peter Ludemann added the comment:
Should I just close this? (I didn't find https://bugs.python.org/issue36541
when I searched, possibly because I used "2to3" instead of "lib2to3" in my
search.)
Peter Ludemann added the comment:
Also, the Grammar.txt diffs look about the same size as I've seen with other
upgrades to lib2to3 when the Python grammar changed.
Peter Ludemann added the comment:
issue36541 and its proposed PR seem to cover my needs.
--
stage: -> resolved
status: open -> closed
New submission from Peter Ludemann :
In general, 'utf8' and 'utf-8' are interchangeable in the codecs (and in many
parts of the Python library). However, 'utf8-sig' is missing ... and it happens
to also be generated by lib2to3.tokenize.detect_encoding.
Peter Ludemann added the comment:
lib2to3.tokenize should allow 'utf8' and 'utf-8' interchangeably, to be
consistent with the rest of the Python library (I looked through the library
source, and there seems to be no consistent preference, and also many (but not
all) checks handle both).
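For illustration (my example, not from the report), the codecs registry already
treats these spellings as the same codec, which is the consistency being asked for:

    import codecs

    for name in ("utf-8", "utf8", "utf_8", "UTF8"):
        print(name, "->", codecs.lookup(name).name)   # all resolve to 'utf-8'

    # The '-sig' spelling is where it breaks down: 'utf-8-sig' is registered,
    # but whether 'utf8-sig' resolves depends on the registered aliases.
    for name in ("utf-8-sig", "utf8-sig"):
        try:
            print(name, "->", codecs.lookup(name).name)
        except LookupError:
            print(name, "-> LookupError")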
Peter Ludemann added the comment:
(oops -- updated this bug instead of submitting a new one)
See also https://bugs.python.org/issue39155
Peter Ludemann added the comment:
To clarify and fix a typo ... lib2to3.pgen2.tokenize.detect_encoding checks for
'utf-8' (and 'utf_8') but not 'utf8' in various places. Similarly for 'latin-1'
and 'latin1'. (The codecs documentation page also lists these spellings as aliases.)
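A hedged sketch of the kind of comparison that would sidestep the spelling
problem (a hypothetical helper, not the actual lib2to3 code): resolve the name
through the codecs registry instead of comparing literal strings.

    import codecs

    def _is_encoding(name, canonical):
        # Hypothetical helper: 'utf8', 'utf_8' and 'utf-8' all compare equal.
        try:
            return codecs.lookup(name).name == canonical
        except LookupError:
            return False

    print(_is_encoding("utf8", "utf-8"))        # True
    print(_is_encoding("utf_8", "utf-8"))       # True
    print(_is_encoding("latin1", "iso8859-1"))  # True: the registry's canonical name for latin-1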
Peter Ludemann added the comment:
The documentation change gives two possible successors:
https://libcst.readthedocs.io/ (https://github.com/Instagram/LibCST)
https://parso.readthedocs.io/
And I've also seen this mentioned: https://github.com/pyga/awpa
Is it possible to settle on one?
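As a rough illustration of what these successors offer (assuming parso is
installed; this snippet is mine, not from the thread), parso keeps enough
whitespace and comment information for an exact round trip:

    import parso

    src = "x = ( 1 +  2 )  # odd spacing, kept on purpose\n"
    tree = parso.parse(src)
    assert tree.get_code() == src   # lossless round trip, unlike ast.parse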
Peter Ludemann added the comment:
I made a suggestion for augmenting ast.parse with some of lib2to3's features;
but nobody seemed interested.
RIP lib2to3. Like many pieces of software, it was used for far more than it
was originally intended for.
https://mail.python.org/archives
Peter Ludemann added the comment:
Every piece of code that uses either lib2to3 or a parser derived from it
(including parso and LibCST) will eventually not be able to upgrade the parser
because PEG can handle grammars that LL(k) can't. That's why I proposed adding
some functionality to the ast module.
Peter Ludemann added the comment:
Looking at the suggested successor tools (redbaron, LibCST, parso, awpa) ...
all of them appear to use some variant of pgen2. But at some point Python will
be using a PEG approach (PEP 617), and therefore the pgen2 approach apparently
won't work.
Peter Ludemann added the comment:
I've written up a proposal for adding "whitespace" handling to the ast module:
https://mail.python.org/archives/list/python-id...@python.org/thread/X2HJ6I6XLIGRZDB27HRHIVQC3RXNZAY4/
I don't think it's a "summer-of-code-sized project".
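The gap the proposal targets can be shown in a couple of lines (my
illustration, not from the proposal; ast.unparse needs Python 3.9+):

    import ast

    src = "x = ( 1 +  2 )  # keep this comment\n"
    print(ast.unparse(ast.parse(src)))
    # -> 'x = 1 + 2'  -- the comment and the original spacing are gone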
Peter Ludemann added the comment:
Yes, I'm thinking of doing this as a wrapper, in such a way that it could be
incorporated into Lib/ast.py eventually. (Also, any lib2to3-ish capabilities
would probably not be suitable for inclusion in the stdlib, at least not
initially ... but I ha
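A purely hypothetical sketch of the wrapper idea (names invented here, not the
actual proposal): pair each node with its original text via
ast.get_source_segment, available since Python 3.8.

    import ast

    def parse_with_source(src):
        # Hypothetical wrapper: attach the original source text to each node.
        tree = ast.parse(src)
        for node in ast.walk(tree):
            if hasattr(node, "lineno"):
                node.source_segment = ast.get_source_segment(src, node)
        return tree

    tree = parse_with_source("total = 1 +  2\n")
    print(tree.body[0].source_segment)   # 'total = 1 +  2'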