Matthew Rahtz added the comment:
Ok, https://github.com/python/cpython/pull/32341/files is a reference for how
the current implementation behaves. FWIW, it *is* mostly correct - with a few
minor tweaks it might be alright for at least the 3.11 release.
In particular, instead of dealing with
Change by Matthew Rahtz :
--
keywords: +patch
pull_requests: +30396
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/32341
___
Python tracker
<https://bugs.python.org/issu
Matthew added the comment:
> Probably there was also shadowing involved, since the built-in module doesn't
> try to load anything else. Would be nice to know for sure (@Matthew) to make
> sure we don't have some other issue here, but you're right, I don't see
Matthew Rahtz added the comment:
[Guido]
> 1. Some edge case seems to be that if *tuple[...] is involved on either side
> we will never simplify.
Alright, let me think this through with some examples to get my head round it.
It would prohibit the following difficult case:
class C(G
Matthew Rahtz added the comment:
Apologies for the slow reply - coming back to this now that the docs and
pickling issues are mostly sorted.
[Serhiy]
> > Alias = C[T, *Ts]
> > Alias2 = Alias[*tuple[int, ...]]
> > # Alias2 should be C[int, *tuple[int, ...]]
>
> tuple
Matthew added the comment:
Hello,
Thanks for all the help people have given me! I've found the solution to my
problem. The environment variable was set below every other one, leading to a
different Python interpreter being used, which was probably bundled with
different software. I
Matthew Rahtz added the comment:
> 1. Finish writing docs
Done once https://github.com/python/cpython/pull/32103 is merged.
> 2. Implement support for pickling of unpacked native tuples
Done once https://github.com/python/cpython/pull/32159 is merged.
4. Resolve the issue of
Matthew Barnett added the comment:
For reference, I also implemented .regs in the regex module for compatibility,
but I've never used it myself. I had to do some investigating to find out what
it did!
It returns a tuple of the spans of the groups.
Perhaps I might have used it if it d
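For reference, observed behaviour in re (.regs is undocumented there as well):

import re

m = re.match(r'(a)(b)?', 'ac')
print(m.regs)   # ((0, 1), (0, 1), (-1, -1)) - spans of the whole match and of each group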
Change by Matthew Rahtz :
--
pull_requests: +30237
pull_request: https://github.com/python/cpython/pull/32159
___
Python tracker
<https://bugs.python.org/issue43
Matthew Rahtz added the comment:
Since things are piling up, here's a quick record of what I think the remaining
tasks are: (in approximate order of priority)
1. Finish writing docs (is updating library/typing.html sufficient?
https://github.com/python/cpython/pull/32103)
2. Impl
Change by Matthew Rahtz :
--
pull_requests: +30197
pull_request: https://github.com/python/cpython/pull/32119
___
Python tracker
<https://bugs.python.org/issue43
Change by Matthew Rahtz :
--
keywords: +patch
pull_requests: +30183
stage: needs patch -> patch review
pull_request: https://github.com/python/cpython/pull/32103
___
Python tracker
<https://bugs.python.org/issu
Matthew Rahtz added the comment:
Ooh, thanks for the reminder! I'll start drafting this now.
--
nosy: +matthew.rahtz
___
Python tracker
<https://bugs.python.org/is
Matthew Rahtz added the comment:
P.s. To be clear, (I think?) these are all substitutions that are computable.
We *could* implement the logic to make all these evaluate correctly if we
wanted to. It's just a matter of how much complexity we want to allow in
typing.py (or in the runti
Matthew Rahtz added the comment:
[Guido]
> What would be an example of a substitution that's too complex to do?
We also need to remember the dreaded arbitrary-length tuple. For example, I
think it should be the case that:
```python
T = TypeVar('T')
Ts = TypeVarTuple('Ts')
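# (The example is cut off above; the following is a hypothetical illustration
#  of the kind of case meant, not necessarily the original one.)
# Substituting an unpacked arbitrary-length tuple for Ts, e.g.
#     Alias = tuple[T, *Ts]
#     Alias[int, *tuple[str, ...]]
# should arguably evaluate to tuple[int, *tuple[str, ...]], but computing
# substitutions like this in typing.py means special-casing *tuple[x, ...]
# on both sides.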
Matthew Barnett added the comment:
I don't think it's a typo, and you could argue the case for "qualifiers", but I
still agree with the proposal as it's a more meaningful term in the context.
--
___
Python tracker
Matthew Barnett added the comment:
I'd just like to point out that to a user it could _look_ like a bug, that an
error occurred while reporting, because the traceback isn't giving a 'clean'
report; the stuff about the KeyError i
Matthew Rahtz added the comment:
(Having said that, to be clear: my preferred solution currently would still be
the solution where we just return a new GenericAlias for anything involving a
TypeVarTuple. The crux is what Serhiy is happy with
Matthew Rahtz added the comment:
Thanks for starting this, Jelle - I was a bit unsure about how to proceed here.
Given that https://github.com/python/cpython/pull/31800 is already merged, I'd
also propose something halfway between the two extremes: return a sensible
substitution whe
Change by Matthew Rahtz :
--
pull_requests: +29945
pull_request: https://github.com/python/cpython/pull/31846
___
Python tracker
<https://bugs.python.org/issue43
Change by Matthew Rahtz :
--
pull_requests: +29944
pull_request: https://github.com/python/cpython/pull/31845
___
Python tracker
<https://bugs.python.org/issue43
Change by Matthew Rahtz :
--
pull_requests: +29943
pull_request: https://github.com/python/cpython/pull/31844
___
Python tracker
<https://bugs.python.org/issue43
Change by Matthew Rahtz :
--
pull_requests: +29905
pull_request: https://github.com/python/cpython/pull/31804
___
Python tracker
<https://bugs.python.org/issue43
Matthew Barnett added the comment:
The expression is a repeated alternative where the first alternative is a
repeat. Repeated repeats can result in a lot of attempts and backtracking and
should be avoided.
Try this instead:
(0|1(01*0)*1
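(The suggestion is cut off above; assuming the pattern ends with ")*", it's the
usual "binary multiple of three" regex, which avoids nesting one repeat
directly inside another.)  For example:

import re

pattern = re.compile(r'(0|1(01*0)*1)*')
print(bool(pattern.fullmatch('110')))   # True  - 6 is a multiple of 3
print(bool(pattern.fullmatch('111')))   # False - 7 is not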
Matthew Barnett added the comment:
That pattern has:
(?P[^]]+)+
Is that intentional? It looks wrong to me.
--
___
Python tracker
<https://bugs.python.org/issue46
Change by Matthew Suozzo :
--
pull_requests: +29275
pull_request: https://github.com/python/cpython/pull/31090
___
Python tracker
<https://bugs.python.org/issue43
Matthew Stidham added the comment:
the problem was a file in our library screwing up the python configuration
--
stage: -> resolved
status: open -> closed
___
Python tracker
<https://bugs.python.org/i
New submission from Matthew Stidham :
The file in which I found the error is in
https://github.com/greearb/lanforge-scripts
--
components: C API
files: debug from pandas failure.txt
messages: 412400
nosy: matthewstidham
priority: normal
severity: normal
status: open
title: CSV
Matthew Davis added the comment:
In addition to fixing any unexpected behavior, can we update the documentation
[1] to state what the expected behavior is in terms of thread safety?
[1] https://docs.python.org/3/library/zipfile.html
--
nosy: +mdavis-xyz
New submission from Matthew Rahtz :
There's currently not much documentation in `typing.py` for `_GenericAlias`.
Some fairly weird things go on in there, so it would be great to have more info
in the class about what's going on and why various edge cases are necessary.
--
Change by Matthew Rahtz :
--
pull_requests: +29202
pull_request: https://github.com/python/cpython/pull/31021
___
Python tracker
<https://bugs.python.org/issue43
Change by Matthew Rahtz :
--
pull_requests: +29200
pull_request: https://github.com/python/cpython/pull/31019
___
Python tracker
<https://bugs.python.org/issue43
Change by Matthew Rahtz :
--
pull_requests: +29199
pull_request: https://github.com/python/cpython/pull/31018
___
Python tracker
<https://bugs.python.org/issue43
Change by Matthew Barnett :
--
stage: -> resolved
status: open -> closed
___
Python tracker
<https://bugs.python.org/issue46515>
Matthew Barnett added the comment:
They're not supported in string literals either:
Python 3.10.1 (tags/v3.10.1:2cd268a, Dec 6 2021, 19:10:37) [MSC v.1929 64 bit
(AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more inf
Change by Matthew Rahtz :
--
pull_requests: +28607
pull_request: https://github.com/python/cpython/pull/30398
___
Python tracker
<https://bugs.python.org/issue43
Change by Matthew Rahtz :
--
components: +Parser, Tests
nosy: +lys.nikolaou, pablogsal
title: Add support for PEP 646 (Variadic Generics) to typing.py -> Add support
for PEP 646
versions: +Python 3.11 -Python 3.10
___
Python tracker
<
New submission from Matthew H. McKenzie :
A mailbox (folder) need not be for a recipient and need not be the private part
of an RFC2822 address.
Passing a value of "000 Bookings" to select() results in validation issues when
the tokens are parsed as arguments and there are too
Matthew Barnett added the comment:
It's not just in the 'if' clause:
>>> class Foo:
... a = ['a', 'b']
... b = ['b', 'c']
... c = [b for x in a]
...
Traceback (most recent call last):
File "", line 1, i
Matthew Barnett added the comment:
For comparison, the regex module says that 0x1C..0x1F aren't whitespace, and
the Unicode property White_Space ("\p{White_Space}" in a pattern, where
supported) also says that they ar
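For reference, current CPython behaviour for those characters:

import re

for ch in '\x1c\x1d\x1e\x1f':
    print(hex(ord(ch)), ch.isspace(), bool(re.match(r'\s', ch)))
# each line prints e.g. "0x1c True True": str.isspace() and re's \s both treat
# the information separators as whitespace, unlike the regex module and the
# Unicode White_Space property.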
Matthew H. McKenzie added the comment:
To answer your original questions: Linux host and client, and MVS (EBCDIC
records) to Linux.
hacks to overcome (in libftp):
def print_line(line):
'''Default retrlines callback to print a line.'''
print(line, end
Matthew H. McKenzie added the comment:
On the face of it, it is my mistake for using the write method for my file. But
read on.
Your write_line() adds an EOL, OK, because it wraps print().
So the retrlines() function strips them in anticipation?
The error is arguably in my own code as I
New submission from Matthew H. McKenzie :
Lib/ftplib.py function retrlines
Inspired by documentation the following writes a file without line-endings:
from ftplib import FTP
ftp=FTP()
ftp.connect('hostname')
ftp.login('user','')
ftp.sendcmd('pasv
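The usual workaround (host and file names here are only placeholders) is for
the callback to add the newline back, since retrlines() strips the CRLF from
each line:

from ftplib import FTP

ftp = FTP('ftp.example.com')
ftp.login()
with open('listing.txt', 'w') as out:
    ftp.retrlines('LIST', lambda line: out.write(line + '\n'))
ftp.quit()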
Matthew Barnett added the comment:
It's definitely a bug.
In order for the pattern to match, the negative lookaround must match, which
means that its subexpression mustn't match, so none of the groups in that
subexpression have captured.
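A small example of the consequence:

import re

m = re.match(r'(?!(a))\w', 'b')
print(m.group(0), m.group(1))   # "b None" - the lookahead's subpattern didn't
                                # match, so group 1 never captured anything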
--
versions: +P
Matthew Barnett added the comment:
It can be shortened to this:
buffer = b"a" * 8191 + b"\\r\\n"
with open("bug_csv.csv", "wb") as f:
f.write(buffer)
with open("bug_csv.csv", encoding="unicode_escape", newline="") as
Matthew Barnett added the comment:
I wonder whether there should be a couple of other endianness values, namely,
"native" and "network", for those cases where you want to be explicit about it.
If you use "big" it's not clear whether that's because you
Matthew Barnett added the comment:
I'd probably say "In the face of ambiguity, refuse the temptation to guess".
As there's disagreement about the 'correct' default, make it None and require
either "big" or "little" if lengt
Matthew Kenigsberg added the comment:
I think it would be nice to have a from-str method that could reverse the
behavior of str(timedelta). I'm trying to parse files that contain the output
of str(timedelta), and there's no easy way to get them back to timedeltas.
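A rough sketch of what such a parser has to do (this is not an existing API,
just an illustration of reversing the '[D day[s], ]H:MM:SS[.ffffff]' format
that str(timedelta) produces):

import re
from datetime import timedelta

_TD_RE = re.compile(
    r'(?:(?P<days>-?\d+) days?, )?'
    r'(?P<hours>\d+):(?P<minutes>\d{2}):(?P<seconds>\d{2}(?:\.\d+)?)'
)

def parse_timedelta(text):
    m = _TD_RE.fullmatch(text)
    if m is None:
        raise ValueError(f'not a str(timedelta) value: {text!r}')
    return timedelta(days=int(m.group('days') or 0),
                     hours=int(m.group('hours')),
                     minutes=int(m.group('minutes')),
                     seconds=float(m.group('seconds')))

assert parse_timedelta(str(timedelta(hours=26, seconds=4.5))) == timedelta(hours=26, seconds=4.5)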
Change by Matthew Morrissette Vance :
--
nosy: +yinzara
___
Python tracker
<https://bugs.python.org/issue39658>
New submission from Matthew Zielinski :
The Manual for python 3.9.6 always says there are no entries. I searched things
it definitely had, like modules, but it said there were no topics found.
--
components: Library (Lib)
files: Python Error.png
messages: 398261
nosy: matthman2019
Matthew Barnett added the comment:
It's called "catastrophic backtracking". Think of the number of ways it could
match, say, 4 characters: 4, 3+1, 2+2, 2+1+1, 1+3, 1+2+1, 1+1+2, 1+1+1+1. Now
try 5 characters...
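The textbook demonstration (the original pattern isn't shown here, so this uses
the classic nested-quantifier example):

import re

pattern = re.compile(r'(a+)+$')    # a repeat nested directly inside a repeat
pattern.match('a' * 22 + 'b')      # no match, but already takes noticeable time;
                                   # each extra 'a' roughly doubles the work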
--
___
Python
Matthew Clapp added the comment:
To clarify my intent: I'd really love a way to get the path info (the context)
from an existing native venv without affecting the venv's directories. It
seems like this is what ensure_directories *actually* does if clear==False.
I'm hoping
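A sketch of the behaviour in question - with clear=False, ensure_directories()
returns the context (paths, interpreter locations, etc.) for an existing venv
(the path below is just a placeholder):

import venv

builder = venv.EnvBuilder(clear=False)
context = builder.ensure_directories('/path/to/existing/venv')
print(context.env_dir, context.bin_path, context.env_exe)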
Change by Matthew Clapp :
--
keywords: +patch
pull_requests: +25250
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/26663
___
Python tracker
<https://bugs.python.org/issu
New submission from Matthew Clapp :
The docs for the venv module, EnvBuilder class, ensure_directories method,
describe behavior that doesn't match its actual behavior (or what the code
does). I propose to update the documentation of the API to match the actual
behavior.
Matthew Barnett added the comment:
I've only just realised that the test cases don't cover all eventualities: none
of them test what happens with multiple spaces _between_ the letters, such as:
' a b c '.split(maxsplit=1) == ['a', 'b c ']
Com
Matthew Barnett added the comment:
We have that already, although it's spelled:
' x y z'.split(maxsplit=1) == ['x', 'y z']
because the keepempty option doesn't exist yet.
--
___
Python trac
Matthew Barnett added the comment:
The best way to think of it is that .split() is like .split(' '), except that
it's splitting on any whitespace character instead of just ' ', and keepempty
is defaulting to False instead of True.
Therefore:
' x y z
Matthew Barnett added the comment:
The case:
' a b c '.split(maxsplit=1) == ['a', 'b c ']
suggests that empty strings don't count towards maxsplit, otherwise it would
return [' a b c '] (i.e. the split would give ['', ' a
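Current behaviour for comparison (keepempty itself doesn't exist yet):

print(' a b c '.split(maxsplit=1))        # ['a', 'b c '] - leading empty field not counted
print(' a b c '.split(' ', maxsplit=1))   # ['', 'a b c '] - with an explicit sep it is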
Change by Matthew Suozzo :
--
pull_requests: +24081
stage: needs patch -> patch review
pull_request: https://github.com/python/cpython/pull/25347
___
Python tracker
<https://bugs.python.org/issu
Matthew Suozzo added the comment:
I don't think this was actually fixed for the create_autospec case.
create_autospec still only uses the is_async_func check to enable use of
AsyncMock, and that still does a __code__ check.
There was a test submitted to check this case but the test i
Change by Matthew Suozzo :
--
keywords: +patch
nosy: +matthew.suozzo
nosy_count: 7.0 -> 8.0
pull_requests: +24061
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/25326
___
Python tracker
<https://bugs.p
Matthew Suozzo added the comment:
Ah and one other question: Is this normally the sort of thing that would get
backported? It should be very straightforward to do so, at least for 3.9 given
the support for the new parser.
--
versions: -Python 3.6, Python 3.7, Python 3.8
Change by Matthew Suozzo :
--
keywords: +patch
pull_requests: +24059
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/25324
___
Python tracker
<https://bugs.python.org/issu
New submission from Matthew Suozzo :
Given the increasing use of long `from typing import foo, bar, ...` import
sequences, it's becoming more desirable to address individual components of the
import node. Unfortunately, the ast.alias node doesn't contain source location
metadata (e
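To illustrate the difference (ast.alias only gained location attributes in
Python 3.10; before that the lookup below has nothing to return):

import ast

tree = ast.parse('from typing import List, Optional')
alias = tree.body[0].names[1]
print(alias.name)                      # 'Optional'
print(getattr(alias, 'lineno', None))  # line number on 3.10+, None before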
Matthew Barnett added the comment:
Do any other regex implementations behave the way you want?
In my experience, there's no single "correct" way for a regex to behave;
different implementations might give slightly different results, so if the most
common ones behave a ce
Matthew Barnett added the comment:
I'm also -1, for the same reason as Serhiy gave. However, if it was opt-in,
then I'd be OK with it.
--
nosy: +mrabarnett
___
Python tracker
<https://bugs.python.o
Matthew Suozzo added the comment:
And to give some context for the above autospec child bit, this is the relevant
code that determines the spec to use for each child:
https://github.com/python/cpython/blob/master/Lib/unittest/mock.py#L2671-L2696
Matthew Suozzo added the comment:
A few more things:
Assertions on Mock-autospec'ed Mocks will silently pass since e.g.
assert_called_once_with will now be mocked out. This may justify a more
stringent stance on the pattern since it risks hiding real test failures.
One complicating f
Matthew Suozzo added the comment:
I've fixed a bunch of these in our internal repo so I'd be happy to add that to
a patch implementing raising exceptions for these cases.
--
___
Python tracker
<https://bugs.python.o
Change by Matthew Hughes :
--
keywords: +patch
pull_requests: +23612
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/24851
___
Python tracker
<https://bugs.python.org/issu
New submission from Matthew Suozzo :
An unfortunately common pattern over large codebases of Python tests is for
spec'd Mock instances to be provided with Mock objects as their specs. This
gives the false sense that a spec constraint is being applied when, in fact,
nothing will be disal
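A sketch of the contrast (class and names are made up for illustration):

from unittest import mock

class Service:
    def ping(self, host): ...

# Autospec'ing the real class constrains attributes and call signatures:
good = mock.create_autospec(Service)
good.ping('example.com')      # ok
# good.ping() -> TypeError, good.pong -> AttributeError

# Autospec'ing a Mock looks similar but constrains almost nothing: every
# "method" is just another mock that accepts any arguments, and the assertion
# helpers themselves get mocked out.
bad = mock.create_autospec(mock.Mock(spec=Service))
bad.ping()                                  # wrong signature, silently accepted
bad.assert_called_once_with('anything')     # an assertion that can never fail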
New submission from Matthew Woodcraft :
The documentation for json.load() and json.loads() says:
« If the data being deserialized is not a valid JSON document, a
JSONDecodeError will be raised. »
But this is not currently entirely true: if the data is provided in bytes form
and is not
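A minimal illustration of the kind of mismatch (bytes that aren't valid
UTF-8/16/32 never reach the JSON parser):

import json

try:
    json.loads(b'{"key": "\xff"}')
except json.JSONDecodeError:
    print('JSONDecodeError')
except UnicodeDecodeError as exc:
    print('UnicodeDecodeError:', exc.reason)   # this branch is taken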
New submission from Matthew Hughes :
Just a small thing in these docs, there is a mix of "#include
<structmember.h>", e.g.
https://github.com/python/cpython/blame/master/Doc/extending/newtypes_tutorial.rst#L243
and '#include "structmember.h"', mostly in the included samples e
Change by Matthew Rahtz :
--
keywords: +patch
nosy: +matthew.rahtz
nosy_count: 1.0 -> 2.0
pull_requests: +23313
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/24527
___
Python tracker
<https://bugs.p
Change by Matthew Rahtz :
--
components: Library (Lib)
nosy: mrahtz
priority: normal
severity: normal
status: open
title: Add support for PEP 646 (Variadic Generics) to typing.py
versions: Python 3.10
___
Python tracker
<https://bugs.python.
Matthew Barnett added the comment:
Sorry to bikeshed, but I think it would be clearer to keep the version next to
the "python" and the "setup" at the end:
python-3.10.0a5-win32-setup.exe
python-3.10.0a5-win64-setup.exe
Matthew Barnett added the comment:
Example 1:
((a)|b\2)*
 ^^^        Group 2
((a)|b\2)*
      ^^    Reference to group 2
The reference refers backwards to the group.
Example 2:
(b\2|(a))*
     ^^^    Group 2
(b\2|(a))*
  ^^        Reference to group 2
Matthew Barnett added the comment:
It's not a crash. It's complaining that you're referring to group 2 before
defining it. The re module doesn't support forward references to groups, but
only backward references to them.
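For example:

import re

re.compile(r'((a)|b\2)*')       # backward reference to group 2: compiles fine

try:
    re.compile(r'(b\2|(a))*')   # forward reference to group 2
except re.error as exc:
    print(exc)                  # e.g. "invalid group reference 2 at position 3"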
--
__
Matthew Barnett added the comment:
In a regex, putting a backslash before any character that's not an ASCII-range
letter or digit makes it a literal. re.escape doesn't special-case control
characters. Its purpose is to make a string that might contain metacharacters
into on
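For example:

import re

print(re.escape('a.b*c'))                                     # a\.b\*c
print(re.fullmatch(re.escape('a.b*c'), 'a.b*c') is not None)  # True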
Matthew Suozzo added the comment:
One of the problems with my proposed solution that I glossed over was how and
where to count the primitive call. If the primitive call is only registered on
RETURN (i.e. after all yields), a generator that is incompletely exhausted
would have 0 primitive
New submission from Matthew Walker :
It would be very useful if the documentation for Python's Wave module mentioned
that 8-bit samples must be unsigned while 16-bit samples must be signed.
See the Wikipedia article on the WAV format: "There are some inconsistencies in
the WAV f
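Something along these lines could go in the docs (file names are placeholders;
the point is the sample format, not the API):

import array
import wave

with wave.open('silence8.wav', 'wb') as w:
    w.setnchannels(1)
    w.setsampwidth(1)                          # 8-bit samples are unsigned;
    w.setframerate(8000)                       # silence is 128, not 0
    w.writeframes(bytes([128] * 8000))

with wave.open('silence16.wav', 'wb') as w:
    w.setnchannels(1)
    w.setsampwidth(2)                          # 16-bit samples are signed,
    w.setframerate(8000)                       # little-endian; silence is 0
    w.writeframes(array.array('h', [0] * 8000).tobytes())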
New submission from Matthew Suozzo :
# Issue
When profiling a generator function, the initial call and all subsequent yields
are aggregated into the same "ncalls" metric by cProfile.
## Example
>>> cProfile.run("""
... def foo():
... yield 1
... yield
Matthew added the comment:
Let me preface this by declaring that I am very new to Python async so it is
very possible that I am missing something seemingly obvious. That being said,
I've been looking at various resources to try to understand the internals of
asyncio and it hasn't
Matthew Barnett added the comment:
That behaviour has nothing to do with re.
This line:
samples = filter(lambda sample: not pttn.match(sample), data)
creates a generator that, when evaluated, will use the value of 'pttn' _at that
time_.
However, you then bind 'pttn
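For example:

import re

data = ['apple', 'banana', 'cherry']
pttn = re.compile('a')
samples = filter(lambda sample: not pttn.match(sample), data)
pttn = re.compile('b')          # rebound before the filter is consumed
print(list(samples))            # ['apple', 'cherry'] - the new pattern was used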
Matthew Barnett added the comment:
Not a bug.
Argument 4 of re.sub is the count:
sub(pattern, repl, string, count=0, flags=0)
not the flags.
--
nosy: +mrabarnett
resolution: -> not a bug
stage: -> resolved
status: open -> closed
_
Matthew Suozzo added the comment:
> It just won't work unless you add explicit ".*" or ".*?" at the start of the
> pattern
But think of when regexes are used for validating input. Getting it to "just
work" may be over-permissive validation that o
Matthew Barnett added the comment:
Arguments are evaluated first and then the results are passed to the function.
That's true throughout the language.
In this instance, you can use \g<1> in the replacement string to refer to group
1:
re.sub(r'([a-z]+)', fr"\g<
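For example, wrapping each match using a group reference:

import re

print(re.sub(r'([a-z]+)', r'<<\g<1>>>', 'abc DEF ghi'))   # <<abc>> DEF <<ghi>>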
New submission from Matthew Davis :
# Summary
I propose an additional unit test type for the unittest module.
TestCase.assertDuration(min=None, max=None), which is a context manager,
similar to assertRaises. It runs the code inside it, and then fails the test if
the duration of the code
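A rough sketch of how it could work (not an existing unittest API):

import time
import unittest
from contextlib import contextmanager

class DurationTestCase(unittest.TestCase):
    @contextmanager
    def assertDuration(self, min=None, max=None):
        start = time.monotonic()
        yield
        elapsed = time.monotonic() - start
        if min is not None and elapsed < min:
            self.fail(f'finished in {elapsed:.3f}s, expected at least {min}s')
        if max is not None and elapsed > max:
            self.fail(f'took {elapsed:.3f}s, expected at most {max}s')

class Example(DurationTestCase):
    def test_sleep(self):
        with self.assertDuration(max=1.0):
            time.sleep(0.1)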
Matthew Barnett added the comment:
The arguments are: re.sub(pattern, repl, string, count=0, flags=0).
Therefore:
re.sub("pattern","replace", txt, re.IGNORECASE | re.DOTALL)
is passing re.IGNORECASE | re.DOTALL as the count, not the flags.
It's in the document
Matthew Barnett added the comment:
The 4th argument of re.sub is 'count', not 'flags'.
re.IGNORECASE has the numeric value of 2, so:
re.sub(r'[aeiou]', '#', 'all is fair in love and war', re.IGNORECASE)
is equivalent to:
re.sub(r
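That is, the flags end up being used as the count:

import re

s = 'all is fair in love and war'
print(re.sub(r'[aeiou]', '#', s, re.IGNORECASE))        # '#ll #s fair in love and war'
print(re.sub(r'[aeiou]', '#', s, count=2))              # same: only two replacements
print(re.sub(r'[aeiou]', '#', s, flags=re.IGNORECASE))  # what was intended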
Matthew Barnett added the comment:
I think what's happening is that in 'compiler_dict' (Python/compile.c), it's
checking whether 'elements' has reached a maximum (0x). However, it's not
doing this after incrementing; instead, it's checking before i
Matthew Davis added the comment:
The documentation says "you will have to clear the cached value"
What does that mean? How do I clear the cached value? Is there a flush function
somewhere? Do I `del` the attribute? Set the attribute to None?
The documentation as it stands toda
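For functools.cached_property (assuming that's the property in question), the
cache is just an entry in the instance's __dict__, so deleting the attribute
clears it:

from functools import cached_property

class Example:
    @cached_property
    def value(self):
        print('computing...')
        return 42

obj = Example()
obj.value       # computes and caches
obj.value       # served from the cache
del obj.value   # removes the cached entry from obj.__dict__
obj.value       # computes again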
Matthew Hughes added the comment:
I've attached a patch containing tests showing the current behavior, namely
that exit_on_error does not change the behavior of
argparse.ArgumentParser.parse_args in either case:
* An unrecognized option is given
* A required option is not given
Shoul
Matthew Hughes added the comment:
> typo in the docs that it should have used enabled instead of enable
Well spotted, I'll happily fix this up.
> I guess the docs by manually mean that ArgumentError will be raised when
> exit_on_error is False that can be handled.
To be clear
New submission from Matthew Hughes :
>>> import argparse
>>> parser = argparse.ArgumentParser(exit_on_error=False)
>>> parser.parse_args(["--unknown"])
usage: [-h]
: error: unrecognized arguments: --unknown
The docs https://docs.python.o
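For contrast, exit_on_error=False does take effect when a known option gets a
bad value; it's only the unrecognized-arguments path that still exits:

import argparse

parser = argparse.ArgumentParser(exit_on_error=False)
parser.add_argument('--num', type=int)

try:
    parser.parse_args(['--num', 'not-an-int'])
except argparse.ArgumentError as exc:
    print('caught:', exc)               # exit_on_error=False is honoured here

parser.parse_args(['--unknown'])        # still prints usage and exits with code 2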
Change by Matthew Hughes :
--
pull_requests: +20541
pull_request: https://github.com/python/cpython/pull/21393
___
Python tracker
<https://bugs.python.org/issue37
Matthew Hughes added the comment:
Whoops, I realise the patch I shared contained a combination of two
(independent) approaches I tried:
1. Add sleep and perform cleanup
2. Naively patch catch_threading_exception to accept a cleanup routine to be
run upon exiting but before removing the
Matthew Hughes added the comment:
I noticed this test was still emitting a "ResourceWarning":
--
$ ./python -m test test_ssl -m TestPostHandshakeAuth
0:00:00 load avg: 0.74 Run tests sequentially
0:00:00 load avg:
Change by Matthew Hughes :
--
pull_requests: +20431
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/21282
___
Python tracker
<https://bugs.python.org/issu
Matthew Hughes added the comment:
> Applications should not change this setting
> A read-only getter for the policy sounds like a good idea, though.
Thanks for the feedback, sounds reasonable to me. I'll happily work on getting
a PR up for the read-