New submission from Tim Hoffmann :
The Sphinx conf.py entry `needs_version=1.8`
(https://github.com/python/cpython/blob/1470edd6131c29b8a09ce012cdfee3afa269d553/Doc/conf.py#L48)
is not in sync with the doc build requirements
(https://github.com/python/cpython/blob
Tim Peters added the comment:
Any principled change costs more than it's worth :-(
I'm mostly sympathetic with Guido's view, and have long advocated a new `imath`
module to hold the ever-growing number of functions that are really part of
integer combinatorics. But it&
Tim Peters added the comment:
Not a problem. Arguments to a function are evaluated before the function is
invoked. So in
self._finalizer = weakref.finalize(self, shutil.rmtree, self.name)
self.name is evaluated before weakref.finalize() is called. `self.name`
_extracts_ the `.name
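A minimal sketch of that pattern (the class name here is hypothetical; the finalize() line follows the snippet above). Because self.name is evaluated first, the finalizer captures the directory string, not a reference back to self, so it cannot keep self alive:

    import shutil
    import tempfile
    import weakref

    class TempDir:
        # Illustrative sketch only, not the stdlib implementation.
        def __init__(self):
            self.name = tempfile.mkdtemp()
            # self.name (a plain string) is evaluated before finalize() runs.
            self._finalizer = weakref.finalize(self, shutil.rmtree, self.name)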
Tim Peters added the comment:
I agree this doesn't occur in 3.10, but think Raymond pasted wrong outputs.
Here:
Python 3.10.0a4+ (heads/master:64fc105b2d, Jan 28 2021, 15:31:11)
[MSC v.1928 64 bit (AMD64)] on win32
>>> x = 0.6102683302836215
>>> y1 = 0.7906090004346
Tim Peters added the comment:
(len(moves) + 1) // 2
--
nosy: +tim.peters
Python tracker
<https://bugs.python.org/issue43255>
Tim Peters added the comment:
I'm very sorry for not keeping up with this - my health has been poor, and I
just haven't been able to make enough time.
Last time I looked to a non-trivial depth, I was quite happy, and just
quibbling about possible tradeoffs.
I can't honestly
Tim Peters added the comment:
New changeset 73a85c4e1da42db28e3de57c868d24a089b8d277 by Dennis Sweeney in
branch 'master':
bpo-41972: Use the two-way algorithm for string searching (GH-22904)
https://github.com/python/cpython/commit/73a85c4e1da42db28e3de57c868d24
New submission from Tim Magee :
Summary: I run a SimpleXMLRPCServer in pythonw. When I call an exposed function
the call appears to be made twice and the connection ends abnormally.
This is Python 3.8.3 Windows 64-bit, with the pywin32 additions, and under
Windows 7. Boo, hiss, I know -- I
Tim Magee added the comment:
A fragment more info. First, a typo in the description: at the end of the MTR,
"unexpected connection" should read "unexpected disconnection", of course
(no-one expects the unexpected connection, ho ho).
Looking at calls to socket functions, Proc
Tim Magee added the comment:
After a peep at the source I theorised that using logRequests=False when
constructing the base class, SimpleXMLRPCServer, would see me right, and so it
did.
When logRequests is True, control ultimately passes to
BaseHTTPRequestHandler.log_message, which is a
Tim Peters added the comment:
This won't go anywhere without code (preferably minimal) we can run to
reproduce the complaint. If there were a "general principle" at work here,
someone else would surely have reported it over the last few decades ;-)
To the contrary, the comm
Tim Peters added the comment:
I agree with everyone ;-) That is, my _experience_ matches Mark's: as a
more-or-less "numeric expert", I use Fraction in cases where it's already fast
enough. Python isn't a CAS, and, e.g., in pure Python I'm not doing things like
Tim Peters added the comment:
Issue 21922 lists several concerns, and best I know they all still apply. As a
practical matter, I expect the vast bulk of core Python developers would reject
a change that sped large int basic arithmetic by a factor of a billion if it
slowed down basic
Tim Peters added the comment:
I see I never explicitly said +1, so I will now: +1 on merging this :-)
--
Python tracker
<https://bugs.python.org/issue29
Tim Peters added the comment:
`functools` is clearly a poor place for this. `imath` would also be.
`graph_stuff_probably_limited_to_a_topsort` is the only accurate name ;-)
Off-the-wall possibilities include `misclib` (stuff that just doesn't fit
anywhere else - yet) and `cslib` (Com
New submission from Tim Reid :
When an exception occurs within a tempfile.TemporaryDirectory() context
and the directory cleanup fails, the _cleanup exception_ is propagated,
not the original one. This effectively 'masks' the original exception,
and makes it impossible to catch usin
Tim Peters added the comment:
The repr truncates the pattern string, for display, if it's "too long". The
only visual clue about that, though, is that the display is missing the pattern
string's closing quote, as in the output you showed here. If you look at
url_pat.pat
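A quick way to see that only the display is cut short (the exact cutoff length is an implementation detail, and the deliberately long pattern here is just a stand-in):

    import re

    url_pat = re.compile("(foo|bar)" * 200)   # a very long pattern string
    print(repr(url_pat))           # repr shows only a prefix, with no closing quote
    print(len(url_pat.pattern))    # the full pattern string is still intact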
Tim Peters added the comment:
Note that the relatively tiny pattern here extracts just a small piece of the
regexp in question. As the output shows, increase the length of a string it
fails to match by one character, and the time taken to fail approximately
doubles: exponential-time
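A self-contained illustration of that doubling behaviour, using a classic nested-quantifier pattern rather than the OP's regexp:

    import re
    import time

    pat = re.compile(r"(a+)+$")          # nested quantifiers
    for n in range(18, 26):
        start = time.perf_counter()
        pat.match("a" * n + "b")         # can never match; backtracks heavily
        print(n, round(time.perf_counter() - start, 3))
    # Each extra 'a' roughly doubles the time to fail: exponential behaviour.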
Tim Peters added the comment:
Closing this as "Won't Fix", since the possibility of exponential-time behavior
with naively written nested quantifiers is well known, and there are no plans
to "do something" about that.
--
resolution: -> wont fix
stag
Tim Golden added the comment:
Thinking about testing here.. is there any straightforward way to cause
WaitForSingleObjectEx to fail?
The code change would be fairly slight and amenable to inspection, but it would
be good to actually test it
Change by Tim Golden :
--
assignee: -> tim.golden
Python tracker
<https://bugs.python.org/issue40913>
Tim Golden added the comment:
Thanks, Eryk. I've had a couple of casts at this (and also with an eye to
https://bugs.python.org/issue40912 in a very similar area).
Trouble is I can't come up with a way of adding a set.. function which doesn't
seem wholly artificial,
Change by Tim Golden :
--
assignee: -> tim.golden
Python tracker
<https://bugs.python.org/issue40912>
New submission from Tim Hoffmann :
Path.home() may fail un
(https://github.com/matplotlib/matplotlib/issues/17707#issuecomment-647180252).
1. I think the raised KeyError is too low-level, and it should be something
else; what exactly is t.b.d.
2. The documentation
(https://docs.python.org/3
Tim Peters added the comment:
Read the PEP Serhiy already linked to:
https://www.python.org/dev/peps/pep-0238/
This was a deliberate change to how "integer / integer" works, introduced with
Python 3.
--
nosy: +tim.peters
status: open
Tim Peters added the comment:
Mike, read that exchange again. You originally wrote
"print(2 / 2) gives 2.0 instead of 2"
but you didn't _mean_ that. You meant to say it "gives 1.0 instead of 1", or
you meant something other than "2 / 2". In Python 3,
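For reference, the behaviour PEP 238 defines in Python 3:

    print(2 / 2)     # 1.0  -- "true division" always returns a float
    print(2 // 2)    # 1    -- "floor division" keeps an int result
    print(3 / 2)     # 1.5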
Tim Peters added the comment:
I don't see real value in the docs noting that Bad Things can happen if code
lies about true refcounts. If a container points to an object, _of course_ the
container should own that reference. Cheating on that isn't intended to be
supported in a
Tim Peters added the comment:
For the first, your hardware's binary floating-point has no concept of
significant trailing zeroes. If you need such a thing, use Python's `decimal`
module instead, which does support a "significant trailing zero" concept. You
would need
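For example, with the decimal module (a generic illustration, not the OP's data):

    from decimal import Decimal

    print(float("1.50"))          # 1.5   -- the binary float drops the zero
    print(Decimal("1.50"))        # 1.50  -- the trailing zero is significant
    print(Decimal("1.50") * 2)    # 3.00  -- and it survives arithmetic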
Tim Peters added the comment:
I assumed Mark would tell us what's up with the arange() oddity, so let's see
whether he does. There is no truly good way to generate "evenly spaced" binary
floats using a non-representable conceptual decimal delta. The dumbass ;-)
Tim Peters added the comment:
Cool! So the only thing surprising to me here is just how far off balance the
arange() run was. So I'd like to keep this open long enough for Mark to
notice, just in case it's pointing to something fish
Tim Peters added the comment:
Thanks, Mark! I didn't even know __round__ had become a dunder method.
For the rest, I'll follow StackOverflow - I don't have an instant answer, and
the instant answers I _had_ didn't sur
Tim Peters added the comment:
Huh! I thought everyone in Standards World gave up by now, and agreed 0**0
should be 1.
--
nosy: +tim.peters
Python tracker
<https://bugs.python.org/issue41
Tim Peters added the comment:
Gregory, care to take their code and time it against Python?
I'm not inclined to: reading the comments in the code, they're trying "fast
paths" already described in papers by Clinger and - later - by Gay. When those
fast paths don't
Tim Peters added the comment:
Pro: focus on the "iterable" part of the title. If you want to, e.g., select 3
lines "at random" out of a multi-million-line text file, this kind of reservoir
sampling lets you do that while holding no more than one input line in memory
at a time.
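For readers unfamiliar with the idea, here is plain "Algorithm R" reservoir sampling, a simplified sketch; the proposal under discussion uses a fancier geometric-skip variant:

    import random

    def reservoir_sample(iterable, k):
        # Keep a k-element "reservoir"; each later item replaces a random
        # slot with probability k/(i+1), so every item is equally likely
        # to end up in the final sample.
        reservoir = []
        for i, item in enumerate(iterable):
            if i < k:
                reservoir.append(item)
            else:
                j = random.randrange(i + 1)
                if j < k:
                    reservoir[j] = item
        return reservoir

    # e.g. pick 3 lines from a huge file without loading it into memory:
    # with open("big.txt") as f:
    #     print(reservoir_sample(f, 3))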
Tim Peters added the comment:
Thanks! That explanation really helps explain where "geometric distribution"
comes from. Although why it keeps taking k'th roots remains a mystery to me ;-)
Speaking of which, the two instances of
exp(log(random())/k)
are numerically suspect. Be
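For context, the expression in question is just a k'th root; exp(log(u)/k) is mathematically u**(1/k):

    import math
    import random

    k = 5
    u = random.random()              # assume u != 0.0 so log() is defined
    w1 = math.exp(math.log(u) / k)   # the spelling quoted above
    w2 = u ** (1.0 / k)              # the mathematically equivalent k'th root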
Tim Peters added the comment:
Julia's randsubseq() doesn't let you specify the _size_ of the output desired.
It picks each input element independently with probability p, and the output
can be of any size from 0 through the input's size (with mean output length
p*length
Tim Peters added the comment:
The lack of exactness (and possibility of platform-dependent results,
including, e.g., when a single platform changes its math libraries) certainly
works against it.
But I think Raymond is more bothered by that there's no apparently _compelling_
use cas
New submission from Tim Z :
It refuses to go full screen when I rotate the screen 90° on Mac.
--
components: macOS
messages: 374014
nosy: Tim Z, ned.deily, ronaldoussoren
priority: normal
severity: normal
status: open
title: idle not going full screen when I rotate screen 90° on mac
type
Tim Z added the comment:
I have a 2nd screen that rotates to portrait. Is there a way to launch "import
tkinter; tkinter.Tk()" at IDLE start? Or should I wait for the next update?
It's not related but...
...why doesn't idle3.8 have cut/copy/paste on right click? (I mean the paste
Tim Z added the comment:
It works even after a restart. I thought I had to run it after each restart;
I did it in IDLE only once.
So there's no need to do it again in Python.
--
Python tracker
<https://bugs.python.org/issue41
Tim Peters added the comment:
I see no evidence of a bug here. To the contrary, the output proves that
__del__ methods are getting called all along. And if garbage weren't being
collected, after allocating a million objects each with its own megabyte string
object, memory use at th
Tim Peters added the comment:
What makes you think that? Your own output shows that the number of "Active"
objects does NOT monotonically increase across output lines. It goes up
sometimes, and down sometimes. Whether it goes up or down is entirely due to
accidents of when your
Tim Z added the comment:
idle shell window
https://imgur.com/zuyuOaS
--
Python tracker
<https://bugs.python.org/issue41349>
Tim Peters added the comment:
It's impossible for any implementation to know that cyclic trash _is_ trash
without, in some way, traversing the object graph. This is expensive, so
CPython (or any other language) does not incur that expense after every single
decref that leaves a non
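A tiny demonstration of why the cyclic collector, rather than reference counting alone, has to find such garbage:

    import gc

    class Node:
        def __del__(self):
            print("collected")

    a = Node()
    b = Node()
    a.partner = b
    b.partner = a      # a reference cycle: refcounts can never reach zero
    del a, b           # nothing is freed yet
    gc.collect()       # the collector traverses the object graph and frees both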
Tim Peters added the comment:
Well, this isn't a help desk ;-) You may want instead to detail your problem
on, say, StackOverflow, or the general Python mailing list.
Please note that I don't know what your "problem" _is_: you haven't said. You
posted some numbers
Tim Peters added the comment:
I'm inclined to ignore this. No actual user has complained about this, and I
doubt any ever will: it's got to be rare as hen's teeth to use a parameter
outside of, say, [0.1, 10.0], in real life. The division error can't happen for
thos
Tim Peters added the comment:
BTW, if we have to "do something", how about changing
return 1.0 / u ** (1.0/alpha)
to the mathematically equivalent
return (1.0 / u) ** (1.0/alpha)
? Not sure about Linux-y boxes, but on Windows that would raise OverflowError
instead of ZeroDiv
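A sketch of the two spellings side by side, assuming (as in the stdlib code at the time) that u = 1.0 - random(), so u lies in (0.0, 1.0]:

    import random

    def pareto_current(alpha):
        u = 1.0 - random.random()            # in (0.0, 1.0]
        # For tiny alpha, u ** (1.0/alpha) can underflow to 0.0, turning
        # the outer division into a ZeroDivisionError.
        return 1.0 / u ** (1.0 / alpha)

    def pareto_alternative(alpha):
        u = 1.0 - random.random()
        # Mathematically the same, but an out-of-range result surfaces as
        # OverflowError from the power instead (per the comment above,
        # at least on Windows).
        return (1.0 / u) ** (1.0 / alpha)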
Tim Peters added the comment:
I'm not clear on that the alias method is valuable here. Because of the
preprocessing expense, it cries out for a class instead: an object that can
retain the preprocessed tables, be saved to disk (pickled), restored later, and
used repeatedly to mak
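For concreteness, a rough sketch of that class idea using Vose's variant of the alias method; this is purely illustrative, not a proposal for the actual API:

    import random

    class AliasSampler:
        """Preprocess weights once (O(n)), then draw samples in O(1) each."""

        def __init__(self, weights):
            n = len(weights)
            total = sum(weights)
            scaled = [w * n / total for w in weights]
            self.prob = [0.0] * n
            self.alias = [0] * n
            small = [i for i, p in enumerate(scaled) if p < 1.0]
            large = [i for i, p in enumerate(scaled) if p >= 1.0]
            while small and large:
                s, l = small.pop(), large.pop()
                self.prob[s] = scaled[s]
                self.alias[s] = l
                scaled[l] -= 1.0 - scaled[s]
                (small if scaled[l] < 1.0 else large).append(l)
            for i in small + large:          # leftovers from floating-point slop
                self.prob[i] = 1.0

        def sample(self):
            # Pick a column uniformly, then either keep it or take its alias.
            i = random.randrange(len(self.prob))
            return i if random.random() < self.prob[i] else self.alias[i]

    # sampler = AliasSampler([10, 1, 1]); sampler.sample() returns 0 about
    # 10/12 of the time -- and the sampler object itself can be pickled.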
Tim Peters added the comment:
Oh yes - I understood the intent of the code. It's as good an approach to
living with floating-point slop in this context as I've seen. It's not
obviously broken. But neither is it obviously correct, and after a few minutes
I didn't
Tim Peters added the comment:
That text is fine, if you feel something needs to be said at all. I really
don't. A Pareto distribution of this kind with parameter <= 1.0 has infinite
expected value - VERY long tail. Anyone who knows what they're doing already
knows that. T
Tim Peters added the comment:
I'm skeptical of the need for - and wisdom of - this. Where does it come up? I
can't think of any context where this would have been useful, or of any other
language or package that does something like this. Long chains of mults are
unusual outside
Tim Peters added the comment:
See "wisdom" earlier ;-) It's ad hoc trickery that seemingly can't be explained
without showing the precise implementation in use today. As already mentioned,
frexp() trickery _can_ be explained: exactly what you'd get if left-to-righ
Tim Peters added the comment:
Cool! So looks like you could also address an accuracy (not out-of-range)
thing the frexp() method also does as well as possible: loosen the definition
of "underflow" to include losing bits to subnormal products. For example, with
the inputs
>>>
Tim Peters added the comment:
I may well have misread the code, believing it can still allow spurious
over/underflows. On second reading of the current file, I don't know - it's
more complicated than I thought.
If it does guarantee to prevent them, then I shift from -1 to (pro
Tim Peters added the comment:
"Denormal" and "subnormal" mean the same thing. The former is probably still in
more common use, but all the relevant standards moved to "subnormal" some years
ago.
Long chains of floating mults can lose precision too, but hardly
Tim Peters added the comment:
Well, that can't work: the most likely result for a long input is 0.0 (try
it!). frexp() forces the mantissa into range [0.5, 1.0). Multiply N of those,
and the result _can_ be as small as 2**-N. So, as in Mark's code, every
thousand times (2
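A minimal sketch of the frexp()-based idea (my own illustration of the technique, not the code under discussion): keep the running product's mantissa in a safe range and carry the binary exponent separately as an exact int.

    from math import frexp, ldexp

    def product(xs):
        mant, exp = 1.0, 0
        for x in xs:
            mant *= x
            m, e = frexp(mant)        # mant == m * 2**e, with 0.5 <= |m| < 1
            mant, exp = m, exp + e    # fold the binary exponent into an int
        return ldexp(mant, exp)       # only this final step can over/underflow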
Tim Peters added the comment:
More extensive testing convinces me that pairing multiplication is no real help
at all - the error distributions appear statistically indistinguishable from
left-to-right multiplication.
I believe this has to do with the "condition numbers" of fp ad
Tim Peters added the comment:
Or, like I did, they succumbed to an untested "seemingly plausible" illusion ;-)
I generated 1,000 random vectors (in [0.0, 10.0)) of length 100, and for each
generated 10,000 permutations. So that's 10 million 100-element products
overall.
Tim Peters added the comment:
Cute: for any number of arguments, try computing h**2, then one at a time
subtract a**2 (an argument squared) in descending order of magnitude. Call
that (h**2 - a1**2 - a2**2 - ...) x.
Then
h -= x/(2*h)
That should reduce errors too, although not nearly
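One way to realize that step, sketched here with math.fsum() doing the careful subtraction rather than the explicit descending-order loop described above (an illustration, not the implementation under discussion):

    import math

    def corrected_hypot(*coords):
        sumsq = math.fsum(a * a for a in coords)
        h = math.sqrt(sumsq)                  # first approximation
        if h:
            # x = h**2 - a1**2 - a2**2 - ...  computed carefully
            x = math.fsum([h * h] + [-a * a for a in coords])
            h -= x / (2.0 * h)                # one Newton-style correction
        return h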
Tim Peters added the comment:
> ...
> one at a time subtract a**2 (an argument squared) in descending
> order of magnitude
> ...
But that doesn't really help unless the sum of squares was computed without
care to begin with. Can do as well by skipping that but instead comput
Tim Peters added the comment:
Oh no - I wouldn't use this as a default implementation. Too expensive. There
is one aspect you may find especially attractive, though: unlike even the
Decimal approach, it should be 100% insensitive to argument order (no info is
lost before fsum() is c
Tim Peters added the comment:
I suspect you're reading some specific technical meaning into the word "block"
that the PR and release note didn't intend by their informal use of the word.
But I'm unclear on what technical meaning you have in mind.
Before the change
Tim Peters added the comment:
About speed, the fsum() version I posted ran about twice as fast as the
all-Decimal approach, but the add_on() version runs a little slower than
all-Decimal. I assume that's because fsum() is coded in C while the add_on()
prototype makes mounds of addit
Tim Peters added the comment:
Here's a "correct rounding" fail for the add_on approach:
xs = [16.004] * 9
decimal result = 48.01065814103642
which rounds to float 48.014
add_on result: 48.01
That's about 0.500
Tim Peters added the comment:
> That's about 0.50026 ulp too small - shocking ;-)
Actually, that's an illusion due to the limited precision of Decimal. The
rounded result is exactly 1/2 ulp away from the infinitely precise result -
it's a nearest/even tie case.
Tim Peters added the comment:
Here's an amusing cautionary tale: when looking at correct-rounding failures
for the fsum approach, I was baffled until I realized it was actually the
_decimal_ method that was failing. Simplest example I have is 9 instances of
b=4.999, which
Tim Peters added the comment:
There's no evidence of a Python issue here, so I recommend closing this. It's
not the Python bug tracker's job to try to make sense of platform-specific
reporting tools, which, as already explained, can display exceedingly confusing
numbers.
Tim Peters added the comment:
Just FYI, if the "differential correction" step seems obscure to anyone, here's
some insight, following a chain of mathematically equivalent respellings:
result + x / (2 * result) =
result + (sumsq - result**2) / (2 * result) =
result + (su
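Completing the algebra for readers following along (my reconstruction; the original message is cut off here):

    result + x / (2 * result)
      = result + (sumsq - result**2) / (2 * result)
      = result / 2 + sumsq / (2 * result)
      = (result + sumsq / result) / 2

That is, one step of Newton's method (Heron's rule) for sqrt(sumsq), started from the float square root of the carefully computed sum of squares.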
Tim Peters added the comment:
My apologies if nobody cares about this ;-) I just think it's helpful if we all
understand what really helps here.
> Pretty much the whole trick relies on computing
> "sumsq - result**2" to greater than basic machine
> precision.
But
Tim Peters added the comment:
> won't have a chance to work through it for a week or so
These have been much more in the way of FYI glosses. There's no "suggestion"
here to be pursued - just trying to get a deeper understanding of code already
written :-)
While I ca
Tim Peters added the comment:
Do it! It's elegant and practical :-)
--
Python tracker
<https://bugs.python.org/issue41513>
Tim Peters added the comment:
One more implication: since the quality of the initial square root doesn't
really much matter, instead of
result = sqrt(to_float(parts))
a, b = split(result)
parts = add_on(-a*a, parts)
parts = add_on(-2.0*a*b, parts)
parts = add_on
New submission from Tim Peters :
This started on StackOverflow:
https://stackoverflow.com/questions/63623651/how-to-properly-share-manager-dict-between-processes
Here's a simpler program.
Short course: an object of a subclass of mp.Process has an attribute of
seemingly any type obt
Tim Peters added the comment:
Weird. If I insert these between the two process starts:
import time
time.sleep(2)
then the producer produces the expected output:
at start: 666
at producer start: 666
and the program blows up instead when it gets to
print("in con
Tim Peters added the comment:
And more weirdness, changing the tail to:
for i in range(10):
    state_value.value = i
    state_ready.clear()
    producerprocess = MyProducer(state_value, state_ready)
    consumerprocess = MyConsumer(state_value, state_ready
Tim Peters added the comment:
Noting that adding a `.join()` to the failing code on the StackOverflow report
appeared to fix that problem too.
In hindsight, I guess I'm only mildly surprised that letting the main process
run full speed into interpreter shutdown code while worker proc
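A generic, self-contained illustration of the pattern (not the OP's code): join worker processes before the main process exits, so interpreter shutdown does not race with live multiprocessing state.

    import multiprocessing as mp

    def worker(value):
        value.value += 1

    if __name__ == "__main__":
        value = mp.Value("i", 665)
        p = mp.Process(target=worker, args=(value,))
        p.start()
        p.join()              # wait for the worker before interpreter shutdown
        print(value.value)    # 666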
Tim Peters added the comment:
About test_frac.py, I changed the main loop like so:
got = [float(expected)]              # NEW
for hypot in hypots:
    actual = hypot(*coords)
    got.append(float(actual))        # NEW
    err = (actual - expected
Tim Peters added the comment:
Closing, since it remains a unique report and there hasn't been another word
about it in over a year.
--
resolution: -> works for me
stage: -> resolved
status: pending -> closed
Tim Peters added the comment:
The docs are already clear about that you play with `setrecursionlimit()` at
your own risk:
"""
Set the maximum depth of the Python interpreter stack to limit. This limit
prevents infinite recursion from causing an overflow of the C stack and
Tim Peters added the comment:
There is no way in portable ANSI C to deduce a "safe" limit. The limits that
exist were picked by hand across platforms, to be conservative guesses at what
would "never" break.
You're allowed to increase the limit if you think you know
Tim Peters added the comment:
Right, generators played no essential role here. Just one way of piling up a
tall tower of C stack frames.
Search the web for "stackless Python" for the history of attempts to divorce
the CPython implementation from the platform C stack.
There a
Tim Peters added the comment:
"Stackless" is a large topic with a convoluted history. Do the web search. In
short, no, it will never go in the core - too disruptive to too many things.
Parts have lived on in other ways, watered down versions. The PyPy project
captured most of wh
Tim Peters added the comment:
I believe your testing code is in error, perhaps because it's so overly
elaborate you've lost track of what it's doing. Here's a straightforward test
program:
import difflib
s1='http://local:56067/register/200930162135700"
Tim Peters added the comment:
Also reproduced on 64-bit Win10 with just-released 3.9.0.
Note that string search tries to incorporate a number of tricks (pieces of
Boyer-Moore, Sunday, etc) to speed searches. The "skip tables" here are
probably computing a 0 by mistake. The
Tim Peters added the comment:
Good sleuthing, Dennis! Yes, Fredrik was not willing to add "potentially
expensive" (in time or in space) tricks:
http://effbot.org/zone/stringlib.htm
So worst-case time is proportional to the product of the arguments' lengths,
and the cases
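For readers unfamiliar with the "skip table" idea, here is a bare-bones Horspool-style search; this is a generic illustration only, since CPython's fastsearch combines several such tricks and is written in C:

    def horspool_find(haystack, needle):
        # Simplified Boyer-Moore-Horspool search; worst case is still
        # proportional to len(haystack) * len(needle), as discussed above.
        n, m = len(haystack), len(needle)
        if m == 0:
            return 0
        # On a mismatch, shift by the distance from the haystack character's
        # last occurrence in needle[:-1] to the end of the needle (default m).
        skip = {ch: m - i - 1 for i, ch in enumerate(needle[:-1])}
        i = 0
        while i + m <= n:
            if haystack[i:i + m] == needle:
                return i
            i += skip.get(haystack[i + m - 1], m)
        return -1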
Tim Peters added the comment:
Just FYI, the original test program did get the right answer for the second
search on my box - after about 3 1/2 hours :-)
--
Python tracker
<https://bugs.python.org/issue41
Tim Peters added the comment:
BTW, this initialization in the FASTSEARCH code appears to me to be a mistake:
skip = mlast - 1;
That's "mistake" in the sense of "not quite what was intended, and so
confusing", not in the sense of "leads to a wrong result
Tim Peters added the comment:
The attached fastsearch.diff removes the speed hit in the original test case
and in the constructed one.
I don't know whether it "should be" applied, and really can't make time to dig
into it.
The rationale: when the last characters of th
Tim Peters added the comment:
Ya, the case for the diff is at best marginal. Note that while it may be
theoretically provable that the extra test would make the worst cases slower,
that's almost certainly not measurable. The extra test would almost never be
executed in the worst case
Tim Peters added the comment:
Impressive, Dennis! Nice work.
FYI, on the OP's original test data, your fastsearch() completes each search in
under 20 seconds using CPython, and in well under 1 second using PyPy.
Unfortunately, that's so promising it can't just be dismis
Tim Peters added the comment:
> For a bunch of cases it's slower, for some others it's faster.
I have scant real idea what you're doing, but in the output you showed 4 output
lines are labelled "slower" but 18 are labelled "faster".
What you wrote just
Tim Peters added the comment:
Dennis, would it be possible to isolate some of the cases with more extreme
results and run them repeatedly under the same timing framework, as a test of
how trustworthy the _framework_ is? From decades of bitter experience, most
benchmarking efforts end up
Tim Peters added the comment:
Dennis, I'm delighted that the timing harness pointed out an actual glitch, and
that it was so (seemingly) straightforward to identify the algorithmic cause.
This gives me increased confidence that this project can be pushed to adoption,
and your name wi
Tim Peters added the comment:
> There's no discomfort at all to me if, e.g., it stored
> 32-bit counts and is indexed by the last 6 bits of the
> character. That's a measly 256 bytes in all.
Or, for the same space, 16-bit counts indexed by the last 7 bits. Then there'
Tim Peters added the comment:
For completeness, a link to the python-dev thread about this:
https://mail.python.org/archives/list/python-...@python.org/thread/ECAZN35JCEE67ZVYHALRXDP4FILGR53Y/#4IEDAS5QAHF53IV5G3MRWPQAYBIOCWJ5
--
Tim Peters added the comment:
I don't think Rabin-Karp is worth trying here. The PR is provably worst-case
linear time, and the constant factor is already reasonably small. Its only real
weakness I can see is that it can be significantly (but seemingly not
dramatically) slower tha
Tim Peters added the comment:
Removed 3.8 from the Versions list. The code was functioning as designed, and
the O(n*m) worst case bounds were always known to be possible, so there's no
actual bug here.
--
versions: -Python 3.8
Tim Peters added the comment:
And changed the "Type" field to "performance", because speed is the only issue
here.
--
type: behavior -> performance
Tim Peters added the comment:
BTW, in the old post of Fredrik's I linked to, he referred to a
"stringbench.py" program he used for timings, but the link is dead.
I was surprised to find that it still lives on, under the Tools directory. Or
did - I'm on Windows now a
Tim Peters added the comment:
When I ran stringbench yesterday (or the day before - don't remember), almost
all the benefit seemed to come from the "late match, 100 characters" tests.
Seems similar for your run. Here are your results for just that batch,
interleaving the t