Tim Peters added the comment:
I think it's clear Guido would say "#1". The thrust of all his comments to
date is that it was a mistake to change the semantics of os.urandom() on Linux
(and one other platform? don't really care), and that in 3.6+ only `secrets`
should _try
Tim Peters added the comment:
Note that the very popular TI graphics calculators have had a distinct nth-root
function at least since the TI-83. It's a minor convenience there.
I'm +0 on adding it to Python's math module, which means not enough to do any
work ;-)
Note that if
Tim Peters added the comment:
Python's floats are emphatically not doing symbolic arithmetic - they use the
platform's binary floating point facilities, which can only represent a subset
of rationals exactly. All other values are approximated.
In particular, this shows the exact va
Tim Peters added the comment:
Note that the same is true in Python 2.
I don't want to document it, though. In `math.floor(44/4.4)`, the
subexpression `44/4.4` by itself wholly rules out that "[as if] with infinite
precision [throughout the larger expression]" may be in play.
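Spelling out what happens on a typical IEEE-754 box (annotations mine):
>>> 44 / 4.4                # the quotient rounds UP, to exactly 10.0 ...
10.0
>>> import math
>>> math.floor(44 / 4.4)    # ... so floor never sees anything below 10
10
>>> 44 // 4.4               # floor division floors the exact quotient
9.0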
Tim Peters added the comment:
Note: this started on stackoverflow:
https://stackoverflow.com/questions/38356584/python-multiprocessing-threading-code-exits-early
I may be missing something obvious, but the only explanation I could think of
for the behavior seen on Ubuntu is that the threads
Changes by Tim Peters :
--
components: +Library (Lib)
type: -> behavior
Tim Peters added the comment:
Curious: under Python 2.7.11 on Windows, the threads also terminate early
(they run "forever" - as intended - under 3.5.2).
Tim Peters added the comment:
Ah - good catch! I'm closing this as a duplicate of bug 18966. The real
mystery now is why the threads _don't_ terminate early under Windows 3.5.2 -
heh.
--
resolution: -> duplicate
status: open -> closed
superseder: -> Threads wit
Tim Peters added the comment:
This came up again today as bug 27508. In the absence of "fixing it", we
should add docs to multiprocessing explaining the high-level consequences of
skipping "normal" exit processing (BTW, I'm unclear on why it's skipped).
I
Tim Peters added the comment:
Devin, a primary point of `threading.py` is to provide a sane alternative to
the cross-platform thread mess. None of these reports are about making it
easier for threads to go away "by magic" when the process ends. It's the
contrary: the
Tim Peters added the comment:
About ""No parents, no children", that's fine so far as it goes. But Python
isn't C, a threading.Thread is not a POSIX thread, and threading.py _does_ have
a concept of "the main thread". There's no conceptual problem _
Tim Peters added the comment:
About: "The notion of categorically refusing to let a process end perhaps
overreaches in certain situations." threading.py addressed that all along: if
the programmer _wants_ the process to exit without waiting for a particular
threading.Thread, t
Tim Peters added the comment:
FYI, I'm seeing the same kind of odd truncation Steve sees - but it goes away
if I refresh the page.
Tim Peters added the comment:
If you don't show us the regular expression, it's going to be darned hard to
guess what it is ;-)
--
nosy: +tim.peters
Tim Peters added the comment:
Well, some backslash escapes are processed in the "replacement" argument to
`.sub()`. If your replacement text contains a substring of the form `\g` not
immediately followed by `<`, that will raise the exception you're seeing. The pars
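For illustration (the exact error text varies across Python versions):
>>> import re
>>> re.sub(r'(\w+)', r'\g<1>!', 'hi')   # well-formed group reference
'hi!'
>>> re.sub(r'(\w+)', r'\g1', 'hi')      # \g not immediately followed by <
Traceback (most recent call last):
  ...
re.error: missing < at position 1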
Changes by Tim Peters :
--
stage: -> resolved
status: open -> closed
Tim Peters added the comment:
Hmm. I'd test that tau is exactly equal to 2*pi. All Python platforms (past,
present, and plausible future ones) have binary C doubles, so the only
difference between pi and 2*pi _should_ be in the exponent (multiplication by 2
is exact). Else we screw
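A quick sanity check along those lines (math.tau landed in 3.6):
>>> import math
>>> math.tau == 2 * math.pi   # doubling a float only bumps the exponent
True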
Tim Peters added the comment:
For those insisting that tau is somehow unnatural, just consider that the
volume of a sphere with radius r is 2*tau/3*r**3 - the formula using pi instead
is just plain impossible to remember ;-)
Tim Peters added the comment:
Serhiy's objection is a little subtler than that. The Python expression
`math.log(math.e)` in fact yields exactly 1.0, so IF it were the case that x**y
were implemented as
math.exp(math.log(x) * y)
THEN math.e**500 would be computed as math.exp(math.log(m
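Spelling out the quoted fact (a quick check, not how pow() is actually
implemented):
>>> import math
>>> math.log(math.e)          # exactly 1.0, as noted
1.0
>>> math.exp(math.log(math.e) * 500) == math.exp(500)
True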
Tim Peters added the comment:
Note that "iterable" covers a world of things that may not support indexing
(let alone slicing). For example, it may be a generator, or a file open for
reading.
--
nosy: +tim.peters
Changes by Tim Peters :
--
resolution: -> rejected
stage: -> resolved
Tim Peters added the comment:
A meta-note: one iteration of Newton's method generally, roughly speaking,
doubles the number of "good bits" in the initial approximation.
For floating n'th root, it would take an astonishingly bad libm pow() that
didn't get more than half
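For concreteness, here's the classic iteration (a generic sketch, not code
from any patch here):

def newton_step(g, x, n):
    # one Newton iteration for f(g) = g**n - x; each step roughly
    # doubles the number of correct bits in a good starting guess
    return ((n - 1) * g + x / g ** (n - 1)) / n

g = 1.5                        # crude guess for 2 ** 0.5
for _ in range(4):
    g = newton_step(g, 2.0, 2)
    print(g)                   # 1.41666..., 1.41421568..., then ~full precision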
Tim Peters added the comment:
Thanks, Mark! I had worked out the `floor_nroot` algorithm many years ago, but
missed the connection to the AM-GM inequality. As a result, instead of being
easy, proving correctness was a pain that stretched over pages. Delighted to
see how obvious it _can_ be
Tim Peters added the comment:
Noting that `floor_nroot` can be sped a lot by giving it a better starting
guess. In the context of `nroot`, the latter _could_ pass `int(x**(1/n))` as
an excellent starting guess. In the absence of any help, this version figures
that out on its own; an
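A generic sketch of that idea (my reconstruction, not the code as posted;
assumes x is small enough that x ** (1.0 / n) doesn't overflow):

def floor_nroot(x, n):
    # integer floor of the n'th root of x >= 0, Newton's method seeded
    # with the float estimate discussed above
    if x == 0:
        return 0
    g = max(1, int(x ** (1.0 / n)))            # excellent starting guess
    while True:
        g2 = ((n - 1) * g + x // g ** (n - 1)) // n
        if g2 >= g:
            break
        g = g2
    # the float seed can be off by a little; nudge to the exact floor
    while g ** n > x:
        g -= 1
    while (g + 1) ** n <= x:
        g += 1
    return g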
Tim Peters added the comment:
Looks to me like this is what the docs are talking about when they say:
"""
As mentioned above, if a child process has put items on a queue (and it has not
used JoinableQueue.cancel_join_thread), then that process will not terminate
until all buff
Tim Peters added the comment:
Note that `Pool` grew `starmap()` and `starmap_async()` methods in Python 3.3
to (mostly) address this.
The signature difference from the old builtin `map()` remains regrettable.
Note that the `Pool` version differs from the `concurrent.futures` version of
`map
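A minimal illustration of the difference (hypothetical two-argument worker):

from multiprocessing import Pool

def add(a, b):
    return a + b

if __name__ == "__main__":
    with Pool(2) as pool:
        # map() passes one argument per call; starmap() unpacks the tuples
        print(pool.starmap(add, [(1, 2), (3, 4)]))   # [3, 7]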
Tim Peters added the comment:
I don't care about correct rounding here, but it is, e.g., a bit embarrassing
that
>>> 64**(1/3)
3.9999999999999996
Which you may or may not see on your box, depending on your platform pow(), but
which you "should" see: 1/3 is no
Tim Peters added the comment:
Serhiy, I don't know what you're thinking there, and the code doesn't make much
sense to me. For example, consider n=2. Then m == n, so you accept the
initial `g = x**(1.0/n)` guess. But, as I said, there are cases where that
doesn't
Tim Peters added the comment:
Adding one more version of the last code, faster by cutting the number of extra
digits used, and by playing "the usual" low-level CPython speed tricks.
I don't claim it's always correctly rounded - although I haven't found a
specific c
Tim Peters added the comment:
Victor, happy to add comments, but only if there's sufficient interest in
actually using this. In the context of this issue report, it's really only
important that Mark understands it, and he already does ;-)
For example, it starts with float `**` beca
Tim Peters added the comment:
That's clever, Serhiy! Where did it come from? It's not Newton's method, but
it also appears to enjoy quadratic convergence.
As to speed, why are you asking? You should be able to time it, yes? On my
box, it's about 6 times slower than th
Tim Peters added the comment:
Steven, you certainly _can_ ;-) check first whether `r**n == x`, but can you
prove `r` is the best possible result when it's true? Offhand, I can't. I
question it because it rarely seems to _be_ true (in well less than 1% of the
random-ish test cas
Tim Peters added the comment:
As I said, the last code I posted is "fast enough" - I can't imagine a real
application can't live with being able to do "only" tens of thousands of roots
per second. A geometric mean is typically an output summary statistic,
Tim Peters added the comment:
Let's spell one of these out, to better understand why sticking to native
precision is inadequate. Here's one way to write the Newton step in "guess +
relatively_small_correction" form:
def plain(x, n):
    g = x**(1.0/n)
    ret
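The body is truncated above; a plausible completion in the stated form (a
reconstruction):

def plain(x, n):
    g = x ** (1.0 / n)
    # guess + relatively_small_correction: one Newton step at native precision
    return g + (x / g ** (n - 1) - g) / n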
Tim Peters added the comment:
This is how it's supposed to work: Python's re matches at the leftmost
position possible, and _then_ matches the longest possible substring at that
position. When a regexp _can_ match 0 characters, it will match starting at
index 0. So, e.g.,
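(reconstructing the truncated example; the match repr varies by Python version)
>>> import re
>>> re.search('x*', 'abcx')   # 'x*' can match 0 characters
<re.Match object; span=(0, 0), match=''>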
Tim Peters added the comment:
As I just clarified on the members list, the "Zen" is about the design of
Python-the-language. It's hard to imagine that a programming language _could_
be barbaric or rude, Perl notwithstanding ;-)
New submission from Tim Peters:
Each time thru, CWR searches for the rightmost position not containing the
maximum index. But this is wholly determined by what happened the last time
thru - search isn't really needed. Here's Python code:
def cwr2(iterable, r):
    pool = tupl
Tim Peters added the comment:
Oops! Last part should read
"since the indices vector is non-decreasing, if indices[j] was n-2 then
indices[j-1] is also at most n-2"
That is, the instances of "r-2" in the original s
Tim Peters added the comment:
There's another savings to be had when an index becomes the maximum: in that
case, all the indices to its right are already at the maximum, so no need to
overwrite them. This isn't as big a savings as skipping the search, but still
buys about 10% m
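Pulling those three comments together, a sketch of the search-free scheme (my
reconstruction, not the posted patch):

from itertools import combinations_with_replacement   # for the sanity check

def cwr2(iterable, r):
    # j tracks the rightmost position not at the maximum index, so no
    # search is needed; when a position reaches the maximum, everything
    # to its right is already at the maximum and isn't overwritten
    pool = tuple(iterable)
    n = len(pool)
    if r == 0:
        yield ()
        return
    if n == 0:
        return
    indices = [0] * r
    yield tuple(pool[i] for i in indices)
    j = r - 1 if n > 1 else -1
    while j >= 0:
        v = indices[j] + 1
        if v < n - 1:
            indices[j:] = [v] * (r - j)
            j = r - 1
        else:
            indices[j] = v     # the tail already holds the maximum
            j -= 1
        yield tuple(pool[i] for i in indices)

assert list(cwr2('abc', 2)) == list(combinations_with_replacement('abc', 2))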
Tim Peters added the comment:
Wildcard matching can easily be done in worst-case linear time, but not with
regexps. doctest.py's internal _ellipsis_match() shows one way to do it
(doctest can use "..." as a wildcard marker).
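A sketch of the greedy linear-time approach, in the spirit of
_ellipsis_match (my reconstruction, not doctest's actual code):

def wildcard_match(text, pattern, wildcard="..."):
    # each chunk between wildcards is located with str.find, scanning
    # strictly left to right, so the whole match is worst-case linear
    parts = pattern.split(wildcard)
    if len(parts) == 1:                  # no wildcard at all
        return text == pattern
    if not text.startswith(parts[0]):    # first chunk anchored at the start
        return False
    pos = len(parts[0])
    for chunk in parts[1:-1]:            # middle chunks match greedily
        i = text.find(chunk, pos)
        if i < 0:
            return False
        pos = i + len(chunk)
    last = parts[-1]                     # last chunk anchored at the end
    return text.endswith(last) and len(text) - len(last) >= pos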
Tim Peters added the comment:
This is easy: Cowlishaw is wrong on this one, but nothing can be done about it
;-)
Confusion arises because most people think of 0**0 as a value (where it
certainly must be 1) while others seem to view it as some kind of shorthand for
expressing a limit (as the
Tim Peters added the comment:
> random() may return 1.0 exactly
That shouldn't be possible. Although the code does assume C doubles have at
least 53 bits of mantissa precision (in which case it does arithmetic that's
exact in at least 53 bits - cannot round up to 1.0; but _could
Tim Peters added the comment:
FYI, where x = 1.0 - 2.**-53, I believe it's easy to show this under IEEE
double precision arithmetic:
For every finite, normal, double y > 0.0,
IEEE_multiply(x, y) < y
under the default (nearest/even) rounding mode. That implies
int(x*
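A quick spot-check of the claim (not a proof; values chosen to stay safely in
the normal range):

x = 1.0 - 2.0 ** -53          # the largest double strictly less than 1.0
for y in (1e-300, 0.5, 1.0, 3.141592653589793, 1e300):
    assert x * y < y          # holds under round-to-nearest/even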
Tim Peters added the comment:
Steven, there's something wrong with the arithmetic on your machine, but I
can't guess what from here (perhaps you have a non-standard rounding mode
enabled, perhaps your CPU is broken, ...).
In binary, (2**53-1)/2**53
Tim Peters added the comment:
Thanks for the legwork, Steven!
So far it looks like a gcc bug when using -m32 (whether ints, longs and/or
pointers are 4 or 8 bytes _should_ make no difference to anything in Jason
Swails's C example).
But it may be a red herring anyway: there'
Tim Peters added the comment:
I'm guessing this is a "double rounding" problem due to gcc not restricting an
Intel FPU to using 53 bits of precision:
> In binary, (2**53-1)/2**53 * 2049 is:
>
> 0.111...1 (53 ones)
> times
> 100000000001 (2049 in binary)
Tim Peters added the comment:
Should also note that double rounding cannot account for the _original_ symptom
here. Double rounding surprises on Intel chips require an exact product at
least 65 bits wide, but the OP's sequence is far too short to create such a
product. (Steven's
Tim Peters added the comment:
Mark, note that the sequence in the OP's original report only contains 35
elements. That, alas, makes "double rounding" irrelevant to this bug report.
That is, while random.choice() can suffer double-rounding surprises in _some_
cases, it can
Tim Peters added the comment:
Raymond, there are (at least) two bugs here:
1. The original bug report. Nobody yet has any plausible theory for what went
wrong there. So "won't fix" wouldn't be appropriate. If the OP can't provide
more information, neither a rep
Tim Peters added the comment:
Thanks, Mark! That's convincing. Just as a sanity check, I tried all ints in
1 through 4 billion (inclusive) against 1. - 2.**-52, with no failures.
Although that was with ad hoc Python code simulating various rounding methods
using scaled integers, s
Tim Peters added the comment:
I suppose the simplest "fix" would be to replace relevant instances of
int(random() * N)
with
min(int(random() * N), N-1)
That would be robust against many kinds of arithmetic quirks, and ensure that
platforms with and without such quirks would, if
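In context, the idea looks like this (a sketch, not the actual patch):

from random import random

def choice_clamped(seq):
    # even if random() * n rounds up to n on a quirky box, the
    # index can never run off the end of the sequence
    n = len(seq)
    return seq[min(int(random() * n), n - 1)]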
Tim Peters added the comment:
Mark, closest I could find to a substantive SSE-vs-fsum report is here, but it
was closed (because the fsum tests were changed to ignore the problem ;-) ):
http://bugs.python.org/issue5593
Tim Peters added the comment:
> It skews the distribution a tiny little bit, ...
But it doesn't - that's the point ;-)
If double-rounding doesn't occur at all (which appears to be the case on most
platforms), absolutely nothing changes (because min(int(random() * N), N-1) ==
Tim Peters added the comment:
Victor, if people want to use getrandbits(), we should backport the Python3
code, not reinvent it from scratch.
Note too Mark's comment: "There are several places in the source where
something of the form `int(i * random.random())` is used". Th
Tim Peters added the comment:
Victor, don't ask me, look at the code: the random.choice() implementations in
Python 2 and Python 3 have approximately nothing in common, and "the bug" here
should already be impossible in Python 3 (but I can't check that, because I
don
Tim Peters added the comment:
> Anyway, if we modify random.py, the generated
> numbers should be different, no?
Not in a bugfix release. The `min()` trick changes no results whatsoever on a
box that doesn't do double-rounding.
On a box that does do double-rounding, the only di
Tim Peters added the comment:
[Raymond]
> I can't say that I feel good about making everyone pay
> a price for a problem that almost no one ever has.
As far as I know, nobody has ever had the problem. But if we know a bug
exists, I think it's at best highly dubious to wait fo
Tim Peters added the comment:
I have a question about this new snippet in choice():
+    if i == n and n > 0:
+        i = n - 1
What's the purpose of the "and n > 0" clause? Without it, if i == n == 0 then
i will be set to -1, which is just as good as 0 for t
Tim Peters added the comment:
Hmm. Looks like the answer to my question came before, via "Custom collection
can has non-standard behavior with negative indices." Really? We're worried
about a custom collection that assigns some crazy-ass meaning to a negative
index appl
Tim Peters added the comment:
[Serhiy Storchaka]
> ... I want to say that double rounding causes not
> only bias from ideal distribution, but a difference
> between platforms
That's so, but not too surprising. On those platforms users will see
differences between "primitive
Tim Peters added the comment:
> The only reason for the restriction that
> I can think of is that some text representation
> of datetime only provide 4 digits for timezone.
There never was a compelling reason. It was simply intended to help catch
programming errors for a (at the ti
Tim Peters added the comment:
It is really bad that roundtripping current microsecond datetimes doesn't work.
About half of all microsecond-resolution datetimes fail to roundtrip correctly
now. While the limited precision of a C double guarantees roundtripping of
microsecond datetimes
Tim Peters added the comment:
> I wish we could use the same algorithm in
> datetime.utcfromtimestamp as we use in float
> to string conversion. This may allow the
> following chain of conversions to round trip in most cases:
>
> float literal -> float -> datetime ->
Tim Peters added the comment:
> Does your algorithm guarantee that any float that
> is displayed with 6 decimal places or less will
> convert to a datetime or timedelta with microseconds
> matching the fractional part?
No algorithm can, for datetimes far enough in the future (C
Tim Peters added the comment:
> >>> x = float.fromhex('0x1.38f312b1b36bdp-1')
> >>> x
> 0.6112295
> >>> round(x, 6)
> 0.611229
> >>> timedelta(0, x).microseconds
> 611230
>
> but I no longer remember whether we concluded th
Tim Peters added the comment:
> IMHO we should only modify the rounding method used by
> datetime.datetime.fromtimestamp() and
> datetime.datetime.utcfromtimestamp(), other functions
> use the "right" rounding method.
Fine by me. How about today? ;-)
The regression
Tim Peters added the comment:
Larry, I appreciate the vote of confidence, but I'm ill-equipped to help at the
patch level: I'm solely on Windows, and (long story) don't even have a C
compiler at the moment. The patch(es) are too broad and delicate to be sure of
without ki
Tim Peters added the comment:
That's great, Victor! Another person trying the code with their own critical
eyes would still be prudent. Two days ago you wrote:
> This part of Python (handling timestamps, especially
> the rounding mode) is complex, I prefer to check for
> all
Tim Peters added the comment:
FYI, that invariant failed for me just now under the released 3.4.3 too:
Python 3.4.3 (v3.4.3:9b73f1c3e601, Feb 24 2015, 22:44:40) [MSC v.1600 64 bit
(AMD64)] on win32
Type "help", "copyright", "credits" or "license" for mo
Tim Peters added the comment:
Alex, if you like, I'll take the blame for the rounding method - I did express
a preference for it here:
http://bugs.python.org/issue23517#msg249309
When I looked at the code earlier, the round-half-up implementation looked good
to me (floor(x+0.5) if x
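The description is cut off; presumably it continued along these lines (a
reconstruction, with a known wrinkle noted):

import math

def round_half_up(x):
    # round to nearest, with exact halves going away from zero
    return math.floor(x + 0.5) if x >= 0.0 else math.ceil(x - 0.5)

# wrinkle: for x just below 0.5 (e.g. 0.49999999999999994), x + 0.5
# itself rounds up to 1.0, so this formulation returns 1 there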
Tim Peters added the comment:
Yes, it would be good to hear from Mark. When I first saw this report, I
checked to see whether he was on the nosy list. He is, but is apparently busy
elsewhere.
My opinions haven't changed: nearest/even is unnatural for rounding times
("so
Tim Peters added the comment:
Victor, there are good "theoretical" reasons for using half/even rounding in
_general_ use. But this bug report isn't the place to get into them. Here it
just should be enough to note that the IEEE 754 floating point standard
_requires_ half
Tim Peters added the comment:
Goodness. It's the properties of "randomly chosen decimals" that have nothing
to do with timestamps ;-) timestamps aren't random, so "statistical bias" is
roughly meaningless in this context. I gave a specific example before
Tim Peters added the comment:
Bah. It doesn't matter who's consuming the rounding of a binary float to
decimal microseconds: there are only 32 possible fractional parts where
nearest/even and half-up deliver different results. half-up preserves
properties of these specific i
Tim Peters added the comment:
> Half-up leaves them all 5 microseconds apart,
When only looking at the decimal digit in the 6th place after rounding.
Which is all I did look at ;-)
Tim Peters added the comment:
Victor, sorry if I muddied the waters here: Alex & I do agree nearest/even
must be used. It's documented for timedelta already, and the
seconds-from-the-epoch invariant Alex showed is at least as important to
preserve as round-tripping.
Alex, agreed
Tim Peters added the comment:
The longobject.c warnings are almost certainly unrelated to the test_re crash.
If shifting right twice (adding parens for clarity):
(LONG_MAX >> PyLong_SHIFT) >> PyLong_SHIFT.
squashes the warnings, that would be a substantially clearer way to
Tim Peters added the comment:
Universal consensus on ROUND_HALF_EVEN, yes.
I would like to see it repaired for 3.5.0, but that's just me saying so without
offering to do a single damned thing to make it happen ;-)
Tim Peters added the comment:
Guido, you're clearly talking with someone who knows too much ;-) If we're
using the Twister for _anything_ related to crypto-level randomness, then I'd be
appalled - it was utterly unsuitable for any such purpose from day 1. But as a
general-purpos
Tim Peters added the comment:
Stare at footnote 2 for the Reference Manual's "Binary arithmetic operations"
section:
"""
[2] If x is very close to an exact integer multiple of y, it’s possible for
x//y to be one larger than (x-x%y)//y due to rounding. In such cas
Tim Peters added the comment:
> What is the rounding mode used by true division,
For binary floats? It inherits whatever the platform C's x/y double division
uses. Should be nearest/even on "almost all" platforms now, unless the user
fiddles with their FP
Tim Peters added the comment:
BTW, I find this very hard to understand:
"it’s possible for x//y to be one larger than" ...
This footnote was written long before "//" was defined for floats. IIRC, the
original version must have said something like:
"it's pos
Tim Peters added the comment:
The only way to be certain you're never going to face re-entrancy issues in the
future is to call malloc() directly - and hope nobody redefines that too with
some goofy macro ;-)
In the meantime, stick to PyMem_Malloc(). That's the intended way for cod
Tim Peters added the comment:
I expect Peter is correct: the C fromutc() doesn't match the logic of the
Python fromutc(), and there are no comments explaining why the C version
changed the logic.
The last 4 lines of his `time_issues.py` show the difference. The simplified
UKSumme
Tim Peters added the comment:
Patch looks good to me! Thanks :-)
Tim Peters added the comment:
Afraid that's a question for python-dev - I lost track of the active branches
over a year ago :-(
Tim Peters added the comment:
Thank you for your persistence and patience, Peter! It shouldn't have been
this hard for you :-(
Tim Peters added the comment:
You wholly consume the iterator after the first time you apply `list()` to it.
Therefore both `any()` and `all()` see an empty iterator, and return the
results appropriate for an empty sequence:
>>> multiples_of_6 = (not (i % 6) for i in range(1, 10))
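The truncated session presumably continued along these lines (a
reconstruction):
>>> list(multiples_of_6)      # consumes the generator completely
[False, False, False, False, False, True, False, False, False]
>>> any(multiples_of_6)       # now empty: no true element found
False
>>> all(multiples_of_6)       # vacuously true on an empty iterator
True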
Changes by Tim Peters :
--
nosy: +tim.peters
Tim Peters added the comment:
What's your objection? Here's your original example:
>>> from bisect import *
>>> L = [1,2,3,3,3,4,5]
>>> x = 3
>>> i = bisect_left(L, x)
>>> i
2
>>> all(val < x for val in L[:i])
True
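The companion invariant for the right-hand side presumably followed (a
reconstruction):
>>> all(val >= x for val in L[i:])
True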
Tim Peters added the comment:
This is just hard to believe. The symptom you describe is exactly what's
expected if you got the new test suite but did not compile the new C code, both
added by the fix for:
http://bugs.python.org/issue23600
Since we have numerous buildbots on whic
Changes by Tim Peters :
--
components: +Library (Lib) -Extension Modules, ctypes
resolution: -> not a bug
stage: -> resolved
status: open -> closed
Tim Peters added the comment:
Do note that this is not an "edit distance" (like Levenshtein) algorithm. It
works as documented instead ;-) , searching (in effect recursively) for the
leftmost longest contiguous matching blocks. Both "leftmost" and "contiguous"
Tim Peters added the comment:
BTW, the "leftmost longest contiguous" bit is messy to explain, so the main
part of the docs doesn't explain it at all (it's of no interest to 99.9% of users).
Instead it's formally defined in the .find_longest_match() docs:
"&q
Tim Peters added the comment:
If it were treating doubles as floats, you'd get a lot more failures than this.
Many of these look like clear cases of treating _denormal_ doubles as 0.0,
though. I have no experience with ICC, but a quick Google search suggests ICC
flushes denormals to 0
Tim Peters added the comment:
Please read the responses to this older report:
http://bugs.python.org/issue25391
As they say, it's functioning as designed and documented, so this isn't "a
bug". For that reason I'm closing this as "not a bug".
As they also
Tim Peters added the comment:
I'd raise an exception when trying to insert into a bounded deque that's
already full. There's simply no way to guess what was _intended_; it's dead
easy for the user to implement what they _do_ intend (first make room by
deleting the s
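For what it's worth, current CPython appears to have settled on exactly that
(the message text may vary by version):
>>> from collections import deque
>>> d = deque([1, 2, 3], maxlen=3)
>>> d.insert(1, 'x')
Traceback (most recent call last):
  ...
IndexError: deque already at its maximum size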
Tim Peters added the comment:
My opinion doesn't change: I'd rather see an exception. I see no use case for
inserting "into the middle" of a full bounded queue. If I had one, it would
remain trivial to force the specific
Tim Peters added the comment:
+1 from me. Julian, you have the patience of a saint ;-)
Tim Peters added the comment:
If that's the actual code you're using, it has a bug: the "if k2[1] is None"
test is useless, since regardless of whether it's true or false, the next `if`
suite overwrites `retval`. You probably meant
elif k1[1] ...
^^