Changes by Tim Peters :
--
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed
Tim Peters added the comment:
We should adhere to the json spec, but there's no harm (and some real good!) in
the docs pointing out notable cases where json and Python syntax differ.
--
nosy: +tim.peters
Tim Peters added the comment:
Fil, here's the release schedule for Python 2.8:
http://www.python.org/dev/peps/pep-0404/
In short, 2.8 will never be released (well, never by us), and only bugfixes can
be applied to the 2.7 line. That's why 2.7 was removed. Regardless of its
me
Tim Peters added the comment:
I'd rather see `i.bits_at(pos, width=1)`, to act like
(i >> pos) & ((1 << width) - 1)
That is, extract the `width` consecutive bits at positions 2**pos through
2**(pos + width - 1) inclusive.
Because Python ints maintain the illusion of havi
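A minimal sketch of the proposed helper, assuming `bits_at` is just shorthand for the shift-and-mask expression above (the name and default are from the proposal; this is not an existing int method):

    def bits_at(i, pos, width=1):
        # Extract `width` consecutive bits of `i`, starting at bit `pos`.
        if pos < 0 or width < 0:
            raise ValueError("pos and width must be non-negative")
        return (i >> pos) & ((1 << width) - 1)

    # The 4 bits of 0b10110110 starting at bit 2:
    assert bits_at(0b10110110, 2, 4) == 0b1101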
Tim Peters added the comment:
Raymond, I expect they have overlapping - but not identical - audiences.
There seems to be a quite capable bitarray extension here:
https://pypi.python.org/pypi/bitarray/
--
Tim Peters added the comment:
@serhiy, Mark certainly knows the proposed addition isn't _needed_ to pick
apart 64-bit integers. It's an issue there of clarity, not O() behavior. For
example, `i.bits_at(0, 52)` to get at a double's mantissa requires no thought
at all to w
Tim Peters added the comment:
[@anon]
> What should happen next?
1. Write docs.
2. Write a test suite and test a Python implementation.
3. Write C code, and reuse the test suite to test that.
4. Attach a patch for all of that to this issue (although a
Python implementation is no lon
Tim Peters added the comment:
The weakref.slice fix looks solid to me, although it appears to be specific to
2.7 (the methods are fancier on the current default branch, fiddling with
self._pending_removals too).
Does anyone know why the signature of pop is:
def pop(self, key, *args
Tim Peters added the comment:
I'm more puzzled by why `__hash__()` here bothers to call `hex()` at all. It's
faster to hash the underlying int as-is, and collisions in this specific
context would still be rare.
@Josh, note that there's nothing bad about getting sequentia
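Returning to the __hash__ point, a minimal sketch of hashing the underlying int directly (the class here is hypothetical, for illustration only):

    class Handle:
        def __init__(self, value):
            self._value = value          # an int

        def __hash__(self):
            # No need to build a hex string first; hash the int as-is.
            return hash(self._value)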
Tim Peters added the comment:
I would not call this a bug - it's just usually a silly thing to do ;-)
Note, e.g., that p{N} is shorthand for writing p N times. For example, p{4} is
much the same as (but not exactly so in all cases; e.g., if `p` happens to
contain a capturing group
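A small illustration of that caveat (a sketch, not taken from the original report):

    import re

    # With a capturing group, p{4} keeps only the *last* repetition in the
    # group, while writing the group out four times keeps all four.
    print(re.match(r'(\d){4}', '1234').groups())           # ('4',)
    print(re.match(r'(\d)(\d)(\d)(\d)', '1234').groups())  # ('1', '2', '3', '4')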
Tim Peters added the comment:
>> (?<=a)(?<=a)(?<=a)(?<=a)
> There are four different points.
> If a1 before a2 and a2 before a3 and a3 before a4 and a4
> before something.
Sorry, that view doesn't make any sense. A successful lookbehind assertion
matches the e
Tim Peters added the comment:
BTW, note that the idea "successful lookaround assertions match an empty
string" isn't just a figure of speech: it's the literal truth, and - indeed -
is key to understanding what happens here. You can see this by adding some
capturi
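For instance, a sketch of the kind of experiment alluded to above:

    import re

    # A capturing group wrapped around a successful lookbehind captures the
    # empty string: the assertion consumes no characters at all.
    m = re.match(r'a((?<=a))b', 'ab')
    print(m.group(1))   # '' - the lookbehind "matched" an empty string
    print(m.span(1))    # (1, 1) - a zero-width span between 'a' and 'b'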
Tim Peters added the comment:
One more useless ;-) data point, from Macsyma:
? acosh;
-- Function: acosh ()
- Hyperbolic Arc Cosine.
I don't like "area" - while accurate, nobody else uses it. Gratuitous novelty
is no virtue ;-) I like "inverse" better than &
Tim Peters added the comment:
A note from Guido, from about 2 years ago:
https://mail.python.org/pipermail/python-dev/2012-July/121127.html
"""
TBH, I think that adding nanosecond precision to the datetime type is
not unthinkable. You'll have to come up with some clever ba
Tim Peters added the comment:
Yup, it's definitely more than 8 bytes. In addition to the comments you
quoted, an in-memory datetime object also has a full Python object header, a
member to cache the hash code, and a byte devoted to saying whether or not a
tzinfo member is present.
Gue
Tim Peters added the comment:
Of course pickles come with overheads too - don't be tedious ;-) The point is
that the guts of the datetime pickling is this:
basestate = PyBytes_FromStringAndSize((char *)self->data,
                                      _PyDateTime_DATETIME_DATASIZE);
Tim Peters added the comment:
I'm afraid "microoptimizations" aren't worth measuring to begin with, since,
well, they're "micro" ;-) Seriously, switch compilers, compilation flags, or
move to a new release of a single compiler, and a micro-optimization ofte
Tim Peters added the comment:
I have no idea what was done to pickle for Python3, but this line works for me
to unpickle a Python2 protocol 2 datetime pickle under Python3, where P2 is the
Python2 pickle string:
pickle.loads(bytes(P2, encoding='latin1'), encoding='bytes')
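A sketch of how that line might be wrapped up (the helper name, and reading the pickle text from a Python 2 program, are assumptions for illustration):

    import pickle

    def load_py2_pickle(p2_text):
        # `p2_text` is a Python 2 pickle held as a str.  latin-1 maps all 256
        # byte values to themselves, so the conversion to bytes is lossless,
        # and encoding='bytes' keeps Python 2 8-bit strings as bytes objects.
        return pickle.loads(bytes(p2_text, encoding='latin1'),
                            encoding='bytes')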
Tim Peters added the comment:
I'm sympathetic, but I don't see a good solution here without using
incompatible code.
ndiff was built to generate "the highest quality diff possible", for text
written and edited by humans, where "quality" is measured by hu
Tim Peters added the comment:
@eddygeek, I'd still call something so unintuitive "a bug" - it's hard to
believe this is the _intended_ way to get it to work. So I'd keep this open
until someone with better know
Tim Peters added the comment:
Was the title of this meant to be
"datetime.date() should accept a datetime.datetime as init parameter"
instead? That's what the example appears to be getting at.
If so, -1. Datetime objects already have .date(), .time(), and .timetz()
met
Tim Peters added the comment:
Alexander, I don't see a need to make everything a one-liner. Dealing with a
mix of dates and datetimes is easily sorted out with an `if` statement, like
def func(thedate):
    if isinstance(thedate, datetime.datetime):
        thedate = thedate.date()
Tim Peters added the comment:
+1. I agree it's a bug, that the diagnosis is correct, and that the patch will
fix it :-)
--
Tim Peters added the comment:
I'm OK with -1, but I don't get that or -0.0 on 32-bit Windows Py 3.4.1:
Python 3.4.1 (v3.4.1:c0e311e010fc, May 18 2014, 10:38:22) [MSC v.1600 32 bit
(Intel)] on win32
Type "copyright", "credits" or "license()" for more i
Tim Peters added the comment:
To be clear, I agree -0.0 is "the correct" answer, and -1.0 is at best
defensible via a mostly-inappropriate limit argument. But in Py3 floor
division of floats returns an integer, and there is no integer -0. Nor, God
willing, will there ever be ;-)
Tim Peters added the comment:
Sorry, Mark - I took a true thing and carelessly turned it into a false thing
;-)
It's math.floor(a_float) that returns an int in Py3, not floor division of
floats. So, yup, no real problem with returning -0.0 after all; it's just that
it can't
Tim Peters added the comment:
This should remain closed. It's "a feature" that doctest demands exact textual
equality, and that the only way to override this is with one of the `#doctest:`
flags. "What you see is what you get - exactly" is one of doctest's
fun
Tim Peters added the comment:
Floor division on floats is an unattractive nuisance and should be removed,
period - so there ;-)
But short of that, I favor leaving it alone. Whose life would be improved by
changing it to return an int? Not mine - and doing so anyway is bound to break
Tim Peters added the comment:
@pitrou, I think usability is a lot more valuable than cross-feature "formal
consistency" here. I've been extracting bit fields for decades, and always
think of them in terms of "least-significant bit and number of bits". Perhaps
the
Tim Peters added the comment:
@anon, sorry, but we can't accept any code from you unless you have a real name
and fill out a contributor agreement:
http://www.python.org/psf/contrib/
This is legal crud, and I'm not a lawyer. But, in particular, lawyers have
told me that - in th
Tim Peters added the comment:
@HCT, see http://bugs.python.org/issue19915#msg205713 for what's "semantically
wrong". Ints are not arrays - slicing is unnatural.
The point about error checking is that if this were supported via slicing
notation, then the _helpful_ exceptions o
Tim Peters added the comment:
@anon, not to worry: someone else will write the code. Maybe even me ;-)
BTW, "public domain" is not a license. It's the absence of a license. Our
lawyers would not accept that even if you
Changes by Tim Peters :
--
stage: -> committed/rejected
Tim Peters added the comment:
It's working fine. `.search()` always finds the leftmost position at which the
pattern matches. In your example, the pattern '1?' does match at index 0: it
first tries to match `1' at index 0. That's the greedy part. The attempt
f
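A sketch of the behavior being described (the target string here is illustrative, not the one from the report):

    import re

    m = re.search(r'1?', 'abc1')
    print(m.span())    # (0, 0): '1?' happily matches zero characters at index 0
    print(m.group())   # ''     : so the leftmost match is the empty string

    # To insist on the literal character, drop the '?' (or use '1+'):
    print(re.search(r'1', 'abc1').span())   # (3, 4)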
Changes by Tim Peters :
--
status: open -> closed
Tim Peters added the comment:
It will always complete, but may take a very long time - this is one of many
ways to write a regexp that can't match requiring time exponential in the
length of the string. It's not a bug - it's the way Python's kind of regexp
engine
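A classic pattern of this kind (not necessarily the one from the report) shows the exponential blow-up:

    import re
    import time

    pattern = re.compile(r'(a+)+$')     # nested quantifiers: a classic trouble-maker
    for n in (18, 20, 22, 24):
        s = 'a' * n + 'b'               # can never match, so every split of the a's is tried
        t0 = time.perf_counter()
        pattern.search(s)
        print(n, round(time.perf_counter() - t0, 2))  # time roughly doubles per extra 'a'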
Tim Peters added the comment:
Nice to see you, Jurjen! Been a long time :-)
I'd like to see changes here too. It's unclear what "a lazy version" is
intended to mean, exactly, but I agree the actual behavior is surprising, and
that mpool.py is a lot less surprising in
Tim Peters added the comment:
Just for interest, I'll attach the work-around I mentioned (imu.py). At this
level it's a very simple implementation, but now that I look at it, it's
actually a lazy implementation of imap() (or of an unimaginative ;-)
imap_unordered()).
--
Tim Peters added the comment:
@vajrasky, I didn't close it just because "the usual suspects" haven't chimed
in yet. That is, it's a pretty common kind of report, and these usually
attract the same kinds of comments pointing to other regexp implementations.
So
Tim Peters added the comment:
Closing this. Since nobody else "wants have a go" over two decades so far, no
point waiting for that ;-)
--
resolution: invalid -> wont fix
stage: -> committed/rejected
status: open -> closed
Changes by Tim Peters :
--
priority: high -> normal
Tim Peters added the comment:
@Zach, "it would be nice" to know more about this. I tried your little program
on a desktop box, 32-bit Windows Vista, Python 3.3.2, but I boosted the loop
count to 10,000. So it ran well over an hour, with a wide variety of other
loads (from 0
Tim Peters added the comment:
Hmm. One obvious difference on my box:
Python 3.3.2 (v3.3.2:d047928ae3f6, May 16 2013, 00:03:43) [MSC v.1600 32 bit
(Intel)] on win32
>>> time.get_clock_info('monotonic')
namespace(adjustable=False, implementation='GetTickCount64()'
Tim Peters added the comment:
@haypo, I've read the PEP and it has great ideas. What I'm wondering is
whether they've been implemented "correctly" in the relevant cases on Windows
here. That Zach sees a resolution of 0.0156001 on Windows isn't plausibly a
ques
Tim Peters added the comment:
FYI, this person seems to have made a career ;-) of making sense of the Windows
time functions:
http://stackoverflow.com/questions/7685762/windows-7-timing-functions-how-to-use-getsystemtimeadjustment-correctly
and their site:
http://www.windowstimestamp.com
Tim Peters added the comment:
@Liam, try using the "decimal" module instead. That follows rules much like
the ones people learn as kids.
>>> from decimal import Decimal as D
>>> D("0.1") * 3 # decimal results are computed exactly
Decimal('0.3')
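For contrast, the same computation in binary floating point (a sketch, not part of the original message):

    from decimal import Decimal as D

    print(D("0.1") * 3)   # 0.3                  - decimal arithmetic is exact here
    print(0.1 * 3)        # 0.30000000000000004  - 0.1 has no exact binary representation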
Tim Peters added the comment:
I'm not sanguine about fixing any of this :-( The Microsoft docs are awful,
and the more web searches I do the more I realize that absolutely everyone is
confused, just taking their best guesses.
FYI, here are results from your new program on my 32-bit Vist
Tim Peters added the comment:
1. I'm sync'ing with north-america.pool.ntp.org. But the docs on my box say
"Your clock is typically updated once a week", and I believe it.
2. I just ran Zach's program again, with the same Python, and _this_ time
'time'
Tim Peters added the comment:
They certainly should _not_ be swapped, as explained clearly in the message
following the one you referenced. For the first half:
if self._lock.acquire(0):
succeeds if and only if the lock is not held by _any_ thread at the time. In
that case, the lock
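The non-blocking acquire semantics in isolation (a sketch, not the code from the issue):

    import threading

    lock = threading.Lock()
    print(lock.acquire(0))   # True:  the lock was free, and this call now holds it
    print(lock.acquire(0))   # False: already held, so a non-blocking acquire fails
    lock.release()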
Tim Peters added the comment:
I haven't yet seen anyone complain about the inability to compare None
except in the specific context of sorting. If it is in fact specific to
sorting, then this specific symptom and "the problem" are in fact the same
thing ;-)
Tim Peters added the comment:
This is expected. "global" has only to do with the visibility of a name within
a module; it has nothing to do with visibility of mutations across processes.
On a Linux-y system, executing Pool(3) creates 3 child processes, each of which
sees a read-
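A sketch of the distinction (the names here are made up for illustration):

    import multiprocessing as mp

    TOTAL = 0                      # one copy of this per process

    def bump(_):
        global TOTAL
        TOTAL += 1                 # mutates the worker process's copy only
        return TOTAL

    if __name__ == '__main__':
        with mp.Pool(3) as pool:
            print(pool.map(bump, range(6)))   # counts as seen inside the workers
        print(TOTAL)                          # still 0: the parent's copy never changed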
Tim Peters added the comment:
Excellent idea! But then we should change bool(0.1) to be False too ;-)
--
nosy: +tim.peters
Tim Peters added the comment:
[Nick]
> - deprecate aware time() entirely (raises the thorny question of what to
> return from .time() on an aware datetime() object)
aware_datetime_object.time() already returns a naive time object. The thorny
question is what .timetz() should return -
Tim Peters added the comment:
+inf == +inf, and -inf == -inf, are required by the 754 standard.
However, +inf - +inf, and -inf - -inf, are required (by the same
standard) to signal invalid operation and, if that signal is masked (as
it is in Python), to return a NaN. Then NaN == x is false
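In Python terms (a sketch):

    inf = float('inf')
    print(inf == inf)    # True: the standard requires equal infinities to compare equal
    nan = inf - inf      # invalid operation -> NaN (the signal is masked in Python)
    print(nan == nan)    # False: a NaN compares unequal to everything, itself included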
Tim Peters added the comment:
I wasn't keen to add the 2-argument log() extension either. However, I
bet it would help if the docs for that were changed to explain that
log(x, base) is just a convenient shorthand for computing
log(x)/log(base), and therefore may be a little less accurate
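A sketch of the shorthand being described (the exact trailing digits can vary by platform and C math library):

    import math

    # The two-argument form is a convenient spelling of a quotient of logs,
    # so it can be off by an ulp or so even when the exact answer is representable.
    print(math.log(1000, 10))             # may not be exactly 3.0 on every platform
    print(math.log(1000) / math.log(10))  # the quotient it is shorthand for
    print(math.log10(1000))               # 3.0 here - the dedicated function is the better tool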
Tim Peters added the comment:
Yup, it's a good idea. In fact, storing info in the debug malloc blocks
to identify the API family used was part of "the plan", but got dropped
when time ran out.
serialno should not be abused for this purpose, though. On a 32-bit
box, a 24-bit r
Tim Peters added the comment:
Right, I /was/ hallucinating about serialno -- good catch.
Mysterious little integers still suck, though ;-) If you're going to
store it in a byte, then you can #define semi-meaningful letter codes
instead; e.g.,
#define _PYMALLOC_OBJECT_ID '
Tim Peters added the comment:
I understand you're annoyed, but the bug tracker is not the place to
rehash arguments that were settled a decade ago. If you need to pursue
this, please take it to the newsgroup comp.lang.python. Before you do,
you might want to scour the newsgroup's ar
Tim Peters added the comment:
FYI, mysterious numeric differences on PPC are often due to the C
compiler generating code that uses the "fused multiply-add" HW instruction.
In which case, find a way to turn that off :-)
--
nosy: +tim_one
Tim Peters added the comment:
Adding
-mno-fused-madd
would be worth trying. It usually fixes PPC bugs ;-)
--
Tim Peters added the comment:
Mark, you needn't bother: you found the smoking gun already! From your
description, I agree it would be very surprising if FMA made a
significant difference in the absence of catastrophic cancellation.
--
Tim Peters added the comment:
This behavior is intentional and is documented in the
datetime.isoformat() docs:
"""
Return a string representing the date and time in ISO 8601 format,
YYYY-MM-DDTHH:MM:SS.mmmmmm or, if microsecond is 0, YYYY-MM-DDTHH:MM:SS
...
"""
Tim Peters added the comment:
Ezio, it was Guido's design decision, it was intentional, and it's been
documented from the start(*). So you can disagree with it, but you
won't get anywhere claiming it's "a bug": intentional, documented
behaviors are never "
Tim Peters added the comment:
Terry, the language reference also says:
"""
For the purpose of shift and mask operations, a binary representation is
assumed, and negative numbers are represented in a variant of 2's
complement which gives the illusion of an infinite string of
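In code, that illusion looks like this (a sketch):

    # Negative ints act, for & | ^ << >>, like 2's-complement numbers with an
    # inexhaustible supply of leading 1-bits.
    print(-1 & 0xFF)          # 255: masking peels off eight of those 1-bits
    print(-2 >> 1)            # -1:  right shifts keep shifting 1-bits in
    print(bin(-6 & 0b1111))   # '0b1010': the low 4 bits of ...11111010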
Tim Peters added the comment:
Note that round() is implemented much more carefully in Python 3.x than in
Python 2.x, and 120 is actually the correct result under nearest/even rounding
(125 is exactly halfway between representable values when rounded to the
closest 10, and nearest/even
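Concretely (a sketch; the 120/125 values come from the message above):

    print(round(125, -1))    # 120: exactly halfway, so the tie goes to the even multiple of 10
    print(round(135, -1))    # 140: the even neighbour lies on the other side this time
    print(round(0.5), round(1.5), round(2.5))   # 0 2 2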
Tim Peters added the comment:
Showing once again that a proof of FP code correctness is about as compelling
as a proof of God's ontological status ;-)
Still, have to express surprised admiration for
4487665465554760717039532578546e-47! That one's not even close
Tim Peters added the comment:
You can use the comparison, provided you understand what it does, and that it
does NOT do what you hoped it would do. Here:
>>> 1.6 - 1.0
0.60000000000000009
That shows quite clearly that subtracting 1 from the binary approximation to
1.6 does
Tim Peters added the comment:
Mark, I agree that last one should be a release blocker -- it's truly dreadful.
BTW, did you guess in advance just how many bugs there could be in this kind of
code? I did ;-)
--
Tim Peters added the comment:
The GNU library's float<->string routines are based on David Gay's.
Therefore you can compare those to Gay's originals to see how much
effort was required to make them "mostly" portable, and can look at the
history of those to ge
Tim Peters added the comment:
Mark, "extreme complexity" is relative to what's possible if you don't
care about speed; e.g., if you use only bigint operations very
straightforwardly, correct rounding amounts to a dozen lines of
obviously
Tim Peters added the comment:
Is it worth it? To whom ;-) ? It was discussed several times before on
various Python mailing lists, and nobody was willing to sign up for the
considerable effort required (both to update Gay's code and to fight
with shifting platform quirks ever after).
If
Tim Peters added the comment:
Huh. I didn't see Preston volunteer to do anything here ;-)
One bit of software engineering for whoever does sign on: nothing kills
porting a language to a new platform faster than needing to get an
obscure but core subsystem working. So whatever is done
Tim Peters added the comment:
Tim Peters added the comment:
The CPython set/dict implementation does not guarantee "minimal constant
density", so "quite easy" doesn't apply in reality. For example, a set
that once contained a million elements may still contain a million
/slots/ for elements
Changes by Tim Peters :
--
Tim Peters added the comment:
> Out of interest, what does '%#.0f' % 1.5 produce on
> Python 2.7/Windows?
Microsoft's float->string routines have always done "add a half and
chop" rounding. So, yes, 1.5 rounds to 2 there.
> ...
> I suspect that we&
Tim Peters added the comment:
Do you realize that 2**54-1 isn't exactly representable as a float? It
requires 54 bits of precision, but the Python float format only has 53
bits available (on all popular boxes).
>>> 2**54-1
18014398509481983L
>>> float(_) # rounds to 2**54
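The same check spelled out under Python 3 (a sketch):

    n = 2 ** 54 - 1
    print(n.bit_length())         # 54 - one bit more than a double's 53-bit significand
    print(float(n) == 2.0 ** 54)  # True: the value rounds up to 2**54
    print(int(float(n)) == n)     # False: the conversion was not exact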
Tim Peters added the comment:
Yup, -1 here too. For dyadic arithmetic operations (+ - * / % //) on
mixed numeric types, Python's execution model coerces the operands to a
common type before computation begins. Besides just being the way it's
worked "forever" in Python,
Tim Peters added the comment:
Terry asks:
> is it also guaranteed that quick_ratio() <= real_quick_ratio()
Nope! The docs don't say that, so it's not guaranteed.
It's not the _intent_ of the code that it be true, either. The only point to
quick_ratio() and real_q
Tim Peters added the comment:
> It would be great if you could shed
> some light on the history behind pure
> python implementation. Why was it
> developed in the first place?
It was rapid prototyping - design decisions were changing daily, and it goes a
lot faster to change Pyth
Tim Peters added the comment:
> What would be your opinion on adding
> datetime.py to the main python tree
> today?
The funny thing is I can't remember why we bothered creating the C version - I
would have been happiest leaving it all in Python.
Provided the test suite ensure
Tim Peters added the comment:
> I thought x was coming from integer
> arithmetics, but apparently datetime.py loves floats!
The arguments to __new__ can be floats, so it's necessary to deal with floats
there.
--
Changes by Tim Peters :
--
Tim Peters added the comment:
> Do you remember why it was a good idea to
> derive datetime from date?
Why not? A datetime is a date, but with additional behavior. Makes
inheritance conceptually natural.
--
Tim Peters added the comment:
I'm not going to argue about whether datetime "should have been" subclassed
from date - fact is that it was, and since it was Guido's idea from the start,
he wouldn't change it now even if his time machin
Tim Peters added the comment:
> ...
> Another is tzinfo attribute of time. With time t,
> t.utcoffset() is kind of useless given that you
> cannot subtract it from t
Sure you can - but you have to write your code to do time arithmetic. The time
implementation does so under the cov
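One way to do that arithmetic by hand (a sketch; the anchor date is arbitrary and the offset is made up for illustration):

    import datetime as dt

    t = dt.time(1, 30, tzinfo=dt.timezone(dt.timedelta(hours=2)))
    offset = t.utcoffset()                      # timedelta(seconds=7200)

    # time objects don't support subtraction, so route through a datetime:
    anchor = dt.datetime.combine(dt.date(2000, 1, 1), t.replace(tzinfo=None))
    print((anchor - offset).time())             # 23:30:00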
Tim Peters added the comment:
FYI, I like the change. As I recall it, the current wording was just to avoid
saying "ahead of UTC" or "behind UTC" (which was the original wording).
Technically pure or not, I never saw anyone get truly confused by "East of UTC"
Tim Peters added the comment:
I stopped understanding doctest the last time it was rewritten - it got far
more generalized than I ever intended already. It's up to the younger
generation to decide how much more inscrutable to make i
Tim Peters added the comment:
I also don't see a good reason to keep this open now - adds complication for no
quantifiable payoff.
--
New submission from Tim Peters:
They already are.
>>> (-2)**0
1
You're probably doing this instead:
>>> -2**0
-1
Exponentiation has higher precedence than unary minus, so that last example
groups as -(2**0), and -1 is correct.
--
nosy: +tim.peters
resoluti
Tim Peters added the comment:
I can't judge a use case for a thread gimmick in the absence of wholly
specified examples. There are too many possible subtleties. Indeed, if I'd do
anything with Event.clear() it would be to remove it - I've seen too much code
that suffers s
Tim Peters added the comment:
Didn't anyone here follow the discussion about the `secrets` module? PHP was
crucified by security wonks for its horridly naive ways of initializing its
PRNGs:
https://media.blackhat.com/bh-us-12/Briefings/Argyros/BH_US_12_Argyros_PRNG_WP.pdf
Please don
Tim Peters added the comment:
Donald, it does matter. The code you found must be using some older version of
Python, because the Python 3 version of randint() uses _randbelow(), which is
an accept/reject method that consumes an _unpredictable_ number of 32-bit
Twister outputs. That utterly
Tim Peters added the comment:
Donald, your script appears to recreate the state from some hundreds of
consecutive outputs of getrandbits(64). Well, sure - but what of it? That
just requires inverting the MT's tempering permutation. You may as well note
that the state can be recreated
Tim Peters added the comment:
> Searching github pulls up a number of results of people
> calling it, but I haven't looked through them to see
> how/why they're calling it.
Sorry, I don't know what "it" refers to. Surely not to a program exposing the
output o
Tim Peters added the comment:
Ah! Yes, .getrandbits(N) outputs remain vulnerable to equation-solving in
Python 3, for any value of N. I haven't seen any code where that matters (may
be "a security hole"), but would bet some _could_ be found.
There's no claim of absolut
Tim Peters added the comment:
Raymond, while I'm in general agreement with you, note that urandom() doesn't
deliver "random" bytes to begin with. A CSPRNG is still a PRNG.
For example, if the underlying urandom() generator is ChaCha20, _it_ has "only"
512 bits
Tim Peters added the comment:
It was a primary purpose of `secrets` to be a place where security best
practices could be implemented, and changed over time, with no concern about
backward compatibility for people who don't use it.
So if `secrets` needs to supply a class with all the me
Tim Peters added the comment:
Christian, you should really be the first to vote to close this. The title of
this bug report is about whether it would be good to reduce the _number_ of
bytes Random initialization consumes from os.urandom(), not whether to stop
using os.urandom() entirely