Re: [Python-Dev] == on object tests identity in 3.x
07.07.2014 18:11, Andreas Maier wrote:
On 07.07.2014 17:58, Xavier Morel wrote:
On 2014-07-07, at 13:22, Andreas Maier wrote:
While discussing Python issue #12067
(http://bugs.python.org/issue12067#msg222442), I learned that Python
3.4 implements '==' and '!=' on the object type such that if no
special equality test operations are implemented in derived classes,
there is a default implementation that tests for identity (as opposed
to equality of the values).
[...]
IMHO, that default implementation contradicts the definition that
'==' and '!=' test for equality of the values of an object.
[...]
To me, a sensible default implementation for == on object would be
(in Python):
    if v is w:
        return True
    elif type(v) != type(w):
        return False
    else:
        raise ValueError(
            "Equality cannot be determined in default implementation")
Why would comparing two objects of different types return False
Because I think (but I'm not sure) that the type should play a role
for comparison of values. But maybe that does not embrace duck typing
sufficiently, and the type should be ignored by default for comparing
object values.
but comparing two objects of the same type raise an error?
That I'm sure of: Because the default implementation (after having
exhausted all possibilities of calling __eq__ and friends) has no way
to find out whether the values(!!) of the objects are equal.
IMHO, in the Python context, "value" is a very vague term. Quite often
we can read it as the very basic (but not the only) notion of "what
makes objects equal or not" -- and then saying that "objects are
compared by value" is a tautology.
In other words, what an object's "value" is depends on its nature:
e.g. the value of a list is what the values of its consecutive
(indexed) items are; the value of a set is based on the values of all
its elements, without any notion of order or repetition; the value of a
number is a set of its abstract mathematical properties that determine
what makes objects equal, greater or lesser, how particular arithmetic
operations work, etc.
I think there is no universal notion of "the value of a Python
object". The notion of identity seems to be the most generic (every
object has it, even if it does not have any other property) -- and
that's why by default it is used to define the most basic feature of an
object's *value*, i.e. "what makes objects equal or not" (== and !=).
Another possibility would be to raise TypeError but, as Ethan Furman
wrote, it would be impractical (e.g. key-type-heterogeneous dicts or sets
would be practically impossible to work with). On the other hand, the
notion of sorting order (< > <= >=) is a much more specialized object
property.
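A minimal interactive illustration of that default (a plain class that
defines no comparison methods of its own):

    >>> class Box:
    ...     def __init__(self, content):
    ...         self.content = content
    ...
    >>> a, b = Box(42), Box(42)
    >>> a == b     # no __eq__ defined, so the default applies: identity
    False
    >>> a == a
    True
    >>> a != b     # the default __ne__ negates the default __eq__
    True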
Cheers.
*j
[Python-Dev] Python docs about comparisons vs. CPython reality
Hello,

Are these bugs in the Python docs or just some CPython implementation
details that are purposely not documented? (but then, again, some of
the docs seem to be at least not precise...)

In https://docs.python.org/3.4/reference/datamodel.html#object.__eq__
there is the statement:

> There are no implied relationships among the comparison operators.
> The truth of x==y does not imply that x!=y is false. Accordingly,
> when defining __eq__(), one should also define __ne__() so that the
> operators will behave as expected.

On the other hand, in
https://docs.python.org/3.4/library/stdtypes.html#comparisons we read:

> (in general, __lt__() and __eq__() are sufficient, if you want the
> conventional meanings of the comparison operators)

And when I try the __eq__() stuff in CPython it seems that, indeed,
the language provides a proper __ne__() implementation for me
automatically (without the need to implement __ne__() explicitly
myself):

Python 3.4.0 (default, Mar 20 2014, 01:28:00)
[...]
>>> class A:
...     def __eq__(self, other):
...         if hasattr(self, 'x') and hasattr(other, 'x'):
...             return self.x == other.x
...         return NotImplemented
...
>>> A() == A()
False
>>> A() != A()
True
>>> a = A()
>>> a.x = 1
>>> a1 = A()
>>> a1.x = 1
>>> a2 = A()
>>> a2.x = 2
>>> a == a1
True
>>> a != a1
False
>>> a1 == a1
True
>>> a1 != a1
False
>>> a1 == a2
False
>>> a1 != a2
True

Is it a language guarantee (then, I believe, it should be documented)
or just an implementation accident? (then, I believe, it could still
be documented as a CPython implementation detail)

See also the Python equivalent of the SimpleNamespace class (without
__ne__() implemented explicitly):
https://docs.python.org/3/library/types.html#types.SimpleNamespace

On the other hand, the "__lt__() and __eq__() are sufficient"
statement seems not to be true:

>>> a < a1
False
>>> a <= a1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unorderable types: A() <= A()
>>> a > a1
False
>>> a >= a1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unorderable types: A() >= A()
>>> a1 < a2
True
>>> a1 <= a2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unorderable types: A() <= A()
>>> a1 > a2
False
>>> a1 >= a2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unorderable types: A() >= A()

On yet another hand, adding __le__() to that class seems to be
perfectly sufficient (without adding __gt__() and __ge__()):

>>> def le(self, other):
...     if hasattr(self, 'x') and hasattr(other, 'x'):
...         return self.x <= other.x
...     return NotImplemented
...
>>> A.__le__ = le
>>> a < a1
False
>>> a <= a1
True
>>> a > a1
False
>>> a >= a1
True
>>> a1 < a2
True
>>> a1 <= a2
True
>>> a1 > a2
False
>>> a1 >= a2
False

What of all this stuff is a language guarantee and what is just an
implementation accident? Shouldn't it be documented more accurately?

Cheers.
*j
Re: [Python-Dev] (#19562) Asserts in Python stdlib code (datetime.py)
17.11.2013 23:05, Guido van Rossum wrote:

> The correct rule should be "don't use assert (the statement) to check
> for valid user input" and the stated reason should be that the assert
> statement was *designed* to be disabled globally, not to be a
> shorthand for "if not X: raise (mumble) Y". A corollary should also be
> that unittests should not use the assert statement; some frameworks
> sadly encourage the anti-pattern of using it in tests.

My problem with -O (and -OO) is that even though my code is valid (in
terms of the rule 'use assert only for should-never-happen cases') I
have no control over 3rd party library code: I can never know whether
it breaks when I turn -O or -OO on (as long as I do not analyze
carefully the code of the libraries I use, including writing regression
tests [for 3rd party code]...).

Wouldn't it be useful to add the possibility to place an "optimisation
cookie" (syntactically analogous to an "encoding cookie") at the
beginning of each of my source files (when I know they are "-O"-safe),
e.g.:

    # -*- opt: asserts -*-

or even combined with an encoding cookie:

    # -*- coding: utf-8; opt: asserts, docstrings -*-

Then:

* The -O flag would be effectively applied *only* to a file containing
  such a cookie and *exactly* according to the cookie content (whether
  asserts, whether docstrings...).
* Running without -O/-OO would mean ignoring optimisation cookies.
* The -OO flag would mean removing both asserts and docstrings (i.e.
  the status quo of -OO).
* Fine-grained explicit command line flags such as --remove-asserts
  and --remove-docstrings could also be useful.

(Of course, the '-*-' fragments in the above examples are purely
conventional; the actual regex would not include them, as it does not
include them now for encoding cookies.)

Cheers.
*j
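P.S. Purely for illustration, a rough sketch of how such a cookie could
be recognized (hypothetical names and regex; nothing like this exists
in CPython today):

    import re

    # would match e.g. "# -*- coding: utf-8; opt: asserts, docstrings -*-"
    OPT_COOKIE = re.compile(
        r'opt:\s*(?P<opts>[A-Za-z_]+(?:\s*,\s*[A-Za-z_]+)*)')

    def parse_opt_cookie(line):
        match = OPT_COOKIE.search(line)
        if match is None:
            return set()
        return {opt.strip() for opt in match.group('opts').split(',')}

    parse_opt_cookie('# -*- coding: utf-8; opt: asserts, docstrings -*-')
    # -> {'asserts', 'docstrings'}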
Re: [Python-Dev] PEP 461 - Adding % and {} formatting to bytes
16.01.2014 17:33, Michael Urman wrote:
On Thu, Jan 16, 2014 at 8:45 AM, Brett Cannon wrote:
Fine, if you're worried about bytes.format() overstepping by implicitly
calling str.encode() on the return value of __format__() then you will
need __bytes__format__() to get equivalent support.

Could we just re-use PEP-3101's note (easily updated for Python 3):

    Note for Python 2.x: The 'format_spec' argument will be either
    a string object or a unicode object, depending on the type of the
    original format string. The __format__ method should test the type
    of the specifiers parameter to determine whether to return a string
    or unicode object. It is the responsibility of the __format__
    method to return an object of the proper type.

If __format__ receives a format_spec of type bytes, it should return
bytes. For such cases on objects that cannot support bytes (i.e. for
str), it can raise. This appears to avoid the need for additional
methods. (As does Nick's proposal of leaving it out for now.)
-1.
I'd treat the format()+.__format__()+str.format()-"ecosystem" as
a nice text-data-oriented, *complete* Py3k feature, backported to
Python 2 to share the benefits of the feature with it as well as
to make the 2-to-3 transition a bit easier.
IMHO, the PEP-3101 note cited above just describes a workaround
for the flaws of Py2's obsolete text model. Moving such
complications into Py3k would make the feature (and especially the
ability to implement your own .__format__()) harder to understand
and make use of -- for little profit.
Such a move is not needed for compatibility. And, IMHO, the
format()/__format__()/str.format()-matter is all about nice and
flexible *text* formatting, not about binary data interpolation.
16.01.2014 10:56, Nick Coghlan wrote:
I have a different proposal: let's *just* add mod formatting to
bytes, and leave the extensible formatting system as a text only
operation.
We don't really care if bytes supports that method for version
compatibility purposes, and the deliberate flexibility of the design
makes it hard to translate into the binary domain.
So let's just not provide that - let's accept that, for the binary
domain, printf style formatting is just a better fit for the job :)
+1!
However, I am not sure if %s should be limited to bytes-like
objects. As "practicality beats purity", I would be +0.5 for
enabling the following (a rough sketch in Python follows the list):

- input type supports Py_buffer?
  use it to collect the necessary bytes
- input type has the __bytes__() method?
  use it to collect the necessary bytes
- input type has the encode() method?
  raise TypeError
- otherwise:
  use something equivalent to ascii(obj).encode('ascii')
  (note that it would nicely format numbers and format other
  objects in a more-or-less useful way without the fear of
  encountering non-ASCII data);
  another option: use the str()-representation of strictly
  defined types only, e.g. int, float, decimal.Decimal,
  fractions.Fraction...
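
For illustration only, a rough sketch of that dispatch (a hypothetical
helper name, not an actual CPython implementation):

    def _format_bytes_arg(obj):
        # what %s in bytes.__mod__() could do with its argument
        try:
            return bytes(memoryview(obj))    # input supports Py_buffer
        except TypeError:
            pass
        if hasattr(type(obj), '__bytes__'):
            return bytes(obj)                # input provides __bytes__()
        if hasattr(type(obj), 'encode'):
            raise TypeError('str not accepted; encode it explicitly')
        return ascii(obj).encode('ascii')    # e.g. numbers, ASCII-safe

    _format_bytes_arg(42)        # -> b'42'
    _format_bytes_arg(b'abc')    # -> b'abc'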
Cheers.
*j
Re: [Python-Dev] PEP 463: Exception-catching expressions
21.02.2014 18:37, Guido van Rossum wrote:

> I'm put off by the ':' syntax myself (it looks to me as if someone
> forgot a newline somewhere)

As I mentioned at python-ideas, I believe that parens neutralize, at
least to some extent, that unfortunate statement-ish flavor of the
colon.

This one has some statement-like smell:

    msg = seq[i] except IndexError: "nothing"

But this looks better, I believe:

    msg = (seq[i] except IndexError: "nothing")

Or even (still my favorite):

    msg = seq[i] except (IndexError: "nothing")

Cheers.
*j
Re: [Python-Dev] PEP 463: Exception-catching expressions
23.02.2014 19:51, Stefan Behnel wrote:

> I see a risk of interfering with in-place assignment operators, e.g.
>
>     x /= y except ZeroDivisionError: 1
>
> might not do what one could expect, because (as I assume) it would
> behave differently from
>
>     x = x / y except ZeroDivisionError: 1
[snip]

Please note that:

    x /= y if y else 0

also behaves differently from:

    x = x / y if y else 0

Anyway, enclosing in parens would make that explicit and clear.

Cheers.
*j
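P.S. A quick interactive illustration of that already-existing
asymmetry with conditional expressions (arbitrary sample values):

    >>> x, y = 10.0, 0
    >>> x = x / y if y else 0      # parsed as: x = (x / y) if y else 0
    >>> x
    0
    >>> x = 10.0
    >>> x /= y if y else 0         # parsed as: x /= (y if y else 0)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ZeroDivisionError: float division by zero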
[Python-Dev] Odp: PEP 435 -- Adding an Enum type to the Python standard library
Guido van Rossum wrote:

> we'd like to be able to define methods for the enum values, and the
> simplest way (for the user) to define methods for the enum values
> would be to allow def statements, possibly decorated, in the class.
> But now the implementation has to draw a somewhat murky line between
> which definitions in the class should be interpreted as enum value
> definitions, and which should be interpreted as method definitions.
> If we had access to the syntax used for the definition, this would be
> simple: assignments define items, def statements define methods. But
> at run time we only see the final object resulting from the
> definition, which may not even be callable in the case of certain
> decorators. I am still optimistic that we can come up with a rule
> that works well enough in practice (and the Zen rule to which I was
> referring was, of course, "practicality beats purity").

Maybe only names that do *not* start with an underscore should be
treated as enum value names; and those starting with an underscore
could be used e.g. to define methods etc.? Python has a long tradition
of treating names differently depending on that feature.

*j

--
Sent from phone...
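P.S. A rough, runnable sketch of that rule, purely for illustration
(hypothetical names; not the API that PEP 435 specifies):

    class SimpleEnumMeta(type):
        # Treat non-underscore class-body names as enum value names;
        # keep underscore-prefixed names (methods, dunders) as-is.
        def __new__(mcls, name, bases, namespace):
            values = {key: obj for key, obj in namespace.items()
                      if not key.startswith('_')}
            rest = {key: obj for key, obj in namespace.items()
                    if key.startswith('_')}
            cls = super().__new__(mcls, name, bases, rest)
            cls._value_names = tuple(values)
            return cls

    class Color(metaclass=SimpleEnumMeta):
        red = 1
        green = 2
        def _describe(self):      # underscore name -> stays a method
            return 'a color'

    Color._value_names   # -> ('red', 'green') where dicts keep insertion order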
Re: [Python-Dev] Avoiding error from repr() of recursive dictview
23.07.2013 00:01, Gregory P. Smith wrote:
On Mon, Jul 22, 2013 at 2:44 PM, Ben North wrote:
A friend of mine, Ruadhan O'Flanagan, came across a bug which turned
out to be the one noted in http://bugs.python.org/issue18019, i.e.:
>>> d={}
>>> d[42]=d.viewvalues()
>>> d
This issue has been fixed in hg; the behaviour now is that a
RuntimeError is produced for a recursive dictionary view:
>>> d={}
>>> d[42]=d.viewvalues()
>>> d # (output line-broken:)
{42: Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: maximum recursion depth exceeded while getting the repr
of a list
Before finding this, though, I'd investigated and made a patch which
produces a similar "..." output to a recursive dictionary. Reworking
against current 2.7, the behaviour would be:
>>> x={}
>>> x[42]=x
>>> x # existing behaviour for dictionaries:
{42: {...}}
>>> d={}
>>> d[42]=d.viewvalues()
>>> d # new behaviour:
{42: dict_values([...])}
>>> d[43]=d.viewitems()
>>> d # (output line-broken:)
{42: dict_values([..., dict_items([(42, ...), (43, ...)])]),
43: dict_items([(42, dict_values([..., ...])), (43, ...)])}
Attached is the patch, against current 2.7 branch. If there is interest
in applying this, I will create a proper patch (changelog entry, fix to
Lib/test/test_dictviews.py, etc.).
Mailing lists are where patches go to get lost and die. :) Post it
on an issue on bugs.python.org. Given that the RuntimeError fix
has been released, your proposed ... behavior is arguably a new
feature so I'd only expect this to make sense for consideration in
3.4, not 2.7. (if accepted at all)
IMHO it's still a bug (even though not as painful as a segfault) that
should also be fixed in 2.7 and 3.2/3.3.
In other cases (such as `d={}; d[42]=d; repr(d)`) Python does its best
to avoid an error -- so why should it raise an exception in this case
(`d={}; d[42]=d.values(); repr(d)`)?
IMHO it's an obvious oversight in the implementation, not a feature
that anybody would expect.
Regards.
*j
Re: [Python-Dev] PEP 447: add type.__locallookup__
Is '__locallookup__' really a good name? In Python, *local* --
especially in the context of *lookups* -- usually refers to locals(),
i.e. the namespace of a function/method execution frame, or the
namespace of a class during the *definition* of that class... So
'__locallookup__' can be confusing.

Why not just '__getclassattribute__' or '__classlookup__', or
'__classattribute__'...?

Cheers.
*j
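P.S. For comparison, what "local" usually denotes in Python (a minimal
interactive illustration):

    >>> def f():
    ...     x = 1
    ...     return locals()        # the namespace of the execution frame
    ...
    >>> f()
    {'x': 1}
    >>> class C:
    ...     y = 2
    ...     has_y = 'y' in locals()    # the class namespace during definition
    ...
    >>> C.has_y
    True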
Re: [Python-Dev] Revert #12085 fix for __del__ attribute error message
24.09.2013 10:16, Antoine Pitrou wrote:

> On Tue, 24 Sep 2013 18:06:15 +1000 Nick Coghlan wrote:
> > How is it wrong? At the point where the interpreter says "This
> > exception is now unraisable", what, precisely, is it saying that is
> > wrong? It isn't saying "this has never been raised". It is saying,
> > "where it is currently being processed, this exception cannot be
> > raised".
>
> Well, it is saying it. If it's conceptually unraisable, it can't be
> raised. I know your point is that it is only unraisable *now*, but
> that's not the intuitive interpretation.

And what about:

    Exception not propagated from > ...

Or:

    Exception that cannot be propagated from > ...

Cheers.
*j
[Python-Dev] A grammatical oddity: trailing commas in argument lists -- continuation
Dear Python Developers,

It is my first post to python-dev, so let me introduce myself briefly:
Jan Kaliszewski, programmer and composer, sometimes also NGO activist.

Coming to the matter... The discussion started with a remark by Mark
Dickinson about such a syntax oddity:

    def f(a, b,): ...

is fine, but

    def f(*, a, b,): ...

is a SyntaxError.

Then some other similar oddities were pointed at (*args/**kwargs-related
ones, as well as calls like f(*, a=3,) causing SyntaxError too).

References:

* http://mail.python.org/pipermail/python-dev/2010-July/101636.html
* http://bugs.python.org/issue9232
* http://bugs.python.org/issue10682

But yesterday both mentioned issues were closed as rejected -- with the
suggestion that it would probably require a PEP to modify Python in
this respect (as there is no clear consensus). So I'd opt for
re-opening the discussion -- I suppose that more people could be
interested in solving the issue (at least after the end of the PEP 3003
moratorium period).

I think that seeing that:

    def f(a, b): ...
    def f(a, b,): ...
    def f(a, *, b): ...
    def f(a, *args, b): ...
    x(1, 2, 3, 4, z=5)
    x(1, 2, 3, 4, z=5,)
    x(1, *(2,3,4), z=5)

...are ok, then --

    def f(a, *, b,): ...
    def f(a, *args, b,): ...
    x(1, *(2,3,4), z=5,)

...should be ok as well, and consequently --

    def f(a, *args,): ...
    def f(a, **kwargs,): ...
    x(1, *(2,3,4),)
    x(1, **dict(z=6),)

...should also be ok.

Please also note that Py3k's function annotations make the
one-def-argument-per-line formatting style the most suitable in some
cases, e.g.:

    def my_func(
            spam:"Very tasty and nutritious piece of food",
            ham:"For experts only",
            *more_spam:"Not less tasty and not less nutritious!",
            spammish_inquisition:"Nobody expects this!",
    ) -> "Spam, spam, spam, spam, spam, spam, spam, spam, spam, spam":
        ...

Regards,
Jan Kaliszewski
Re: [Python-Dev] A grammatical oddity: trailing commas in argument lists -- continuation
Nick Coghlan dixit (2010-12-13, 23:25):

> Function arguments are not lists. Even when separated onto multiple
> lines, the closing "):" should remain on the final line with other
> content.

Not necessarily, IMHO.

1. What about my example with the '-> xxx' return-value annotation?
   (especially when that annotation is a long expression)

2. There are two argument-list-formatting idioms I apply -- depending
   on which is more suitable in a particular case:

   a) when argument specs/expressions are not very long and their
      number is not very big:

          def function(argument_spec1, argument_spec2, argument_spec3,
                       argument_spec4, argument_spec5, argument_spec6):

          function_call(expression1, expression2, expression3,
                        expression4, expression5, expression6)

   b) for long argument lists and/or argument specs/expressions
      (e.g. when default values or argument annotations are defined
      as long expressions):

          def function(
                  long_argument_spec1,
                  long_argument_spec2,
                  long_argument_spec3,
                  long_argument_spec4,
                  long_argument_spec5,
                  long_argument_spec6,
          ):

          function_call(
              long_expression1,
              long_expression2,
              long_expression3,
              long_expression4,
              long_expression5,
              long_expression6,
          )

Note that option 'b' is more convenient for refactoring, diffs etc.

Regards,
*j
Re: [Python-Dev] Suggested addition to PEP 8 for context managers
Paul Moore dixit (2012-04-17, 08:14):

> On 16 April 2012 17:10, Nam Nguyen wrote:
> > PEP 8 suggests no extra spaces after and before square brackets, and
> > colons. So code like this is appropriate:
> >
> >     a_list[1:3]
> >
> > But I find it less readable in the case of:
> >
> >     a_list[pos + 1:-1]
> >
> > The colon is seemingly lost in the right.
> >
> > Would it be better to read like below?
> >
> >     a_list[pos + 1 : -1]
> >
> > Any opinion?
>
> It says no space *before* a colon, not after. So the following should
> be OK (and is what I'd use):
>
>     a_list[pos + 1: -1]

I'd prefer either:

    a_list[pos+1:-1]

or

    a_list[(pos + 1):-1]

Regards.
*j
Re: [Python-Dev] [Python-checkins] peps: Add PEP 422: Dynamic Class Decorators
Terry Reedy dixit (2012-06-05, 12:42):

> On 6/5/2012 8:09 AM, nick.coghlan wrote:
>
> > Add PEP 422: Dynamic Class Decorators
[snip]
> > +So too will the following be roughly equivalent (aside from inheritance)::
> > +
> > +    class C:
> > +        __decorators__ = [deco2, deco1]
>
> I think you should just store the decorators in the correct order of use
>
>     +__decorators__ = [deco1, deco2]
>
> and avoid the nonsense (time-waste) of making an indirect copy via
> list_iterator and reversing it each time the attribute is used.

+1. For the @-syntax the inverted order seems to be somehow natural.
But I feel the list order should not mimic that...

***

Another idea: what about...

    @@dynamic_deco2
    @@dynamic_deco1
    class C:
        pass

...being an equivalent of:

    class C:
        __decorators__ = [dynamic_deco1, dynamic_deco2]

...as well as of:

    @@dynamic_deco2
    class C:
        __decorators__ = [dynamic_deco1]

?

Cheers.
*j
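P.S. A small runnable sketch of the ordering question, purely for
illustration (deco1/deco2 and the 'trace' attribute are made-up
placeholders; the explicit loop merely simulates what a
__decorators__-aware mechanism might do):

    def deco1(cls):
        cls.trace = getattr(cls, 'trace', '') + '1'
        return cls

    def deco2(cls):
        cls.trace = getattr(cls, 'trace', '') + '2'
        return cls

    @deco2
    @deco1
    class C1:                      # @-syntax applies deco1 first, then deco2
        pass

    class C2:
        pass
    for deco in [deco1, deco2]:    # a list kept in the "correct order of use"
        C2 = deco(C2)

    C1.trace, C2.trace             # -> ('12', '12'): written in opposite
                                   #    orders, applied in the same order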
[Python-Dev] "Decimal(2) != float(2)"???
Hello,

In http://docs.python.org/release/3.2.3/reference/expressions.html#in
we read:

    "[...] This can create the illusion of non-transitivity between
    supported cross-type comparisons and unsupported comparisons. For
    example, Decimal(2) == 2 and 2 == float(2) but Decimal(2) !=
    float(2)."

(The same is in the 3.3 docs.)

But:

    Python 3.2.3 (default, Sep 10 2012, 18:14:40)
    [GCC 4.6.3] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import decimal
    >>> decimal.Decimal(2) == float(2)
    True

Is it a bug in the docs or in Python itself? (I checked that in 3.2,
but it may be true for 3.3 as well.)

Regards.
*j
Re: [Python-Dev] Submitting PEP 422 (Simple class initialization hook) for pronouncement
11.02.2013 23:29, Nick Coghlan wrote:

> 3. I'm trying to avoid any custom magic specific to this method, but
> making it implicitly a static or class method is fairly easy if we so
> choose - the standard retrieval code during class creation can just
> bypass the descriptor machinery, and wrap it in staticmethod or
> classmethod if it isn't already. Given that __new__ is already
> implicitly static, it may be easier to follow that precedent here
> rather than trying to explain why an explicit @classmethod is needed
> in one case but not the other.

Though __new__ is implicitly a *static* rather than a *class* method
(so we can use it e.g. by calling object.__new__(MyClass); besides, in
Py3k unbound methods are gone, so the difference between static and
non-static-non-class methods is smaller than in Py2.x), in the case of
__init_class__ + super() it would have to be called:

    super().__init_class__(__class__)

...and that seems to me a bit awkward.

And making it implicitly a *class* rather than a *static* method would
make it *impossible* to do calls such as:

    ExplicitAncestor.__init_class__(ExplicitDescendant)

...though I'm not sure we'd ever need such a call. If not -- an
implicit *class* method may be a good idea, but what if we would?

***

On the margin: is that middle underscore in '__init_class__' really
necessary? We had __metaclass__, not __meta_class__... OK, it's one
word, but we also have __getattr__, __getattribute__, __getitem__,
__instancecheck__, __qualname__, __truediv__ etc. (not __get_attr__,
__instance_check__ etc.). [I remember only one exception:
__reduce_ex__, rather rarely used, and easy to defend against the
weird __reduceex__.]

Wouldn't __initclass__ be readable enough? IMHO it could spare users
the trouble of remembering a special case.

Cheers.
*j
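P.S. A minimal interactive illustration of the "__new__ is implicitly
static" point (the class name is arbitrary):

    >>> class MyClass:
    ...     pass
    ...
    >>> obj = object.__new__(MyClass)   # called explicitly on the class,
    >>> type(obj) is MyClass            # no binding/descriptor magic needed
    True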
