Client/Server based on SocketServer and Windows

2009-08-09 Thread Kiki
Hello list,

I've written a small client/server system.
Basically, I'm expecting something like this: every once in a while the
client sends a small data chunk (no more than 50 bytes), and the server
receives it and prints it.

Here is the server request handler :

class ThreadedTCPRequestHandlerFoo(SocketServer.BaseRequestHandler):

    def handle(self):
        data = self.request.recv(1024)
        cur_thread = threading.currentThread()
        response = "%s: %s from Foo" % (cur_thread.getName(), data)
        print response

and this is the client :

def clientPrompt(ip, port, message):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((ip, port))
    while 1:
        k = raw_input('#>')
        sock.send(k)
        print "%s\n" % k
        if k == 'quit':
            break
    sock.close()

My problem is that I can't send data from the client more than once
without getting the following Winsock error: 10053, "Software caused
connection abort".
I have to restart the client each time I want to send a new message.

Could anybody explain why?
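
My best guess so far: the SocketServer machinery calls handle() exactly once
per connection and closes the socket as soon as it returns, so the client's
next send() hits a dead connection. If that is right, a handler that keeps
reading until the client disconnects should behave (untested sketch):

class ThreadedTCPRequestHandlerFoo(SocketServer.BaseRequestHandler):

    def handle(self):
        # Keep serving this connection instead of returning (and letting
        # the server close the socket) after the first recv().
        while True:
            data = self.request.recv(1024)
            if not data:  # the client closed the connection
                break
            cur_thread = threading.currentThread()
            print "%s: %s from Foo" % (cur_thread.getName(), data)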

Regards


Re: Client/Server based on SocketServer and Windows

2009-08-09 Thread Kiki
Thank you, Dennis.

I'm using two different editors, which may be the cause of such a mess
in the indentation.

I must admit that I lazily rely on those (not so bad, really) editors.

"If the indentation was bad, they would have told me."

Too bad for me.

I won't post misindented code anymore.


Re: why del is not a function or method?

2017-10-16 Thread Oren Ben-Kiki
That doesn't explain why `del` isn't a method, though. Intuitively,
`my_dict.delete(some_key)` makes sense as a method. Of course, you could
also make the same case for `len` being a method... and personally I think
it would have been cleaner that way in both cases. But it is a minor issue,
if it is an issue at all.
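
For the dict case a method-style spelling already exists, by the way; it is
only unbinding a *name* that genuinely needs the statement:

>>> my_dict = {'a': 1, 'b': 2}
>>> del my_dict['a']     # statement form
>>> my_dict.pop('b')     # method form; also returns the value
2
>>> my_dict
{}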

I guess the answer is a combination of "historical reasons" and "Guido's
preferences"?


On Mon, Oct 16, 2017 at 6:58 PM, Stefan Ram  wrote:

> Xue Feng  writes:
> >I wonder why 'del' is not a function or method.
>
>   Assume,
>
> x = 2.
>
>   When a function »f« is called with the argument »x«,
>   this is written as
>
> f( x )
>
>   . The function never gets to see the name »x«, just
>   its boundee (value) »2«. So, it cannot delete the
>   name »x«.
>
>   Also, the function has no access to the scope of »x«,
>   and even more so, it cannot make any changes in it.
>
>   Therefore, even a call such as
>
> f( 'x' )
>
>   will not help much.
>


Re: why del is not a function or method?

2017-10-16 Thread Oren Ben-Kiki
True... technically, "Deletion of a name removes the binding of that name
from the local or global namespace". Using `x.del()` can't do that.
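
For instance, no method called on the object could do this:

>>> x = 2
>>> del x
>>> x
Traceback (most recent call last):
  ...
NameError: name 'x' is not defined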

That said, I would hazard a guess that `del x` is pretty rare (I have
never felt the need for it myself). Ruby doesn't even have an equivalent
operation, and doesn't seem to suffer as a result. If Python used methods
instead of global functions for `len` and `del`, and provided a
`delete_local_variable('x')` for these rare cases, that could have been a
viable solution.

So I still think it was a matter of preference rather than a pure technical
consideration. But that's all second-guessing, anyway. You'd have to ask
Guido what his reasoning was...


On Mon, Oct 16, 2017 at 7:36 PM, Ned Batchelder 
wrote:

> On 10/16/17 12:16 PM, Oren Ben-Kiki wrote:
>
>> That doesn't explain why `del` isn't a method though. Intuitively,
>> `my_dict.delete(some_key)` makes sense as a method. Of course, you could
>> also make the same case for `len` being a method... and personally I think
>> it would have been cleaner that way in both cases. But it is a minor
>> issue,
>> if at all.
>>
>> I guess the answer is a combination of "historical reasons" and "Guido's
>> preferences"?
>>
>
> It would still need to be a statement to allow for:
>
> del x
>
> since "x.del()" wouldn't affect the name x, it would affect the value x
> refers to.
>
> --Ned.
>
>
>>
>> On Mon, Oct 16, 2017 at 6:58 PM, Stefan Ram 
>> wrote:
>>
>> Xue Feng  writes:
>>>
>>>> I wonder why 'del' is not a function or method.
>>>>
>>>Assume,
>>>
>>> x = 2.
>>>
>>>When a function »f« is called with the argument »x«,
>>>this is written as
>>>
>>> f( x )
>>>
>>>. The function never gets to see the name »x«, just
>>>its boundee (value) »2«. So, it cannot delete the
>>>name »x«.
>>>
>>>Also, the function has no access to the scope of »x«,
>>>and even more so, it cannot make any changes in it.
>>>
>>>Therefore, even a call such as
>>>
>>> f( 'x' )
>>>
>>>will not help much.
>>>


Re: why del is not a function or method?

2017-10-16 Thread Oren Ben-Kiki
The first line says "The major reason is history." :-) But it also gives an
explanation: providing functionality for types that, at the time, didn't
have methods.

On Mon, Oct 16, 2017 at 8:33 PM, Lele Gaifax  wrote:

> Oren Ben-Kiki  writes:
>
> > So I still think it was a matter of preference rather than a pure
> technical
> > consideration. But that's all second-guessing, anyway. You'd have to ask
> > Guido what his reasoning was...
>
> A rationale is briefly stated in the design FAQs, see
> https://docs.python.org/3/faq/design.html#why-does-python-use-methods-for-some-functionality-e-g-list-index-but-functions-for-other-e-g-len-list
> and the next one.
>
> ciao, lele.
> --
> nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri
> real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia.
> l...@metapensiero.it  | -- Fortunato Depero, 1929.
>


Re: Why does __ne__ exist?

2018-01-08 Thread Oren Ben-Kiki
I don't see a case in IEEE where (x == y) != !(x != y).
There _is_ a case where (x != x) is true (when x is NaN), but for such an
x, (x == x) will be false.
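
At the interpreter, == and != do look like each other's complement even for
NaN:

>>> x = float('nan')
>>> x == x, x != x
(False, True)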

I am hard pressed to think of a case where __ne__ is actually useful.

That said, while it is true you only need one of (__eq__, __ne__), you
could make the same claim about (__lt__, __ge__) and (__le__, __gt__).
That is, in principle you could get by with only (__eq__, __le__, and
__ge__) or, if you prefer, (__ne__, __lt__, __gt__), or any other
combination you prefer.
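
The standard library already leans on that redundancy: functools.total_ordering
derives the missing ordering methods from __eq__ plus any one of __lt__,
__le__, __gt__ or __ge__. A small sketch with a made-up Version class:

import functools

@functools.total_ordering
class Version:
    """Toy example: ordered by a single number."""
    def __init__(self, number):
        self.number = number
    def __eq__(self, other):
        return self.number == other.number
    def __lt__(self, other):
        return self.number < other.number

# __le__, __gt__ and __ge__ are filled in by the decorator;
# __ne__ falls back to negating __eq__ (the Python 3 default).
print(Version(1) >= Version(2))   # False
print(Version(1) != Version(2))   # True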

Or you could go the way C++ is going and say that _if_ one specifies a single
__cmp__ method, it should return one of LT, EQ, GT, and this will
automatically give rise to all the comparison operators.

"Trade-offs... trafe-offs as far as the eye can see" ;-)


On Mon, Jan 8, 2018 at 4:01 PM, Thomas Nyberg  wrote:

> On 01/08/2018 12:36 PM, Thomas Jollans wrote:
> >
> > Interesting sentence from that PEP:
> >
> > "3. The == and != operators are not assumed to be each other's
> > complement (e.g. IEEE 754 floating point numbers do not satisfy this)."
> >
> > Does anybody here know why IEEE 754 floating point numbers need __ne__?
>
> That's very interesting. I'd also like an answer to this. I can't wrap
> my head around why it would be true. I've just spent 15 minutes playing
> with the interpreter (i.e. checking operations on 0, -0, 7,
> float('nan'), float('inf'), etc.) and then also reading a bit about IEEE
> 754 online and I can't find any combination of examples where == and !=
> are not each others' complement.
>
> Cheers,
> Thomas


Re: Why does __ne__ exist?

2018-01-08 Thread Oren Ben-Kiki
Ugh, right: for NaN you can have (x < y) != !(x >= y) - both (x < y) and
(x >= y) would be false if one of x and y is a NaN.
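
For the record:

>>> nan = float('nan')
>>> nan < 0.0, nan >= 0.0
(False, False)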

But __ne__ is still useless ;-)

On Mon, Jan 8, 2018 at 4:36 PM, Thomas Nyberg  wrote:

> On 01/08/2018 03:25 PM, Oren Ben-Kiki wrote:
> > I am hard pressed to think of a case where __ne__ is actually useful.
>
> Assuming you're talking about a case specifically for IEEE 754, I'm
> starting to agree. In general, however, it certainly is useful for some
> numpy objects (as mentioned elsewhere in this thread).
>
> > That said, while it is true you only need one of (__eq__, __ne__), you
> > could make the same claim about (__lt__, __ge__) and (__le__, __gt__).
> > That is, in principle you could get by with only (__eq__, __le__, and
> > __ge__) or, if you prefer, (__ne__, __lt__, __gt__), or any other
> > combination you prefer.
>
> This isn't true for IEEE 754. For example:
>
> >>> float('nan') < 0
> False
> >>> float('nan') > 0
> False
> >>> float('nan') == 0
> False
>
> Also there are many cases where you don't have a < b OR a >= b. For
> example, subsets don't follow this.
>
> > "Trade-offs... trafe-offs as far as the eye can see" ;-)
>
> Yes few things in life are free. :)


Re: Why does __ne__ exist?

2018-01-08 Thread Oren Ben-Kiki
Good points. Well, this is pretty academic at this point - I don't think
anyone would seriously choose to obsolete __ne__, regardless of whether it
is absolutely necessary or not.

On Mon, Jan 8, 2018 at 4:51 PM, Thomas Jollans  wrote:

> On 2018-01-08 15:25, Oren Ben-Kiki wrote:
> > I don't see a case in IEEE where (x == y) != !(x != y).
> > There _is_ a case where (x != x) is true (when x is NaN), but for such an
> > x, (x == x) will be false.
> >
> > I am hard pressed to think of a case where __ne__ is actually useful.
>
> See my earlier email and/or PEP 207. (tl;dr: non-bool return values)
>
> >
> > That said, while it is true you only need one of (__eq__, __ne__), you
> > could make the same claim about (__lt__, __ge__) and (__le__, __gt__).
> > That is, in principle you could get by with only (__eq__, __le__, and
> > __ge__) or, if you prefer, (__ne__, __lt__, __gt__), or any other
> > combination you prefer.
>
> PEP 207: "The above mechanism is such that classes can get away with not
> implementing either __lt__ and __le__ or __gt__ and __ge__."
>
>
> >
> > Or you could go where C++ is doing and say that _if_ one specifies a
> single
> > __cmp__ method, it should return one of LT, EQ, GT, and this will
> > automatically give rise to all the comparison operators.
>
> This used to be the case. (from version 2.1 to version 2.7, AFAICT)
>
>
> >
> > "Trade-offs... trafe-offs as far as the eye can see" ;-)
> >
> >
> > On Mon, Jan 8, 2018 at 4:01 PM, Thomas Nyberg  wrote:
> >
> >> On 01/08/2018 12:36 PM, Thomas Jollans wrote:
> >>>
> >>> Interesting sentence from that PEP:
> >>>
> >>> "3. The == and != operators are not assumed to be each other's
> >>> complement (e.g. IEEE 754 floating point numbers do not satisfy this)."
> >>>
> >>> Does anybody here know why IEEE 754 floating point numbers need __ne__?
> >>
> >> That's very interesting. I'd also like an answer to this. I can't wrap
> >> my head around why it would be true. I've just spent 15 minutes playing
> >> with the interpreter (i.e. checking operations on 0, -0, 7,
> >> float('nan'), float('inf'), etc.) and then also reading a bit about IEEE
> >> 754 online and I can't find any combination of examples where == and !=
> >> are not each others' complement.
> >>
> >> Cheers,
> >> Thomas


Behavior of auto in Enum and Flag.

2017-04-02 Thread Oren Ben-Kiki
The current behavior of `auto` is to pick a value which is one plus the
previous value.

It would probably be better if `auto` instead picked a value that is not
used by any named member (either the minimal unused value, or the minimal
higher than the previous value). That is, in this simple case:

class MyEnum(Enum):
    FOO = 1
    BAR = auto()
    BAZ = 2

It would be far better for BAR to get the value 3 rather than today's value
2.
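
A quick illustration of the conflict (Python 3.6): duplicate values become
aliases, so BAZ silently turns into another name for BAR:

>>> from enum import Enum, auto
>>> class MyEnum(Enum):
...     FOO = 1
...     BAR = auto()
...     BAZ = 2
...
>>> MyEnum.BAZ
<MyEnum.BAR: 2>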

In the less simple case of:

class MyEnum(Enum):
    FOO = 2
    BAR = auto()
    BAZ = 3

Then BAR could be either 1 or 4 - IMO, 1 would be better, but 4 works as
well.

After all, `auto` is supposed to be used when:

"If the exact value is unimportant you may use auto instances and an
appropriate value will be chosen for you."

Choosing a value that conflicts with BAZ in the above cases doesn't seem
"appropriate" for a value that is "unimportant".

The docs also state "Care must be taken if you mix auto with other values."
- fair enough. But:

First, why require "care" if the code can take care of the issue for us?

Second, the docs don't go into further detail about what exactly to avoid.
In particular, the docs do not state that the automatic value will only
take into account the previous values, and will ignore following values.

However, this restriction is baked into the current implementation:
It is not possible to just override `_generate_next_value_` to skip past
named values which were not seen yet, because the implementation only
passes it the list of previous values.

I propose that:

1. The documentation will be more explicit about the way `auto` behaves in
the presence of following values.

2. The default behavior of `auto` would avoid generating a conflict with
following values.

3. Whether `auto` chooses (A) the minimal unused value higher than the
previous value, or (B) the minimal overall unused value, or (C) some other
strategy, would depend on the specific implementation.

4. To allow for this, the implementation will include a
`_generate_auto_value_` which will take both the list of previous ("last")
values (including auto values) and also a second list of the following
("next") values (excluding auto values).

5. If the class implements `_generate_next_value_`, then
`_generate_auto_value_` will invoke `_generate_next_value_` with the
concatenation of both lists (following values first, preceding values
second), to maximize compatibility with existing code.

Thanks,

Oren Ben-Kiki


Re: Behavior of auto in Enum and Flag.

2017-04-02 Thread Oren Ben-Kiki
While "the current behaviour is compliant with what the docs say" is true,
saying "as such, I would be disinclined to change the code" misses the
point.

The current documentation allows for multiple behaviors. The current
implementation has chosen to add an arbitrary, undocumented restriction
on the behavior, which has a usability issue. Even worse, for no clear
reason, the current implementation forces _all_ implementations to suffer
from the same usability issue.

The proposed behavior is _also_ compliant with the current documentation,
and does not suffer from this usability issue. The proposed implementation
is compatible with existing code bases, and allows for "any" other
implementation to avoid this issue.

That is, I think that instead of enshrining the current implementation's
undocumented and arbitrary restriction, by explicitly adding it to the
documentation, we should instead remove this arbitrary restriction from the
implementation, and only modify the documentation to clarify this
restriction is gone.

Oren.

On Mon, Apr 3, 2017 at 8:38 AM, Chris Angelico  wrote:

> On Mon, Apr 3, 2017 at 2:49 PM, Oren Ben-Kiki 
> wrote:
> > "If the exact value is unimportant you may use auto instances and an
> > appropriate value will be chosen for you."
> >
> > Choosing a value that conflicts with BAZ in above cases doesn't seem
> > "appropriate" for a value that is "unimportant".
> >
> > The docs also state "Care must be taken if you mix auto with other
> values."
> > - fair enough. But:
> >
> > First, why require "care" if the code can take care of the issue for us?
> >
> > Second, the docs don't go into further detail about what exactly to
> avoid.
> > In particular, the docs do not state that the automatic value will only
> > take into account the previous values, and will ignore following values.
>
> Sounds to me like the current behaviour is compliant with what the
> docs say, and as such, I would be disinclined to change the code.
> Perhaps a documentation clarification would suffice?
>
> """Care must be taken if you mix auto with other values. In
> particular, using auto() prior to explicitly-set values may result in
> conflicts."""
>
> ChrisA


Re: Behavior of auto in Enum and Flag.

2017-04-03 Thread Oren Ben-Kiki
On Mon, Apr 3, 2017 at 11:03 AM, Ethan Furman  wrote:

> Python code is executed top-down.  First FOO, then BAR, then BAZ.  It is
> not saved up and executed later in random order.  Or, put another way, the
> value was appropriate when it was chosen -- it is not the fault of auto()
> that the user chose a conflicting value (hence why care should be taken).


This is not to say that there's no possible workaround - the code could
pretty easily defer invocation of _generate_next_value_ until after the
whole class body has been seen. It would still happen in order (since
members are kept in an ordered dictionary these days).

So it is a matter of conflicting values - what would be more "Pythonic":
treating auto as executed immediately, or avoiding conflicts between auto
and explicit values.


> 1. The documentation will be more explicit about the way `auto` behaves in
> the presence of following values.
>
> I can do that.

Barring a change to the way auto works, that would be best - "explicit is
better than implicit" and all that ;-)


> 2. The default behavior of `auto` would avoid generating a conflict with
>> following values.
>>
>
> I could do that, but I'm not convinced it's necessary, plus there would be
> backwards compatibility constraints at this point.


"Necessity" depends on the judgement call above.

As for backward compatibility, the docs are pretty clear about "use auto
when you don't care about the value"... and Enum is pretty new, so there's
not _that_ much code that relies on "implementation specific" details.

*If* backward compatibility is an issue here, then the docs might as well
specify "previous value plus 1, or 1 if this is the first value" as the
"standard" behavior, and be done.

This has the advantage of being deterministic and explicit, so people would
be justified in relying on it. It would still have to be accompanied by
saying "auto() can only consider previous values, not following ones".


> This might work for you (untested):
>
> def _generate_next_value_(name, start, count, previous_values):
>     if not count:
>         return start or 1
>     previous_values.sort()
>     last_value = previous_values[-1]
>     if last_value < 1000:
>         return 1001
>     else:
>         return last_value + 1


This assumes no following enum values have values > 1000 (or some
predetermined constant), which doesn't work in my particular case, or in
the general case. But yes, this might solve the problem for some people.


> 3. To allow for this, the implementation will include a
>> `_generate_auto_value_` which will take both the list of previous ("last")
>> values (including auto values) and also a second list of the following
>> ("next") values (excluding auto values).
>>
>
> No, I'm not interested in doing that.  I currently have that kind of code
> in aenum[1] for 2.7 compatibility, and it's a nightmare to maintain.
>

Understood. Another alternative would be to have something like
_generate_next_value_ex_ with the additional argument (similar to
__reduce_ex__), which isn't ideal either.

Assuming you buy into my "necessity" claim, that is...

Thanks,

Oren.


Re: Behavior of auto in Enum and Flag.

2017-04-03 Thread Oren Ben-Kiki
On Mon, Apr 3, 2017 at 7:43 PM, Chris Angelico  wrote:

> Here's a counter-example that supports the current behaviour:
>
> >>> from enum import IntFlag, auto
> >>> class Spam(IntFlag):
> ...     FOO = auto()
> ...     BAR = auto()
> ...     FOOBAR = FOO | BAR
> ...     SPAM = auto()
> ...     HAM = auto()
> ...     SPAMHAM = SPAM | HAM
> ...
>

Ugh, good point - I didn't consider that use case; I see how it would be
nasty to implement.

I guess just improving the documentation is called for, then...

Thanks,

Oren.


Re: Appending data to a json file

2017-04-03 Thread Oren Ben-Kiki
You _can_ just extend a JSON file without loading it, but it will not be
"fun".

Say the JSON file contains a top-level array. The final significant
character in it would be a ']'. So, you can read just a reasonably-sized
block from the end of the file, find the location of the final ']',
overwrite it with a ',' followed by your additional array entry/entries,
with a final ']'.

If the JSON file contains a top-level object, the final significant
character would be a '}'. Overwrite it with a ',' followed by your
additional object key/value pairs, with a final '}'.

Basically, if what you want to append is of the same kind as the content of
the file (array appended to array, or object to object):

- Locate final significant character in the file
- Locate first significant character in your appended data, replace it with
a ','
- Overwrite the final significant character in the file with your patched
data

It isn't elegant or very robust, but if you want to append to a very large
JSON array (for example, some log file?), then it could be very efficient
and effective.
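
A rough sketch of that idea for a file whose top level is a JSON array (the
helper name and the 4 KB tail block are made up here; it assumes the array is
non-empty and the file ends with ']'):

import json

def append_to_json_array(path, new_items):
    """Append entries to a top-level JSON array without loading the file.
    Rough sketch only - not robust against unusual formatting."""
    if not new_items:
        return
    patch = json.dumps(new_items)      # e.g. '[{"a": 1}]'
    with open(path, 'r+b') as f:
        f.seek(0, 2)                   # jump to the end of the file
        size = f.tell()
        block = min(size, 4096)        # read a reasonably-sized tail block
        f.seek(size - block)
        tail = f.read(block)
        last = tail.rfind(b']')        # final significant character
        if last < 0:
            raise ValueError('no closing ] found')
        f.seek(size - block + last)
        # Overwrite the final ']' with ',' + the new entries + ']'.
        f.write(b',' + patch[1:].encode('utf-8'))
        f.truncate()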

Or, you could use YAML ;-)


On Tue, Apr 4, 2017 at 8:31 AM, dieter  wrote:

> Dave  writes:
>
> > I created a python program that gets data from a user, stores the data
> > as a dictionary in a list of dictionaries.  When the program quits, it
> > saves the data file.  My desire is to append the new data to the
> > existing data file as is done with purely text files.
>
> Usually, you cannot do that:
> "JSON" stands for "JavaScript Object Notation": it is a text representation
> for a single (!) JavaScript object. The concatenation of two
> JSON representations is not a valid JSON representation.
> Thus, you cannot expect that after such a concatenation, a single
> call to "load" will give you back complete information (it might
> be that a sequence of "load"s works).
>
> Personally, I would avoid concatenated JSON representations.
> Instead, I would read in (i.e. "load") the existing data,
> construct a Python object from the old and the new data (likely in the form
> of a list) and then write it out (i.e. "dump") again.
>


Difference in behavior of GenericMeta between 3.6.0 and 3.6.1

2017-07-16 Thread Oren Ben-Kiki
TL;DR: We need improved documentation of the way meta-classes behave for
generic classes, and possibly reconsider the way "__setattr__" and
"__getattribute__" behave for such classes.

I am using meta-programming pretty heavily in one of my projects.
It took me a while to figure out the dance between meta-classes and generic
classes in Python 3.6.0.

I couldn't find good documentation for any of this (if anyone has a good
link, please share...), but with a liberal use of "print" I managed to
reverse engineer how this works. The behavior isn't intuitive but I can
understand the motivation (basically, "type annotations shall not change
the behavior of the program").

For the uninitiated:

* It turns out that there are two kinds of instances of generic classes:
the "unspecialized" class (basically ignoring type parameters), and
"specialized" classes (created when you write "Foo[Bar]", which know the
type parameters, "Bar" in this case).

* This means the meta-class "__new__" method is called sometimes to create
the unspecialized class, and sometimes to create a specialized one - in the
latter case, it is called with different arguments...

* No object is actually an instance of the specialized class; that is, the
"__class__" of an instance of "Foo[Bar]" is actually the unspecialized
"Foo" (which means you can't get the type parameters by looking at an
instance of a generic class).
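
For example, with a throwaway generic class (exact details vary a bit across
3.6.x):

>>> from typing import TypeVar, Generic
>>> T = TypeVar('T')
>>> class Foo(Generic[T]):
...     pass
...
>>> Foo[int] is Foo           # the specialized class is a separate object
False
>>> type(Foo[int]()) is Foo   # but instances belong to the unspecialized one
True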

So far, so good, sort of. I implemented my meta-classes to detect whether
they are creating a "specialized" or "unspecialized" class and behave
accordingly.

However, these meta-classes stopped working when switching to Python 3.6.1.
The reason is that in Python 3.6.1, a "__setattr__" implementation was
added to "GenericMeta", which redirects the setting of an attribute of a
specialized class instance to set the attribute of the unspecialized class
instance instead.

This causes code such as the following (inside the meta-class) to behave in
a mighty confusing way:

if is_not_specialized:            # (pseudocode condition)
    cls._my_attribute = False
else:  # Is specialized:
    cls._my_attribute = True
    assert cls._my_attribute  # Fails!

As you can imagine, this caused us some wailing and gnashing of teeth,
until we figured out (1) that this was the problem and (2) why it was
happening.

Looking into the source code in "typing.py", I see that I am not the only
one who had this problem. Specifically, the implementers of the "abc"
module had the exact same problem. Their solution was simple: the
"GenericMeta.__setattr__" code explicitly tests whether the attribute name
starts with "_abc_", in which case it maintains the old behavior.

Obviously, I should not patch the standard library typing.py to preserve
"_my_attribute". My current workaround is to derive from GenericMeta,
define my own "__setattr__", which preserves the old behavior for
"_my_attribute", and use that instead of the standard GenericMeta
everywhere.
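
Roughly, that workaround looks like this (a sketch against 3.6.1;
"_my_attribute" and the class name are just placeholders):

from typing import GenericMeta

class PreservingGenericMeta(GenericMeta):
    # Used as:  class Foo(Generic[T], metaclass=PreservingGenericMeta): ...
    def __setattr__(cls, name, value):
        if name == '_my_attribute':
            # Skip GenericMeta's redirect to the unspecialized class,
            # i.e. keep the pre-3.6.1 behavior for this one attribute.
            super(GenericMeta, cls).__setattr__(name, value)
        else:
            # Everything else keeps the new 3.6.1 behavior.
            super().__setattr__(name, value)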

My code now works in both 3.6.0 and 3.6.1. However, I think the following
points are worth fixing and/or discussion:

* This is a breaking change, but it isn't listed in
https://www.python.org/downloads/release/python-361/ - it should probably
be listed there.

* In general it would be good to have some documentation on the way that
meta-classes and generic classes interact with each other, as part of the
standard library documentation (apologies if it is there and I missed it...
link?)

* I'm not convinced the new behavior is a better default. I don't recall
seeing a discussion about making this change, possibly I missed it (link?)

* There is a legitimate need for the old behavior (normal per-instance
attributes). For example, it is needed by the "abc" module (as well as my
project). So, some mechanism should be recommended (in the documentation)
for people who need the old behavior.

* Separating between "really per instance" attributes and "forwarded to the
unspecialized instance" attributes based on their prefix seems to violate
"explicit is better than implicit". For example, it would have been
explicit to say "cls.__unspecialized__.attribute" (other explicit
mechanisms are possible).

* Perhaps the whole notion of specialized vs. unspecialized class instances
needs to be made more explicit in the GenericMeta API...

* Finally, and IMVHO most importantly, it is *very* confusing to override
"__setattr__" and not override "__getattribute__" to match. This gives rise
to code like "cls._foo = True; assert cls._foo" failing. This feels
wrong. And presumably fixing the implementation so that
"__getattribute__" forwards the same set of attributes to the
"unspecialized" instance wouldn't break any code... other than code that
is already broken due to the new functionality, that is.


Re: Difference in behavior of GenericMeta between 3.6.0 and 3.6.1

2017-07-16 Thread Oren Ben-Kiki
Yes, it sort-of makes sense... I'll basically re-post my question there.

Thanks for the link!

Oren.


On Sun, Jul 16, 2017 at 4:29 PM, Peter Otten <__pete...@web.de> wrote:

> Oren Ben-Kiki wrote:
>
> > TL;DR: We need improved documentation of the way meta-classes behave for
> > generic classes, and possibly reconsider the way "__setattr__" and
> > "__getattribute__" behave for such classes.
>
> The typing module is marked as "provisional", so you probably have to live
> with the incompatibilities.
>
> As to your other suggestions/questions, I'm not sure where the actual
> discussion is taking place -- roughly since the migration to github python-
> dev and bugs.python.org are no longer very useful for outsiders to learn
> what's going on.
>
> A random walk over the github site found
>
> https://github.com/python/typing/issues/392
>
> Maybe you can make sense of that?
>
> Personally, I'm not familiar with the evolving type system and still
> wondering whether I should neglect or reject...
>