Re: [Python-Dev] Idea: Dictionary references

2015-12-18 Thread Steven D'Aprano
On Thu, Dec 17, 2015 at 09:30:24AM -0800, Andrew Barnert via Python-Dev wrote:
> On Dec 17, 2015, at 07:38, Franklin? Lee  
> wrote:
> > 
> > The nested dictionaries are only for nested scopes (and inner
> > functions don't create nested scopes). Nested scopes will already
> > require multiple lookups in parents.
> 
> I think I understand what you're getting at here, but it's a really 
> confusing use of terminology. In Python, and in programming in 
> general, nested scopes refer to exactly inner functions (and classes) 
> being lexically nested and doing lookup through outer scopes. The fact 
> that this is optimized at compile time to FAST vs. CELL vs. 
> GLOBAL/NAME, cells are optimized at function-creation time, and only 
> global and name have to be resolved at the last second doesn't mean 
> that there's no scoping, or some other form of scoping besides 
> lexical. The actual semantics are LEGB, even if L vs. E vs. GB and E 
> vs. further-out E can be optimized.

In Python 2, the LOAD_NAME byte-code can return a local, even though it 
normally doesn't:

py> x = "global"
py> def spam():
... exec "x = 'local'"
... print x
...
py> spam()
local
py> x == 'global'
True


If we look at the byte-code, we see that the lookup is *not* optimized 
to inspect locals only (LOAD_FAST), but uses the regular LOAD_NAME that 
normally gets used for globals and builtins:

py> import dis
py> dis.dis(spam)
  2   0 LOAD_CONST   1 ("x = 'local'")
  3 LOAD_CONST   0 (None)
  6 DUP_TOP
  7 EXEC_STMT

  3   8 LOAD_NAME0 (x)
 11 PRINT_ITEM
 12 PRINT_NEWLINE
 13 LOAD_CONST   0 (None)
 16 RETURN_VALUE
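
For contrast, this loophole is gone in Python 3: `exec()` is an ordinary function there and cannot rebind function locals, so the compiler treats `x` as a global and the lookup never consults the local namespace (a quick illustrative sketch, not part of the original post):

```python
x = "global"

def spam():
    # In Python 3, exec() writes into a throwaway copy of locals(),
    # and the compiler sees no assignment to x in spam's body, so
    # this lookup compiles to LOAD_GLOBAL rather than LOAD_NAME.
    exec("x = 'local'")
    return x

print(spam())  # → global
```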



> What you're talking about here is global lookups falling back to 
> builtin lookups. There's no more general notion of nesting or scoping 
> involved, so why use those words?

I'm not quite sure about this. In principle, every name lookup looks in 
four scopes, LEGB as you describe above:

- locals
- non-locals, a.k.a. enclosing or lexical scope(s)
- globals (i.e. the module)
- builtins


although Python can (usually?) optimise away some of those lookups. The 
relationship of locals to enclosing scopes, and to globals in turn, 
involves actual nesting of indented blocks in Python, but that's not 
necessarily the case. One might imagine a hypothetical capability for 
manipulating scopes directly, e.g.:

def spam(): ...
def ham(): ...
set_enclosing(ham, spam)
# like:
# def spam():
# def ham(): ...

The adventurous or foolhardy can probably do something like that now 
with byte-code hacking :-)

Likewise, one might consider that builtins is a scope which in some 
sense encloses the global scope. Consider it a virtual code block that 
is outdented from the top-level scope :-)
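
A quick way to see this "enclosing" behaviour in action (a sketch to illustrate, not code from the original message):

```python
# A global shadows the builtin, exactly as a nearer scope shadows an
# outer one; deleting the global lets the builtin show through again.
len = lambda obj: "shadowed!"
shadowed = len([1, 2, 3])   # the global wins
del len
restored = len([1, 2, 3])   # the builtin is visible again
print(shadowed, restored)
```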


> So, trying to generalize global vs. builtin to a general notion of 
> "nested scope" that isn't necessary for builtins and doesn't work for 
> anything else seems like overcomplicating things for no benefit.

Well, putting aside the question of whether this is useful or not, and 
putting aside efficiency concerns, let's just imagine a hypothetical 
implementation where name lookups used ChainMaps instead of using 
separate LOAD_* lookups of special dicts. Then a function could set up a 
ChainMap:

function.__scopes__ = ChainMap(locals, enclosing, globals, builtins)

and a name lookup for (say) "x" would always be a simple:

function.__scopes__["x"]

Of course this would be harder to optimize, and hence probably slower, 
than the current arrangement, but I think it would allow some 
interesting experiments with scoping rules:

ChainMap(locals, enclosing, globals, application_globals, builtins)


You could implement dynamic scoping by inserting the caller's __scopes__ 
ChainMap into the front of the called function's ChainMap. And attribute 
lookups would be something like this simplified scope:

ChainMap(self.__dict__, type(self).__dict__)

to say nothing of combinations of the two.
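The idea is easy to play with today using `collections.ChainMap` (a sketch; the namespace dicts below are stand-ins for real frame namespaces, invented for illustration):

```python
from collections import ChainMap

# Stand-in namespace dicts for the four LEGB scopes.
locals_ns    = {"z": "local"}
enclosing_ns = {"y": "enclosing"}
globals_ns   = {"x": "global", "y": "shadowed"}
builtins_ns  = {"len": len}

scopes = ChainMap(locals_ns, enclosing_ns, globals_ns, builtins_ns)
print(scopes["z"])    # found in the first (local) map
print(scopes["y"])    # the enclosing map shadows the global one
print(scopes["x"])    # falls through to the globals map

# The simplified attribute lookup from the post works the same way:
class Spam:
    kind = "class attribute"

obj = Spam()
obj.__dict__["kind"] = "instance attribute"
attrs = ChainMap(obj.__dict__, type(obj).__dict__)
print(attrs["kind"])  # the instance dict wins
```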

So I think there's something interesting here; even if we don't want to 
use it in production code, it would make for some nice experiments.


-- 
Steve
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Typo in PEP-0423

2015-12-18 Thread Tim Legrand
Hi guys,

The Python repos page says that this mailing list is the official
maintainer of the peps repo, so here I am writing my question.

There is a typo in the PEP-0423 description, which says:

"See Registering with the Package Index
 [27] for details."

but the provided link is broken (error 404).

In the source file
 written
by Guido van Rossum, the link's placeholder is "Registering with the
Package Index".

What is the right link?

Thanks,
Tim


[Python-Dev] Asynchronous context manager in a typical network server

2015-12-18 Thread Szieberth Ádám
Hi Developers!

This is my first post. Please excuse my poor English. If anyone is
interested, I wrote a small introduction on my homepage. Link is at the bottom.

This post is about how to effectively implement the new asynchronous context
manager in a typical network server.

I would appreciate and welcome any confirmation or criticism of whether my
thinking is right or wrong. Thanks in advance!

So, typical server main code I have seen around looks like this:

srv = loop.run_until_complete(create_server(handler, host, port))
try:
loop.run_forever()
except KeyboardInterrupt:
pass
finally:
# other tear down code may be here
srv.close()
loop.run_until_complete(srv.wait_closed())
loop.close()

Note that `create_server()` here is not necessarily
`BaseEventLoop.create_server()`.

The above code is not prepared to handle `OSError`s or any other `Exception`s
(including a `KeyboardInterrupt` from a rapid Ctrl+C) when setting up the
server; it just prints the traceback to the console, which is not user
friendly. Moreover, I would expect a server to handle the SIGTERM signal as
well and tell its clients that it stops serving when not force killed.

How the main code should create the server, maintain the serving, deal with
errors, and close both the connections and the event loop properly when
exiting, without leaving pending tasks around, is not trivial. There are many
questions on SO and other places on the internet regarding this problem.

My idea was to provide simple code which is robust in terms of these
concerns by profiting from the new asynchronous context manager pattern.

The code of the magic methods of a typical awaitable `CreateServer` object
seems rather trivial:

async def __aenter__(self):
self.server = await self
return self.server

async def __aexit__(self, exc_type, exc_value, traceback):
# other tear down code may be here
self.server.close()
await self.server.wait_closed()

However, to make it work, a task has to be created:

async def server_task():
async with CreateServer(handler, host, port) as srv:
await asyncio.Future()  # wait forever

I give some remarks regarding the above code at the end of this post. Note
that `srv` is unreachable from outside, which could be a problem in some cases.
What is unavoidable: this task has to get cancelled explicitly by the main
code, which should look like this:

srvtsk = loop.create_task(server_task())

signal.signal(signal.SIGTERM, lambda si, fr: loop.call_soon(srvtsk.cancel))

while True:
try:
loop.run_until_complete(srvtsk)
except KeyboardInterrupt:
srvtsk.cancel()
except asyncio.CancelledError:
break
except Exception as err:
print(err)
break
loop.close()

Note that when `CancelledError` gets raised, the tear down process is already
done.
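
Put together, the pattern can be exercised end to end with a dummy server object in place of real sockets (a sketch for illustration; `DummyServer` and `_start()` are invented here and do not appear in the post, and a future stands in for the SIGTERM handler):

```python
import asyncio

class DummyServer:
    # Stands in for the object returned by a real create_server().
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True
    async def wait_closed(self):
        await asyncio.sleep(0)

class CreateServer:
    # Awaitable asynchronous context manager, as in the post.
    async def _start(self):
        await asyncio.sleep(0)  # stands in for real socket setup
        return DummyServer()
    def __await__(self):
        return self._start().__await__()
    async def __aenter__(self):
        self.server = await self
        return self.server
    async def __aexit__(self, exc_type, exc_value, traceback):
        self.server.close()
        await self.server.wait_closed()

async def server_task(started):
    async with CreateServer() as srv:
        started.set_result(srv)  # expose srv to the main code
        await asyncio.get_running_loop().create_future()  # wait forever

loop = asyncio.new_event_loop()
started = loop.create_future()
task = loop.create_task(server_task(started))
srv = loop.run_until_complete(started)
task.cancel()  # what a SIGTERM handler would do
try:
    loop.run_until_complete(task)
except asyncio.CancelledError:
    pass  # tear-down already ran in __aexit__
print(srv.closed)  # → True
loop.close()
```

Cancelling the task raises `CancelledError` at the "wait forever" `await`, the `async with` block unwinds through `__aexit__`, and the server is closed before the exception propagates out.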

Remarks:

* It would be nice to have an `asyncio.wait_forever()` coroutine for dummy
  context bodies.
* Moreover, I also imagined a `BaseEventLoop.create_context_task(awaitable,
  body_coro_func=None)` method. The `body_coro_func` should default to
  `asyncio.wait_forever()`; otherwise it should get whatever is returned by
  `__aenter__` as a single argument. The returned Task object should also
  provide a reference to that object.
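
The proposed helper is tiny to write today (a sketch of what `wait_forever()` could be; it is not part of asyncio):

```python
import asyncio

async def wait_forever():
    # A bare Future never completes, so this awaits until the
    # surrounding task is cancelled from outside.
    await asyncio.get_running_loop().create_future()
```

Cancelling the task that awaits it raises `CancelledError` at the `await`, which is exactly what the dummy context body above relies on.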

Best regards,
Ádám

(http://szieberthadam.github.io/)


Re: [Python-Dev] pypi simple index

2015-12-18 Thread Carlos Barera
Thanks Rob!



On Thu, Dec 17, 2015 at 7:36 PM, Robert Collins 
wrote:

>
>
> On 18 December 2015 at 06:13, Carlos Barera 
> wrote:
>
>> Hi,
>>
>> I'm using install_requires in setup.py to specify a specific package my
>> project is dependent on.
>> When running python setup.py install, apparently the simple index is used,
>> as an older package is taken from pypi. While
>>
>
> What's happening here is that easy-install is being triggered - which does
> not support wheels. Use 'pip install .' instead.
>
>
>> in https://pypi.python.org/pypi, there's a newer package.
>> When installing directly using pip, the latest package is installed
>> successfully.
>> I noticed that the new package is only available as a wheel and older
>> versions of setup tools won't install wheels for install_requires.
>> However, upgrading setuptools didn't help.
>>
>> Several questions:
>> 1. What's the difference between the pypi simple index and the general
>> pypi index?
>>
>
> The '/simple' API is for machine consumption, /pypi is for humans; other
> than that there should not be any difference.
>
>
>> 2. Why is setup.py defaulting to the simple index?
>>
>
> Because it is the only index :).
>
>
>> 3. How can I make the setup.py triggered install use the main pypi index
>> instead of simple
>>
>
> You can't - the issue is not the index being consulted, but your use of
> 'python setup.py install', which does not support wheels.
>
> Cheers,
> Rob
>


Re: [Python-Dev] Asynchronous context manager in a typical network server

2015-12-18 Thread Andrew Svetlov
In my asyncio code, typical initialization/finalization procedures are
much more complicated.
I doubt that common code can be extracted into asyncio.
Personally I don't feel the need for `wait_forever()` or
`loop.create_context_task()`.

But even if you need them, you can create them from scratch easily, can't you?

On Fri, Dec 18, 2015 at 3:58 PM, Szieberth Ádám  wrote:
> [...]



-- 
Thanks,
Andrew Svetlov


Re: [Python-Dev] Typo in PEP-0423

2015-12-18 Thread Guido van Rossum
Which of the top links of this query do you think it should be?

https://www.google.com/search?q=registering+with+the+package+index+site%3Apython.org&ie=utf-8&oe=utf-8

On Fri, Dec 18, 2015 at 3:51 AM, Tim Legrand 
wrote:

> [...]


-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] Asynchronous context manager in a typical network server

2015-12-18 Thread Guido van Rossum
I agree with Andrew that there are too many different scenarios and
requirements to make this a useful library function. Some notes on the
actual code you posted:

- Instead of calling signal.signal() yourself, you should use
loop.add_signal_handler(). It makes sure your signal handler doesn't run
while another handler is already running.

- If you add a handler for SIGINT you can control what happens when the
user hits ^C (again, ensuring the handler already running isn't interrupted
halfway through).

- I'm unclear on why you want a wait_forever() instead of using
loop.run_forever(). Can you clarify?

- In theory, instead of waiting for a Future that is cancelled by a
handler, you should be able to use asyncio.sleep() with a very large number
(e.g. a million seconds). Your handler could then just call loop.stop().
However, I just tested this and it raises "RuntimeError: Event loop stopped
before Future completed." so ignore this until we've fixed it. :-)
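
The first two notes can be sketched like this (an illustration, not code from the thread; it is UNIX-only because `add_signal_handler()` is, and the self-delivered SIGTERM at the end merely simulates an external `kill` so the sketch terminates on its own):

```python
import asyncio
import os
import signal

async def main():
    loop = asyncio.get_running_loop()
    stopped = loop.create_future()

    # add_signal_handler() runs the callback inside the event loop,
    # so it can never interrupt another callback halfway through.
    for sig in (signal.SIGINT, signal.SIGTERM):
        loop.add_signal_handler(sig, stopped.set_result, None)

    # Simulate an external `kill` 50 ms from now (illustration only).
    loop.call_later(0.05, os.kill, os.getpid(), signal.SIGTERM)

    await stopped
    return "stopped cleanly"

print(asyncio.run(main()))
```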

On Fri, Dec 18, 2015 at 5:58 AM, Szieberth Ádám  wrote:

> [...]



-- 
--Guido van Rossum (python.org/~guido)

Re: [Python-Dev] Asynchronous context manager in a typical network server

2015-12-18 Thread R. David Murray
On Fri, 18 Dec 2015 18:29:35 +0200, Andrew Svetlov  
wrote:
> In my asyncio code, typical initialization/finalization procedures are
> much more complicated.
> I doubt that common code can be extracted into asyncio.
> Personally I don't feel the need for `wait_forever()` or
> `loop.create_context_task()`.
> 
> But even if you need them, you can create them from scratch easily, can't it?

In my own asyncio code I wrote a generic context manager to hold
references to all the top-level tasks my app needs, which automatically
handles the teardown when loop.stop() is called from my SIGTERM
signal handler.
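
One way to sketch the kind of task holder David describes (invented here for illustration — not his actual code, and simplified to a synchronous context manager driven from the main code):

```python
import asyncio

class TopLevelTasks:
    """Collect top-level tasks and cancel them all on exit (a sketch)."""
    def __init__(self, loop):
        self.loop = loop
        self.tasks = []

    def create_task(self, coro):
        task = self.loop.create_task(coro)
        self.tasks.append(task)
        return task

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        for task in self.tasks:
            task.cancel()
        for task in self.tasks:
            try:
                # Give each task a chance to run its cleanup code.
                self.loop.run_until_complete(task)
            except asyncio.CancelledError:
                pass
        return False

loop = asyncio.new_event_loop()
with TopLevelTasks(loop) as holder:
    task = holder.create_task(asyncio.sleep(3600))
loop.close()
print(task.cancelled())  # → True
```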

However (and here we get to the python-dev content of this post :), I
think we are too early in the uptake of asyncio to be ready to say what
additional high-level features are well defined enough and useful enough
to become part of the standard library.  In any case, discussions like
this really belong on the asyncio-specific mailing list, which I gather
is the python-tulip Google Group (I suppose I really ought to sign up...)

--David


[Python-Dev] Summary of Python tracker Issues

2015-12-18 Thread Python tracker

ACTIVITY SUMMARY (2015-12-11 - 2015-12-18)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.

Issues counts and deltas:
  open5324 (+27)
  closed 32341 (+38)
  total  37665 (+65)

Open issues with patches: 2344 


Issues opened (45)
==================

#7283: test_site failure when .local/lib/pythonX.Y/site-packages hasn
http://bugs.python.org/issue7283  reopened by serhiy.storchaka

#25591: refactor imaplib tests
http://bugs.python.org/issue25591  reopened by maciej.szulik

#25843: lambdas on the same line may incorrectly share code objects
http://bugs.python.org/issue25843  opened by Tijs Van Oevelen

#25844: Pylauncher, launcher.c: Assigning NULL to a pointer instead of
http://bugs.python.org/issue25844  opened by Alexander Riccio

#25846: Use of Py_ARRAY_LENGTH on pointer in posixmodule.c, win32_wchd
http://bugs.python.org/issue25846  opened by Alexander Riccio

#25847: CPython not using Visual Studio code analysis!
http://bugs.python.org/issue25847  opened by Alexander Riccio

#25848: Tkinter tests failed on Windows buildbots
http://bugs.python.org/issue25848  opened by serhiy.storchaka

#25849: files, opened in unicode (text): write() returns symbols count
http://bugs.python.org/issue25849  opened by mmarkk

#25850: Building extensions with MSVC 2015 Express fails
http://bugs.python.org/issue25850  opened by Sami Salonen

#25852: smtplib's SMTP.connect() should store the server name in ._hos
http://bugs.python.org/issue25852  opened by labrat

#25853: Compile error with pytime.h - struct timespec declared inside 
http://bugs.python.org/issue25853  opened by jamespharvey20

#25856: The __module__ attribute of non-heap classes is not interned
http://bugs.python.org/issue25856  opened by serhiy.storchaka

#25858: Structure field size/ofs __str__ wrong with large size fields
http://bugs.python.org/issue25858  opened by Charles Machalow

#25859: EOFError in test_nntplib.NetworkedNNTPTests.test_starttls()
http://bugs.python.org/issue25859  opened by martin.panter

#25860: os.fwalk() silently skips remaining directories when error occ
http://bugs.python.org/issue25860  opened by Samson Lee

#25862: TextIOWrapper assertion failure after read() and SEEK_CUR
http://bugs.python.org/issue25862  opened by martin.panter

#25863: ISO-2022 seeking forgets state
http://bugs.python.org/issue25863  opened by martin.panter

#25864: collections.abc.Mapping should include a __reversed__ that rai
http://bugs.python.org/issue25864  opened by abarnert

#25865: 7.2 Assignment statements documentation is vague and slightly 
http://bugs.python.org/issue25865  opened by abarnert

#25866: Reference 3. Data Model: miscellaneous minor cleanups on the w
http://bugs.python.org/issue25866  opened by abarnert

#25867: os.stat raises exception when using unicode and no locale is s
http://bugs.python.org/issue25867  opened by sejvlond

#25868: test_eintr.test_sigwaitinfo() hangs on "AMD64 FreeBSD CURRENT 
http://bugs.python.org/issue25868  opened by haypo

#25869: Faster ElementTree deepcopying
http://bugs.python.org/issue25869  opened by serhiy.storchaka

#25872: multithreading traceback KeyError when modifying file
http://bugs.python.org/issue25872  opened by Michael Allen

#25873: Faster ElementTree iterating
http://bugs.python.org/issue25873  opened by serhiy.storchaka

#25874: Add notice that XP is not supported on Python 3.5+
http://bugs.python.org/issue25874  opened by crwilcox

#25876: test_gdb: use subprocess._args_from_interpreter_flags() to tes
http://bugs.python.org/issue25876  opened by haypo

#25878: CPython on Windows builds with /W3, not /W4
http://bugs.python.org/issue25878  opened by Alexander Riccio

#25880: u'..'.encode('idna') → UnicodeError: label empty or too long
http://bugs.python.org/issue25880  opened by spaceone

#25881: A little faster ElementTree serializing
http://bugs.python.org/issue25881  opened by serhiy.storchaka

#25882: argparse help error: arguments created by add_mutually_exclusi
http://bugs.python.org/issue25882  opened by balage

#25883: python 2.7.11 mod_wsgi regression on windows
http://bugs.python.org/issue25883  opened by stephan

#25884: inspect.getmro() fails when base class lacks __bases__ attribu
http://bugs.python.org/issue25884  opened by billyziege

#25887: awaiting on coroutine more than once should be an error
http://bugs.python.org/issue25887  opened by yselivanov

#25888: awaiting on coroutine that is being awaited should be an error
http://bugs.python.org/issue25888  opened by yselivanov

#25894: unittest subTest failure causes result to be omitted from list
http://bugs.python.org/issue25894  opened by zach.ware

#25895: urllib.parse.urljoin does not handle WebSocket URLs
http://bugs.python.org/issue25895  opened by imrehg

#25896: array.array accepting byte-order codes in format strings
http://bugs.python.org/issue25896  opened by Zoinkity..

#25898: Check for subsequence ins

Re: [Python-Dev] Typo in PEP-0423

2015-12-18 Thread Tim Legrand
Well, this looks like a rhetorical question :)

As I am totally new to Python packaging and publication, I had no precise
idea of what I should get from this link.
So my guess would be https://docs.python.org/2/distutils/packageindex.html
(since I was expecting Python 2.7 resources, not 3.x, but I didn't mention
that before).

Let me know if you want me to (and I am allowed to) fix the link in the
original page. I have no idea how to contribute to this repo either :)

Thanks,
Tim

2015-12-18 17:41 GMT+01:00 Guido van Rossum :

> [...]


Re: [Python-Dev] Typo in PEP-0423

2015-12-18 Thread Guido van Rossum
On Fri, Dec 18, 2015 at 9:34 AM, Tim Legrand 
wrote:

> Well, this looks like a rhetorical question :)
>

It wasn't, I was hoping you'd be quicker at picking one than me (I don't
publish packages on PyPI much myself so the docs all look like Greek to me
:-).


> As I am totally new to Python packaging and publication, I had no precise
> idea of what I should get from this link.
>

Ah, so it was Greek to you too. :-)


> So my guess would be https://docs.python.org/2/distutils/packageindex.html
> (since I was expecting Python 2.7 resources, not 3.x, but I didn't mention
> that before).
>

Hm, but we are really trying to nudge people towards Python 3.


> Let me know if you want/I am allowed to fix the link in the original page
> . I have no idea how to
> contribute to these repo too :)
>

This particular repo is managed by the "PEP editors":
https://www.python.org/dev/peps/pep-0001/#id29

In this case I've just pushed the fix. Thanks for reporting it!

--Guido


> [...]


-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] Asynchronous context manager in a typical network server

2015-12-18 Thread Szieberth Ádám
Thanks for your reply Guido!

> - Instead of calling signal.signal() yourself, you should use
> loop.add_signal_handler(). It makes sure your signal handler doesn't run
> while another handler is already running.

I opted for the signal module because the `signal` documentation suggests that
it also supports Windows, while the asyncio documentation states that
`loop.add_signal_handler()` is UNIX only.

> - I'm unclear on why you want a wait_forever() instead of using
> loop.run_forever(). Can you clarify?

As I see it, `loop.run_forever()` is issued from _outside_, while an `await 
wait_forever()` would be an _inside_ declaration, making explicit what the task 
does (serving forever).

My OP suggest that it seemed to me quite helpful inside async context. 
However, I wanted to share my approach to get a confirmation that I am not on 
a totally wrong way with this.

> - In theory, instead of waiting for a Future that is cancelled by a
> handler, you should be able to use asyncio.sleep() with a very large number
> (e.g. a million seconds). 

I was thinking about this too, but it seemed less explicit to me than awaiting a 
pure Future with a short comment. Moreover, even millions of seconds can pass.
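A minimal sketch of the pattern under discussion (the `wait_forever` name is the OP's proposal, not an existing asyncio API): await a Future that nobody ever completes, so the coroutine stays pending until something cancels it.

```python
import asyncio

async def wait_forever():
    # Hypothetical helper: a bare Future is never resolved, so this await
    # only ends when the Future (or the enclosing task) is cancelled.
    await asyncio.Future()

async def main():
    task = asyncio.ensure_future(wait_forever())
    await asyncio.sleep(0.01)   # stand-in for "serve until told to stop"
    task.cancel()               # e.g. triggered by a signal handler
    try:
        await task
    except asyncio.CancelledError:
        return "cancelled"

print(asyncio.run(main()))
```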

> Your handler could then just call loop.stop().

For some reason I don't like bothering with the event loop from inside 
awaitables. It seems hacky to me since it breaks the hierarchy of who controls 
whom.

> However, I just tested this and it raises "RuntimeError: Event loop stopped
> before Future completed." so ignore this until we've fixed it. :-)

This is the exception I have seen so many times while trying to close an asyncio 
program! I guess I am not the only one. This may be one of the most 
frustrating aspects of the library. Yet, it inspired me to figure out a plain 
pattern to avoid it, which may not be the right one. However, I would like to 
signal that it would be nice to help developers with useful patterns and 
documentation to avoid RuntimeErrors and the frustration that goes with them.

Ádám
(http://szieberthadam.github.io/)

PS: I will reply to others as well, but first I had to play with my son. :)


Re: [Python-Dev] Asynchronous context manager in a typical network server

2015-12-18 Thread Guido van Rossum
On Fri, Dec 18, 2015 at 10:25 AM, Szieberth Ádám 
wrote:

> Thanks for your reply Guido!
>
> > - Instead of calling signal.signal() yourself, you should use
> > loop.add_signal_handler(). It makes sure your signal handler doesn't run
> > while another handler is already running.
>
> I opted for the signal module because the `signal` documentation suggests
> that it also supports Windows, while the asyncio documentation states that
> `loop.add_signal_handler()` is UNIX only.
>

Unfortunately that's true, but using the signal module with asyncio the way
you did is *not* safe. The only safe way is to use the
loop.add_signal_handler() interface.
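A sketch of the safe interface Guido describes (UNIX only; `asyncio.get_running_loop()` is the modern 3.7+ spelling): the handler runs inside the event loop, so it can never interrupt another callback the way a raw `signal.signal()` handler can.

```python
import asyncio
import os
import signal

async def serve():
    loop = asyncio.get_running_loop()
    stop = loop.create_future()
    # The callback is queued into the event loop rather than running
    # asynchronously at signal-delivery time.
    loop.add_signal_handler(signal.SIGUSR1, stop.cancel)
    try:
        await stop              # pending until the signal arrives
    except asyncio.CancelledError:
        return "stopped"
    finally:
        loop.remove_signal_handler(signal.SIGUSR1)

async def main():
    server = asyncio.ensure_future(serve())
    await asyncio.sleep(0.01)
    os.kill(os.getpid(), signal.SIGUSR1)   # simulate an external kill
    return await server

print(asyncio.run(main()))
```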


> > - I'm unclear on why you want a wait_forever() instead of using
> > loop.run_forever(). Can you clarify?
>
> As I see it, `loop.run_forever()` is driven from the _outside_, while an
> `await wait_forever()` would be an _inside_ declaration, making explicit what
> the task does (serving forever).
>
> My OP suggests that it seemed quite helpful to me inside an async context.
> However, I wanted to share my approach to get confirmation that I am not on
> a totally wrong track with this.
>

Well, if you look at the toy servers in the asyncio examples directory,
they all use run_forever(). I agree that from within the loop that's not
possible, but I don't think it's such a common thing (you typically write a
framework for creating servers once and that's the only place where you
would need this). IOW I think your solution of waiting for a Future is the
right way.


> > - In theory, instead of waiting for a Future that is cancelled by a
> > handler, you should be able to use asyncio.sleep() with a very large
> number
> > (e.g. a million seconds).
>
> I was thinking about this too, but it seemed less explicit to me than
> awaiting a pure Future with a short comment. Moreover, even millions of
> seconds can pass.
>

11 years. That's quite some trust you put in your hardware... But you can
use a billion. I think by 11000 years from now you can retire your server.
:-)


> > Your handler could then just call loop.stop().
>
> For some reason I don't like bothering with the event loop from inside
> awaitables. It seems hacky to me since it breaks the hierarchy of who
> controls whom.
>

Fair enough -- you've actually internalized the asyncio philosophy quite
well.


> > However, I just tested this and it raises "RuntimeError: Event loop
> stopped
> > before Future completed." so ignore this until we've fixed it. :-)
>
> This is the exception I have seen so many times while trying to close an
> asyncio program! I guess I am not the only one. This may be one of the most
> frustrating aspects of the library. Yet, it inspired me to figure out a
> plain pattern to avoid it, which may not be the right one. However, I would
> like to signal that it would be nice to help developers with useful patterns
> and documentation to avoid RuntimeErrors and the frustration that goes with
> them.
>

Maybe you can help by submitting a patch that prevents this error! Are you
interested?


> Ádám
> (http://szieberthadam.github.io/)
>
> PS: I will reply to others as well, but first I had to play with my son.
> :)
>



-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] Idea: Dictionary references

2015-12-18 Thread Andrew Barnert via Python-Dev
> On Dec 18, 2015, at 04:56, Steven D'Aprano  wrote:
> 
>>> On Thu, Dec 17, 2015 at 09:30:24AM -0800, Andrew Barnert via Python-Dev 
>>> wrote:
>>> On Dec 17, 2015, at 07:38, Franklin? Lee  
>>> wrote:
>>> 
>>> The nested dictionaries are only for nested scopes (and inner
>>> functions don't create nested scopes). Nested scopes will already
>>> require multiple lookups in parents.
>> 
>> I think I understand what you're getting at here, but it's a really 
>> confusing use of terminology. In Python, and in programming in 
>> general, nested scopes refer to exactly inner functions (and classes) 
>> being lexically nested and doing lookup through outer scopes. The fact 
>> that this is optimized at compile time to FAST vs. CELL vs. 
>> GLOBAL/NAME, cells are optimized at function-creation time, and only 
>> global and name have to be resolved at the last second doesn't mean 
>> that there's no scoping, or some other form of scoping besides 
>> lexical. The actual semantics are LEGB, even if L vs. E vs. GB and E 
>> vs. further-out E can be optimized.
> 
> In Python 2, the LOAD_NAME byte-code can return a local, even though it 
> normally doesn't:
> 
> py> x = "global"
> py> def spam():
> ... exec "x = 'local'"
> ... print x
> ...
> py> spam()
> local
> py> x == 'global'
> True
> 
> 
> If we look at the byte-code, we see that the lookup is *not* optimized 
> to inspect locals only (LOAD_FAST), but uses the regular LOAD_NAME that 
> normally gets used for globals and builtins:
> 
> py> import dis
> py> dis.dis(spam)
>  2   0 LOAD_CONST   1 ("x = 'local'")
>  3 LOAD_CONST   0 (None)
>  6 DUP_TOP
>  7 EXEC_STMT
> 
>  3   8 LOAD_NAME0 (x)
> 11 PRINT_ITEM
> 12 PRINT_NEWLINE
> 13 LOAD_CONST   0 (None)
> 16 RETURN_VALUE
> 
> 
> 
>> What you're talking about here is global lookups falling back to 
>> builtin lookups. There's no more general notion of nesting or scoping 
>> involved, so why use those words?
> 
> I'm not quite sure about this. In principle, every name lookup looks in 
> four scopes, LEGB as you describe above:
> 
> - locals
> - non-locals, a.k.a. enclosing or lexical scope(s)
> - globals (i.e. the module)
> - builtins
> 
> 
> although Python can (usually?) optimise away some of those lookups.

I think it kind of _has_ to optimize away, or at least tweak, some of those 
things, rather than just acting as if globals and builtins were just two more 
enclosing scopes. For example, global to builtins has to go through 
globals()['__builtins__'], or act as if it does, or code that relies on, say, 
the documented behavior of exec can be broken. And you have to be able to 
modify the global scope after compile time and have that modification be 
effective, which means you'd have to allow the same things on locals and 
closures if they were to act the same.

> The 
> relationship of locals to enclosing scopes, and to globals in turn, 
> involve actual nesting of indented blocks in Python, but that's not 
> necessarily the case. One might imagine a hypothetical capability for 
> manipulating scopes directly, e.g.:
> 
> def spam(): ...
> def ham(): ...
> set_enclosing(ham, spam)
> # like:
> # def spam():
> # def ham(): ...

But that doesn't work; a closure has to link to a particular invocation of its 
outer function, not just to the function. Consider a trivial example:

def spam(): x=time()
def ham(): return x
set_enclosing(ham, spam)
ham()

There's no actual x value in scope. So you need something like this if you want 
to actually be able to call it:

def spam(helper):
    x = time()
    helper = bind_closure(helper, sys._getframe())
    return helper()
def ham(): return x
set_enclosing(ham, spam)
spam(ham)

Of course you could make that getframe implicit; the point is there has to be a 
frame from an invocation of spam, not just the function itself, to make lexical 
scoping (errr... dynamically-generated fake-lexical scoping?) useful.
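As a footnote to the cell-hacking discussion below: at the time of this thread, `cell_contents` was indeed read-only from Python, so rebinding a closure cell required `ctypes.pythonapi`. Since Python 3.7 the attribute is writable, which makes a toy version of the trick directly expressible (a sketch, not the author's actual implementation):

```python
def make_template():
    x = None            # ensures the inner function gets a real closure cell
    def ham():
        return x
    return ham

ham = make_template()
# Rebind the closure cell directly; in Python 2 / early Python 3 this
# assignment raised AttributeError and needed C-level hacks instead.
ham.__closure__[0].cell_contents = 42
print(ham())   # 42
```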

> The adventurous or fool-hardy can probably do something like that now 
> with byte-code hacking :-)

Yeah; I actually played with something like this a few years ago. I did it 
directly in terms of creating cell and free vars, not circumventing the 
existing LEGB system, which means you have to modify not just ham, but spam, in 
that set_enclosing. (In fact, you also have to modify all functions lexically 
or faux-lexically enclosing spam or enclosed by ham, which my code didn't do, 
but there were lots of other ways to fake it...). You need a bit of 
ctypes.pythonapi, not just bytecode hacks, to do the bind_closure() hack (the 
cell constructor isn't callable from Python, and you can't even fake it with a 
wrapper around a cell because cell_contents is immutable from Python...), but 
it's all doable. Anyway, my original goal was to make it possible to get the 
effect of nonlocal in Python 2, by calling "set_enclosin

Re: [Python-Dev] Asynchronous context manager in a typical network server

2015-12-18 Thread Glenn Linderman

On 12/18/2015 10:36 AM, Guido van Rossum wrote:


I opted for the signal module because the `signal` documentation
suggests that it also supports Windows, while the asyncio documentation
states that `loop.add_signal_handler()` is UNIX only.


Unfortunately that's true, but using the signal module with asyncio 
the way you did is *not* safe. The only safe way is to use the 
loop.add_signal_handler() interface.


Does this mean Windows users should not bother trying to use asyncio?

(I haven't yet, due to lack of time, but I'd hate to think of folks, 
including myself in the future, investing a lot of time developing 
something and then discover it can never be reliable, due to this sort 
of "unsafe" or "not-available-on-Windows" feature.)


Re: [Python-Dev] Asynchronous context manager in a typical network server

2015-12-18 Thread Guido van Rossum
No, it just means Windows users should not try to catch signals on Windows.

Signals don't really exist there, and the simulation supporting only a few
signals is awful (last I tried ^C was only processed when the process was
waiting for input from stdin, and I had to use the BREAK key to stop
runaway processes, which killed my shell window as well as the Python
process).

If you want orderly shutdown of a server process on Windows, you should
probably listen for connections on a dedicated port on localhost and use
that as an indication to stop the server.

On Fri, Dec 18, 2015 at 11:29 AM, Glenn Linderman 
wrote:

> On 12/18/2015 10:36 AM, Guido van Rossum wrote:
>
>> I opted for the signal module because the `signal` documentation suggests
>> that it also supports Windows, while the asyncio documentation states that
>> `loop.add_signal_handler()` is UNIX only.
>>
>
> Unfortunately that's true, but using the signal module with asyncio the
> way you did is *not* safe. The only safe way is to use the
> loop.add_signal_handler() interface.
>
>
> Does this mean Windows users should not bother trying to use asyncio ?
>
> (I haven't yet, due to lack of time, but I'd hate to think of folks,
> including myself in the future, investing a lot of time developing
> something and then discover it can never be reliable, due to this sort of
> "unsafe" or "not-available-on-Windows" feature.)
>
>
>


-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] Asynchronous context manager in a typical network server

2015-12-18 Thread Andrew Barnert via Python-Dev
On Dec 18, 2015, at 10:25, Szieberth Ádám  wrote:
> 
>> - In theory, instead of waiting for a Future that is cancelled by a
>> handler, you should be able to use asyncio.sleep() with a very large number
>> (e.g. a million seconds).
> 
> I was thinking on this too but it seemed less explicit to me than awaiting a 
> pure Future with a short comment. Moreover, even millions of seconds can pass.

Yes, and these are really fun to debug. When a customer comes to you with "it 
was running fine for a few months and then suddenly it started going crazy, but 
I can't reproduce it", unless you happen to remember that you decided 10 
million seconds was "forever" and ask whether "a few months" specifically means 
a few days short of 4 months... (At least with 24 and 49 days I know to look 
for which library used a C integer for milliseconds.)

Really, I don't see anything wrong with the way the OP wrote it. Is that just 
because I have bad C habits (/* Useless select because there's no actual sleep 
function that allows SIGUSR to wake us without allowing all signals to wake us 
that works on both Solaris and IRIX */) and it really does look misleading to 
people who aren't warped like that?

If so, would it be worth having an actual way to say "sleep forever (until 
canceled)"? Even if, under the covers, this only sleeps for 5 years or so, 
a Y52K problem that can be solved by just pushing a new patch release for 
Python instead of for every separate server written in Python is probably a bit 
nicer. :)


Re: [Python-Dev] Asynchronous context manager in a typical network server

2015-12-18 Thread Andrew Barnert via Python-Dev
On Dec 18, 2015, at 10:36, Guido van Rossum  wrote:
> 
>> On Fri, Dec 18, 2015 at 10:25 AM, Szieberth Ádám  wrote:
>> Thanks for your reply Guido!
>> 
>> > - In theory, instead of waiting for a Future that is cancelled by a
>> > handler, you should be able to use asyncio.sleep() with a very large number
>> > (e.g. a million seconds).
>> 
>> I was thinking on this too but it seemed less explicit to me than awaiting a
>> pure Future with a short comment. Moreover, even millions of seconds can 
>> pass.
> 
> 11 years.

It's 11 days. Which is pretty reasonable server uptime. And probably just 
outside the longest test you're ever going to run. I don't trust myself to pick 
"a big number" when the numbers get this big. But I still sometimes sneak one 
past myself somehow. Hence my suggestion for a way to actually say "forever".



Re: [Python-Dev] Asynchronous context manager in a typical network server

2015-12-18 Thread Guido van Rossum
On Fri, Dec 18, 2015 at 12:45 PM, Andrew Barnert  wrote:

> On Dec 18, 2015, at 10:36, Guido van Rossum  wrote:
>
> On Fri, Dec 18, 2015 at 10:25 AM, Szieberth Ádám 
> wrote:
>
>> Thanks for your reply Guido!
>>
>> > - In theory, instead of waiting for a Future that is cancelled by a
>> > handler, you should be able to use asyncio.sleep() with a very large
>> number
>> > (e.g. a million seconds).
>>
>> I was thinking on this too but it seemed less explicit to me than
>> awaiting a
>> pure Future with a short comment. Moreover, even millions of seconds can
>> pass.
>>
>
> 11 years.
>
>
> It's 11 days. Which is pretty reasonable server uptime.
>

Oops, blame the repr() of datetime.timedelta. I'm sorry I so rashly thought
I could do better than the OP.
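The mix-up is easy to reproduce: a million seconds is eleven days and change, and the old positional `timedelta` repr made the leading 11 easy to misread.

```python
from datetime import timedelta

delta = timedelta(seconds=1_000_000)
print(delta)        # 11 days, 13:46:40
print(delta.days)   # 11 -- days, not years
```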


> And probably just outside the longest test you're ever going to run. I
> don't trust myself to pick "a big number" when the numbers get this big.
> But I still sometimes sneak one past myself somehow. Hence my suggestion
> for a way to actually say "forever".
>

I guess we could make the default arg to sleep() 1e9. Or make it None and
special-case it. I don't feel strongly about this -- I'm not sure how
baffling it would be to accidentally leave out the delay and find your code
sleeps forever rather than raising an error (since if you don't expect the
infinite default you may not expect this kind of behavior). But I do feel
it's not important enough to add a new function or method.

However, I don't think "forever" and "until cancelled" are really the same
thing. "Forever" can only be interrupted by loop.stop(); "until cancelled"
requires indicating how to cancel it, and there the OP's approach is about
the best you can do. (Or you could use the Event class, but that's really
just a wrapper on top of a Future made to look more like threading.Event in
its API.)
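A sketch of the Event-based variant Guido mentions: awaiting an Event needs no exception handling on shutdown, since `wait()` simply returns once the event is set.

```python
import asyncio

async def serve(stop: asyncio.Event):
    # No try/except needed: wait() returns normally when stop is set.
    await stop.wait()
    return "shutting down"

async def main():
    stop = asyncio.Event()
    server = asyncio.ensure_future(serve(stop))
    await asyncio.sleep(0.01)
    stop.set()          # e.g. called from a signal handler
    return await server

print(asyncio.run(main()))
```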

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] Asynchronous context manager in a typical network server

2015-12-18 Thread Szieberth Ádám
Thanks for your reply Andrew!

> Personally I don't feel the need for `wait_forever()` or
> `loop.create_context_task()`.
> 
> But even if you need it, you can easily create it from scratch, can't you?

Indeed. I was prepared for such opinions, which is OK. It is better to think it 
through twice before introducing a new feature to an API.

I myself feel that `loop.create_context_task()` may be too specific. The 
`asyncio.wait_forever()` coro seems much simpler. Surely it must be 
investigated whether there is a significant number of patterns where this coro 
could take part. I introduced one, but surely that is not enough, unless it is 
so awesome that everyone starts using it, which I doubt. :)

Ádám
(http://szieberthadam.github.io/)



Re: [Python-Dev] Asynchronous context manager in a typical network server

2015-12-18 Thread Szieberth Ádám
> Maybe you can help by submitting a patch that prevents this error! Are you
> interested?

I'd be honored.

Ádám
(http://szieberthadam.github.io/)

P.S.: Was thinking about a longer answer but finally I ended up with this one :)


Re: [Python-Dev] Asynchronous context manager in a typical network server

2015-12-18 Thread Szieberth Ádám
> I guess we could make the default arg to sleep() 1e9. Or make it None and
> special-case it.

While writing the OP, I considered suggesting this approach and rejected it. I 
would have suggested using Ellipsis (`...`) for the special case, which seemed 
to explain better what is done, plus it can hardly be passed unintentionally. I 
ended up suggesting `wait_forever()` though.

Ádám
(http://szieberthadam.github.io/)


Re: [Python-Dev] Asynchronous context manager in a typical network server

2015-12-18 Thread Andrew Barnert via Python-Dev
On Friday, December 18, 2015 1:09 PM, Guido van Rossum  wrote:


>I guess we could make the default arg to sleep() 1e9. Or make it None and 
>special-case it. I don't feel strongly about this -- I'm not sure how baffling 
>it would be to accidentally leave out the delay and find your code sleeps 
>forever rather than raising an error (since if you don't expect the infinite 
>default you may not expect this kind of behavior).

Yeah, that is a potential problem.

The traditional C solution is to just allow passing -1 to mean "forever",* 
ideally with a constant so you can just say "sleep(FOREVER)". Which, in Python 
terms, would presumably mean "asyncio.sleep(asyncio.forever)", and it could be 
a unique object or an enum value or something instead of actually being -1.

* Or at least "until this rolls over 31/32/63/64 bits", which is where you get 
those 49-day bugs from... but that wouldn't be an issue in Python

> But I do feel it's not important enough to add a new function or method.

Definitely agreed.
>However, I don't think "forever" and "until cancelled" are really the same 
>thing. "Forever" can only be interrupted by loop.stop(); "until cancelled" 
>requires indicating how to cancel it, and there the OP's approach is about the 
>best you can do. (Or you could use the Event class, but that's really just a 
>wrapper on top of a Future made to look more like threading.Event in its API.)


OK, I thought the OP's code looked pretty clear as written: he wants to wait 
until cancelled, so he waits on something that pretty clearly won't ever finish 
until he's cancelled. If that (or an Event or whatever) is the best way to 
spell this, then I can't really think of any good uses for sleep(forever).


Re: [Python-Dev] Asynchronous context manager in a typical network server

2015-12-18 Thread Guido van Rossum
Using an Event is slightly better because you just wait for it -- you don't
have to catch an exception. It's just not one of the better-known parts of
asyncio.

On Fri, Dec 18, 2015 at 1:42 PM, Andrew Barnert  wrote:

> On Friday, December 18, 2015 1:09 PM, Guido van Rossum 
> wrote:
>
>
> >I guess we could make the default arg to sleep() 1e9. Or make it None and
> special-case it. I don't feel strongly about this -- I'm not sure how
> baffling it would be to accidentally leave out the delay and find your code
> sleeps forever rather than raising an error (since if you don't expect the
> infinite default you may not expect this kind of behavior).
>
> Yeah, that is a potential problem.
>
> The traditional C solution is to just allow passing -1 to mean "forever",*
> ideally with a constant so you can just say "sleep(FOREVER)". Which, in
> Python terms, would presumably mean "asyncio.sleep(asyncio.forever)", and
> it could be a unique object or an enum value or something instead of
> actually being -1.
>
> * Or at least "until this rolls over 31/32/63/64 bits", which is where you
> get those 49-day bugs from... but that wouldn't be an issue in Python
>
> > But I do feel it's not important enough to add a new function or method.
>
> Definitely agreed.
> >However, I don't think "forever" and "until cancelled" are really the
> same thing. "Forever" can only be interrupted by loop.stop(); "until
> cancelled" requires indicating how to cancel it, and there the OP's
> approach is about the best you can do. (Or you could use the Event class,
> but that's really just a wrapper on top of a Future made to look more like
> threading.Event in its API.)
>
>
> OK, I thought the OP's code looked pretty clear as written: he wants to
> wait until cancelled, so he waits on something that pretty clearly won't
> ever finish until he's cancelled. If that (or an Event or whatever) is the
> best way to spell this, then I can't really think of any good uses for
> sleep(forever).
>



-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] Idea: Dictionary references

2015-12-18 Thread Franklin? Lee
On Fri, Dec 18, 2015 at 2:32 PM, Andrew Barnert via Python-Dev
 wrote:

> (Also, either way, it seems more like a thread for -ideas than -dev...)

I said this early on in this thread!

Should I try to write up my idea as a single thing, instead of a bunch
of responses, and post it in -ideas?

Should I call them "parent scope" and "parent refcell"?


On Fri, Dec 18, 2015 at 7:56 AM, Steven D'Aprano  wrote:

> I'm not quite sure about this. In principle, every name lookup looks in
> four scopes, LEGB as you describe above:
>
> - locals
> - non-locals, a.k.a. enclosing or lexical scope(s)
> - globals (i.e. the module)
> - builtins
>
>
> although Python can (usually?) optimise away some of those lookups. The
> relationship of locals to enclosing scopes, and to globals in turn,
> involve actual nesting of indented blocks in Python, but that's not
> necessarily the case.

As I understand, L vs E vs GB is known at compile-time.

That is, your exec example doesn't work for me in Python 3, because
all names are scoped at compile-time.

x = 5
def f():
    exec('x = 111')
    print(x)

f()       # prints 5
print(x)  # prints 5


This means that my idea only really works for GB lookups.
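The compile-time scoping can be checked with dis: in Python 3 the `x` in `print(x)` is compiled as a global load rather than the dynamic `LOAD_NAME` of the Python 2 example (a sketch; exact opcode sets vary a little between CPython versions, but `LOAD_GLOBAL` is stable here).

```python
import dis

x = 5
def f():
    exec('x = 111')   # writes to a throwaway copy of the local namespace
    print(x)

opnames = {ins.opname for ins in dis.get_instructions(f)}
print("LOAD_GLOBAL" in opnames)   # True: x was resolved as a global at compile time
print("LOAD_NAME" in opnames)     # False: no dynamic name lookup is emitted
```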

> On Thu, Dec 17, 2015 at 09:30:24AM -0800, Andrew Barnert via Python-Dev wrote:

>> So, trying to generalize global vs. builtin to a general notion of
>> "nested scope" that isn't necessary for builtins and doesn't work for
>> anything else seems like overcomplicating things for no benefit.
>
> Well, putting aside the question of whether this is useful or not, and
> putting aside efficiency concerns, let's just imagine a hypothetical
> implementation where name lookups used ChainMaps instead of using
> separate LOAD_* lookups of special dicts. Then a function could set up a
> ChainMap:
>
> function.__scopes__ = ChainMap(locals, enclosing, globals, builtins)
>
> and a name lookup for (say) "x" would always be a simple:
>
> function.__scopes__["x"]
>
> Of course this would be harder to optimize, and hence probably slower,
> than the current arrangement,

This is where the ChainRefCell idea comes in.

If a ChainRefCell is empty, it would ask its parent dicts for a value.
If it finds a value in parent n, it would replace parent n with a
refcell into parent n, and similarly for parents 0, 1, ... n-1. It
won't need to do hash lookups in those parents again, while allowing
for those parents to acquire names. (This means parent n+1 won't need
to create refcells, so we don't make unnecessary refcells in `object`
and `__builtin__`.)
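A toy sketch of the chained-refcell idea as I understand it (all names invented, no claim about the proposed CPython implementation): an empty cell resolves through its parent scope once, caches the parent's cell, and afterwards skips the hash lookups while still observing later updates to the parent.

```python
_MISSING = object()

class RefCell:
    __slots__ = ("value", "parent")
    def __init__(self):
        self.value = _MISSING   # set when the owning scope defines the name
        self.parent = None      # cached RefCell in the enclosing scope

class Scope:
    def __init__(self, parent=None):
        self.parent = parent    # enclosing Scope (e.g. builtins), or None
        self.cells = {}         # name -> RefCell

    def cell(self, name):
        return self.cells.setdefault(name, RefCell())

    def define(self, name, value):
        self.cell(name).value = value

    def lookup(self, name):
        value = self._resolve(self.cell(name), name)
        if value is _MISSING:
            raise NameError(name)
        return value

    def _resolve(self, cell, name):
        if cell.value is not _MISSING:
            return cell.value
        if self.parent is None:
            return _MISSING
        if cell.parent is None:
            # one hash lookup into the parent scope, cached for all
            # subsequent lookups of this name
            cell.parent = self.parent.cell(name)
        return self.parent._resolve(cell.parent, name)

builtins_scope = Scope()
builtins_scope.define("len", len)
globals_scope = Scope(parent=builtins_scope)
print(globals_scope.lookup("len") is len)   # True, via the builtins cell
builtins_scope.define("len", "patched")     # later updates are still seen
print(globals_scope.lookup("len"))          # patched
```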

Unfortunately, classes are more complicated than nested scopes.

1. We skip MRO if we define classes as having their direct supers as
parents. (Solution: Define classes as having all supers as parents,
and make non-recursive Refcell.resolve() requests.) (Objects have
their class as a parent, always.)

2. Classes can replace their bases. (I have some ideas for this, but see #3.)

3. I get the impression that attribute lookups are already pretty optimized.


On Fri, Dec 18, 2015 at 2:32 PM, Andrew Barnert via Python-Dev
 wrote:

> I think it kind of _has_ to optimize away, or at least tweak, some of those 
> things, rather than just acting as if globals and builtins were just two more 
> enclosing scopes. For example, global to builtins has to go through 
> globals()['__builtins__'], or act as if it does, or code that relies on, say, 
> the documented behavior of exec can be broken.

It would or could, in my idea of __builtins__ being a parent scope of
globals() (though I'm not sure whether it'd be the case for any other
kind of nesting).

Each refcell in globals() will hold a reference to __builtins__ (if
they didn't successfully look it up yet) or to a refcell in
__builtins__ (if there was once a successful lookup). Since globals()
knows when globals()['__builtins__'] is modified, it can invalidate
all its refcells' parent cells (by making them hold references to the
new __builtins__).

This will be O(len(table) + (# of refcells)), but swapping out
__builtins__ shouldn't be something you keep doing. Even if it is a
concern, I have More Ideas to remove the "len(table) +" (but with
Raymond Hettinger's compact dicts, it wouldn't be necessary). It would
be worse for classes, because it would require potentially many
notifications. (But it would also save future lookups. And I have More
Ideas.)

This idea (of the owner dict "knowing" about its changed parent) also
applies to general chained scopes, but flattenings like MRO would mess
it up. Again, though, More Ideas. And more importantly, from what I
understand of Victor's response, the current implementation would
probably be efficient enough, or more efficient.


> And you have to be able to modify the global scope after compile time and 
> have that modification be effective, which means you'd have to allow the same 
> things on locals and closures if they were to act the same.

Not sure what you mean, but since I demand (possibly empty) refcells
from