Re: [Python-Dev] Status of PEP 3145 - Asynchronous I/O for subprocess.popen

2014-03-30 Thread Josiah Carlson
I've got a patch with partial tests and documentation that I'm holding off
on uploading because I believe there should be a brief discussion first.

Long story short, Windows needs a thread to handle writing in a
non-blocking fashion, regardless of the use of asyncio or plain subprocess.
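
A minimal sketch of that thread-based approach (not the actual patch; the
echo child below is purely illustrative): a background thread drains a queue
into the child's stdin, so the caller never blocks on a full pipe.

    import queue
    import subprocess
    import sys
    import threading

    proc = subprocess.Popen(
        [sys.executable, '-c', 'import sys; sys.stdout.write(sys.stdin.read())'],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    pending = queue.Queue()

    def _writer():
        while True:
            chunk = pending.get()
            if chunk is None:            # sentinel: close stdin and stop
                proc.stdin.close()
                return
            proc.stdin.write(chunk)      # may block, but only in this thread
            proc.stdin.flush()

    threading.Thread(target=_writer, daemon=True).start()

    pending.put(b'hello\n')              # returns immediately, even if the pipe is full
    pending.put(None)
    print(proc.stdout.read())            # b'hello\n' once the child exits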

If you'd like to continue following this issue and participate in the
discussion, I'll see you over on http://bugs.python.org/issue1191964 .

 - Josiah



On Fri, Mar 28, 2014 at 11:35 AM, Josiah Carlson wrote:

>
> On Fri, Mar 28, 2014 at 10:46 AM, Guido van Rossum wrote:
>
>> On Fri, Mar 28, 2014 at 9:45 AM, Josiah Carlson wrote:
>>
>>>
>>> If it makes you feel any better, I spent an hour this morning building a
>>> 2-function API for Linux and Windows, both tested, not using ctypes, and
>>> not even using any part of asyncio (the Windows bits are in msvcrt and
>>> _winapi). It works in Python 3.3+. You can see it here:
>>> http://pastebin.com/0LpyQtU5
>>>
>>
>> Seeing this makes *me* feel better. I think it's reasonable to add (some
>> variant) of that to the subprocess module in Python 3.5. It also belongs in
>> the Activestate cookbook. And no, the asyncio module hasn't made it
>> obsolete.
>>
>
> Cool.
>
>> Josiah, you sound upset about the whole thing -- to the point of writing
>> unintelligible sentences and passive-aggressive digs at everyone reading
>> this list. I'm sorry that something happened that led you to feel that way (if
>> you indeed feel upset or frustrated) but I'm glad that you wrote that code
>> snippet -- it is completely clear what you want and why you want it, and
>> also what should happen next (a few rounds of code review on the tracker).
>>
>
> I'm not always a prat. Something about python-dev brings out parts of me
> that I thought I had discarded from my personality years ago. Toss in a bit
> of needing to re-explain ideas that I've been trying to explain for almost
> 9 years? Frustration + formerly discarded personality traits = uck. That's
> pretty much why I won't be rejoining the party here regularly, you are all
> better off without me commenting on 95% of threads like I used to.
>
> Victor, I'm sorry for being a jerk. It's hard for me to not be the guy I
> was when I spend time on this list. That's *my* issue, not yours. That I
> spent any time redirecting my frustration towards you is BS, and if I could
> take back the email I sent just before getting Guido's, I would.
>
> I would advise everyone to write it off as the ramblings of a surprisingly
> young, angry old man. Or call me an a-hole. Both are pretty accurate. :)
>
>> But that PEP? It's just a terrible PEP. It doesn't contain a single line
>> of example code. It doesn't specify the proposed interface, it just
>> describes in way too many sentences how it should work, and contains a
>> whole lot of references to various rants from which the reader is
>> apparently meant to become enlightened. I don't know which of the three
>> authors *really* wrote it, and I don't want to know -- I think the PEP is
>> irrelevant to the proposed feature, which is of "put it in the bug tracker
>> and work from there" category -- presumably the PEP was written based on
>> the misunderstanding that having a PEP would make acceptance of the patch
>> easier, or because during an earlier bikeshedding round someone said
>> "please write a PEP" (someone always says that). I propose to scrap the PEP
>> (set the status to Withdrawn) and just work on adding the methods to the
>> subprocess module.
>>
>
> I'm not going to argue. The first I read it was 2-3 days ago.
>
>> If it were me, I'd define three methods, with longer names to clarify
>> what they do, e.g.
>>
>> proc.write_nonblocking(data)
>> data = proc.read_nonblocking()
>> data = proc.read_stderr_nonblocking()
>>
>
> Easily doable.
>
>> I.e. add _nonblocking to the method names to clarify that they may return
>> '' when there's nothing available, and have a separate method for reading
>> stderr instead of a flag. And I'd wonder if there should be an unambiguous
>> way to detect EOF or whether the caller should just check for
>> proc.stdout.closed. (And what for stdin? IIRC it actually becomes writable
>> when the other end is closed, and then the write() will fail. But maybe I
>> forget.)
>>
>> But that's all bikeshedding and it can happen on the tracker or directly
>> on the list just as easily; I don't see the need for a PEP.
>>
>
> Sounds good.
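>
> For reference, a rough POSIX-only sketch of the behaviour a
> read_nonblocking() method would wrap (just a sketch against the proposed
> names, not the patch; on Windows a helper thread is needed instead, as
> noted earlier):
>
>     import fcntl, os, subprocess, sys
>
>     proc = subprocess.Popen(
>         [sys.executable, '-c', 'import time; time.sleep(1); print("hi")'],
>         stdout=subprocess.PIPE)
>
>     # Put the pipe into non-blocking mode so os.read() returns right away.
>     fd = proc.stdout.fileno()
>     fcntl.fcntl(fd, fcntl.F_SETFL,
>                 fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK)
>
>     try:
>         data = os.read(fd, 4096)      # b'' here would mean EOF
>     except BlockingIOError:
>         data = b''                    # nothing available yet
>
>     print(data)                       # almost certainly b'': the child hasn't written yet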
>
>  - Josiah
>
>


Re: [Python-Dev] collections.sortedtree

2014-03-30 Thread Marko Rauhamaa
Guido van Rossum :

> Yeah, so the pyftp fix is to keep track of how many timers were
> cancelled, and if the number exceeds a threshold it just recreates the
> heap, something like
>
> heap = [x for x in heap if not x.cancelled]
> heapify(heap)
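
A compact sketch of that lazy-cancellation pattern (the Timer class and the
80% trigger are illustrative only, matching the HEAPQ* variant measured
below, not the pyftp code referred to above):

    import heapq

    class Timer:
        def __init__(self, when):
            self.when = when
            self.cancelled = False
        def __lt__(self, other):
            return self.when < other.when

    class TimerHeap:
        def __init__(self, ratio=0.8):
            self.heap = []
            self.n_cancelled = 0
            self.ratio = ratio

        def schedule(self, timer):
            heapq.heappush(self.heap, timer)

        def cancel(self, timer):
            timer.cancelled = True          # leave it in the heap for now
            self.n_cancelled += 1
            if self.n_cancelled > self.ratio * len(self.heap):
                # Too many dead entries: rebuild the heap without them.
                self.heap = [t for t in self.heap if not t.cancelled]
                heapq.heapify(self.heap)
                self.n_cancelled = 0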

I measured my target use case with a simple emulation on my Linux PC.

The simple test case emulates this scenario:

Start N connections at frequency F and have each connection start a
timer T. Then, rotate over the connections at the same frequency F
restarting timer T. Stop after a duration that is much greater than
T.

Four different timer implementations were considered:

   HEAPQ: straight heapq

   HEAPQ*: heapq with the pyftp fix (reheapify whenever 80% of the
   outstanding timers have been canceled)

   SDICT: sorteddict (my C implementation)

   PyAVL: Python AVL tree (my implementation)
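
For reference, a rough virtual-clock emulation of the scenario for the plain
HEAPQ case (not the actual benchmark harness; HEAPQ* would additionally
compact the heap as in the sketch above):

    import heapq

    def emulate(N=1000, F=100, T=600, duration=3600):
        heap = []              # entries: [deadline, cancelled-flag]
        current = {}           # connection -> its live entry
        max_len = 0
        for step in range(duration * F):
            now = step / F
            conn = step % N                       # rotate over the connections
            if conn in current:
                current[conn][1] = True           # lazily cancel the old timer
            entry = [now + T, False]
            current[conn] = entry
            heapq.heappush(heap, entry)
            while heap and heap[0][0] <= now:     # plain heapq only drops expired entries
                heapq.heappop(heap)
            max_len = max(max_len, len(heap))
        return max_len

    print(emulate())   # peaks near F * T entries; HEAPQ* stays roughly near 5 * N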


Here are the results:

N = 1000, F = 100 Hz, T = 10 min, duration 1 hr

==========================================================
          Virt   Res   max len()    usr    sys    CPU
           MB    MB                  s      s      %
==========================================================
HEAPQ       22    16       60001    21.5    4.3    0.7
HEAPQ*      11     7        5000    18.4    4.2    0.6
SDICT       11     6        1000    18.2    3.9    0.6
PyAVL       11     6        1000    39.3    3.6    1.2
==========================================================


N = 1, F = 1000 Hz, T = 10 min, duration 1 hr

==========================================================
          Virt   Res   max len()    usr    sys    CPU
           MB    MB                  s      s      %
==========================================================
HEAPQ      125   120      600044   223.0   25.8    6.9
HEAPQ*      21    16           5   186.8   30.0    6.0
SDICT       15    10           1   196.6   25.7    6.2
PyAVL       16    11           1   412.5   22.3   12.1
==========================================================


Conclusions:

 * The CPU load is almost identical in HEAPQ, HEAPQ* and SDICT.

 * HEAPQ* is better than HEAPQ because it avoids the plain heap's memory
   burden.

 * PyAVL is not all that bad compared with SDICT.


Marko


[Python-Dev] freeze build slave

2014-03-30 Thread Martin v. Löwis
I have created a buildbot configuration to test freeze. At the moment,
it has only one builder:

http://buildbot.python.org/all/waterfall?show=AMD64%20Ubuntu%20LTS%20Freeze%203.x

which currently fails as freeze doesn't actually work.

The test itself works by first building Python in release mode,
then installing it, then running freeze on a test program, then
building the test program (and ultimately running it).

The question then is: how should that integrate with the rest
of the builders? I can see three alternatives:
A. (status quo) run the test on a selected subset of the Unix
   builders
B. run the test on all Unix builders.
C. integrate the test with the regular Unix builders

Evaluating these alternatives:
B: pro: wider testing
   con: each such build takes the slave lock, so slaves
   will have to do one additional full build per commit
   (or two if the fix gets checked into 3.4 as well).
   In addition, each slave will need disk space for one
   additional build tree plus one Python installation,
   per branch.
C: pro: compared to B, build time is reduced (need only
   to build once per branch); disk space is also reduced
   con: it would test a debug build, not a release build

Regards,
Martin


Re: [Python-Dev] freeze build slave

2014-03-30 Thread Antoine Pitrou
On Sun, 30 Mar 2014 20:44:02 +0200
"Martin v. Löwis"  wrote:
> I have created a buildbot configuration to test freeze. At the moment,
> it has only one builder:
> 
> http://buildbot.python.org/all/waterfall?show=AMD64%20Ubuntu%20LTS%20Freeze%203.x
> 
> which currently fails as freeze doesn't actually work.
> 
> The test itself works by first building Python in release mode,
> then installing it, then running freeze on a test program, then
> building the test program (and ultimately running it).
> 
> The question then is: how should that integrate with the rest
> of the builders? I can see three alternatives:
> A. (status quo) run the test on a selected subset of the Unix
>builders
> B. run the test on all Unix builders.
> C. integrate the test with the regular Unix builders
> 
> Evaluating these alternatives:
> B: pro: wider testing
>con: each such build takes the slave lock, so slaves
>will have to do one additional full build per commit
>(or two if the fix gets checked into 3.4 as well).
>In addition, each slave will need disk space for one
>additional build tree plus one Python installation,
>per branch.
> C: pro: compared to B, build time is reduced (need only
>to build once per branch); disk space is also reduced
>con: it would test a debug build, not a release build

We have at least one builder working in release (i.e. non-debug) mode.
http://buildbot.python.org/all/builders/x86%20Gentoo%20Non-Debug%203.x

Regards

Antoine.




Re: [Python-Dev] freeze build slave

2014-03-30 Thread Stefan Krah
"Martin v. L?wis"  wrote:
> C: pro: compared to B, build time is reduced (need only
>to build once per branch); disk space is also reduced
>con: it would test a debug build, not a release build

It would be an option to run half of the Unix slaves (especially the ones with
the more aggressive compilers) in release mode. I think that is beneficial 
anyway.


Stefan Krah





Re: [Python-Dev] freeze build slave

2014-03-30 Thread Victor Stinner
I disagree. Running the tests on a debug build tests more things thanks to
assertions, and provides more information in case of a test failure or crash.
Some assertions only fail on some platforms; see for example test_locale, which
fails with an assertion error on Solaris (since Python 3.3).

Adding one or two more slaves should not hurt. You probably need at least one
on Windows and another on OS X.

Victor

On Sunday, March 30, 2014, Stefan Krah wrote:

> "Martin v. L?wis" > wrote:
> > C: pro: compared to B, build time is reduced (need only
> >to build once per branch); disk space is also reduced
> >con: it would test a debug build, not a release build
>
> It would be an option to run half of the Unix slaves (especially the ones
> with
> the more aggressive compilers) in release mode. I think that is beneficial
> anyway.
>
>
> Stefan Krah
>
>
>