[Python-Dev] smarter temporary file object (SF #415692)

2007-01-02 Thread dustin
I came across a complaint that PEP 0042 had become a graveyard of
neglected ideas, and decided to have a look through and implement
something.  Creating a smarter temporary file object seemed simple
enough.

Oddly, even after GvR re-opened it, I can't post an attachment to that
tracker item (it's under "Feature Requests" -- does it need to get moved
to "Patches" first?), but the implementation is short, so it's included
below.  This is intended to be appended to Lib/tempfile.py (and thus
assumes that module's globals are present).

I would appreciate it if the gurus of python-dev could take a peek and
let me know if this is unsuitable or incorrect for any reason.  It's not
the most straightforward implementation -- I used the optimization
techniques I found in TemporaryFile.

If this looks good, I'll prepare a patch against trunk, including an
additional chunk of documentation and a unit test.

Dustin

 -cut-here-

try:
    from cStringIO import StringIO
except ImportError:
    from StringIO import StringIO

class SpooledTemporaryFile:
    """Temporary file wrapper, specialized to switch from
    StringIO to a real file when it exceeds a certain size or
    when a fileno is needed.
    """
    _rolled = False

    def __init__(self, max_size=0, mode='w+b', bufsize=-1,
                 suffix="", prefix=template, dir=None):
        self._file = StringIO()
        self._max_size = max_size
        self._TemporaryFileArgs = (mode, bufsize, suffix, prefix, dir)

    def _check(self, file):
        if self._rolled:
            return
        if file.tell() > self.__dict__['_max_size']:
            self._rollover(file)

    def _rollover(self, file):
        args = self.__dict__['_TemporaryFileArgs']
        self.__dict__.clear()  # clear attributes cached by __getattr__
        newfile = self._file = TemporaryFile(*args)
        newfile.write(file.getvalue())
        newfile.seek(file.tell(), 0)
        self._rolled = True

        # replace patched functions with the new file's methods
        self.write = newfile.write
        self.writelines = newfile.writelines
        self.fileno = newfile.fileno

    def write(self, s):
        file = self.__dict__['_file']
        rv = file.write(s)
        self._check(file)
        return rv

    def writelines(self, iterable):
        file = self.__dict__['_file']
        rv = file.writelines(iterable)
        self._check(file)
        return rv

    def fileno(self):
        self._rollover(self.__dict__['_file'])
        return self.fileno()  # now bound to the real file's fileno

    def __getattr__(self, name):
        file = self.__dict__['_file']
        a = getattr(file, name)
        if type(a) != type(0):  # don't cache integer attributes (e.g. softspace)
            setattr(self, name, a)
        return a
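
For the curious: this proposal landed in the stdlib as
tempfile.SpooledTemporaryFile in Python 2.6.  A minimal sketch of the
intended rollover behaviour, using the stdlib class; note that _rolled is
an internal attribute, read here only to show when the switch to a real
file happens:

import tempfile

# Stays in memory until max_size is exceeded, then rolls over to a
# real temporary file on disk.
f = tempfile.SpooledTemporaryFile(max_size=10, mode='w+b')
f.write(b'small')                    # 5 bytes: still within max_size
in_memory = not f._rolled
f.write(b' -- now past the limit')   # exceeds max_size: rolls to disk
on_disk = f._rolled
f.seek(0)
data = f.read()
f.close()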
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] smarter temporary file object (SF #415692, #1630118)

2007-01-07 Thread dustin
On Tue, Jan 02, 2007 at 10:07:58PM -0800, Neal Norwitz wrote:
> Thanks for your patch!

With some advice from Jim Jewett, the addition of some test cases, and a
paragraph of documentation, I've uploaded the corresponding patch at
  http://python.org/sf/1630118
If there are any other modifications or improvements anyone has in mind,
please let me know.

Dustin


Re: [Python-Dev] Shortcut Notation for Chaining Method Calls

2007-02-03 Thread dustin
On Sat, Feb 03, 2007 at 07:01:47PM +, Michael O'Keefe wrote:
> Anyhow, just curious for ideas and sparking discussion.
...

I haven't been on the list long enough to know, but I would expect that this
idea and its relatives have been batted around at least once before.  I think a
lot of people have been frustrated at the repetitive nature of operations on
lists, for example, as you indicated in your first post.  I think there's room
for debate on whether specific list methods that currently return None should
instead return the list, although I would definitely consult the archives
before entering that fray. 

I expect that the idea of adding a new operator or any other syntactic change
is, like the count of 5, "right out".

For what it's worth, you can wrap an object so it behaves the way you like as
follows, although of course this will discard the return value of any functions
which produce one:

class wrapcall(object):
    def __init__(self, inner):
        self.inner = inner

    def __getattr__(self, attr):
        rv = getattr(self.inner, attr)
        if callable(rv):
            def wrap(*args, **kwargs):
                rv(*args, **kwargs)
                return self
            return wrap
        else:
            return rv

mylist = [1, 2, 3]
wrapcall(mylist).reverse().append(2).reverse()
assert mylist == [2, 1, 2, 3]

Dustin


[Python-Dev] Trial balloon: microthreads library in stdlib

2007-02-10 Thread dustin
Mostly for my own curiosity, I'm working on a PEP-342-based
microthreading library with a similar api to threads and threading
(coalesced into a single module).  It uses coroutines and a trampoline
scheduler, and provides basic async wrappers for common IO operations.

It's not a framework/environment like Twisted or Kamaelia -- it's just a
microthreading library with some solid primitives.  My thinking is that
this would be the "next level" for apps which currently use asyncore.

I won't go into a lot of detail on the module, because (a) it's not even
nearly done and (b) my question is higher-level than that.

  Is there any interest in including a simple microthreading module in
  Python's standard library?

If this sounds like a terrible idea, let fly the n00b-seeking missiles.
If it sounds better than terrible, I'll keep working and post a
reasonable prototype soon (a PEP would also be in order at some point,
correct?).

Dustin


Re: [Python-Dev] Trial balloon: microthreads library in stdlib

2007-02-10 Thread dustin
On Sat, Feb 10, 2007 at 03:00:28PM -0800, Brett Cannon wrote:
> 1. Write it
> 2. Get the community to use it and like it
> 3. Make it follow PEP 7/8 style guidelines
> 4. Write docs
> 5. Write tests
> 6. Promise to help maintain the code.

Thanks -- I hadn't really planned that far ahead yet.  I expect #2 will
be the hardest!

Dustin


Re: [Python-Dev] Trial balloon: microthreads library in stdlib

2007-02-10 Thread dustin
On Sun, Feb 11, 2007 at 03:35:29AM +0200, Yotam Rubin wrote:
> Why don't you use Stackless? It's very simple, stable, and solves
> quite completely the problems in writing concurrent code.

That's a great point -- I'm not necessarily producing this to solve a
problem I'm having.  Rather, I think that the new language features in
PEP 342 cry out for a batteries-included library that makes asynchronous
programming both natural and easy.  

Dustin


Re: [Python-Dev] Trial balloon: microthreads library in stdlib

2007-02-12 Thread dustin
On Tue, Feb 13, 2007 at 12:33:46PM +1300, Greg Ewing wrote:
> Richard Tew wrote:
> 
> > The ideal mechanism at the high level would be expanding asyncore into
> > a "one-stop shop".  Where all these things can be passed into it and
> > it can do the work to notify of events on the objects in a standard way.
> 
> +1. This sounds like an excellent idea. It's downright
> silly having each thing that uses async I/O doing its
> own thing. There should be a standard mechanism in the
> stdlib that everything can use.

I'm workin' on it! ;)

I guess #2 wasn't so hard, after all, Brett!

Dustin


[Python-Dev] microthreading vs. async io

2007-02-14 Thread dustin
I've steered clear of this conversation for a while, because it drifted
a little bit off my original topic.  But I did want to straighten a
little bit of terminology out, if only to resolve some of my own
confusion over all the hubbub.  I don't pretend to define the words
others use; these definitions are mine, and apply to what I write below.

cooperative multitasking:
  Dynamically executing multiple tasks in a single thread; comes in
  two varieties:

continuations:
  Breaking tasks into discrete "chunks", and passing references to those
  chunks around as a means of scheduling.

microthreading:
  Exploiting language features to use cooperative multitasking in tasks
  that "read" like they are single-threaded.

asynchronous IO:
  Performing IO to/from an application in such a way that the
  application does not wait for any IO operations to complete, but
  rather polls for or is notified of the readiness of any IO operations.
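
The two varieties can be contrasted in a few lines; the tiny scheduler
and all the names here are purely illustrative:

trace = []

class Scheduler:
    # Toy scheduler used by both styles below (illustration only).
    def __init__(self):
        self.queue = []
    def call_later(self, fn):
        self.queue.append(fn)
    def run(self):
        while self.queue:
            self.queue.pop(0)(self)

# Continuation style: the task is broken into chunks, and references
# to the next chunk are passed around explicitly.
def step1(sched):
    trace.append("step 1")
    sched.call_later(step2)

def step2(sched):
    trace.append("step 2")

sched = Scheduler()
sched.call_later(step1)
sched.run()

# Microthreaded style: the same task "reads" single-threaded; each
# yield hands control back to whoever is driving the generator.
def whole_task():
    trace.append("step 1")
    yield
    trace.append("step 2")

t = whole_task()
for _ in t:
    pass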

Twisted is, by the above definitions, a continuation-based cooperative
multitasking library that includes extensive support for asynchronous
IO, all the way up the network stack for an impressive array of
protocols.  It does not itself implement microthreading, but Phillip
provided a nice implementation of such on top of Twisted[1].

Asyncore *only* implements asynchronous IO -- any "tasks" performed in
its context are the direct result of an IO operation, so it's hard to
say it implements cooperative multitasking (and Josiah can correct me if
I'm wrong, but I don't think it intends to).

Much of the discussion here has been about creating a single, unified
asynchronous IO mechanism that would support *any* kind of cooperative
multitasking library.  I have opinions on this ($0.02 each, bulk
discounts available), but I'll keep them to myself for now.

Instead, I would like to concentrate on producing a small, clean,
consistent, generator-based microthreading library.  I've seen several
such libraries (including the one in PEP 342, which is fairly skeletal),
and they all work *almost* the same way, but differ in, for example, the
kinds of values that can be yielded, their handling of nested calls, and
the names for the various "special" values one can yield.  

That similar modules are being written repeatedly, and presumably
applications and frameworks are being built on top of those modules,
seems to me to suggest a new "standard" implementation should be added
to the standard library.

I realize that I'm all talk and no code -- I've been busy, but I hope to
rectify the imbalance tonight.

Dustin

[1] http://mail.python.org/pipermail/python-dev/2007-February/071076.html


Re: [Python-Dev] generic async io (was: microthreading vs. async io)

2007-02-15 Thread dustin
On Thu, Feb 15, 2007 at 04:28:17PM +0100, Joachim König-Baltes wrote:
> No, I'd like to have:
> 
> - An interface for a task to specify the events it's interested in, and 
>   waiting for at least one of the events (with a timeout).
> - an interface for creating a task (similar to creating a thread)
> - an interface for a scheduler to manage the tasks

I think this discussion would be facilitated by teasing the first
bullet-point from the latter two: the first deals with async IO, while
the latter two deal with cooperative multitasking.

It's easy to write a single package that does both, but it's much harder
to write *two* fairly generic packages with a clean API between them,
given the varied platform support for async IO and the varied syntax and
structures (continuations vs. microthreads, in my terminology) for
multitasking.  Yet I think that division is exactly what's needed.

Since you asked (I'll assume the check for $0.02 is in the mail), I
think a strictly-async-IO library would offer the following:

 - a sleep queue object to which callables can be added
 - wrappers for all/most of the stdlib blocking IO operations which
   add the operation to the list of outstanding operations and return
   a sleep queue object
   - some relatively easy method of extending that for new IO operations
 - a poll() function (for multitasking libraries) and a serve_forever()
   loop (for asyncore-like uses, where all the action is IO-driven)

The mechanisms for accomplishing all of that on the chosen platform
would be an implementation detail, possibly with some initialization
"hinting" from the application.  The library would also need to expose
its platform-based limitations (can't wait on thd.join(), can only wait
on 64 fd's, etc.) to the application for compatibility-checking
purposes.
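
To make that concrete, here is one hedged way such a library could look
on top of select(); every name here (SleepQueue, async_read, poll) is an
assumption of mine, not a settled API:

import select

class SleepQueue:
    """Holds callables to fire when the awaited IO becomes ready."""
    def __init__(self):
        self.callbacks = []
    def add(self, fn):
        self.callbacks.append(fn)
    def fire(self):
        for fn in self.callbacks:
            fn()

_pending = {}  # fd -> SleepQueue, one per outstanding operation

def async_read(sock):
    # Wrapper around a blocking read: register interest and hand back
    # a SleepQueue that a multitasking library can wait on.
    return _pending.setdefault(sock.fileno(), SleepQueue())

def poll(timeout=0):
    # One turn of the loop: fire the queues whose fds became readable.
    if not _pending:
        return
    readable, _, _ = select.select(list(_pending), [], [], timeout)
    for fd in readable:
        _pending.pop(fd).fire()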

Thoughts?

Dustin


Re: [Python-Dev] microthreading vs. async io

2007-02-15 Thread dustin
On Thu, Feb 15, 2007 at 04:51:30PM +0100, Joachim König-Baltes wrote:
> The style used in asyncore, inheriting from a class and calling return
> in a method and being called later at a different location (different
> method) just interrupts the sequential flow of operations and makes it
> harder to understand. The same is true for all other strategies using
> callbacks or similar mechanisms.
> 
> All this can be achieved with a multilevel yield() that is hidden in a
> function call. So the task does a small step down (wait) in order to
> jump up (yield) to the scheduler without disturbing the eye of the
> beholder.

I agree -- I find that writing continuations or using asyncore's
structure makes spaghetti out of functionality that requires multiple
blocking operations inside looping or conditional statements.  The best
example, for me, was writing a complex site-specific web spider that had
to fetch 5-10 pages in a certain sequence, where each step in that
sequence depended on the results of the previous fetches.  I wrote it in
Twisted, but the proliferation of nested callback functions and chained
deferreds made my head explode while trying to debug it.  With a decent
microthreading library, that could look like:

def fetchSequence(...):
  fetcher = Fetcher()
  yield fetcher.fetchHomepage()
  firstData = yield fetcher.fetchPage('http://...')
  if someCondition(firstData):
    while True:
      secondData = yield fetcher.fetchPage('http://...')
      # ...
      if someOtherCondition(secondData): break
  else:
    # ...

which is *much* easier to read and debug.  (FWIW, after I put my head
back together, I rewrote the app with threads, and it now looks like the
above, without the yields.  Problem is, throttling on fetches means 99%
of my threads are blocked on sleep() at any given time, which is just
silly).

All that said, I continue to contend that the microthreading and async
IO operations are separate.  The above could be implemented relatively
easily in Twisted with a variant of the microthreading module Phillip
posted earlier.  It could also be implemented atop a bare-bones
microthreading module with Fetcher using asyncore on the backend, or
even scheduling urllib.urlopen() calls into OS threads.  Presumably, it
could run in NanoThreads and Kamaelia too, among others.

What I want is a consistent syntax for microthreaded code, so that I
could write my function once and run it in *all* of those circumstances.

Dustin

P.S. For the record -- I've written lots of other apps in Twisted with
great success; this one just wasn't a good fit.


Re: [Python-Dev] microthreading vs. async io

2007-02-15 Thread dustin
On Thu, Feb 15, 2007 at 11:47:27AM -0500, Jean-Paul Calderone wrote:
> Is the only problem here that this style of development hasn't been made
> visible enough?

Yep -- I looked pretty hard about two years ago, and although I haven't
been looking for that specifically since, I haven't heard anything about
it.

The API docs don't provide a good way to find things like this, and the
Twisted example tutorial didn't mention it at my last check.

So if we have an in-the-field implementation of this style of
programming (call it what you will), is it worth *standardizing* that
style so that it's the same in Twisted, my library, and anyone else's
library that cares to follow the standard?  

Dustin


Re: [Python-Dev] generic async io (was: microthreading vs. async io)

2007-02-15 Thread dustin
On Thu, Feb 15, 2007 at 07:46:59PM +, Nick Maclaren wrote:
> [EMAIL PROTECTED] wrote:
> >
> > I think this discussion would be facilitated by teasing the first
> > bullet-point from the latter two: the first deals with async IO, while
> > the latter two deal with cooperative multitasking.
> > 
> > It's easy to write a single package that does both, but it's much harder
> > to write *two* fairly generic packages with a clean API between them,
> > given the varied platform support for async IO and the varied syntax and
> > structures (continuations vs. microthreads, in my terminology) for
> > multitasking.  Yet I think that division is exactly what's needed.
> 
> The 'threading' approach to asynchronous I/O was found to be a BAD
> IDEA back in the 1970s, was abandoned in favour of separating
> asynchronous I/O from threading, and God alone knows why it was
> reinvented - except that most of the people with prior experience
> had died or retired :-(


Knowing the history of something like this is very helpful, but I'm not
sure what you mean by this first paragraph.  I think I'm most unclear
about the meaning of "The 'threading' approach to asynchronous I/O"?
Its opposite ("separating asynchronous I/O from threading") doesn't
illuminate it much more.  Could you elaborate?

Dustin


Re: [Python-Dev] generic async io (was: microthreading vs. async io)

2007-02-17 Thread dustin
On Fri, Feb 16, 2007 at 01:28:01PM +1300, Greg Ewing wrote:
> Nick Maclaren wrote:
> 
> > Threading
> > -
> > 
> > An I/O operation passes a buffer, length, file and action and receives a
> > token back.
> 
> You seem to be using the word "threading" in a completely
> different way than usual here, which may be causing some
> confusion.

According to subsequent clarification, the kind of IO Nick is talking
about is the sort of thing described recently on kerneltrap:
   http://kerneltrap.org/node/7728
(although he was referring specifically to POSIX async IO)

Dustin


Re: [Python-Dev] Another traceback idea [was: except Exception as err, tb]

2007-03-02 Thread dustin
On Sat, Mar 03, 2007 at 11:00:53AM +1300, Greg Ewing wrote:
> Now, I'm not proposing that the raise statement should
> actually have the above syntax -- that really would be
> a step backwards. Instead it would be required to have
> one of the following forms:
> 
> raise ExceptionClass
> 
> or
> 
> raise ExceptionClass(args)

Eep, that's awkward.  If you are using exceptions for flow control, why
would you use the second form?  

Why not just allow both exception classes and exception instances to be
raised, and only instantiate-at-catch in the case of a raise of a class
and a catch with an "as" clause?  Then the auto-instantiation becomes a
"convenience" feature of catch, safely relegating it to an
easily-understood and easily-ignored corner of the user's
conceptualization of exception handling.

The above also looks a lot like the current syntax, but (unless I'm
mistaken) ExceptionClass will be instantiated immediately right now.  It
seems best not to change the semantics of existing syntax if not
necessary.

I've been snoozing though this conversation until now, so if I've spoken
out of turn, please forgive me.

Dustin


Re: [Python-Dev] Encouraging developers

2007-03-06 Thread dustin
On Wed, Mar 07, 2007 at 01:50:25AM +0900, Stephen J. Turnbull wrote:
> Why not?  It depends on how far out "out" is, but I was surprised how
> much effect we (at XEmacs) got by simply asking people who contributed
> a couple of patches if they would like to take on tracking + patch
> flow management for their own patches in return for direct access to
> the repository.  

I've been thinking about how other projects handle this.  One method is
to appoint "maintainers" of specific pieces of functionality.  The term
"maintainers" was used earlier, but referring only to core developers
(those with commit access).  I have a somewhat different proposal:

In summary, create a layer of volunteer, non-committing maintainers for
specific modules who agree to do in-depth analysis of patches for their
areas of expertise, and pass well-formed, reviewed patches along to
committers.

Every part of Python gets a maintainer, annotated in the comments atop
the file.  A basic process is established for promotion/demotion of
maintainers.  New patches for a module get sent to that module's
maintainer, who checks for well-formedness, functionality, and
appropriateness to the module.  The *maintainer* can then refer
successful patches to the core developers, who can just skim the patch
and check that the unit tests pass.

The core of the interpreter would be implicitly maintained by the core
developers, while each module or package of the stdlib is assigned to a
specific maintainer (or several, if more are willing).  New modules are
initially assigned to their author, while existing modules with no
apparent maintainer are assigned to a "maintainer-needed"
pseudo-maintainer.  Patches to maintainer-less modules would languish,
unless the submitter stepped up as maintainer, or yelled loudly enough
that the core devs processed the patch.

I think this would have several advantages:
 - maintainers can do the basic screening that takes so long and is no
   fun for core developers
 - being a maintainer can be a stepping-stone to becoming a full
   developer, for those who wish to join
 - patch authors have an advocate "inside the system"

Speaking personally, I don't want to be a core developer, but I would be
happy to maintain a half-dozen stdlib modules.

This is loosely based on the Gentoo project's idea of maintainers
(although in Gentoo maintainers must be full developers).  

Dustin


Re: [Python-Dev] Encouraging developers

2007-03-06 Thread dustin
On Tue, Mar 06, 2007 at 01:51:41PM -0600, [EMAIL PROTECTED] wrote:
> 
> dustin> In summary, create a layer of volunteer, non-committing
> dustin> maintainers for specific modules who agree to do in-depth
> dustin> analysis of patches for their areas of expertise, and pass
> dustin> well-formed, reviewed patches along to committers.
> 
> One problem with this sort of system is that it's difficult for many people
> to commit the personal resources necessary over a long period of time.  Life
> often gets in the way.   

This is *definitely* the core problem with this system, and has plagued
every project to use a variant of it (including many small projects with
only one developer who takes months to respond to email).  I think one
*advantage* of this system would be that, with patch submitters having a
specific person to whom their patches should be addressed,
non-responsiveness on that person's part would be detected and brought
to the community's attention more quickly.  

It would help a great deal to have a very formalized system in place for
promoting/demoting maintainers -- email templates with filterable
subject lines and specific addresses to send them to, specific expected
response times, etc.

As someone else said in another thread, we all think that everyone
thinks like us (I think that's tautological?).  My thinking is that a
lot of people like me would love to have a small "corner of Python" for
which they are responsible.

Dustin


Re: [Python-Dev] Patch 1644818: Allow importing built-in submodules

2007-03-12 Thread dustin
On Mon, Mar 12, 2007 at 07:20:56PM +0100, Miguel Lobo wrote:
>I'm not complaining or anything, and no offence meant to anyone, just
>explaining my point of view.  I might still try to do the 5 patch
>review thing, depending on how long it takes me.  But if I choose not
>to do so, leaving my patch to rot only harms CPython, not me.

Miguel, last week there was a lengthy conversation on this list on this
exact topic.  Your point of view, which I hold to be very common, came
up a few times, but thanks for stating it so clearly!

Dustin

P.S. Please note I am *not* trying to re-open that conversation ;-)


Re: [Python-Dev] Rationale for NamedTemporaryFile?

2007-03-17 Thread dustin
On Sun, Mar 18, 2007 at 11:49:55AM +1200, Greg Ewing wrote:
> I've just discovered the hard way that NamedTemporaryFile
> automatically deletes the file when you close it. That
> doesn't seem very useful to me, since surely the reason
> you're using NamedTemporaryFile instead of TemporaryFile
> is that you want to do something else with it afterwards?
> What's the rationale for this behaviour?

For both TemporaryFile and NamedTemporaryFile, the rationale is that
"temporary" extends until the object is garbage collected.
TemporaryFile is deleted immediately (to prevent other applications from
modifying your temporary file).  NamedTemporaryFile is intended for use
when you need access to the file by filename during the lifetime of the
file.  In either case, when the object goes, the file should too.
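
A short sketch of that rationale in action (POSIX assumed; the filename
is whatever the module chooses):

import os
import tempfile

# By default the named file exists for the lifetime of the object and
# is deleted on close.
f = tempfile.NamedTemporaryFile()
name = f.name
exists_while_open = os.path.exists(name)   # usable by name here
f.close()
exists_after_close = os.path.exists(name)  # gone once closed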

Dustin


Re: [Python-Dev] styleguide inconsistency

2007-04-23 Thread dustin
On Mon, Apr 23, 2007 at 09:25:48AM -0700, Brett Cannon wrote:
> I personally think the style guide should just go and/or redirect to
> PEP 8.  I didn't even know it existed until this email.  And I only
> know of people updating PEP 8.

The top of the style guide basically does that:

  This style guide has been converted to several PEPs (Python
  Enhancement Proposals): PEP 8 for the main text, PEP 257 for docstring
  conventions. See the PEP index.

Perhaps that warning could be strengthened to suggest that the style
guide is outdated and (apparently) unfinished?  Beyond that, I don't see
any contradiction here, and I don't see any reason to spend time
updating the style guide.

Dustin


Re: [Python-Dev] whitespace normalization

2007-04-25 Thread dustin
On Wed, Apr 25, 2007 at 08:40:22AM +0100, Duncan Booth wrote:
> IMHO, changing whitespace retrospectively in a version control system is a 
> bad idea.

In my experience Duncan's assertion is absolutely true, and all the more
so in a project that maintains a large body of pending patches, as
Python does.  Spurious whitespace changes in the repo will generate
patch rejections that will drive both maintainers (as they check in a
patch) and submitters (as they try to refresh patches against head) mad.

I'm very much interested in VC systems *not* driving developers mad!

Dustin


Re: [Python-Dev] 2.5 branch unfrozen

2007-04-25 Thread dustin
On Wed, Apr 25, 2007 at 05:00:42PM -0400, Terry Reedy wrote:
> Does the SVN tracker (presuming there is one) take RFEs? 

  http://subversion.tigris.org/project_issues.html

But I would expect that this proposal will not pass the "buddy system"
(which is a cute idea IMHO), as locking is counter to the Subversion
Way.

Also, you could accomplish what you want (a locked branch) with a
pre-commit hook that just scans for paths in that branch.
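
A hedged sketch of that pre-commit-hook idea; the branch path is a
placeholder, and the svnlook invocation follows the standard hook
interface (repository path and transaction id as arguments):

#!/bin/sh
# Reject any commit whose transaction touches the frozen branch.
FROZEN="branches/release25-maint"

check_txn() {
    # $1 = repository path, $2 = transaction id
    if svnlook changed -t "$2" "$1" | grep -q "$FROZEN/"; then
        echo "branch $FROZEN is frozen; commit rejected" >&2
        return 1
    fi
    return 0
}

# In the real hook this would be:  check_txn "$1" "$2" || exit 1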

Dustin


Re: [Python-Dev] svn logs

2007-05-08 Thread dustin
On Tue, May 08, 2007 at 03:33:22PM +, Kristján Valur Jónsson wrote:
>Does anyone know why getting the SVN logs for a project is so
>excruciatingly slow?
> 
>Is this an inherent SVN problem or are the python.org servers simply
>overloaded?

I believe it's because there are multiple requests required to get the
whole thing, but I don't know the details.  You'll notice that svn
annotate is also really slow.  One thing you can do to help is to
specify a range of revisions you'd like to see.

This has been the case with just about every remote repository I've ever
accessed.

Dustin


Re: [Python-Dev] generators and with

2007-05-13 Thread dustin
On Sun, May 13, 2007 at 04:56:15PM +0200, tomer filiba wrote:
>why not add __enter__ and __exit__ to generator objects?
>it's really a trivial addition: __enter__ returns self, __exit__ calls
>close(). it would be used to ensure close() is called when the generator
>is disposed, instead of doing that manually. typical usage would be:
>
>with mygenerator() as g:
>    g.next()
>    bar = g.send("foo")
>
>-tomer

A better example may help to make your case.  Would this do?

with mygeneratorfn() as g:
    x = get_datum()
    while g.send(x):
        x = get_next(x)

The idea then is that you can't just use a 'for' loop (which will call
close() itself, IIRC) because you want access to the generator itself,
not just the return values from g.next().

I wouldn't have a problem with this proposal, but I consider the snippet
above to be fairly obscure Python already; the requirement to call
g.close() is not a great burden on someone capable of using g.send() et
al.
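
For what it's worth, the guarantee tomer is after can already be spelled
with contextlib.closing() (py3 spelling below; the finally clause stands
in for whatever cleanup close() must trigger):

from contextlib import closing

cleaned = []

def counter():
    try:
        n = 0
        while True:
            sent = yield n
            n += sent if sent else 1
    finally:
        cleaned.append(True)   # runs when close() raises GeneratorExit

with closing(counter()) as g:
    first = next(g)            # initial yield
    second = g.send(5)         # resumes with sent=5
# leaving the with-block called g.close(), running the finally clause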

Dustin


[Python-Dev] bpo-42819 PR review request

2021-02-08 Thread Dustin Rodrigues
Hello,

I submitted https://bugs.python.org/issue42819 a little over a month ago with 
an accompanying PR: https://github.com/python/cpython/pull/24108.

Would it be possible to get feedback on it? This addresses a bug that occurs 
when Python is compiled with the most recent release of GNU Readline and 
affects both Linux and macOS.

Thanks,
Dustin
___
Python-Dev mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/[email protected]/message/5GHQPXP37QMI4SQPJFI7SUBE4HGOOON4/
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-Dev] [Python-ideas] PEP 3156 - Asynchronous IO Support Rebooted

2013-01-04 Thread Dustin Mitchell
As the maintainer of a pretty large, complex app written in Twisted, I 
think this is great.  I look forward to a future of being able to select 
from a broad library of async tools, and being able to write tools that can 
be used outside of Twisted.

Buildbot began, lo these many years ago, doing a lot of things in memory or 
on local disk, neither of which requires asynchronous IO.  So a lot of API 
methods did not originally return Deferreds.  Those methods are then used 
by other methods, many of which also do not return Deferreds.  Now, we want 
to use a database backend, and parallelize some of the operations, meaning 
that the methods need to return a Deferred.  Unfortunately, that requires a 
complete tree traversal of all of the methods and methods that call them, 
rewriting them to take and return Deferreds.  There's no "halfway" 
solution.  This is a little easier with generators (@inlineCallbacks), 
since the syntax doesn't change much, but it's a significant change to the 
API (in fact, this is a large part of the reason for the big rewrite for 
Buildbot-0.9.x).
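
In today's terms the same ripple shows up with coroutines -- a generic
sketch, not Buildbot code (fetch_build is a made-up name standing in for a
method that used to be synchronous):

```python
import asyncio

async def fetch_build(buildid):
    # was once a plain function; now it must be awaited
    await asyncio.sleep(0)  # stands in for a database query
    return {"number": buildid}

async def build_number(buildid):
    # every caller up the tree has to become a coroutine too --
    # the "complete tree traversal" described above
    row = await fetch_build(buildid)
    return row["number"]

print(asyncio.run(build_number(7)))  # -> 7
```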

I bring all this up to say, this PEP will introduce a new "kind" of method 
signature into standard Python, one which the caller must know, and the use 
of which changes the signature of the caller.  That can cause sweeping 
changes, and debugging those changes can be tricky.  Two things can help:

First, `yield from somemeth()` should work fine even if `somemeth` is not a 
coroutine function, and authors of async tools should be encouraged to use 
this form to assist future-compatibility.  Second, `somemeth()` without a 
yield should fail loudly if `somemeth` is a coroutine function.  Otherwise, 
the effects can be pretty confusing.
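
The first point can be demonstrated with plain generators -- a sketch, not
Tulip code:

```python
def maybe_async():
    # today an ordinary generator; might become a coroutine later
    yield "step one"
    yield "step two"
    return "done"

def caller():
    # 'yield from' delegates correctly whether maybe_async is an
    # ordinary generator or a coroutine, so code written this way
    # keeps working if maybe_async later becomes asynchronous.
    result = yield from maybe_async()
    yield result

print(list(caller()))  # -> ['step one', 'step two', 'done']
```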

In http://code.google.com/p/uthreads, I accomplished the latter by taking 
advantage of garbage collection: if the generator is garbage collected 
before it's begun, then it's probably not been yielded.  This is a bit 
gross, but good enough as a debugging technique.
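
The garbage-collection trick can be sketched roughly like this -- a
reconstruction of the idea, not the uthreads code itself:

```python
import warnings

class checked:
    """Wrap a generator; warn if it is collected before being started."""

    def __init__(self, gen):
        self._gen = gen
        self._started = False

    def __iter__(self):
        return self

    def __next__(self):
        self._started = True
        return next(self._gen)

    def __del__(self):
        # if the wrapper dies without ever being iterated, the caller
        # probably forgot to yield (from) it
        if not self._started:
            warnings.warn("generator was never started -- missing yield?")

def coro():
    yield "result"

g = checked(coro())
del g  # never iterated: emits the warning at collection time
```

In CPython the refcount drops to zero immediately on `del`, so the warning
fires right away; on other implementations it may be delayed until a GC pass.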

On the topic of debugging, I also took pains to make sure that tracebacks 
looked reasonable, filtering out scheduler code[1].  I haven't looked 
closely at Tulip to see if that's a problem.  Most of the "noise" in the 
tracebacks came from the lack of 'yield from', so it may not be an issue at 
all.

Dustin

[1] 
http://code.google.com/p/uthreads/source/browse/trunk/uthreads/core.py#253


Re: [Python-Dev] The docs, reloaded

2007-05-19 Thread Dustin J. Mitchell
On Sat, May 19, 2007 at 10:48:29AM -0700, Josiah Carlson wrote:
> I'm generally a curmudgeon when it comes to 'the docs could be done
> better'.  But this?  I like it.  A lot.  Especially if you can get these
> other features in:
> 
> > - a "quick-dispatch" function: e.g., docs.python.org/q?os.path.split would
> >redirect you to the matching location.

Seconded! -- even if it's just for modules, this would be great.  

I can't count the times I've wished I could type e.g.,
'docs.python.org/httplib' the way I can type 'php.net/array_search' to
try to find out whether the needle comes before or after the haystack.
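
A minimal sketch of such a dispatcher, using only the stdlib.  The /q path,
the naive first-component lookup, and the target URL scheme are all
assumptions -- a real version would consult the documentation index (e.g. to
send os.path.split to the os.path page rather than the os page):

```python
from urllib.parse import urlsplit

def dispatch(url):
    """Map a quick-dispatch URL to a hypothetical docs redirect target."""
    name = urlsplit(url).query          # e.g. 'os.path.split'
    module = name.split(".")[0]         # naive: take the first component
    return "http://docs.python.org/library/%s.html" % module

print(dispatch("http://docs.python.org/q?os.path.split"))
# -> http://docs.python.org/library/os.html
```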

Dustin


Re: [Python-Dev] Decoding libpython frame information on the stack

2007-06-28 Thread Dustin J. Mitchell
On Thu, Jun 28, 2007 at 09:41:06AM +0100, Mithun R N wrote:
> I'm a new subscriber to this list.
> I'm facing an issue in deciphering core-files of
> applications with mixed C and libpython frames in it.
> 
> I was thinking of knowing any work that has been done
> with respect to getting into the actual python line
> (file-name.py:) from the libpython frames
> on the stack while debugging such core-files. If
> anybody knows some information on this, please let me
> know. I could not get any link on the web that talks
> about this feature.

Dave Beazley once worked on this subject:

  
http://www.usenix.org/events/usenix01/full_papers/beazley/beazley_html/index.html

Dustin


Re: [Python-Dev] [Python-ideas] PEP 3156 - Asynchronous IO Support Rebooted

2013-01-12 Thread Dustin J. Mitchell
On Wed, Jan 9, 2013 at 12:14 AM, Guido van Rossum  wrote:
> But which half? A socket is two independent streams, one in each
> direction. Twisted uses half_close() for this concept but unless you
> already know what this is for you are left wondering which half. Which
> is why I like using 'write' in the name.

FWIW, "half-closed" is, IMHO, a well-known term.  It's not just a Twisted thing.

Either name is better than "shutdown"!

Dustin