Re: [Python-Dev] release plan for 2.5 ?
On Sat, Feb 11, 2006 at 10:38:10PM -0800, Neal Norwitz wrote:
> On 2/10/06, Georg Brandl <[EMAIL PROTECTED]> wrote:
> > I am not experienced in releasing, but with the multitude of new things
> > introduced in Python 2.5, could it be a good idea to release an early alpha
> > not long after all (most of?) the desired features are in the trunk?
> In the past, all new features had to be in before beta 1 IIRC (it
> could have been beta 2 though). The goal is to get things in sooner,
> preferably prior to alpha.

Well, in the past, features -- even syntax changes -- have gone in between
the last beta and the final release (but reminding Guido might bring him to
tears of regret. ;) Features have also gone into what would have been
'bugfix releases' if you looked at the numbering alone (1.5 -> 1.5.1 ->
1.5.2, for instance.) "The past" doesn't have a very impressive track
record... However, beta 1 is a very good ultimate deadline, and it's been
stuck to for the last few years, AFAIK.

But I concur with:

> For 2.5, we should strive really hard to get features implemented
> prior to alpha 1. Some of the changes (AST, ssize_t) are pervasive.
> AST, while localized, ripped the guts out of something every script
> needs (more or less). ssize_t touches just about everything, it seems.

that as many features as possible, in particular the broad-touching ones,
should be in alpha 1.

--
Thomas Wouters <[EMAIL PROTECTED]>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] ssize_t branch (Was: release plan for 2.5 ?)
Neal Norwitz wrote:
> I'm tempted to say we should merge now. I know the branch works on
> 64-bit boxes. I can test on a 32-bit box if Martin hasn't already.
> There will be a lot of churn fixing problems, but maybe we can get
> more people involved.

The ssize_t branch now has all the API I want it to have. I just posted
the PEP to comp.lang.python; maybe people have additional things they
consider absolutely necessary.

There are two aspects left, and both can be done after the merge:
- a lot of modules still need adjustments to really support 64-bit
  collections. This shouldn't cause any API changes, AFAICT.
- the printing of Py_ssize_t values should be supported. I think Tim
  proposed to provide the 'z' formatter across platforms. This is a new
  API, but it's a pure extension, so it can be done in the trunk.

I would like to avoid changing APIs after the merge to the trunk has
happened; I remember Guido saying (a few years ago) that this change must
be a single large change, rather than many small incremental changes. I
agree, and I hope I have covered everything that needs to be covered.

Regards,
Martin
[Python-Dev] nice()
I've been thinking about a function that was recently proposed at python-dev
named 'areclose'. It is a function that is meant to tell whether two (or
possibly more) numbers are close to each other. It is a function similar to
one that exists in Numeric. One such implementation is

def areclose(x,y,abs_tol=1e-8,rel_tol=1e-5):
    diff = abs(x-y)
    return diff <= abs_tol or diff <= rel_tol*max(abs(x),abs(y))

(This is the form given by Scott Daniels on python-dev.)

Anyway, one of the rationales for including such a function was:

When teaching some programming to total newbies, a common frustration
is how to explain why a==b is False when a and b are floats computed
by different routes which ``should'' give the same results (if
arithmetic had infinite precision). Decimals can help, but another
approach I've found useful is embodied in Numeric.allclose(a,b) --
which returns True if all items of the arrays are ``close'' (equal to
within certain absolute and relative tolerances)

The problem with the above function, however, is that it *itself* has a
comparison between floats and it will give an undesired result for
something like the following test:

###
>>> print areclose(2, 2.1, .1, 0) #see if 2 and 2.1 are within 0.1 of each other
False
>>>
###

Here is an alternative that might be a nice companion to the repr() and
round() functions: nice(). It is a combination of Tim Peters's delightful
'case closed' presentation in the thread, "Rounding to n significant
digits?" [1] and the hidden magic of print's simplification of floating
point numbers when asked to show them.

Its default behavior is to return a number in the form that the number
would have when being printed. An optional argument, however, allows the
user to specify the number of digits to round the number to as counted from
the most significant digit. (An alternative name, then, could be 'lround',
but I think there is less baggage for the new user to think about if the
name is something like nice()--a function that makes floating point
numbers "play nice." And I also think the name...sounds nice.)

Here it is in action:

###
>>> 3*1.1==3.3
False
>>> nice(3*1.1)==nice(3.3)
True
>>> x=3.21/0.65; print x
4.93846153846
>>> print nice(x,2)
4.9
>>> x=x*1e5; print nice(x,2)
490000.0
###

Here's the function:

###
def nice(x,leadingDigits=0):
    """Return x either as 'print' would show it (the default) or rounded
    to the specified digit as counted from the leftmost non-zero digit of
    the number, e.g. nice(0.00326,2) --> 0.0033"""
    assert leadingDigits>=0
    if leadingDigits==0:
        return float(str(x)) #just give it back like 'print' would give it
    leadingDigits=int(leadingDigits)
    return float('%.*e' % (leadingDigits-1,x)) #%e keeps precision+1 significant digits
###

Might something like this be useful? For new users, no arguments are needed
other than x, and floating point numbers suddenly seem to behave in tests
made using nice() values. It's also useful for those doing computations who
want to show a physically meaningful value that has been rounded to the
appropriate digit as counted from the most significant digit rather than
from the decimal point.

Some time back I had worked on the significant digit problem and had
several math calls to figure out what the exponent was. The beauty of
Tim's solution is that you just use built-in string formatting to do the
work. Nice.
/c
[1] http://mail.python.org/pipermail/tutor/2004-July/030324.html
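[Ed.: for readers trying this today, here is a sketch of the function in
modern Python 3. Two caveats: str() in Python 3 already produces the
shortest exact repr, so the no-argument form no longer shortens numbers the
way Python 2's print did; and the '%e' precision counts digits *after* the
point, hence the -1 so that the argument counts total significant digits as
the examples assume.]

```python
def nice(x, leading_digits=0):
    """Round x to leading_digits significant digits.

    With leading_digits == 0, fall back to float(str(x)); on Python 3
    str() is already the shortest exact repr, so this default no longer
    simplifies numbers the way Python 2's print did.
    """
    assert leading_digits >= 0
    if leading_digits == 0:
        return float(str(x))
    # '%.*e' keeps (precision + 1) significant digits, hence the -1
    return float('%.*e' % (int(leading_digits) - 1, x))

assert (3 * 1.1 == 3.3) is False          # the newbie surprise
assert nice(3 * 1.1, 2) == nice(3.3, 2)   # equal after rounding
assert nice(0.00326, 2) == 0.0033         # the docstring example
assert nice(3.21 / 0.65, 2) == 4.9        # two significant digits
```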
Re: [Python-Dev] release plan for 2.5 ?
"Phillip J. Eby" <[EMAIL PROTECTED]> writes:
> At 12:21 PM 2/10/2006 -0800, Guido van Rossum wrote:
>> >PEP 343: The "with" Statement
>>
>> Didn't Michael Hudson have a patch?
>
> PEP 343's "Accepted" status was reverted to "Draft" in October, and then
> changed back to "Accepted". I believe the latter change is an error, since
> you haven't pronounced on the changes. Have you reviewed the __context__
> stuff that was added?
>
> In any case Michael's patch was pre-AST branch merge, and no longer
> reflects the current spec.

It also never quite reflected the spec at the time, although I forget the
detail it didn't support :/

Cheers,
mwh

--
81. In computing, turning the obvious into the useful is a living
definition of the word "frustration".
-- Alan Perlis, http://www.cs.yale.edu/homes/perlis-alan/quotes.html
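[Ed.: for context, a minimal sketch of the protocol PEP 343 ended up
specifying, in modern Python; the __context__ step being debated here did
not survive into the final protocol, which uses only __enter__/__exit__.]

```python
class Tracing:
    """Records enter/exit events so the with-statement protocol is visible."""
    def __init__(self):
        self.events = []

    def __enter__(self):
        self.events.append("enter")
        return self

    def __exit__(self, exc_type, exc, tb):
        self.events.append("exit")
        return False  # don't swallow exceptions

t = Tracing()
with t:
    t.events.append("body")
assert t.events == ["enter", "body", "exit"]
```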
[Python-Dev] Fwd: Ruby/Python Continuations: Turning a block callback into a read()-method ?
Fwd: news:<[EMAIL PROTECTED]>
After failing on a yield/iterator-continuation problem in Python (see
below) I tried the Ruby (1.8.2) language for the first time on that
construct: The example tries to convert a block callback interface
(Net::FTP.retrbinary) into a read()-like iterator function, in order to
virtualize the existing FTP class as a kind of file system. 4 bytes max
per read in this first simple test below. But it fails on the second
continuation with a ThreadError - even though this second continuation
really executes!? Any ideas how to make this work/correct?
(The question is not about the specific FTP example as such - e.g. about a
rewrite of FTP/retrbinary, or use of OS tricks, real threads with polling,
etc. - but about the continuation language trick needed to get the
execution flow right in order to turn any callback interface into an
"enslaved callable iterator". Python can do such things in simple
situations with yield-generator functions/iter.next()... But Python
obviously fails by a hair when there is a function-context barrier for
"yield". Ruby's block-yield mechanism seems not to have the power of real
generator-continuations as in Python at all, but in principle only to be
what a normal callback would be in Python. Yet "callcc" seemed to be
promising - or so I thought :-( )
=== Ruby callcc Pattern : execution fails with ThreadError!? ===
require 'net/ftp'

module Net
  class FTPFile
    def initialize(ftp,path)
      @ftp = ftp
      @path=path
      @flag=true
      @iter=nil
    end
    def read
      if @iter
        puts "@iter.call"
        @iter.call
      else
        puts "RETR "+@path
        @ftp.retrbinary("RETR "+@path,4) do |block|
          print "CALLBACK ",block,"\n"
          callcc{|@iter| @flag=true}
          if @flag
            @flag=false
            return block
          end
        end
      end
    end
  end
end
ftp = Net::FTP.new("localhost",'user','pass')
ff = Net::FTPFile.new(ftp,'data.txt')
puts ff.read()
puts ff.read()
=== Output/Error
vs:~/test$ ruby ftpfile.rb
RETR data.txt
CALLBACK robe
robe
@iter.call
CALLBACK rt
/usr/lib/ruby/1.8/monitor.rb:259:in `mon_check_owner': current thread
not owner (ThreadError)
from /usr/lib/ruby/1.8/monitor.rb:211:in `mon_exit'
from /usr/lib/ruby/1.8/monitor.rb:231:in `synchronize'
from /usr/lib/ruby/1.8/net/ftp.rb:399:in `retrbinary'
from ftpfile.rb:17:in `read'
from ftpfile.rb:33
vs:~/test$
=== Python Pattern : I cannot write down the idea because of a barrier ===
I tried a pattern like:
def open(self,ftppath,mode='rb'):
    class FTPFile:
        ...
        def iter_retr():
            ...
            def callback(blk):
                how-to-yield-from-here-as-iter_retr blk???
            self.ftp.retrbinary("RETR %s" % self.relpath,callback)
        def read(self, bytes=-1):
            ...
            self.buf+=self.iter.next()
            ...
    ...
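[Ed.: one way past that barrier in Python, without continuations, is to
run the callback-driven producer on a worker thread and pull its chunks
through a queue. This is a generic sketch (hypothetical `callback_to_iter`
helper and toy producer, not the ftplib API from the message):]

```python
import queue
import threading

def callback_to_iter(register, sentinel=None):
    """Drive a callback-style producer on a worker thread and expose
    its chunks as an iterator."""
    q = queue.Queue()

    def run():
        register(q.put)   # the producer calls us back once per chunk
        q.put(sentinel)   # signal end of data

    threading.Thread(target=run, daemon=True).start()
    while True:
        item = q.get()
        if item is sentinel:
            return
        yield item

# toy producer standing in for something like Net::FTP.retrbinary
def produce(cb):
    for chunk in (b"robe", b"rt"):
        cb(chunk)

assert list(callback_to_iter(produce)) == [b"robe", b"rt"]
```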
=
Robert
Re: [Python-Dev] [Tutor] nice()
I have no particularly strong view on the concept (except that I usually
see the "problem" as a valuable opportunity to introduce a concept
that has far wider reaching consequences than floating point
numbers!).
However I do dislike the name nice() - there is already a nice() in the
os module with a fairly well understood function. But I'm sure some
time with a thesaurus can overcome that single mild objection. :-)
Alan G
Author of the learn to program web tutor
http://www.freenetpages.co.uk/hp/alan.gauld
- Original Message -
From: "Smith" <[EMAIL PROTECTED]>
To:
Cc: ;
Sent: Sunday, February 12, 2006 6:44 PM
Subject: [Tutor] nice()
I've been thinking about a function that was recently proposed at python-dev
named 'areclose'. It is a function that is meant to tell whether two (or
possible more) numbers are close to each other. It is a function similar to
one that exists in Numeric. One such implementation is
def areclose(x,y,abs_tol=1e-8,rel_tol=1e-5):
diff = abs(x-y)
return diff <= abs_tol or diff <= rel_tol*max(abs(x),abs(y))
(This is the form given by Scott Daniels on python-dev.)
Anyway, one of the rationales for including such a function was:
When teaching some programming to total newbies, a common frustration
is how to explain why a==b is False when a and b are floats computed
by different routes which ``should'' give the same results (if
arithmetic had infinite precision). Decimals can help, but another
approach I've found useful is embodied in Numeric.allclose(a,b) --
which returns True if all items of the arrays are ``close'' (equal to
within certain absolute and relative tolerances)
The problem with the above function, however, is that it *itself* has a
comparison between floats and it will give undesired result for something
like the following test:
###
>>> print areclose(2, 2.1, .1, 0) #see if 2 and 2.1 are within 0.1 of each other
False
>>>
###
Here is an alternative that might be a nice companion to the repr() and
round() functions: nice(). It is a combination of Tim Peters's delightful
'case closed' presentation in the thread, "Rounding to n significant
digits?" [1] and the hidden magic of print's simplification of floating
point numbers when asked to show them.
Its default behavior is to return a number in the form that the number
would have when being printed. An optional argument, however, allows the
user to specify the number of digits to round the number to as counted from
the most significant digit. (An alternative name, then, could be 'lround'
but I think there is less baggage for the new user to think about if the
name is something like nice()--a function that makes the floating point
numbers "play nice." And I also think the name...sounds nice.)
Here it is in action:
###
>>> 3*1.1==3.3
False
>>> nice(3*1.1)==nice(3.3)
True
>>> x=3.21/0.65; print x
4.93846153846
>>> print nice(x,2)
4.9
>>> x=x*1e5; print nice(x,2)
490000.0
###
Here's the function:
###
def nice(x,leadingDigits=0):
"""Return x either as 'print' would show it (the default) or rounded to the
specified digit as counted from the leftmost non-zero digit of the number,
e.g. nice(0.00326,2) --> 0.0033"""
assert leadingDigits>=0
if leadingDigits==0:
return float(str(x)) #just give it back like 'print' would give it
leadingDigits=int(leadingDigits)
return float('%.*e' % (leadingDigits-1,x)) #rounded by the %e format, which keeps precision+1 significant digits
###
Might something like this be useful? For new users, no arguments are needed
other than x and floating points suddenly seem to behave in tests made using
nice() values. It's also useful for those doing computations who want to show a
physically meaningful value that has been rounded to the appropriate digit
as counted from the most significant digit rather than from the decimal
point.
Some time back I had worked on the significant digit problem and had several
math calls to figure out what the exponent was. The beauty of Tim's solution
is that you just use built in string formatting to do the work. Nice.
/c
[1] http://mail.python.org/pipermail/tutor/2004-July/030324.html
Re: [Python-Dev] [Tutor] nice()
"Alan Gauld" <[EMAIL PROTECTED]> wrote:
> However I do dislike the name nice() - there is already a nice() in the
> os module with a fairly well understood function. But I'm sure some
> time with a thesaurus can overcome that single mild objection. :-)
Presumably it would be located somewhere like the math module.
- Josiah
> Alan G
> Author of the learn to program web tutor
> http://www.freenetpages.co.uk/hp/alan.gauld
>
>
>
> - Original Message -
> From: "Smith" <[EMAIL PROTECTED]>
> To:
> Cc: ;
> Sent: Sunday, February 12, 2006 6:44 PM
> Subject: [Tutor] nice()
>
>
> I've been thinking about a function that was recently proposed at python-dev
> named 'areclose'. It is a function that is meant to tell whether two (or
> possible more) numbers are close to each other. It is a function similar to
> one that exists in Numeric. One such implementation is
>
> def areclose(x,y,abs_tol=1e-8,rel_tol=1e-5):
> diff = abs(x-y)
> return diff <= abs_tol or diff <= rel_tol*max(abs(x),abs(y))
>
> (This is the form given by Scott Daniels on python-dev.)
>
> Anyway, one of the rationales for including such a function was:
>
> When teaching some programming to total newbies, a common frustration
> is how to explain why a==b is False when a and b are floats computed
> by different routes which ``should'' give the same results (if
> arithmetic had infinite precision). Decimals can help, but another
> approach I've found useful is embodied in Numeric.allclose(a,b) --
> which returns True if all items of the arrays are ``close'' (equal to
> within certain absolute and relative tolerances)
> The problem with the above function, however, is that it *itself* has a
> comparison between floats and it will give undesired result for something
> like the following test:
>
> ###
> >>> print areclose(2, 2.1, .1, 0) #see if 2 and 2.1 are within 0.1 of each other
> False
> >>>
> ###
>
> Here is an alternative that might be a nice companion to the repr() and
> round() functions: nice(). It is a combination of Tim Peters's delightful
> 'case closed' presentation in the thread, "Rounding to n significant
> digits?" [1] and the hidden magic of print's simplification of floating
> point numbers when asked to show them.
>
> Its default behavior is to return a number in the form that the number
> would have when being printed. An optional argument, however, allows the
> user to specify the number of digits to round the number to as counted from
> the most significant digit. (An alternative name, then, could be 'lround'
> but I think there is less baggage for the new user to think about if the
> name is something like nice()--a function that makes the floating point
> numbers "play nice." And I also think the name...sounds nice.)
>
> Here it is in action:
>
> ###
> >>> 3*1.1==3.3
> False
> >>> nice(3*1.1)==nice(3.3)
> True
> >>> x=3.21/0.65; print x
> 4.93846153846
> >>> print nice(x,2)
> 4.9
> >>> x=x*1e5; print nice(x,2)
> 490000.0
> ###
>
> Here's the function:
> ###
> def nice(x,leadingDigits=0):
> """Return x either as 'print' would show it (the default) or rounded to the
> specified digit as counted from the leftmost non-zero digit of the number,
>
> e.g. nice(0.00326,2) --> 0.0033"""
> assert leadingDigits>=0
> if leadingDigits==0:
> return float(str(x)) #just give it back like 'print' would give it
> leadingDigits=int(leadingDigits)
> return float('%.*e' % (leadingDigits-1,x)) #rounded by the %e format, which keeps precision+1 significant digits
> ###
>
> Might something like this be useful? For new users, no arguments are needed
> other than x and floating points suddenly seem to behave in tests made using
> nice() values. It's also useful for those doing computations who want to show a
> physically meaningful value that has been rounded to the appropriate digit
> as counted from the most significant digit rather than from the decimal
> point.
>
> Some time back I had worked on the significant digit problem and had several
> math calls to figure out what the exponent was. The beauty of Tim's solution
> is that you just use built in string formatting to do the work. Nice.
>
> /c
>
> [1] http://mail.python.org/pipermail/tutor/2004-July/030324.html
>
[Python-Dev] PEP 343: Context managers a superset of decorators?
Forgive me if someone has already come up with this; I know I am coming to
the party several months late. All of the proposals for decorators
(including the accepted one) seemed a bit kludgey to me, and I couldn't
figure out why. When I read PEP 343, I realized that they all provide a
solution for an edge case without addressing the larger problem. If
context managers are provided access to the contained and containing
namespaces of their with statement, they can perform the same function
that decorators do now. A transforming class could be implemented as:

## Code Start -
class DecoratorContext(object):
    def __init__(self, func):
        self.func = func
    def __context__(self):
        return self
    def __enter__(self, contained, containing):
        pass
    def __exit__(self, contained, containing):
        for k,v in contained.iteritems():
            containing[k] = self.func(v)
## Code End ---

With this in place, decorators can be used with the with statement:

## Code Start -
classmethod = DecoratorContext(classmethod)

class foo:
    def __init__(self, ...):
        pass
    with classmethod:
        def method1(cls, ...):
            pass
        def method2(cls, ...):
            pass
## Code End ---

The extra level of indentation could be avoided by dealing with multiple
block-starting statements on a line by stating that all except the last
block contain only one statement:

## Code Start -
classmethod = DecoratorContext(classmethod)

class foo:
    def __init__(self, ...): pass
    with classmethod: def method1(cls, ...): pass
    with classmethod: def method2(cls, ...): pass
## Code End ---

I will readily admit that I have no idea how difficult either of these
suggestions would be to implement, or if it would be a good idea to do so.
At this point, they are just something to think about.

-- Eric Sumner
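[Ed.: the namespace transformation the proposed DecoratorContext performs
can be approximated today with an explicit helper. `transform_methods` is
a hypothetical name, not part of PEP 343; it simply rebinds the named
class attributes through the transform, which is what the proposed 'with'
block would do implicitly:]

```python
def transform_methods(transform, cls, names):
    """Apply transform to the named attributes of cls and rebind the
    results -- the explicit equivalent of the proposed 'with' block."""
    for name in names:
        setattr(cls, name, transform(getattr(cls, name)))

class Foo:
    def method1(cls):
        return cls.__name__

transform_methods(classmethod, Foo, ["method1"])
assert Foo.method1() == "Foo"  # now callable on the class itself
```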
Re: [Python-Dev] PEP 343: Context managers a superset of decorators?
Eric Sumner <[EMAIL PROTECTED]> wrote:
> Forgive me if someone has already come up with this; I know I am
> coming to the party several months late. All of the proposals for
> decorators (including the accepted one) seemed a bit kludgey to me,
> and I couldn't figure out why. When I read PEP 343, I realized that
> they all provide a solution for an edge case without addressing the
> larger problem.

[snip code samples]

> I will readily admit that I have no idea how difficult either of these
> suggestions would be to implement, or if it would be a good idea to do
> so. At this point, they are just something to think about

Re-read the decorator PEP, http://www.python.org/peps/pep-0318.html, to
understand why both of these options (indentation and prefix notation) are
undesirable for a general decorator syntax. The desire for context
managers to have access to their enclosing scope is another discussion
entirely, though it may do so without express permission via stack frame
manipulation.

- Josiah
Re: [Python-Dev] Fwd: Ruby/Python Continuations: Turning a block callback into a read()-method ?
Robert wrote:
> Any ideas how to make this work/correct?

Why is that a question for python-dev?

Regards,
Martin
Re: [Python-Dev] Pervasive socket failures on Windows
Neal Norwitz wrote:
> On 2/11/06, Tim Peters <[EMAIL PROTECTED]> wrote:
>
>>> [Tim telling how I broke python]
>>
>> [Martin fixing it]
>
> Sorry for the breakage (I didn't know about the Windows issues).
> Thank you Martin for fixing it. I agree with the solution.
>
> I was away from mail, ahem, "working".

yeah, right, at your off-site boondoggle south of the border. we know.

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC www.holdenweb.com
PyCon TX 2006 www.python.org/pycon/
Re: [Python-Dev] Baffled by PyArg_ParseTupleAndKeywords modification
Martin v. Löwis wrote:
> then, in C++, 4.4p4 [conv.qual] has a rather longish formula to
> decide that the assignment is well-formed. In essence, it goes
> like this:
>
> [A large head-exploding set of rules]

Blarg. Const - Just Say No.

Greg
Re: [Python-Dev] nice()
Smith wrote:
> When teaching some programming to total newbies, a common frustration
> is how to explain why a==b is False when a and b are floats computed
> by different routes which ``should'' give the same results (if
> arithmetic had infinite precision).

This is just a special case of the problems inherent in the use of
floating point. As with all of these, papering over this particular one
isn't going to help in the long run -- another one will pop up in due
course.

Seems to me it's better to educate said newbies not to use algorithms
that require comparing floats for equality at all. In my opinion, if you
ever find yourself trying to do this, you're not thinking about the
problem correctly, and your algorithm is simply wrong, even if you had
infinitely precise floats.

Greg
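[Ed.: tolerance-based comparison of the sort areclose sketches later
landed in the standard library as math.isclose() in Python 3.5 (PEP 485):]

```python
import math

# The classic surprise that motivates the whole thread...
assert (0.1 + 0.2 == 0.3) is False
# ...and the tolerance-based check that replaces naive equality.
assert math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-9)
# Pure absolute tolerance, as in the areclose(2, 2.1, .1, 0) example.
assert not math.isclose(2.0, 2.1, rel_tol=0, abs_tol=0.05)
```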
Re: [Python-Dev] PEP 351
Bengt Richter wrote:
> Anyhow, why shouldn't you be able to call freeze(an_ordinary_list) and
> get back freeze(xlist(an_ordinary_list)) automatically, based e.g. on a
> freeze_registry_dict[type(an_ordinary_list)] => xlist lookup, if plain
> hash fails?

[Cue: sound of loud alarm bells going off in Greg's head]

-1 on having any kind of global freezing registry. If we need freezing at
all, I think it would be quite sufficient to have a few types around such
as frozenlist(), frozendict(), etc.

I would consider it almost axiomatic that code needing to freeze
something will know what type of thing it is freezing. If it doesn't, it
has no business attempting to do so. If you need to freeze something not
covered by the standard frozen types, write your own class or function to
handle it, and invoke it explicitly where appropriate.

Greg
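[Ed.: the explicit approach Greg describes is what the built-in "frozen"
types already embody -- the caller picks the frozen type, and no registry
is involved:]

```python
# tuple() acts as a frozen list and frozenset() as a frozen set;
# both are hashable, so both can serve as dict keys.
mutable_list = [1, 2, 3]
key = tuple(mutable_list)
tags = frozenset({"a", "b"})

d = {key: "list-as-key", tags: "set-as-key"}
assert d[(1, 2, 3)] == "list-as-key"
assert d[frozenset({"b", "a"})] == "set-as-key"  # order-insensitive
```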
