Re: [Python-Dev] When do sets shrink?

2005-12-31 Thread Noam Raphael
Hello,

I thought of another reason to resize the hash table when it has
too few elements. It's not only a matter of memory usage; it's also a
matter of time: iteration over a set or a dict requires going
over the whole table. For example, iteration over a set which once had
1,000,000 members and now has 2 can take 1,000,000 operations every
time you traverse all the (2) elements.
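
For example, something along these lines shows the effect (an
illustrative sketch, not a benchmark; exact timings depend on the
machine):

    import time

    s = set(xrange(1000000))   # the internal table grows to hold 1,000,000 entries
    while len(s) > 2:
        s.pop()                # entries are removed, but the table keeps its size

    t = time.time()
    for x in s:                # still scans ~1,000,000 slots to find the 2 items
        pass
    print time.time() - t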

Apologies:
1. It may be trivial to you - I'm sorry, I thought about it just now.
2. You can, of course, still do whatever tradeoff you like.

Noam
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Doc-SIG] that library reference, again

2005-12-31 Thread Laura Creighton
In a message of Sat, 31 Dec 2005 15:41:50 +1000, Nick Coghlan writes:
>Ian Bicking wrote:
>> Anyway, another even more expedient option would be setting up a 
>> separate bug tracker (something simpler to submit to than SF) and 
>> putting a link on the bottom of every page, maybe like: 
>> http://trac.python.org/trac/newticket?summary=re:+/path/to/doc&component=docs 
>> -- heck, we all know SF bug tracking sucks, this is a good chance to 
>> experiment with a different tracker, and documentation has softer 
>> requirements than other parts of Python.
>
>While I quite like this idea, would it make it more difficult when the bug 
>tracking for the main source code is eventually migrated off SF? And what 
>would happen to existing documentation bug reports/patches on the SF trackers?
>
>Is it possible to do something similar for the online version of the current 
>docs, simply pointing them at the SF tracker? (I know this doesn't help people 
>without an SF account. . .)
>
>Cheers,
>Nick.

Not if the problem is that documentation changes are not 'patches' and
'bugs', and the sourceforge bug tracker, which isn't even particularly
good at tracking bugs, is particularly ill-suited for the collaborative
sharing of documents.

Laura

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] When do sets shrink?

2005-12-31 Thread Raymond Hettinger
[Noam]
> For example, iteration over a set which once had
> 1,000,000 members and now has 2 can take 1,000,000 operations every
> time you traverse all the (2) elements.

Do you find that to be a common or plausible use case?

Was Guido's suggestion of s=set(s) unworkable for some reason?  dicts
and sets emphasize fast lookups over fast iteration -- apps requiring
many iterations over a collection may be better off converting to a list
(which has no dummy entries or empty gaps between entries).
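
Both suggestions are one-liners (sketch, for an existing set s; the
rebuild is a one-time O(len(s)) cost):

    s = set(s)       # rebuilds the hash table at a size proportional to len(s)
    items = list(s)  # or: dense storage with no dummy entries, for pure iteration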

Would the case be improved by incurring the time cost of 999,998 tests
for possible resizing (one for each pop) and some non-trivial number of
resize operations along the way (each requiring a full-iteration over
the then current size)?

Even if this unique case could be improved, what is the impact on common
cases?  Would a downsizing scheme risk thrashing with the
over-allocation scheme in cases with mixed adds and pops?

Is there any new information/research beyond what has been obvious from
the moment the dict resizing scheme was born?



Raymond

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [Doc-SIG] that library reference, again

2005-12-31 Thread Ian Bicking
Nick Coghlan wrote:
>>Anyway, another even more expedient option would be setting up a 
>>separate bug tracker (something simpler to submit to than SF) and 
>>putting a link on the bottom of every page, maybe like: 
>>http://trac.python.org/trac/newticket?summary=re:+/path/to/doc&component=docs 
>>-- heck, we all know SF bug tracking sucks, this is a good chance to 
>>experiment with a different tracker, and documentation has softer 
>>requirements than other parts of Python.
> 
> 
> While I quite like this idea, would it make it more difficult when the bug 
> tracking for the main source code is eventually migrated off SF? And what 
> would happen to existing documentation bug reports/patches on the SF trackers?

I think the requirements for documentation are a bit lighter, so it's 
not as big a deal.  E.g., the history of documentation bug reports 
isn't as important, whether that's the ones from SF or, if all of 
Python moves to a new system, the history of the transitional system. 
Documentation is mostly self-describing.

> Is it possible to do something similar for the online version of the current 
> docs, simply pointing them at the SF tracker? (I know this doesn't help 
> people 
> without an SF account. . .)

Perhaps; I haven't played with the SF interface at all, so I don't know 
if you can prefill fields.  But it's still a pain, since logging into SF 
isn't seamless (you don't get redirected back to where you started 
from).  Also, I don't know if the requirements for documentation match 
those for code generally.  Being able to follow up on documentation bugs 
isn't as important, so if you don't always collect the submitter's email 
address it's not that big a deal.  Doc maintainers may be willing to 
filter through a bit more spam if it means that they get more 
submissions, and so forth.  The review process probably isn't as 
important.  So I think it could be argued that code and documentation 
shouldn't even be on the same tracker.  (I'm not really arguing that, 
but at least it doesn't seem like as big a deal if they aren't on the 
same system.)

-- 
Ian Bicking  |  [EMAIL PROTECTED]  |  http://blog.ianbicking.org
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] When do sets shrink?

2005-12-31 Thread Noam Raphael
On 12/31/05, Raymond Hettinger <[EMAIL PROTECTED]> wrote:
> [Noam]
> > For example, iteration over a set which once had
> > 1,000,000 members and now has 2 can take 1,000,000 operations every
> > time you traverse all the (2) elements.
>
> Do you find that to be a common or plausible use case?

I don't have a concrete example in this minute, but a set which is
repeatedly filled with elements and then emptied by pop operations
doesn't seem to me that far-fetched.
>
> Was Guido's suggestion of s=set(s) unworkable for some reason?  dicts
> and sets emphasize fast lookups over fast iteration -- apps requiring
> many iterations over a collection may be better off converting to a list
> (which has no dummy entries or empty gaps between entries).

It's workable, but I think that most Python programmers haven't read
that specific message, and are expecting operations which should take
a short time to take a short time. Converting to a list won't help the
use-case above, and anyway, it's another thing that I wouldn't expect
anyone to do - there's no reason that iteration over a set should take
a long time.

(I'm speaking of my point of view, which I believe is common. I don't
expect programs I write in Python to be super-fast - if that were the
case, I would write them in C. I do expect them to take a reasonable
amount of time, and in the case of iteration over a set, that means a
time proportional to the number of elements in the set.)
>
> Would the case be improved by incurring the time cost of 999,998 tests
> for possible resizing (one for each pop) and some non-trivial number of
> resize operations along the way (each requiring a full-iteration over
> the then current size)?
>
I believe it would. It seems to me that those 999,998 tests take not
much more than a machine clock each, which adds up to about 1 millisecond
on today's computers. Those resize operations will take some more
milliseconds. It all doesn't really matter, since probably all other
things will take much more. I just ran this code

>>> s = set()
>>> for j in xrange(1000000):
...     s.add(j)
...
>>> while s:
...     tmp = s.pop()
...

And it took 2.4 seconds to finish. And it's okay - I'm just saying
that a few additional clock ticks per operation will usually not
matter when the overall complexity is the same, but changes in order
of complexity can matter much more.

> Even if this unique case could be improved, what is the impact on common
> cases?  Would a downsizing scheme risk thrashing with the
> over-allocation scheme in cases with mixed adds and pops?
>
I think that there shouldn't be additional damage beyond those clock
ticks. The simple method I took from "Introduction to Algorithms"
works no matter what sequence of adds and pops you have.
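
(For reference, the rule is roughly: grow the table when it passes a
fullness threshold, and shrink it only after it drops below a quarter
full, so every resize is paid for by O(size) cheap operations before it.
A toy sketch of that invariant, tracking only a capacity number -- this
is not CPython's actual dict/set code:)

    class ShrinkingBag:
        def __init__(self):
            self.capacity = 8
            self.items = set()
        def add(self, x):
            self.items.add(x)
            if 2 * len(self.items) > self.capacity:
                self.capacity *= 2    # grow when more than half full
        def pop(self):
            x = self.items.pop()
            if 4 * len(self.items) < self.capacity and self.capacity > 8:
                self.capacity //= 2   # shrink only when under a quarter full
            return x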

> Is there any new information/research beyond what has been obvious from
> the moment the dict resizing scheme was born?

I wanted to say that there isn't any new information, and yet I don't
think that I have to assume that everything in current Python is the
best that can be. All I did was find another reason why a
downsizing scheme might be good, and post it to ask if people have
thought about it. If you have a document listing all the design
decisions that went into dict implementation, then please send it to
me and I won't ask about things that were already thought about.

But the answer is, yes. I believe that the current dict resizing
scheme was born before the iterator protocol was introduced, and it
may be a reason why the current scheme doesn't try to minimize the
number of empty hash table entries.

Noam
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] When do sets shrink?

2005-12-31 Thread Fernando Perez
Raymond Hettinger wrote:

> Was Guido's suggestion of s=set(s) unworkable for some reason?  dicts
> and sets emphasize fast lookups over fast iteration -- apps requiring
> many iterations over a collection may be better off converting to a list
> (which has no dummy entries or empty gaps between entries).
> 
> Would the case be improved by incurring the time cost of 999,998 tests
> for possible resizing (one for each pop) and some non-trivial number of
> resize operations along the way (each requiring a full-iteration over
> the then current size)?

Note that this is not a comment on the current discussion per se, but rather a
small request/idea in the docs department: I think it would be a really useful
thing to have a summary page/table indicating the complexities of the various
operations on all the builtin types, including at least _mention_ of subtleties
and semi-gotchas.

Python is growing in popularity, and it is being used for more and more
demanding tasks all the time.  Such a 'complexity overview' of the language's
performance would, I think, be very valuable to many.   I know that much of
this information is available, but I'm talking about a specific summary, which
also discusses things like Noam's issue.  

For example, I had never realized that on dicts, for some O(N) operations, N
would mean "largest N in the dict's history" instead of "current number of
elements".  While I'm not arguing for any changes, I think it's good to _know_
this, so I can plan for it if I am ever in a situation where it may be a
problem.

Just my 1e-2.

And Happy New Year to the python-dev team, with many thanks for all your
fantastic work on making the most pleasant, useful programming language out
there.

Cheers,

f

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] When do sets shrink?

2005-12-31 Thread Josiah Carlson

Noam Raphael <[EMAIL PROTECTED]> wrote:
> 
> On 12/31/05, Raymond Hettinger <[EMAIL PROTECTED]> wrote:
> > [Noam]
> > > For example, iteration over a set which once had
> > > 1,000,000 members and now has 2 can take 1,000,000 operations every
> > > time you traverse all the (2) elements.
> >
> > Do you find that to be a common or plausible use case?
> 
> I don't have a concrete example in this minute, but a set which is
> repeatedly filled with elements and then emptied by pop operations
> doesn't seem to me that far-fetched.

It doesn't seem far-fetched, but I've not seen anything like it.  List
appending and popping, yeah, set differences and intersections and
unions, yeah, but not set insertion then removal for large numbers of
items.

Note that you provide insertion into a set then repeated popping as an
example, which is done faster by other methods.
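
For instance, when the order of removal doesn't matter, a plain list
used as a stack does the fill-then-empty job with no hash table at all
(sketch):

    pending = []
    for j in xrange(1000000):
        pending.append(j)    # appending over-allocates only mildly, no hashing
    while pending:
        tmp = pending.pop()  # and lists, unlike sets, do shrink as they empty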

> (I'm speaking of my point of view, which I believe is common. I don't
> expect programs I write in Python to be super-fast - if that were the
> case, I would write them in C. I do expect them to take a reasonable
> amount of time, and in the case of iteration over a set, that means a
> time proportional to the number of elements in the set.)

That is a reasonable point of view.  But realize that depending on the
shrinking strategy, popping/deletion will take amortized 2+ times
longer than it does now, and the benefits are basically limited to:
memory can be freed to the operating system, and repeated iteration
over a resized-smaller dictionary can be faster.

> > Would the case be improved by incurring the time cost of 999,998 tests
> > for possible resizing (one for each pop) and some non-trivial number of
> > resize operations along the way (each requiring a full-iteration over
> > the then current size)?
> >
> I believe it would. It seems to me that those 999,998 tests take not
> much more than a machine clock each, which adds up to about 1 millisecond
> on today's computers. Those resize operations will take some more
> milliseconds. It all doesn't really matter, since probably all other
> things will take much more. I just ran this code
> 
> >>> s = set()
> >>> for j in xrange(1000000):
> ...     s.add(j)
> ...
> >>> while s:
> ...     tmp = s.pop()
> ...
> 
> And it took 2.4 seconds to finish. And it's okay - I'm just saying
> that a few additional clock ticks per operation will usually not
> matter when the overall complexity is the same, but changes in order
> of complexity can matter much more.

Doing that while loop will take _longer_ with a constantly resizing set.
The only way that resizing a dict/set as it gets smaller will increase
overall running speed is if iteration over the dict/set occurs anywhere
between 2-100 times (depending on the resizing factor).


> > Even if this unique case could be improved, what is the impact on common
> > cases?  Would a downsizing scheme risk thrashing with the
> > over-allocation scheme in cases with mixed adds and pops?
> >
> I think that there shouldn't be additional damage beyond those clock
> ticks. The simple method I took from "Introduction to Algorithms"
> works no matter what sequence of adds and pops you have.

You may get more memory fragmentation, depending on the underlying
memory manager.


> > Is there any new information/research beyond what has been obvious from
> > the moment the dict resizing scheme was born?
> 
> I wanted to say that there isn't any new information, and yet I don't
> think that I have to assume that everything in current Python is the
> best that can be. All I did was find another reason why a
> downsizing scheme might be good, and post it to ask if people have
> thought about it. If you have a document listing all the design
> decisions that went into dict implementation, then please send it to
> me and I won't ask about things that were already thought about.

See the source for dictobject.c and dictnotes.txt:
http://svn.python.org/view/python/trunk/Objects/dictobject.c?rev=39608&view=auto
http://svn.python.org/view/python/trunk/Objects/dictnotes.txt?rev=35428&view=auto


 - Josiah

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] When do sets shrink?

2005-12-31 Thread Tim Peters
[Noam Raphael]
>>> For example, iteration over a set which once had
>>> 1,000,000 members and now has 2 can take 1,000,000 operations every
>>> time you traverse all the (2) elements.

[Raymond Hettinger]
>> Do you find that to be a common or plausible use case?

[Noam]
> I don't have a concrete example in this minute, but a set which is
> repeatedly filled with elements and then emptied by pop operations
> doesn't seem to me that far-fetched.

Ah, but that's an entirely different example than the one you started
with:  every detail counts when you're looking for "bad" cases.  In
this new example, the set _does_ get resized, as soon as you start
adding elements again.  OTOH, in the absence of repeated iteration
too, it's not clear that this resizing helps more than it hurts.

...

> I wanted to say that there isn't any new information, and yet I don't
> think that I have to assume that everything in current Python is the
> best that can be.

It was in 2005; 2006 is an entirely different year ;-)

> All I did was find another reason why a downsizing scheme might
> be good, and post it to ask if people have thought about it.

Not all that much -- sets whose sizes bounce around a lot, and which
are also iterated over a lot, haven't stuck out as an important use
case.  Typically, if a set or dict gets iterated over at all, that
happens once near the end of its life.

> If you have a document listing all the design decisions that went into
> dict implementation, then please send it to me and I won't ask about
> things that were already thought about.

Lots of info in the source; Josiah already pointed at the most useful dict docs.

> But the answer is, yes. I believe that the current dict resizing
> scheme was born before the iterator protocol was introduced, and it
> may be a reason why the current scheme doesn't try to minimize the
> number of empty hash table entries.

Dict resizing was designed before the Python-level iteration protocol,
but under the covers dicts offered the PyDict_Next() C-level iteration
protocol "forever".  It's not the iteration protocol (or lack thereof)
that drives this.

Far more important is that dicts have always been heavily used by
Python itself, in its own implementation, for a variety of namespaces:
the global dict, originally the local dict too, for class dicts and
instance dicts, and to pass keyword arguments.  Note that all those
cases use strings for keys, and in fact Python originally supported
only string-keyed dicts.  In all those cases too, deletions are at
worst very rare, and iteration a minor use case.
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] When do sets shrink?

2005-12-31 Thread Raymond Hettinger
> > > [Noam]
> > > > For example, iteration over a set which once had
> > > > 1,000,000 members and now has 2 can take 1,000,000 operations every
> > > > time you traverse all the (2) elements.
> > >
> > > Do you find that to be a common or plausible use case?
> >
> > I don't have a concrete example in this minute, but a set which is
> > repeatedly filled with elements and then emptied by pop operations
> > doesn't seem to me that far-fetched.
> 
> It doesn't seem far-fetched, but I've not seen anything like it.

It's more far-fetched when fully spelled-out:

Build a VERY large set, ALMOST empty it with pop operations, then
iterate over it MANY times (enough to offset the cost of multiple resize
operations with their attendant memory allocator interactions and the
expensive block copies (cache misses are a certitude and each miss is as
expensive as a floating point divide)).

Also note, this example was not selected from a real-world use-case; it
was contrived for purposes of supporting an otherwise weak proposal.




> > > Would the case be improved by incurring the time cost of 999,998 tests
> > > for possible resizing (one for each pop) and some non-trivial number of
> > > resize operations along the way (each requiring a full-iteration over
> > > the then current size)?
> > >
> > I believe it would. It seems to me that those 999,998 tests take not
> > much more than a machine clock each, which adds up to about 1 millisecond
> > on today's computers. Those resize operations will take some more
> > milliseconds. It all doesn't really matter, since probably all other
> > things will take much more. I just ran this code
> >
> Doing that while loop will take _longer_ with a constantly resizing set.
> The only way that resizing a dict/set as it gets smaller will increase
> overall running speed is if iteration over the dict/set occurs anywhere
> between 2-100 times (depending on the resizing factor).

Josiah is exactly correct.  The resize operations are enormously
expensive compared to the cost of an iteration.  You would have to do
the latter many times to make up for the costs of repeatedly downsizing
the set.




> > > Even if this unique case could be improved, what is the impact on common
> > > cases?  Would a downsizing scheme risk thrashing with the
> > > over-allocation scheme in cases with mixed adds and pops?
> > >
> > I think that there shouldn't be additional damage beyond those clock
> > ticks. The simple method I took from "Introduction to Algorithms"
> > works no matter what sequence of adds and pops you have.
> 
> You may get more memory fragmentation, depending on the underlying
> memory manager.

There's more at risk than fragmentation.  Thrashing is a basic concern.
There is no way around it -- some combination of adds and pops always
triggers it when both upsizing and downsizing logic are present.  The
code in listobject.c works hard to avoid this but there are still
patterns which would trigger horrid behavior with a resize occurring
every few steps.



> > > Is there any new information/research beyond what has been obvious from
> > > the moment the dict resizing scheme was born?
> >
> > I wanted to say that there isn't any new information, and yet I don't
> > think that I have to assume that everything in current Python is the
> > best that can be. All I did was find another reason why a
> > downsizing scheme might be good, and post it to ask if people have
> > thought about it. If you have a document listing all the design
> > decisions that went into dict implementation, then please send it to
> > me and I won't ask about things that were already thought about.
> 
> See the source for dictobject.c and dictnotes.txt:
> http://svn.python.org/view/python/trunk/Objects/dictobject.c?rev=39608&view=auto
> http://svn.python.org/view/python/trunk/Objects/dictnotes.txt?rev=35428&view=auto

Those are both good references.  

The code for general purpose dicts has been fine-tuned, reviewed, and
field-tested to a highly polished level.  It is at a point where most
attempts to improve it will make it worse off.

There may be some room for development in special versions of
dictionaries for specific use cases.  For instance, it may be worthwhile
to create a version emphasizing size or access speed over insertion time
(using Brent's variation to optimizing search order).



Raymond

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] When do sets shrink?

2005-12-31 Thread Raymond Hettinger
[Fernando Perez]
> Note that this is not a comment on the current discussion per se, but
> rather a small request/idea in the docs department: I think it would be
> a really useful thing to have a summary page/table indicating the
> complexities of the various operations on all the builtin types,
> including at least _mention_ of subtleties and semi-gotchas.

The wiki might be the place to cover this sort of thing.  Unlike
infrequent doc releases, the medium is good at being responsive to
whatever someone thought important enough to write an entry for.  Also,
it is more easily kept up-to-date for variations between versions
(Py2.4, Py2.5, etc.) and implementations (CPython, Jython, etc.).

The relevant list of these ideas may be somewhat short:
* mystring += frag      # use ''.join() instead (see the sketch below)
* mylist.insert(0, obj) # takes O(n) time to move all the elements
* if x in y:            # runs in O(n) time if y is a sequence
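
For the first of these, a sketch (illustrative; timings vary):

    frags = ['x'] * 10000
    slow = ''
    for frag in frags:
        slow += frag         # quadratic: each += copies everything built so far
    fast = ''.join(frags)    # linear: one pass, one final allocation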

I think the number one performance gotcha is adopting a COBOL-like code
writing mentality and failing to creatively use Python's powerful
collection types:  list, tuple, dict, set, str and an occasional array,
deque, or cStringIO object.



> For example, I had never realized that on dicts, for some O(N)
> operations, N would mean "largest N in the dict's history" instead of
> "current number of elements".

It might be better to give more generic advice that tends to be true
across implementations and versions:  "Dense collections like lists and
tuples iterate faster than sparse structures like dicts and sets.
Whenever repeated iteration starts to dominate application run-time,
consider converting to a dense representation for faster iteration and
better memory/cache utilization."  A statement like this remains true
whether or not a down-sizing algorithm is present.
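
In code, that advice is roughly (an illustrative sketch; the inner
'pass' stands in for real per-element work):

    s = set(xrange(1000))     # any set that will be iterated many times
    members = list(s)         # one-time O(len(s)) conversion to dense storage
    for _pass in xrange(100):
        for x in members:     # no dummy entries or empty slots to skip over
            pass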



> Cheers,
> 
> f

Hmm, your initial may be infringing on another developer's trademarked
signature ;-)



Raymond



Side note:  To some degree, ignorance is bliss.  Most of my code was
written in AWK and I was content to know only one non-algorithmic
optimization ("exp" vs /exp/).  Time was spent thinking about the
problem at hand rather than how to outsmart the interpreter.  Knowing
too much about the implementation can be a distraction.  Besides, when
timing does become critical, people seem to suddenly become
spontaneously ingenious.



___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Weekly Python Patch/Bug Summary

2005-12-31 Thread Kurt B. Kaiser
Patch / Bug Summary
___

Patches :  382 open ( +3) /  3003 closed ( +1) /  3385 total ( +4)
Bugs:  903 open (-11) /  5479 closed (+27) /  6382 total (+16)
RFE :  203 open ( -1) /   195 closed ( +2) /   398 total ( +1)

New / Reopened Patches
__

NotImplemented->TypeError in __add__ and __mul__  (2005-12-26)
CLOSED http://python.org/sf/1390657  opened by  Armin Rigo

dict.merge  (2005-12-27)
   http://python.org/sf/1391204  opened by  Nicolas Lehuen

cookielib LWPCookieJar and MozillaCookieJar exceptions  (2005-02-06)
   http://python.org/sf/1117398  reopened by  jjlee

Optional second argument for startfile  (2005-12-29)
   http://python.org/sf/1393157  opened by  Thomas Heller

Add restart debugger command to pdb.py  (2005-12-30)
   http://python.org/sf/1393667  opened by  Rocky Bernstein

Patches Closed
__

NotImplemented->TypeError in __add__ and __mul__  (2005-12-26)
   http://python.org/sf/1390657  closed by  arigo

weakref callbacks are called only if the weakref is alive  (2005-12-12)
   http://python.org/sf/1379023  closed by  arigo

New / Reopened Bugs
___

Incorrect docs for return values of set update methods  (2005-12-24)
CLOSED http://python.org/sf/1389673  opened by  Collin Winter

Fxn call in _elementtree.c has incorrect signedness  (2005-12-24)
CLOSED http://python.org/sf/1389809  opened by  Brett Cannon

_winreg specifies EnvironmentError instead of WindowsError  (2005-12-21)
CLOSED http://python.org/sf/1386675  reopened by  birkenfeld

ScrolledText hungs up in some conditions  (2005-12-25)
   http://python.org/sf/1390086  opened by  dani_filth

README mention --without-cxx  (2005-12-25)
   http://python.org/sf/1390321  opened by  Aahz

time docs lack %F in strftime!  (2005-12-26)
CLOSED http://python.org/sf/1390605  opened by  Nikos Kouremenos

split() breaks no-break spaces  (2005-12-26)
CLOSED http://python.org/sf/1390608  opened by  MvR

time.strftime('%F', local_time) is okay but time.strptime no  (2005-12-26)
CLOSED http://python.org/sf/1390629  opened by  Nikos Kouremenos

lambda functions confused when mapped in dictionary object  (2005-12-27)
CLOSED http://python.org/sf/1390991  opened by  Samuel Hsiung

missing module names in email package  (2005-12-27)
   http://python.org/sf/1391608  opened by  Gabriel Genellina

floating point literals don't work in non-US locale in 2.5  (2005-12-28)
   http://python.org/sf/1391872  opened by  Fredrik Lundh

build fails on BSD 3.8  (2005-12-30)
   http://python.org/sf/1392915  opened by  George Yoshida

cannot build SVN trunk on old systems  (2005-12-29)
   http://python.org/sf/1393109  opened by  Fredrik Lundh

Deleting first item causes anydbm.first() to fail  (2005-12-30)
   http://python.org/sf/1394135  opened by  Dan Bisalputra

urllib2 raises exception when page redirects to itself  (2005-12-31)
CLOSED http://python.org/sf/1394453  opened by  René Pijlman

SimpleHTTPServer doesn't understand query arguments  (2005-12-31)
   http://python.org/sf/1394565  opened by  Aaron Swartz

'Plus' filemode exposes uninitialized memory on win32  (2005-12-31)
   http://python.org/sf/1394612  opened by  Cory Dodt

Bugs Closed
___

Decimal sqrt() ignores rounding  (2005-12-23)
   http://python.org/sf/1388949  closed by  facundobatista

Incorrect docs for return values of set update methods  (2005-12-24)
   http://python.org/sf/1389673  closed by  birkenfeld

Fxn call in _elementtree.c has incorrect signedness  (2005-12-25)
   http://python.org/sf/1389809  closed by  effbot

_winreg specifies EnvironmentError instead of WindowsError  (2005-12-21)
   http://python.org/sf/1386675  closed by  birkenfeld

time docs lack %F in strftime!  (2005-12-26)
   http://python.org/sf/1390605  closed by  birkenfeld

split() breaks no-break spaces  (2005-12-26)
   http://python.org/sf/1390608  closed by  lemburg

time.strftime('%F', local_time) is okay but time.strptime no  (2005-12-26)
   http://python.org/sf/1390629  closed by  birkenfeld

metaclasses, __getattr__, and special methods  (2003-04-29)
   http://python.org/sf/729913  closed by  arigo

special methods become static  (2004-11-15)
   http://python.org/sf/1066490  closed by  arigo

len() on class broken  (2005-12-16)
   http://python.org/sf/1382740  closed by  arigo

urllib.url2pathname, pathname2url doc strings inconsistent  (2002-12-07)
   http://python.org/sf/649974  closed by  birkenfeld

PyLong_AsVoidPtr()/PyLong_FromVoidPtr()  (2002-12-14)
   http://python.org/sf/653542  closed by  birkenfeld

Acrobat Reader 5 compatibility  (2003-04-14)
   http://python.org/sf/721160  closed by  birkenfeld

Calling socket.recv() with a large number breaks  (2003-06-17)
   http://python.org/sf/756104  closed by  birkenfeld

Automated daily documentation builds  (2002-06-26)
   http://python.org/sf/574241  closed by  birkenfeld