Re: [Python-Dev] PEP for RFE 46738 (first draft)

2005-06-22 Thread Fredrik Lundh
Skip Montanaro wrote:

> That's fine, so XML-RPC is slower than Gherkin.  I can't run the Gherkin
> code, but my XML-RPC numbers are a bit different than yours:
>
> XMLRPC encode 0.65 seconds
> XMLRPC decode 2.61 seconds
>
> That leads me to believe you're not using any sort of C XML decoder.  (I
> mentioned sgmlop in my previous post.  I'm sure /F has some other
> super-duper accelerator that's even faster.)

the CET/iterparse-based decoder described here

 http://effbot.org/zone/element-iterparse.htm

is 3-4 times faster (but I don't recall if I used sgmlop or just plain
expat in those tests).
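For reference, an iterparse-style decoder in today's stdlib terms might look like this (a minimal sketch in the spirit of the effbot article, NOT the actual code from that page; the tag names and payload are made up):

```python
# Minimal sketch of an iterparse-based decoder. Uses the stdlib
# xml.etree (the descendant of ElementTree/cElementTree).
from io import BytesIO
from xml.etree.ElementTree import iterparse

def decode_values(xml_bytes):
    """Pull <int> and <string> leaf values out of an XML-RPC-like payload."""
    values = []
    for event, elem in iterparse(BytesIO(xml_bytes), events=("end",)):
        if elem.tag == "int":
            values.append(int(elem.text))
        elif elem.tag == "string":
            values.append(elem.text or "")
        elem.clear()  # discard processed elements to keep memory flat
    return values

payload = (b"<params><param><value><int>42</int></value></param>"
           b"<param><value><string>hello</string></value></param></params>")
print(decode_values(payload))  # [42, 'hello']
```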





___________________________________________
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] python-dev Summary for 2005-05-16 through 2005-05-31 [draft]

2005-06-22 Thread Tony Meyer
You may have noticed that the summaries have been absent for the last month
- apologies for that; Steve has been dutifully doing his part, but I've been
caught up with other things.

Anyway, Steve will post the May 01-15 draft shortly, and here's May 16-31.
We should be able to get the first June one done fairly shortly, too.

If anyone has time to flick over this and let me/Steve/Tim know if you have
corrections, that would be great; thanks!

=Tony.Meyer

=============
Announcements
=============


----
QOTF
----

We have our first ever Quote of the Fortnight (QOTF), thanks to
the wave of discussion over `PEP 343`_, and to Jack Diederich:

I still haven't gotten used to Guido's heart-attack inducing early
enthusiasm for strange things followed later by a simple
proclamation I like.  Some day I'll learn that the sound of
fingernails on the chalkboard is frequently followed by candy for
the whole class.

See, even threads about anonymous block statements can end happily! ;)

.. _PEP 343: http://www.python.org/peps/pep-0343.html

Contributing thread:

- `PEP 343 - Abstract Block Redux
`__

[SJB]

------------------
First PyPy Release
------------------

The first release of `PyPy`_, the Python implementation of Python, is
finally available. The PyPy team has made impressive progress, and
the current release of PyPy now passes around 90% of the Python
language regression tests that do not depend deeply on C-extensions.
The PyPy interpreter still runs on top of a CPython interpreter
though, so it is still quite slow due to the double-interpretation
penalty.

.. _PyPy: http://codespeak.net/pypy

Contributing thread:

- `First PyPy (preview) release
`__


[SJB]


--------------------------------
Thesis: Type Inference in Python
--------------------------------

Brett C. successfully defended his master's thesis `Localized Type
Inference of Atomic Types in Python`_, which investigates some of the
issues of applying type inference to the current Python language, as
well as to the Python language augmented with type annotations.
Congrats Brett!

.. _Localized Type Inference of Atomic Types in Python:
http://www.drifty.org/thesis.pdf

Contributing thread:

- `Localized Type Inference of Atomic Types in Python
`__


[SJB]

=========
Summaries
=========

---------------------------
PEP 343 and With Statements
---------------------------

The discussion on "anonymous block statements" came much closer
to a real conclusion this fortnight, with the threads around
`PEP 343`_ and `PEP 3XX`_ converging not only on the semantics for
"with statements", but also on semantics for using generators as
with-statement templates.

To aid in the adaptation of generators to with-statements, Guido
proposed adding close() and throw() methods to generator objects,
similar to the ones suggested by `PEP 325`_ and `PEP 288`_. The
throw() method would cause an exception to be raised at the point
where the generator is currently suspended, and the close() method
would use throw() to signal the generator to clean itself up by
raising a GeneratorExit exception.
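
The behaviour described above can be sketched as follows (using the syntax that eventually shipped in Python 2.5, shown here in modern form; the example itself is made up):

```python
# close() injects GeneratorExit at the suspension point, so a try/finally
# in the generator body gets to run its cleanup.
events = []

def managed():
    events.append("acquire")
    try:
        yield "resource"
    finally:
        events.append("release")  # also runs on close() / throw()

gen = managed()
print(next(gen))   # runs up to the yield; prints 'resource'
gen.close()        # GeneratorExit raised inside the generator
print(events)      # ['acquire', 'release']
```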

People seemed generally happy with this proposal and -- believe it or
not -- we actually went an entire eight days without an email about
anonymous block statements!! It looked as if an updated `PEP 343`_,
including the new generator functionality, would be coming early the
next month. So stay tuned. ;)

.. _PEP 288: http://www.python.org/peps/pep-0288.html

.. _PEP 325: http://www.python.org/peps/pep-0325.html

.. _PEP 343: http://www.python.org/peps/pep-0343.html

.. _PEP 3XX: http://members.iinet.net.au/~ncoghlan/public/pep-3XX.html

Contributing threads:

- `PEP 343 - Abstract Block Redux
`__
- `Simpler finalization semantics (was Re: PEP 343 - Abstract Block Redux)
`__
- `Example for PEP 343
`__
- `Combining the best of PEP 288 and PEP 325: generator exceptions and
cleanup
`__
- `PEP 343 - New kind of yield statement?
`__
- `PEP 342/343 status?
`__
- `PEP 346: User defined statements (formerly known as PEP 3XX)
`__

[SJB]

-----------
Decimal FAQ
-----------

Raymond Hettinger suggested that a decimal FAQ would shorten the module's
learning curve, and drafted one.  There were no objections, but only a few
adjustments (to the list, at least).  Raymond will probably make the FAQ
available at some point.

Contributing thread:

Re: [Python-Dev] Is PEP 237 final -- Unifying Long Integers and Integers

2005-06-22 Thread Gareth McCaughan
[GvR:]
> > Huh? C unsigned ints don't flag overflow either -- they perform
> > perfect arithmetic mod 2**32.

[Keith Dart:]
> I was talking about signed ints. Sorry about the confusion. Other
> scripting languages (e.g. perl) do not error on overflow.

C signed ints also don't flag overflow, nor do they -- as you
point out -- in various other languages.

> > (c) The right place to do the overflow checks is in the API wrappers,
> > not in the integer types.
> 
> That would be the "traditional" method.
> 
> I was trying to keep it an object-oriented API. What should "know" the 
> overflow condition is the type object itself. It raises OverFlowError any 
> time this occurs, for any operation, implicitly. I prefer to catch errors 
> earlier, rather than later.

Why "should"?

Sure, catch errors earlier. But *are* the things you'd catch
earlier by having an unsigned-32-bit-integer type actually
errors? Is it, e.g., an "error" to move the low 16 bits into
the high part by writing
x = (y<<16) & 0x
instead of
x = (y&0x) << 16
or to add 1 mod 2^32 by writing
x = (y+1) & 0x
instead of
if y == 0x: x = 0
else: x = y+1
? Because it sure doesn't seem that way to me. Why is it better,
or more "object-oriented", to have the checking done by a fixed-size
integer type?
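
(Assuming the archive-mangled hex constants above were the usual 16/32-bit masks, the equivalence Gareth is pointing at can be checked directly:)

```python
# Both spellings in each pair compute the same value on a unified int
# type. The masks are assumptions -- the archive mangled the originals.
MASK32 = 0xFFFFFFFF

def low_to_high_a(y):
    return (y << 16) & 0xFFFF0000

def low_to_high_b(y):
    return (y & 0xFFFF) << 16

def incr_mod32_a(y):
    return (y + 1) & MASK32

def incr_mod32_b(y):
    return 0 if y == MASK32 else y + 1

# Spot-check over a few 32-bit values, including the wraparound case.
for y in (0, 1, 0x1234, 0xFFFF, 0xDEADBEEF, MASK32):
    assert low_to_high_a(y) == low_to_high_b(y)
    assert incr_mod32_a(y) == incr_mod32_b(y)
```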

> > (b) I don't know what you call a "normal" integer any more; to me,
> > unified long/int is as normal as they come. Trust me, that's the case
> > for most users. Worrying about 32 bits becomes less and less normal.
> 
> By "normal" integer I mean the mathematical definition.

Then you aren't (to me) making sense. You were distinguishing
this from a unified int/long. So far as I can see, a unified int/long
type *does* implement (modulo implementation limits and bugs)
the "mathematical definition". What am I missing?

> Most Python users 
> don't have to worry about 32 bits now, that is a good thing when you are 
> dealing only with Python. However, if one has to interface to other 
> systems that have definite types with limits, then one must "hack around" 
> this feature.

Why is checking the range of a parameter with a restricted range
a "hack"?

Suppose some "other system" has a function in its interface that
expects a non-zero integer argument, or one with its low bit set.
Do we need a non-zero-integer type and an odd-integer type?

>   I was just thinking how nice it would be if Python had, in 
> addition to unified ("real", "normal") integers it also had built-in
> types that could be more easily mapped to external types (the typical
> set of signed, unsigned, short, long, etc.). Yes, you can check it at
> conversion time, but that would mean extra Python bytecode. It seems you
> think this is a special case, but I think Python may be used as a "glue
> language" fairly often, and some of us would benefit from having those
> extra types as built-ins.

Well, which extra types? One for each of 8, 16, 32, 64 bit and for
each of signed, unsigned? Maybe also "non-negative signed" of each
size? That's 12 new builtin types, so perhaps you'd be proposing a
subset; what subset?

And how are they going to be used?

  - If the conversion to one of these new limited types
occurs immediately before calling whatever function
it is that uses it, then what you're really doing is
a single range-check. Why disguise it as a conversion?

  - If the conversion occurs earlier, then you're restricting
the ways in which you can calculate the parameter values
in question. What's the extra value in that?

I expect I'm missing something important. Could you provide some
actual examples of how code using this new feature would look?

-- 
g



[Python-Dev] Decimal floats as default (was: discussion about PEP239 and 240)

2005-06-22 Thread Fredrik Johansson
Hi all,

raymond.hettinger at verizon.net  Fri Jun 17 10:36:01 2005 wrote:

> The future direction of the decimal module likely entails literals in
> the form of 123.45d with binary floats continuing to have the form
> 123.45.  This conflicts with the rational literal proposal of having
> 123.45 interpreted as 123 + 45/100.

Decimal literals are a wonderful idea, especially if it means that
decimals and floats can be made to interact with each other directly.
But why not go one step further, making 123.45 decimal and 123.45b
binary?  In fact, I think a good case can be made for replacing the
default float type with a decimal type.

Decimal floats make life easier for humans accustomed to base 10, so
they should be easy to use. This is particularly relevant given
Python's relatively large user base of "non-programmers", but applies
to many domains. Easy-to-use, correct rounding is essential in many
applications that need to process human-readable data (round() would
certainly be more meaningful if it operated on decimals). Not to
mention that arbitrary precision arithmetic just makes the language
more powerful.
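
The rounding point is easy to illustrate with the existing decimal module (output shown for a current Python):

```python
# Binary floats can't represent 0.1 exactly; decimals behave the way
# base-10-accustomed humans expect.
from decimal import Decimal

print(0.1 + 0.2)                        # 0.30000000000000004
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
```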

Rationals are inappropriate except in highly specialized applications
because of the non-constant size and processing time, but decimals
would only slow down programs by a (usually small) constant factor. I
suspect most Python programs do not demand the performance hardware
floats deliver, nor require the limited precision or particular
behaviour of IEEE 754 binary floats (the need for machine-precision
integers might be greater -- I've written "& 0xFFFFFFFFL" many times).

Changing to decimal would not significantly affect users who really
need good numeric performance either. The C interface would convert
Python floats to C doubles as usual, and numarray would function
accordingly. Additionally, "hardware" could be a special value for the
precision in the decimal (future float) context. In that case, decimal
floats could be phased in without breaking compatibility, by leaving
hardware as the default precision.

123.45d is better than Decimal("123.45"), but appending "d" to specify
a quantity with high precision is as illogical as appending "L" to an
integer value to bypass the machine word size limit. I think the step
from hardware floats to arbitrary-precision decimals would be as
natural as going from short to unlimited-size integers.

I've thought of the further implications for complex numbers and the
math library, but I'll stop writing here to listen to feedback in case
there is some obvious technical flaw or people just don't like the
idea :-) Sorry if this has been discussed and/or rejected before (this
is my first post to python-dev, though I've occasionally read the list
since I started using Python extensively about two years ago).

Fredrik Johansson


Re: [Python-Dev] Is PEP 237 final -- Unifying Long Integers and Integers

2005-06-22 Thread Nick Coghlan
Gareth McCaughan wrote:
> [Keith Dart:]
>>By "normal" integer I mean the mathematical definition.
> 
> Then you aren't (to me) making sense. You were distinguishing
> this from a unified int/long. So far as I can see, a unified int/long
> type *does* implement (modulo implementation limits and bugs)
> the "mathematical definition". What am I missing?

Hmm, a 'mod_int' type might be an interesting concept (i.e. a type 
that performs integer arithmetic, only each operation is carried out 
modulo some integer).

Then particular bit sizes would be simple ints, modulo the appropriate 
power of two.
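
A minimal sketch of such a type (the name and API are hypothetical):

```python
# Toy 'mod_int': every operation is reduced modulo a fixed n.
class ModInt:
    def __init__(self, value, modulus):
        self.modulus = modulus
        self.value = value % modulus

    def __add__(self, other):
        return ModInt(self.value + int(other), self.modulus)

    def __mul__(self, other):
        return ModInt(self.value * int(other), self.modulus)

    def __int__(self):
        return self.value

    def __repr__(self):
        return "ModInt(%d, mod=%d)" % (self.value, self.modulus)

# A "uint8" is then just arithmetic mod 2**8:
x = ModInt(250, 2 ** 8)
print(int(x + 10))  # wraps around to 4
```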

Cheers,
Nick.

-- 
Nick Coghlan   |   [EMAIL PROTECTED]   |   Brisbane, Australia
---
 http://boredomandlaziness.blogspot.com


Re: [Python-Dev] Explicitly declaring expected exceptions for a block

2005-06-22 Thread Nick Coghlan
Dmitry Dvoinikov wrote:
> Now, back to original question then, do you think it'd be
> beneficial to have some sort of exception ignoring or expecting
> statement ?

Not really - as I said earlier, I usually have something non-trivial 
in the except or else clause, so I'm not simply ignoring the exceptions.

Cheers,
Nick.

-- 
Nick Coghlan   |   [EMAIL PROTECTED]   |   Brisbane, Australia
---
 http://boredomandlaziness.blogspot.com


Re: [Python-Dev] python-dev Summary for 2005-05-16 through 2005-05-31 [draft]

2005-06-22 Thread Nick Coghlan
Tony Meyer wrote:
> .. _PEP 3XX: http://members.iinet.net.au/~ncoghlan/public/pep-3XX.html

Now that PEP 346 is on python.org, it would be best to reference that, 
rather than the PEP 3XX URL on my ISP web space (that URL is now just 
a redirect to PEP 346 on python.org).

Cheers,
Nick.

-- 
Nick Coghlan   |   [EMAIL PROTECTED]   |   Brisbane, Australia
---
 http://boredomandlaziness.blogspot.com


Re: [Python-Dev] Propose updating PEP 284 -- Integer for-loops

2005-06-22 Thread David Eppstein
On 6/18/05 4:45 AM -0400 Raymond Hettinger <[EMAIL PROTECTED]> 
wrote:
> The above recommendations should get the PEP ready for judgement day.

I thought judgement day already happened for this PEP in the "Parade of 
PEPs".  No?

> Also, I recommend tightening the PEP's motivation.  There are only two
> advantages, encoding and readability.  The former is only a minor gain
> because all it saves is a function call, an O(1) savings in an O(n)
> context.  The latter is where the real benefits lay.

The readability part is really my only motivation for this.  In a nutshell, 
I would like a syntax that I could use with little or no explanation with 
people who already understand some concepts of imperative programming but 
have never seen Python before (e.g. when I use Python-like syntax for the 
pseudocode in my algorithms lectures).  The current for x in range(...) 
syntax is not quite there.

In practice, I have been using
for x in 0, 1, 2, ... n-1:
which does not work as actual programming language syntax but seems to 
communicate my point better than the available syntaxes.

I have to admit, among your proposed options

> for i between 2 < i <= 10: ...
> for i over 2 < i <= 10: ... # chained comparison style
> for i over [2:11]: ...  # Slice style
> for i = 3 to 10:  ...   # Basic style

I don't really care for the repetition of the variable name in the first 
two, the third is no more readable to me than the current range syntax, and 
the last one only looks ok to me because I used to program in Pascal, long 
ago.

-- 
David Eppstein
Computer Science Dept., Univ. of California, Irvine
http://www.ics.uci.edu/~eppstein/



Re: [Python-Dev] Decimal floats as default (was: discussion about PEP239 and 240)

2005-06-22 Thread Fredrik Johansson
On 6/22/05, Michael McLay <[EMAIL PROTECTED]> wrote:
> This idea is dead on arrival. The change would break many applications and
> modules. A successful proposal cannot break backwards compatibility. Adding a
> dpython interpreter to the current code base is one possibility.

Is there actually much code around that relies on the particular
precision of 32- or 64-bit binary floats for arithmetic, and ceases
working when higher precision is available? Note that functions like
struct.pack would be unaffected. If compatibility is a problem, this
could still be a possibility for Python 3.0.

In either case, compatibility can be ensured by allowing both n-digit
decimal and hardware binary precision for floats, settable via a float
context. Then the backwards compatible binary mode can be default, and
"decimal mode" can be set with one line of code. d-suffixed literals
create floats with decimal precision.

There is the alternative of providing decimal literals by using
separate decimal and binary float base types, but in my eyes this
would be redundant. The primary use of binary floats is performance
and compatibility, and both can be achieved with my proposal without
sacrificing the simplicity and elegance of having a single type to
represent non-integral numbers. It makes more sense to extend the
float type with the power and versatility of the decimal module than
to have a special type side by side with a default type that is less
capable.

Fredrik


Re: [Python-Dev] Is PEP 237 final -- Unifying Long Int egers and Integers

2005-06-22 Thread Gareth McCaughan
On Wednesday 2005-06-22 13:32, Nick Coghlan wrote:
> Gareth McCaughan wrote:
> > [Keith Dart:]
> >>By "normal" integer I mean the mathematical definition.
> > 
> > Then you aren't (to me) making sense. You were distinguishing
> > this from a unified int/long. So far as I can see, a unified int/long
> > type *does* implement (modulo implementation limits and bugs)
> > the "mathematical definition". What am I missing?
> 
> Hmm, a 'mod_int' type might be an interesting concept (i.e. a type 
> that performs integer arithmetic, only each operation is carried out 
> modulo some integer).
> 
> Then particular bit sizes would be simple ints, modulo the appropriate 
> power of two.

It might indeed, but it would be entirely the opposite of what
(I think) Keith wants, namely something that raises an exception
any time a value goes out of range :-).
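
i.e. something along these lines (a hypothetical sketch, not an actual proposal on the table):

```python
# A fixed-range integer that raises OverflowError instead of wrapping.
class CheckedUInt32:
    MAX = 2 ** 32 - 1

    def __init__(self, value):
        if not 0 <= value <= self.MAX:
            raise OverflowError("out of range for uint32: %r" % value)
        self.value = value

    def __add__(self, other):
        return CheckedUInt32(self.value + int(other))

    def __int__(self):
        return self.value

print(int(CheckedUInt32(10) + 5))      # 15
try:
    CheckedUInt32(CheckedUInt32.MAX) + 1   # out of range
except OverflowError as exc:
    print("caught:", exc)
```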

-- 
g



Re: [Python-Dev] Decimal floats as default (was: discussion about PEP239 and 240)

2005-06-22 Thread Skip Montanaro

Fredrik> Is there actually much code around that relies on the
Fredrik> particular precision of 32- or 64-bit binary floats for
Fredrik> arithmetic, and ceases working when higher precision is
Fredrik> available? 

Umm, yeah...  The path you take from one or more string literals
representing real numbers through a series of calculations and ending up in
a hardware double-precision floating point number is probably going to be
different at different precisions.

>>> x = Decimal("1.1")
>>> y = Decimal("1.024567")
>>> x
Decimal("1.1")
>>> y
Decimal("1.024567")
>>> float(x) 
1.0999
>>> float(y)
1.0246
>>> x/y
Decimal("1.0754328")
>>> float(x)/float(y)
1.0753
>>> float(x/y)
1.0755

Performance matters too:

% timeit.py -s 'from decimal import Decimal ; x = Decimal("1.1") ; y = Decimal("1.024567")' 'x/y'
1000 loops, best of 3: 1.39e+03 usec per loop
% timeit.py -s 'from decimal import Decimal ; x = float(Decimal("1.1")) ; y = float(Decimal("1.024567"))' 'x/y'
100 loops, best of 3: 0.583 usec per loop

I imagine a lot of people would be very unhappy if their fp calculations
suddenly began taking 2000x longer, even if their algorithms didn't break.
(For all I know, Raymond might have a C version of Decimal waiting for an
unsuspecting straight man to complain about Decimal's performance and give
him a chance to announce it.)

If nothing else, extension module code that executes

f = PyFloat_AsDouble(o);

or

if (PyFloat_Check(o)) {
   ...
}

would either have to change or those functions would have to be rewritten to
accept Decimal objects and convert them to doubles (probably silently,
because otherwise there would be so many warnings).

For examples of packages that might make large use of these functions, take
a look at Numeric, SciPy, ScientificPython, MayaVi, and any other package
that does lots of floating point arithmetic.

Like Michael wrote, I think this idea is DOA.

Skip


Re: [Python-Dev] Decimal floats as default (was: discussion about PEP239 and 240)

2005-06-22 Thread Fredrik Johansson
On 6/22/05, Skip Montanaro <[EMAIL PROTECTED]> wrote:
> If nothing else, extension module code that executes
> 
> f = PyFloat_AsDouble(o);
> 
> or
> 
> if (PyFloat_Check(o)) {
>...
> }
> 
> would either have to change or those functions would have to be rewritten to
> accept Decimal objects and convert them to doubles (probably silently,
> because otherwise there would be so many warnings).
> 

Silent conversion was the idea.

> Like Michael wrote, I think this idea is DOA.

Granted, then.

However, keeping binary as default does not kill the other idea in my
proposal, which is to extend the float type to cover decimals instead
of having a separate decimal type. I consider this a more important
issue (contradicting the thread title :-) than whether "d" should be
needed to specify decimal precision.

Fredrik


[Python-Dev] python-dev Summary for 2005-05-01 through 2005-05-16 [draft]

2005-06-22 Thread Steven Bethard
Here's the May 01-15 draft.  Sorry for the delay.  Please check the
Unicode summary at the end especially closely; I'm not entirely sure I
got that one all right.  Thanks!

As always, please let us know if you have any corrections!

Steve

=====================
Summary Announcements
=====================

----------------------------------------------
PEP 340 Episode 2: Revenge of the With (Block)
----------------------------------------------

This fortnight's Python-Dev was dominated again by another nearly 400
messages on the topic of anonymous block statements. The discussion
was a little more focused than the last thanks mainly to Guido's
introduction of `PEP 340`_. Discussion of this PEP resulted in a
series of other PEPs, including

* `PEP 342`_: Enhanced Iterators, which broke out into a separate
  PEP the parts of `PEP 340`_ that allowed code to pass values into
  iterators using ``continue EXPR`` and yield-expressions.

* `PEP 343`_: Anonymous Block Redux, a dramatically simplified
  version of `PEP 340`_, which removed the looping nature of the
  anonymous blocks and the injection-of-exceptions semantics for
  generators.

* `PEP 3XX`_: User Defined ("with") Statements, which proposed
  non-looping anonymous blocks accompanied by finalization semantics
  for iterators and generators in for loops.

Various details of each of these proposals are discussed below in the
sections:

1. `Enhanced Iterators`_

2. `Separate APIs for Iterators and Anonymous Blocks`_

3. `Looping Anonymous Blocks`_

4. `Loop Finalization`_

At the time of this writing, it looked like the discussion was coming
very close to a final agreement; `PEP 343`_ and `PEP 3XX`_ both agreed
upon the same semantics for the block-statement, the keyword had been
narrowed down to either ``do`` or ``with``, and Guido had agreed to
add back in to `PEP 343`_ some form of exception-injection semantics
for generators.


.. _PEP 340: http://www.python.org/peps/pep-0340.html

.. _PEP 342: http://www.python.org/peps/pep-0342.html

.. _PEP 343: http://www.python.org/peps/pep-0343.html

.. _PEP 3XX: http://members.iinet.net.au/~ncoghlan/public/pep-3XX.html

[SJB]


=========
Summaries
=========

------------------
Enhanced Iterators
------------------

`PEP 340`_ incorporated a variety of orthogonal features into a single
proposal. To make the PEP somewhat less monolithic, the method for
passing values into an iterator was broken off into `PEP 342`_. This
method includes:

* updating the iterator protocol to use .__next__() instead of .next()

* introducing a new builtin next()

* allowing continue-statements to pass values into iterators

* allowing generators to receive values with a yield-expression
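
In the syntax that eventually shipped, the value-passing piece looks like this (``continue EXPR`` was later replaced by ``gen.send(EXPR)``; the example itself is made up):

```python
# A generator that receives values through a yield-expression.
def running_total():
    total = 0
    while True:
        value = yield total  # receives what send() passes in
        if value is not None:
            total += value

gen = running_total()
print(next(gen))     # advance to the first yield -> 0
print(gen.send(10))  # 10
print(gen.send(5))   # 15
```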

Though these features had seemed mostly uncontroversial, Guido seemed
inclined to wait for a little more motivation from the co-routiney
people before accepting the proposal.

Contributing threads:

- `Breaking off Enhanced Iterators PEP from PEP 340
`__

[SJB]


------------------------------------------------
Separate APIs for Iterators and Anonymous Blocks
------------------------------------------------

`PEP 340`_ had originally proposed to treat the anonymous block
protocol as an extension of the iterator protocol. Several problems
with this approach were raised, including:

* for-loops could accidentally be used with objects requiring blocks,
  meaning that resources would not get cleaned up properly

* blocks could be used instead of for-loops, violating TOOWTDI

As a result, both `PEP 343`_ and `PEP 3XX`_ propose decorators for
generator functions that will wrap the generator object appropriately
to match the anonymous block protocol. Generator objects without the
proposed decorators would not be usable in anonymous block statements.
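
This is essentially the shape that later shipped as ``contextlib.contextmanager`` in Python 2.5; a small sketch (the example itself is made up):

```python
# The decorator wraps a generator so it satisfies the with-statement
# protocol; a bare, undecorated generator is not usable this way.
from contextlib import contextmanager

events = []

@contextmanager
def tag(name):
    events.append("<%s>" % name)       # setup, before the yield
    try:
        yield
    finally:
        events.append("</%s>" % name)  # cleanup, even on error

with tag("b"):
    events.append("hello")

print(events)  # ['<b>', 'hello', '</b>']
```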

Contributing threads:

- `PEP 340 -- loose ends
`__
- `PEP 340 -- concept clarification
`__

[SJB]


------------------------
Looping Anonymous Blocks
------------------------

A few issues arose as a result of `PEP 340`_'s formulation of
anonymous blocks as a variation on a loop.

Because the anonymous blocks of `PEP 340`_ were defined in terms of
while-loops, there was some discussion as to whether they should have
an ``else`` clause like Python ``for`` and ``while`` loops do. There
didn't seem to be one obvious interpretation of an ``else`` block
though, so Guido rejected the ``else`` block proposal.

The big issue with looping anonymous blocks, however, was in the
handling of ``break`` and ``continue`` statements. Many use cases for
anonymous blocks did not require loops. However, because `PEP 340`_
anonymous blocks were implemented in terms of loops, ``break`` and
``continue`` acted much like they would in a loop. This meant that in
code like::

for item in items:
with lock:
if handle(i

[Python-Dev] PEP 304 - is anyone really interested?

2005-06-22 Thread Skip Montanaro

I wrote PEP 304, "Controlling Generation of Bytecode Files":

http://www.python.org/peps/pep-0304.html

quite a while ago.  The first version appeared in January 2003 in response to
questions from people about controlling/suppressing bytecode generation in
certain situations.  It sat idle for a long while, though from time-to-time
people would ask about the functionality and I'd respond or update the PEP.
In response to another recent question about this topic:

http://mail.python.org/pipermail/python-list/2005-June/284775.html

and a wave of recommendations by Raymond Hettinger regarding several other
PEPs, I updated the patch to work with current CVS.  Aside from one response
by Thomas Heller noting that my patch upload failed (and which has been
corrected since), I've seen no response either on python-dev or
comp.lang.python.

I really have no personal use for this functionality.  I control all the
computers on which I use Python and don't use any exotic hardware (which
includes Windows, with its multi-rooted file system, as far as I'm
concerned), don't run from read-only media, and don't think that in-memory
file systems are much of an advantage over OS caching.  The best I will ever do
with it is respond to people's inputs.  I'd hate to see it sit for another
two years.  If someone out there is interested in this functionality and
would benefit more from its incorporation into the core, I'd be happy to
hand it off to you.

So speak up folks, otherwise my recommendation is that it be put out of its
misery.

Skip


Re: [Python-Dev] PEP 304 - is anyone really interested?

2005-06-22 Thread Trent Mick
[Skip Montanaro wrote]
> 
> I wrote PEP 304, "Controlling Generation of Bytecode Files":
> 
> http://www.python.org/peps/pep-0304.html
> 
> ...
> So speak up folks, otherwise my recommendation is that it be put out of its
> misery.

I've had use for it before, but have managed to work around the
problems. I think it is a good feature, but I wouldn't have the time to
shepherd the patch.

Trent

-- 
Trent Mick
[EMAIL PROTECTED]