Re: [Python-Dev] Adventures with Decimal

2005-05-23 Thread Greg Ewing
Raymond Hettinger wrote:
> Did you see Mike Cowlishaw's posting where he described why he took our
> current position (WYSIWYG input) in the spec, in Java's BigDecimal, and
> in Rexx's numeric model?

Yes, it appears that you have channeled him correctly
on that point, and Tim hasn't. :-)

But I also found it interesting that, while the spec
requires the existence of a context for each operation,
it apparently *doesn't* mandate that it must be kept
in a global variable, which is the part that makes me
uncomfortable.

Was there any debate about this choice when the Decimal
module was being designed? It seems to go against
EIBTI, and even against Mr. Cowlishaw's own desire
for WYSIWYG, because WYG depends not only on what
you can see, but a piece of hidden state as well.

Greg



Re: [Python-Dev] Adventures with Decimal

2005-05-23 Thread Nick Coghlan
Raymond Hettinger wrote:
>>Py> decimal.Decimal("a", context)
>>Decimal("NaN")
>>
>>I'm tempted to suggest deprecating the feature, and say if you want invalid
>>strings to produce NaN, use the create_decimal() method of Context objects.
> 
> 
> The standard does require a NaN to be produced.

In that case, I'd prefer to see the behaviour of the Decimal constructor 
(InvalidOperation exception, or NaN result) always governed by the current 
context.

If you want to use a different context (either to limit the precision, or to 
alter the way malformed strings are handled), you invoke creation via that 
context, not via the standard constructor.

> Unless something is shown to be wrong with the current implementation, I
> don't think we should be in a hurry to make a post-release change.

The fact that the BDFL (and others, me included) were at least temporarily 
confused by the ability to pass a context in to the constructor suggests there 
is an interface problem here.

The thing that appears to be confusing is that you *can* pass a context in to 
the Decimal constructor, but that context is then almost completely ignored. It 
gives me TOOWTDI concerns, even though passing the context to the constructor 
does, in fact, differ slightly from using the create_decimal() method (the 
former does not apply the precision, as Guido discovered).
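
To make the difference concrete, here's a rough interactive sketch of the 
current behaviour (the NaN-versus-exception outcome depends on which traps 
the context has set):

    import decimal

    ctx = decimal.Context(prec=3)

    # passing the context to the constructor: precision is NOT applied
    decimal.Decimal("1.23456", ctx)       # Decimal("1.23456")

    # using the context's own factory method: precision IS applied
    ctx.create_decimal("1.23456")         # Decimal("1.23")

    # the constructor *does* consult the context for malformed strings
    decimal.Decimal("not a number", ctx)  # NaN or InvalidOperation,
                                          # depending on ctx's traps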

Cheers,
Nick.

-- 
Nick Coghlan   |   [EMAIL PROTECTED]   |   Brisbane, Australia
---
 http://boredomandlaziness.blogspot.com


Re: [Python-Dev] Adventures with Decimal

2005-05-23 Thread Michael Chermside
I'd like to respond to a few people, I'll start with Greg Ewing:

Greg writes:
> I don't see how it
> helps significantly to have just the very first
> step -- turning the input into numbers -- be
> exempt from this behaviour. If anything, people
> are going to be even more confused. "But it
> can obviously cope with 1.101,
> so why does it give the wrong answer when I add
> something to it?"

As I see it, there is a meaningful distinction between constructing
Decimal instances and performing arithmetic with them. I even think
this distinction is easy to explain to users, even beginners. See,
it's all about the program "doing what you tell it to".

If you type in this:
    x = decimal.Decimal("1.13")
as a literal in your program, then you clearly intended for that
last decimal place to mean something. By contrast, if you were to
try passing a float to the Decimal constructor, it would raise an
exception expressly to protect users from "accidentally" entering
something slightly off from what they meant.

On the other hand, in Python, if you type this:
    z = x + y
then what it does is completely dependent on the types of x and y.
In the case of Decimal objects, it performs a "perfect" arithmetic
operation then rounds to the current precision.

The simple explanation for users is "Context affects *operations*,
but not *instances*." This explains the behavior of operations, of
constructors, and also explains the fact that changing precision
doesn't affect the precision of existing instances. And it's only
6 words long.
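
A tiny sketch of how that plays out in practice:

    import decimal

    decimal.getcontext().prec = 3       # the context governs *operations*...

    x = decimal.Decimal("1.13579")      # ...but not construction
    y = decimal.Decimal("2.00000")

    print(x)        # 1.13579 -- the instance keeps every digit you typed
    print(x + y)    # 3.14    -- the addition rounds to the current precision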

> But I also found it interesting that, while the spec
> requires the existence of a context for each operation,
> it apparently *doesn't* mandate that it must be kept
> in a global variable, which is the part that makes me
> uncomfortable.
>
> Was there any debate about this choice when the Decimal
> module was being designed?

It shouldn't make you uncomfortable. Storing something in a global
variable is a BAD idea... it is just begging for threads to mess
each other up. The decimal module avoided this by storing a SEPARATE
context for each thread, so different threads won't interfere with
each other. And there *is* a means for easy access to the context
objects... decimal.getcontext().
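
Here's a small sketch of what the per-thread arrangement buys you: each
thread can tune its own context without stepping on anyone else's.

    import decimal, threading

    def worker(prec, results, key):
        # getcontext() returns this thread's own context; changing it here
        # does not affect any other thread
        decimal.getcontext().prec = prec
        results[key] = decimal.Decimal(1) / decimal.Decimal(7)

    results = {}
    threads = [threading.Thread(target=worker, args=(4, results, "four")),
               threading.Thread(target=worker, args=(10, results, "ten"))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(results["four"])            # 0.1429
    print(results["ten"])             # 0.1428571429
    print(decimal.getcontext().prec)  # main thread's precision is untouched (28)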

Yes, it was debated, and the debate led to changing from a global
variable to the existing arrangement.

--
As long as I'm writing, let me echo Nick Coghlan's point:
> The fact that the BDFL (and others, me included) were at least temporarily
> confused by the ability to pass a context in to the constructor suggests there
> is an interface problem here.
>
> The thing that appears to be confusing is that you *can* pass a context in to
> the Decimal constructor, but that context is then almost completely ignored.

Yeah... I agree. If you provide a Context, it should be used. I favor changing
the behavior of the constructor as follows:

    def Decimal(data, context=None):
        result = Existing_Version_Of_Decimal(data)
        if context is not None:
            result = context.plus(result)  # apply the supplied context in full
        return result

In other words, make FULL use of the context in the constructor if a context
is provided, but make NO use of the thread context when no context is
provided.
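
To be clear, that's a sketch of *proposed* behaviour, not what's released
today; with it you'd get roughly:

    ctx = Context(prec=5)

    Decimal("3.14159265", ctx)   # would give Decimal("3.1416") -- ctx applied in full
    Decimal("3.14159265")        # would give Decimal("3.14159265") -- thread context ignored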

--
One final point... Thanks to Mike Cowlishaw for chiming in with a detailed
and well-considered explanation of his thoughts on the matter.

-- Michael Chermside



Re: [Python-Dev] PEP 344: Explicit vs. Implicit Chaining

2005-05-23 Thread Michael Chermside
James Knight writes:
> I still don't see why people think the python interpreter should be
> automatically providing __context__. To me it seems like it'll just
> clutter things up for no good reason. If you really want the other
> exception, you can access it via the local variable in the frame
> where it was first caught.

No, you can't, because you didn't know the second exception was
going to happen! I write something like this:

db_connection = get_db_connection()
try:
    do_some_stuff(db_connection)
except DatabaseException, err:
    log_the_problem(err)
    cleanup(db_connection)

If something goes wrong inside of do_some_stuff, I enter the
exception handler. But then if an error occurs within
log_the_problem() or cleanup(), then I lose the original exception.
It's just GONE. I didn't expect log_the_problem() or cleanup() to
fail, but sometimes things DO fail.
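
You can see the loss with nothing more than two nested raises (the
exception types here are just placeholders):

    try:
        try:
            raise KeyError("row 100 does not exist")      # the real problem
        except KeyError, err:
            raise IOError("could not write to log file")  # the handler fails too...
    except IOError, err:
        print(err)   # ...and this is all that survives; the KeyError is gone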

An example of this happens to me in Java (which has the same
problem). I have code like this:

db_connection = get_db_connection()
try:
    do_some_stuff(db_connection)
finally:
    db_connection.close()

For instance, when I want to do
unit testing, I create a mock database connection that raises
an exception if you don't use it as the test expects. So I get
exceptions like this all the time:

Error: did not expect call to "close()"

Of course, what REALLY happened was that we tried to update a row
that didn't exist, which created an exception:

Error: tried to update row with key "100", but it does not exist.

But then it entered the finally clause, and tried to close the
connection. That wasn't expected either, and the new exception
replaces the old one... and we lose information about what REALLY
caused the problem.

In Java, I had to fix this by making my mock objects very smart.
They have to keep track of whether any problem has occurred during
this test (in any of the cooperating mock objects) and if so, then
they have to re-report the original problem whenever something new
goes wrong. This is the only way I've found to work around the
problem in Java. Wouldn't it be nice if Python could do better?
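
In Python, the same workaround looks roughly like this (a sketch, not the
actual code from my tests; the class and method names are made up):

    class MockConnection(object):
        """Remembers the *first* failure and keeps re-reporting it."""

        def __init__(self):
            self.first_error = None

        def _fail(self, message):
            if self.first_error is None:  # record only the original problem
                self.first_error = message
            raise AssertionError(self.first_error)

        def update_row(self, key):
            self._fail('tried to update row with key %r, but it does not exist' % key)

        def close(self):
            self._fail('did not expect call to "close()"')

That way, when close() blows up in the finally clause after update_row()
has already failed, the error you actually see is still the original one.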

-- Michael Chermside


[Python-Dev] __trace__? (was Re: PEP 344: Explicit vs. Implicit Chaining)

2005-05-23 Thread Phillip J. Eby
At 10:33 AM 5/21/2005 -0400, James Y Knight wrote:
>On May 20, 2005, at 6:37 PM, Phillip J. Eby wrote:
> > This only helps if you can get to a debugger.  What if you're
> > reading your web server's error log?
>
>Then you're in trouble anyways because you need the contents of some
>local to figure out what's going on, also.

Actually, this reminds me of something...  I've often found that tracebacks 
listing the source code are less than informative for developers using a 
library.  I've been thinking about creating a traceback formatter that 
would instead display more useful trace information, but not the 
super-verbose information dumped by cgitb, nor the cryptic and wasteful 
__traceback_info__ of Zope.

Specifically, I was thinking I would have statements like this:

 __trace__ = "Computing the value of %(attrName)s"

embedded in library code.  The traceback formatter would check each frame 
for a local named __trace__, and if present, use it as a format to display 
the frame's locals.  This information would replace only the source code 
line, so you'd still get line and file information in the traceback, but 
you'd see a summary of what that code was currently doing.  (If trying to 
format the trace information produces an error, the formatter should fall 
back to displaying the source line, and perhaps emit some information about 
the broken __trace__ -- maybe just display the original __trace__ string.)

As long as we're proposing traceback formatting enhancements, I'd like to 
suggest this one.  A sufficiently smart compiler+runtime could also 
probably optimize away __trace__ assignments, replacing them with a table 
similar to co_lnotab, but even without such a compiler, a __trace__ 
assignment is just a LOAD_CONST and STORE_FAST; not much overhead at all.

Anyway, judicious use of __trace__ in library code (including the standard 
library) would make tracebacks much more comprehensible.  You can think of 
them as docstrings for errors.  :)
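
For what it's worth, the formatter itself would only be a couple of dozen 
lines; here's a sketch (none of this exists anywhere yet):

    import linecache

    def format_trace_frames(tb):
        # walk the traceback; prefer a frame's __trace__ over its source line
        lines = []
        while tb is not None:
            frame = tb.tb_frame
            code = frame.f_code
            lines.append('  File "%s", line %d, in %s'
                         % (code.co_filename, tb.tb_lineno, code.co_name))
            trace = frame.f_locals.get('__trace__')
            if trace is not None:
                try:
                    lines.append('    ' + (trace % frame.f_locals))
                except Exception:
                    # broken __trace__: fall back to showing it verbatim
                    lines.append('    ' + str(trace))
            else:
                source = linecache.getline(code.co_filename, tb.tb_lineno).strip()
                if source:
                    lines.append('    ' + source)
            tb = tb.tb_next
        return '\n'.join(lines)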

Interestingly, you could perhaps implement context exceptions in terms of 
__trace__, e.g.:

    try:
        doSomething()
    except Exception, v:
        tb = v.__traceback__
        __trace__ = "Handling exception:\n%(v)s\n%(tb)s"
        # etc.

So, you'd get the formatting of the context exception embedded in the 
traceback of the error in the handler.



Re: [Python-Dev] Adventures with Decimal

2005-05-23 Thread Aahz
On Mon, May 23, 2005, Greg Ewing wrote:
>
> But I also found it interesting that, while the spec requires the
> existence of a context for each operation, it apparently *doesn't*
> mandate that it must be kept in a global variable, which is the part
> that makes me uncomfortable.
>
> Was there any debate about this choice when the Decimal module was
> being designed?

Absolutely.  First of all, as Michael Chermside pointed out, it's
actually thread-local.  But even without that, we were still prepared to
release Decimal with global context.  Look at Java: you have to specify
the context manually with every operation.  It was a critical design
criterion for Python that this be legal::

>>> x = Decimal('1.2')
>>> y = Decimal('1.4')
>>> x*y
Decimal("1.68")

IOW, constructing Decimal instances might be a bit painful, but *using*
them would be utterly simple.
-- 
Aahz ([EMAIL PROTECTED])   <*> http://www.pythoncraft.com/

"The only problem with Microsoft is they just have no taste." --Steve Jobs


[Python-Dev] AST manipulation and source code generation

2005-05-23 Thread Ka-Ping Yee
Would there be any interest in extending the compiler package with tools
for AST transformations and for emitting Python source code from ASTs?

I was experimenting with possible translations for exception chaining
and wanted to run some automated tests, so i started playing around
with the compiler package to do source-to-source transformations.
Then i started working on a way to do template-based substitution of
ASTs and a way to spit source code back out, and i'm wondering if
that might be good for experimenting with future Python features.
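
Roughly the kind of round trip i have in mind, using the 2.4 compiler
package (the parsing half works today; emitting source back out is the
missing piece):

    import compiler

    tree = compiler.parse("raise ValueError(x)")
    print(tree)
    # prints something like:
    #   Module(None, Stmt([Raise(CallFunc(Name('ValueError'), [Name('x')], ...), ...)]))

    # a transformer would rewrite nodes in this tree (say, to translate a new
    # exception-chaining form into today's syntax), and an emitter -- the part
    # that doesn't exist yet -- would turn the modified tree back into source.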

(If there's already stuff out there for doing this, let me know --
i don't intend to duplicate existing work.)


-- ?!ng