In article <[EMAIL PROTECTED]>,
 Paul Rubin <http://[EMAIL PROTECTED]> wrote:

> "Raymond Hettinger" <[EMAIL PROTECTED]> writes:
> > When writing a large suite, you quickly come to appreciate being able
> > to use assert statements with regular comparison operators, debugging
> > with normal print statements, and not writing self.assertEqual over and
> > over again.  The generative tests are especially nice.
> 
> But assert statements vanish when you turn on the optimizer.  If
> you're going to run your application with the optimizer turned on, I
> certainly hope you run your regression tests with the optimizer on.
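The vanishing-assert behavior is easy to demonstrate. A minimal sketch (the snippet and its deliberate failing assert are mine, for illustration): run the same code with and without `-O`, and the failing assert simply disappears, because under `-O` CPython compiles assert statements out entirely.

```python
import subprocess
import sys
import textwrap

# A tiny "test module" that relies on a bare assert statement.
# The assert is deliberately wrong so it should fail when checked.
snippet = textwrap.dedent("""
    def add(a, b):
        return a + b

    assert add(2, 2) == 5, "regression caught"
    print("asserts were stripped")
""")

# Run once normally and once with -O. Under -O, __debug__ is False
# and assert statements are removed at compile time.
normal = subprocess.run([sys.executable, "-c", snippet],
                        capture_output=True, text=True)
optimized = subprocess.run([sys.executable, "-O", "-c", snippet],
                           capture_output=True, text=True)

print(normal.returncode)     # nonzero: the AssertionError fires
print(optimized.returncode)  # 0: the failing assert silently vanished
```

So a test suite whose checks are plain asserts genuinely tests nothing under `-O`, which is exactly the "test what you ship" concern.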

That's an interesting thought.  In something like C++, I would never think 
of shipping anything other than the exact binary I had tested ("test what 
you ship, ship what you test").  It's relatively common for turning on 
optimization to break something in mysterious ways in C or C++.  This is 
both because many compilers have buggy optimizers, and because many 
programmers are sloppy about depending on uninitialized values.

But with something like Python (i.e., a high-level interpreted language), I've 
always assumed that turning optimization on or off would be a much safer 
operation.  It never would have occurred to me that I would need to test 
with optimization turned on and off.  Is my faith in optimization misguided?
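For what it's worth, that faith is mostly justified: CPython's `-O` and `-OO` flags change only a few well-defined things (strip asserts, set `__debug__` to False, and under `-OO` also strip docstrings) rather than performing the kind of aggressive code transformation a C++ optimizer does. A quick sketch to inspect the differences (run it under plain `python`, `python -O`, and `python -OO` to compare):

```python
import sys

def documented():
    """This docstring survives -O but is stripped under -OO."""

# sys.flags.optimize reports the optimization level: 0, 1 (-O), or 2 (-OO).
print(sys.flags.optimize)

# __debug__ is False at any optimization level >= 1,
# which is why assert statements disappear.
print(__debug__)

# Docstrings are only removed at level 2 (-OO).
print(documented.__doc__ is None)
```

The surprise in this thread is just the assert-stripping part; nothing else about the program's semantics should change.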

Of course, all of the Python I write is for internal use; I haven't yet 
been able to convince an employer that we should be shipping Python to 
customers.
-- 
http://mail.python.org/mailman/listinfo/python-list