pdpi wrote:
On Jul 7, 2:16 am, Steven D'Aprano <st...@remove-this-
cybersource.com.au> wrote:
On Mon, 06 Jul 2009 16:43:43 +0100, Tim Rowe wrote:
2009/7/4 kj <no.em...@please.post>:
Precisely. As I've stated elsewhere, this is an internal helper
function, to be called only a few times under very well-specified
conditions. The assert statements check that these conditions are as
intended. I.e. they are checks against the module writer's programming
errors.
Good for you. I'm convinced that you have used the assertion
appropriately, and the fact that so many here are unable to see that
looks to me like a good case for teaching the right use of assertions.
For what it's worth, I read assertions at the beginning of a procedure
as part of the specification of the procedure, and I use them there in
order to document the procedure. An assertion in that position is for me
a statement to the user of the procedure "it's your responsibility to
make sure that you never call this procedure in such a way as to violate
these conditions". They're part of a contract, as somebody (maybe you)
pointed out.
As somebody who works in the safety-critical domain, it's refreshing to
see somebody teaching students to think about the circumstances in which
a procedure can legitimately be called. The hostility you've received to
that idea is saddening, and indicative of why there's so much buggy
software out there.
LOL.
Maybe the reason for "so much buggy software" is that people
inappropriately use assert, thus changing the behaviour of code depending
on whether it is run with the -O flag or not.
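(As a concrete illustration of that pitfall -- just a sketch, with a made-up
function name -- run the following both normally and with python -O and watch
the check disappear:)

def set_age(age):
    # assert misused as input validation: under "python -O" this line is
    # stripped, so the bad value sails through silently
    assert age >= 0, "age must be non-negative"
    return {"age": age}

print(set_age(30))   # fine either way
print(set_age(-5))   # AssertionError normally; returns {'age': -5} under -O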
I don't know what "hostility" you're seeing. The only hostility I'm
seeing is from the OP, which is bizarre considering that he asked for
advice and we gave it. What I see is a bunch of people concerned that the
OP is teaching novices a bad habit, namely, using assert for error
checking. He claims -- angrily and over and over again -- that in his
code, the assertions should never fail. Great. Good for him for knowing
when to use assert. (...)
But he doesn't.
He asserts:
assert lo < hi
but then compares:
sense = cmp(func(hi), func(lo))
sense can't ever be anything other than 1. I actually find it amusing
that this thread got to 50 posts of raving discussion about assertions
without anyone spotting that.
That's because the assert and the comparison are unrelated to each
other. If the function is monotonically decreasing, then by definition
lo < hi would guarantee that func(lo) >= func(hi), which would yield a
sense of 0 or -1.
Trivial example of monotonically decreasing:
def func(inp):
    return 53.0 - inp
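(To see it concretely, assuming Python 2 where the built-in cmp still exists:)

def func(inp):
    return 53.0 - inp      # monotonically decreasing

lo, hi = 1.0, 10.0
sense = cmp(func(hi), func(lo))   # cmp(43.0, 52.0)
print(sense)                      # -1, so "assert lo < hi" says nothing
                                  # about the sign of sense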
Personally, I think the code is an unreadable mess, but that's mostly
because of all the micro optimizations, not the generality of it.
Here's my unoptimized, but still equally generic, version:
def _binary_search(lo, hi, func, target, epsilon):
    sense = cmp(func(hi), func(lo))
    if sense == 0:
        return None
    guess = (lo + hi) / 2.
    while abs(func(guess) - target) > epsilon:
        guess = (lo + hi) / 2.
        if func(guess) > target:
            hi = guess
        elif func(guess) < target:
            lo = guess
        elif lo == hi:
            return None
    return guess
And of course your clarified function will fail if the func is
monotonically decreasing.
I still think that rather than using sense in the loop, one should
simply swap lo and hi, and continue.
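(A minimal sketch of that swap idea, applied to a monotonically decreasing
func -- essentially what the fuller version further down does, with made-up
names and no range checking:)

def func(x):
    return 53.0 - x                  # monotonically decreasing

def bisect_swap(lo, hi, func, target, epsilon):
    if func(lo) > func(hi):          # normalize once, up front
        lo, hi = hi, lo              # lo may now be numerically larger than hi
    guess = (lo + hi) / 2.
    while abs(func(guess) - target) > epsilon:
        if func(guess) > target:
            hi = guess
        else:
            lo = guess
        guess = (lo + hi) / 2.
    return guess

print(bisect_swap(0.0, 100.0, func, 10.0, 1e-9))   # roughly 43.0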
This is a newbie course, right? A while True loop might be faster, but
it's all sorts of wrong for teaching newbies. Likewise, calculating a
midpoint as mid = (hi + lo) * .5 is an aberration in a beginner
environment. You want your students asking why you're calculating an
average, not asking why you're multiplying by 0.5. In the same vein, I
have no words for your target_plus/target_minus cleverness.
The right way to go about it, IMO, is to give them my version, let
them digest it, make all the necessary changes to it to turn it into
your (faster) version. Present benchmarks for both, then let the
readability/performance trade-off sink in. What you achieve with this
exercise is that, instead of making your students think "I have to
read that crud!?", they'll respect that ugly code is often the result
of eking out every last drop of performance from a program as you
possibly can.
(untested)
def _binary_search(lo, hi, func, target, epsilon):
    """lo, hi are floats representing the desired range of input values to func()
    func() is a function that takes a float argument, and returns a float result
    target is the desired result value of func()
    epsilon is the allowable error compared to target. If set too small,
        this function will fail by returning None
    precondition: func is monotonic over the range of floating point inputs
        from lo to hi
    return a float value between lo and hi (inclusive) which yields
        approximately target
    """
    if func(lo) > func(hi):
        lo, hi = hi, lo
    if not (func(lo) <= target <= func(hi)):
        return None        # function doesn't have target value within the input range
    guess = lo
    while abs(func(guess) - target) > epsilon:
        guess = (lo + hi) / 2.
        if func(guess) > target:
            hi = guess
        elif func(guess) < target:
            lo = guess
        elif lo == hi:
            return None    # function is too steep to have a value within epsilon
    return guess
The reason for the "too steep" condition is that, with only finite
resolution available for 'guess', epsilon can be so much smaller than
target that no representable guess lands within epsilon of it. For
example, if the slope is 45 degrees, this happens when epsilon is about
2**51 times smaller than target.
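(For what it's worth, here's what calling it looks like, plus a quick
demonstration of that resolution limit -- the numbers are only illustrative:)

func = lambda x: x * x
print(_binary_search(0.0, 10.0, func, 2.0, 1e-9))   # roughly 1.41421356, i.e. sqrt(2)

# the resolution point behind the "too steep" remark: near 2.0**52 a double
# can't even represent a change of 0.25, so an epsilon that much smaller
# than the values involved can never be satisfied
big = 2.0 ** 52
print(big + 0.25 == big)                            # True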