#3 is true for Haskell; it's just that when the same function call appears in
two different places, it is counted as two different expressions. Each separate
expression will only be evaluated once, though. That is what is really meant,
since the alternative --- i.e., no function ever being called more than once
for a given set of arguments --- would be far too cumbersome to be worth doing
in practice for any language.
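That said, if you do want per-argument caching, you can opt into it yourself by
backing the function with a lazily built data structure. A minimal sketch (the
names fibs/fib are just illustrative):

```haskell
-- Memoised Fibonacci: every result lives as a thunk in the lazily
-- built list 'fibs', so each index is computed at most once.
fibs :: [Integer]
fibs = map fib [0 ..]

fib :: Int -> Integer
fib 0 = 0
fib 1 = 1
fib n = fibs !! (n - 1) + fibs !! (n - 2)

main :: IO ()
main = print (fib 50)
```

Without the shared list, the naive recursion would take exponential time; with
it, each fib value is computed once and then reused.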
Laziness really means that if you have, say,
f x = (g 7) + x
then g 7 need only be evaluated at the first call to f, and then after that it
can be cached. In some circumstances, if we had
f x = (g 7) + x
h x = (g 7) * x
then maybe the compiler would decide not only to evaluate each (g 7) expression
once, but also that the two expressions should be merged into references to a
single shared expression. However, this is not required for laziness; the
only requirement there is that each expression separately be evaluated at
most once.
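To make "each expression is its own thunk" observable, here is a small sketch
that counts evaluations with an IORef. The counts assume GHC's default,
unoptimised evaluation; with -O, common-subexpression elimination or full
laziness could change them:

```haskell
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- Counter bumped each time g's body is actually forced.
evalCount :: IORef Int
evalCount = unsafePerformIO (newIORef 0)
{-# NOINLINE evalCount #-}

-- Instrumented function: records an evaluation, then squares.
g :: Int -> Int
g n = unsafePerformIO (modifyIORef' evalCount (+ 1)) `seq` n * n
{-# NOINLINE g #-}

main :: IO ()
main = do
  -- Two textually distinct expressions build two separate thunks,
  -- so g's body runs twice here.
  print (g 7 + g 7)
  readIORef evalCount >>= print
  -- A let-bound name is one shared thunk: both uses of y force
  -- it only once, so the counter goes up by just one.
  let y = g 7
  print (y + y)
  readIORef evalCount >>= print
```

Run unoptimised (e.g. with runghc), this prints 98, 2, 98, 3: the repeated
expression evaluates g twice, the shared thunk only once.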
Cheers,
Greg
On Dec 15, 2009, at 9:58 PM, michael rice wrote:
> Hi all,
>
> I think this (#3 below) is where I got the idea:
>
> http://en.wikipedia.org/wiki/Lazy_evaluation
>
> Excerpt:
>
> ---------------
>
> Lazy evaluation refers to how expressions are evaluated when they are passed
> as arguments to functions and entails the following three points:[1]
>
> 1. The expression is only evaluated if the result is required by the
> calling function, called delayed evaluation.[2]
> 2. The expression is only evaluated to the extent that is required by the
> calling function, called short-circuit evaluation.
> 3. The expression is never evaluated more than once, called
> applicative-order evaluation.[3]
>
> ---------------
>
> So, I guess #3 doesn't apply to Haskell, or maybe I just misunderstood the
> meaning of the statement. I assumed that if f(p) = q (by some calculation)
> then that calculation would be replaced by q so the next time the function
> was called it could just return q, as occurs in memoization.
>
> Michael
>
>
>
> --- On Tue, 12/15/09, Gregory Crosswhite <[email protected]> wrote:
>
> From: Gregory Crosswhite <[email protected]>
> Subject: Re: [Haskell-cafe] Haskell and "memoization"
> To: "Daniel Fischer" <[email protected]>
> Cc: [email protected]
> Date: Tuesday, December 15, 2009, 11:47 PM
>
> On Dec 15, 2009, at 8:28 PM, Daniel Fischer wrote:
>
> > Am Mittwoch 16 Dezember 2009 05:08:39 schrieb Gregory Crosswhite:
> >
> > Not even then, necessarily. And it's not always a good idea.
> >
> > f k = [1 .. 20^k]
> >
>
> You raise a really good point here. One can force sharing, as I understand
> it, by using a let clause:
>
> n =
> let xs = f 20
> in length (xs ++ xs)
>
> If I understand correctly, this should cause xs to be evaluated once and
> then retained in memory until the full length is computed, which in this
> case is obviously undesirable behavior.
>
> Cheers,
> Greg
>
> _______________________________________________
> Haskell-Cafe mailing list
> [email protected]
> http://www.haskell.org/mailman/listinfo/haskell-cafe
>