Dear Gregg,

You wrote, "Just think about how one reads a mathematical text - you need not 
actually compute subformulae or even analyze them logically in order to work 
with them." I hate to have to say this, but do you realize that algebra is 
concerned with functions among other things and it is the fact that these 
expressions are functions and not that they are algebraic that gives them this 
property? Functional programming is not a misnomer. It is called functional 
programming because you are quite literally working with functions.
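
To make this concrete, here is a rough Haskell sketch (the names price and 
total are mine, purely for illustration): because they are pure functions, 
the expressions built from them can be rewritten by substituting equals for 
equals without ever running anything.

    -- Pure functions: each expression stands for a value determined by
    -- its inputs and nothing else, so it can be manipulated algebraically.
    price :: Double -> Double
    price x = x * 1.08

    total :: [Double] -> Double
    total = sum . map price

    -- One can reason about "total [10, 20]" without computing it:
    --   total [10, 20] = sum (map price [10, 20])
    --                  = price 10 + price 20
    --                  = 10.8 + 21.6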

Functions have a profound simplifying effect. The truth is, as Haskell has 
demonstrated, that much of the complexity in computer programming is artificial 
and does not need to be there. This makes proving correctness a lot easier, but 
that is not the only motivation behind Haskell and other functional programming 
languages. The push has been to get performance up to the point where the 
language may be regarded as respectable, a problem that has long dogged 
artificial intelligence languages. It is easier to make a functional language 
efficient than a logic language.

The reason for this is that you get the opportunity to work out more of the 
details than you do in a logic language. In assembler you get to work out every 
blessed detail to your heart's content, or until you drop; that is why 
assembler has a reputation for being fast. It is the same reason why functional 
languages are, comparatively speaking, fast.


From: Gregg Reynolds 
Sent: Thursday, December 10, 2009 09:39
To: John D. Earle 
Cc: Haskell Cafe 
Subject: Re: [Haskell-cafe] Re: Why?


On Thu, Dec 10, 2009 at 9:13 AM, John D. Earle <johndea...@cox.net> wrote:

  Most of the discussion centers on the benefits of functional programming and 
laziness. Haskell is not merely a lazy functional language. It is a pure lazy 
functional language. I may need to explain what laziness is. Laziness is where 
you work through the logic in its entirety before acting on the result. In 
strict evaluation the logic is worked out in parallel with execution, which 
doesn't make complete sense, but it does allow for an architecture that is 
close to the machine.

Just to roil the waters a bit: no programming language can ever hope to be 
"purely functional", for the simple reason that real computation (i.e. 
computation involving IO, interactivity) cannot be functional.  "Functional 
programming" is an unfortunate misnomer.  On the other hand, languages can be 
algebraic.  The whole point is provability, not function-ness.
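
To illustrate what I mean by "algebraic": here is the standard map-fusion law, 
sketched in Haskell (I've written out the usual definition of map under the 
name myMap just so the proof has something to refer to).

    myMap :: (a -> b) -> [a] -> [b]
    myMap f []     = []
    myMap f (x:xs) = f x : myMap f xs

    -- Map fusion, provable purely by equational reasoning:
    --   myMap f . myMap g  =  myMap (f . g)
    --
    -- Nil case:
    --   myMap f (myMap g [])     = myMap f []  = []
    --   myMap (f . g) []         = []
    -- Cons case:
    --   myMap f (myMap g (x:xs)) = f (g x) : myMap f (myMap g xs)
    --   myMap (f . g) (x:xs)     = f (g x) : myMap (f . g) xs
    --   and the tails agree by the induction hypothesis.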

More generally: judging by the many competing proposals addressing the issue, 
how to think formally about real computation (just google stuff like 
hypercomputation, interactive computation, etc.; Jack Copeland has lots of 
interesting stuff on this) is still an open question.  Soare has three 
essential papers on the subject.  I guess the moral of the story is that the 
concepts and the terminology are both still unstable, so lots of terms in 
common use are rather ill-defined and misleading (e.g. "functional 
programming").

Laziness is just a matter of how one attaches an actual computation to an 
expression; a better term would be something like "delayed evaluation" or 
"just-in-time computation".  You don't have to work through any logic to have 
laziness.  Just think about how one reads a mathematical text - you need not 
actually compute subformulae or even analyze them logically in order to work 
with them.  This applies right down to expressions like "2+3" - one probably 
would compute "5" on reading that, but what about "12324/8353"?  You'd leave 
the computation until you absolutely had to do it - i.e. one would probably try 
to eliminate it algebraically first.
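
In Haskell terms (just a GHCi-flavoured sketch of that point):

    -- The division is bound to a thunk; nothing is computed yet.
    ghci> let q = 12324 `div` 8353
    ghci> let xs = [q, q + 1, q + 2]

    -- length never looks inside the elements, so the division is
    -- still not performed.
    ghci> length xs
    3

    -- Only when the value itself is demanded does the division
    -- actually happen.
    ghci> head xs
    1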

-gregg