Hello! First, thank you for the nice explanation in my absence. I couldn't have done it better. Let me just explain where I see things a little differently.
Tobia Conforto <tobia.confo...@gmail.com> writes:

> Even the classic example of a fork (+/÷≡) is harder to read than its
> functional version {(+/⍵)÷≡⍵} and it goes downhill from there with
> longer trains. But maybe it's just me being unfamiliar with the
> syntax.

Most of my experience with tacit programming comes from Haskell. When I was first confronted with it, it was a bit confusing, and I had the impression that it led to obfuscation. But as with every tool, it depends on how you use it. There are, in my opinion, situations where the point-free style helps thinking in terms of data flow, looking at code as pipelines. I have worked on a few university projects (an NLP dependency parser, a tagger, text classification using a new approach, etc.) together with a few fellow students, where we used this style in Haskell without having problems understanding each other's code. We agreed that it improved readability.

There is, as you know, more to it than hooks and forks. To start with simple things, let's just compare two summing functions:

    sum1 ← {+/⍵}
    sum2 ← +/

I don't see how the first version should be more readable than the second. Introducing the lambda and the point ⍵ is redundant and in a way hides that all we want is to give a name to the function +/. (Whether one would even want to name such a trivial function in APL is left aside; I just pick the most basic examples for now.) Sure, the first one does the job, but isn't the second one clearer? Isn't the lambda just boilerplate? I, for one, prefer the eta-reduced version.

Having not much experience with trains in APL, but having used an analogue of forks in Haskell (with an explicit operator) from time to time (not very often, only when it actually fit), I assume that one starts to see these patterns after getting used to them.

    avg ← {(+/⍵)÷≢⍵}
    avg ← +/÷≢

Now, is the second version an improvement? At the moment, with my limited experience in APL and trains, I'm not sure.
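For what it's worth, the fork pattern can be written in Haskell with explicit operators, via the Applicative instance for functions. A sketch (the names avgPointed and avgFree are mine, not from any of the projects mentioned):

```haskell
-- Pointed version, analogous to avg ← {(+/⍵)÷≢⍵}
avgPointed :: [Double] -> Double
avgPointed xs = sum xs / fromIntegral (length xs)

-- Point-free version, analogous to the train avg ← +/÷≢:
-- (/) combines the results of applying sum and (fromIntegral . length)
-- to the same argument, using the Applicative instance for ((->) r).
avgFree :: [Double] -> Double
avgFree = (/) <$> sum <*> (fromIntegral . length)

main :: IO ()
main = print (avgPointed [1,2,3,4], avgFree [1,2,3,4])  -- both give 2.5
```

Note that unlike the APL train, the Haskell version spells out the combining operators (<$> and <*>), which is exactly the kind of explicit marker I find myself missing in trains.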
Intuitively, I don't really like the fact that there is no explicit operator to clearly indicate what's happening. Then again, intuitions change with experience, and I can imagine getting used to the pattern, as long as it's used sensibly.

The above-mentioned eta-reduction, taken to extremes, can lead to obfuscated code. Transforming perfectly fine code into an unreadable mess (even using a command-line tool aptly named "pointfree") is a pastime of some Haskellers, just like the dreaded one-liners are for some APLers.

But to give an impression of how point-free style can actually help readability in less trivial examples, I would like to show some Haskell code, since that's where my experience with that style comes from:

    readDoubles = map (fst . fromMaybe (error "blah!") . readDouble) . S.words

The separate dots are function composition. This should be pretty transparent, even without Haskell knowledge. It's just a pipeline transforming data: it splits the input into words, reads a double from each if possible (i.e. maybe returns a double), and complains if that failed. Function composition is just the welding. (I shortened the error message here.) Ignore the fst; it just discards the remaining string left over from the double parse. You don't have to understand every detail; I just want to give an impression of how such code looks, and show that the gist of it is easy to grasp.

Now, let's introduce points:

    readDoubles str = map (fst . fromMaybe (error "blah!") . readDouble) (S.words str)

Introducing the argument explicitly didn't add any relevant information. That the code reads from a string was already clear from its name, the functionality of S.words, and its type. Let's go on ("\" is lambda in Haskell, so \x -> x is just {⍵}):

    readDoubles str = map ((\(number, restOfString) -> number)
                           . (\parseResult -> fromMaybe (error "blah!") parseResult)
                           . (\word -> readDouble word))
                          (S.words str)

Now, after we have introduced the lambdas, did we gain anything from it?
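(As an aside: the snippet above relies on bytestring helpers, S.words and readDouble, so it won't run on its own. A self-contained sketch using only the Prelude and Data.Maybe, with a reads-based readDouble of my own standing in for the original parser:

```haskell
import Data.Maybe (fromMaybe, listToMaybe)

-- Stand-in for the bytestring-based parser in the original code:
-- returns the parsed double and the leftover string, if the parse succeeds.
readDouble :: String -> Maybe (Double, String)
readDouble = listToMaybe . reads

-- The same pipeline as above, over plain Strings.
readDoubles :: String -> [Double]
readDoubles = map (fst . fromMaybe (error "parse failed") . readDouble) . words

main :: IO ()
main = print (readDoubles "1.5 2 3.25")  -- [1.5,2.0,3.25]
```

End of aside.)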
Sure, the arguments are now named, but wasn't the code already clear in the first place? That a function called S.words returns words is not that hard to guess, and so on. If anything, I imagine it to be even harder to read for non-Haskellers. Besides, APL lambdas wouldn't even allow us to give meaningful names here.

But, of course, I cheated. Nobody would ever do it this way; instead one would just put the mapping function in a lambda and convert composition to application:

    readDoubles str = map (\word -> fst (fromMaybe (error "blah!") (readDouble word))) (S.words str)

Most Haskellers would get rid of the nested parentheses and probably write it as:

    readDoubles str = map (\word -> fst $ fromMaybe (error "blah!") $ readDouble word) (S.words str)

Now, that's not so bad, at least in terms of length. But reading and writing this code puts a bit less emphasis on thinking in terms of a data-transformation pipeline assembled from functional building blocks. You think about it more in terms of given data and applications: "Take a string and split it. Then map a function over the words that reads doubles from them, complains if something goes wrong, and discards the rest." The difference may seem subtle, and maybe it is, but I have come to like the pipeline-oriented approach.

Excuse my use of Haskell examples here. (By the way, they are untested except the first, which came from actual code in one of the projects.) I'm not exactly sure how much of my experience with it carries over to APL, but it should explain why I don't share your opinion that point-free style is generally pointless.

> Two juxtaposed functions (f g) are called a "hook" and if I'm not
> mistaken, they behave like the classical jot composition f∘g which in
> most interpreters means {⍺ f g⍵}. Compare it to "hoof" composition f⍥g
> as implemented in Nars2000 (or "paw" f⍤g in Sharp APL) which means
> {(g⍺) f g⍵}, leaving aside considerations about rank, which complicate
> matters.
> For completeness, "hoof" f⍥g in Sharp APL means something
> else entirely: {f ⍺g⍵}. Notice how the explicit functional syntax
> {...} is always the clearest and less ambiguous one.

Actually, I think that at least the examples using explicit operators are pretty clear once their definitions are known. To me, {⍺ f g⍵} isn't clearer than f∘g. It just introduces redundant points, which are not named descriptively anyway (even if that were possible in APL, it's often hard to find descriptive names for such arguments). Besides, function composition is a familiar notion, used in mathematics and other programming languages, and in my opinion worth its own operator.

> Three functions (f g h) are called a "fork" and behave as {(⍺f⍵) g
> ⍺h⍵}. Of all these, the fork is the only one that has a basis in
> traditional usage, where functions are sometimes applied between other
> functions in a kind of "shorthand" to yield new ones: f + g
> traditionally means the function {(⍺f⍵) + ⍺g⍵}. This isn't necessarily
> a good thing though, given that APL was invented to overcome the
> idiosyncrasies of traditional math notation.

That seems to be a valid point, but as I said above, I don't want to commit myself to a strong opinion about this until I have some more experience with trains under my belt.

I also agree fully with everything else you said. Support for tacit programming might be the icing on the cake for some (others may hate it), but proper lambdas are a pretty essential feature.

Regards,
Daniel