On 1/22/19 11:08 PM, George Neuner wrote:
> On 1/22/2019 2:31 PM, Zelphir Kaltstahl wrote:
>> If the terms procedures and functions in computing have no
>> significant difference, then why use two terms for the same thing, of
>> which one is already used in mathematics, enabling confusion to appear?
>>
>> This would make a fine argument for not using the word "function" for
>> computing at all and keep to using the term "procedure" all the way.
>
> That's what many older languages did - Fortran and Lisp being notable
> exceptions, but they can be forgiven because they were the first
> (abstraction able) programming languages, and their early users were
> people who knew and understood the difference between computing and
> mathematics.
>
> In the world of syntax design, wholesale use of the term "function" -
> *relatively* - is a new thing.
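Side note for anyone reading along who is unsure what that difference
actually looks like in code: here is a minimal sketch in Racket (the
names are made up, of course). Both definitions below are "procedures",
but only the first one behaves like a mathematical function.

  #lang racket

  ;; Behaves like a mathematical function: same input, same output,
  ;; and nothing else happens.
  (define (square x)
    (* x x))

  ;; Also a procedure, but not a function in the mathematical sense:
  ;; it changes state outside itself, so repeated calls give
  ;; different results.
  (define counter 0)
  (define (next-id!)
    (set! counter (+ counter 1))
    counter)

  (square 3)  ; => 9, every time
  (next-id!)  ; => 1
  (next-id!)  ; => 2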
Hmmm. In general, I tend to "vote" for using the most precise terms, even
if people do not understand them, in the hope of making them wonder and
maybe ask themselves why someone would use that term instead of another:
"Why are they talking about procedures all the time, instead of
functions? Is there a difference?" Then maybe they will find out or ask.
It also avoids contributing to the impression that these are the same as
mathematical functions. I think using the term "function" rather hides
that difference and does not inspire thinking about it. The recorded SICP
lectures, for example, made me more conscious about my usage of the word
"function", as did some other FP articles and language tutorials. I ask
myself: shouldn't we strive to understand just as much as the people
before us, even if that means getting used to their terminology, since
they knew what they were talking about?

>> I disagree on one more point. It is not necessary to always remember
>> the low level character of code running on a machine, if the language
>> we are using abstracts it well and guarantees us, that there will not
>> be strange effects in all the relevant cases for our programs.
>
> But a language CAN'T guarantee that there will not be effects -
> "strange" or otherwise. Spectre attacks work even against code that
> is correct WRT its programming language because languages either are
> ignorant of - or are intentionally ignoring - side effects of those
> fiddly little machine operations.

That is a connection I do not fully understand. A Spectre attack is
something designed to break the system, or to step outside the previously
known bounds of what some code could do. Can such a thing happen
unintentionally, when we write useful / constructive / well-meaning
programs? I guess theoretically, but it seems very, very unlikely. I get
that there is a chance of strange effects when the CPU is buggy in this
way (Intel cutting corners; I am still waiting for my free-of-charge
replacement CPU, free from such bugs, to be inserted into my machines by
Intel :D), but that problem is on a lower layer of abstraction. If the
language we use is in itself correct, isn't it then up to the lower
layers to get things right? Afaik Spectre (or was that Meltdown?) is only
possible because of overly optimistic branch prediction and speculative
execution (I am not an expert on such low-level details at all, just
going by what I read online). Couldn't we construct hardware free of such
mistakes? Or is there something fundamental that will not allow that, so
that we can never have a language guaranteeing such things?

>> What I am relating to is:
>>
>> > The computer abstraction of "applying a function" can only be
>> > stretched so far. Those fiddly little "steps" that approximate the
>> > function can't legitimately be ignored: they consumed time and
>> > energy, and [barring bugs] in the end they gave you only an
>> > approximation - not an answer
>>
>> Yes it can only be stretched that far (not infinite resources,
>> precision etc.), however, there are many situations, where I think
>> the stepwise nature can be legitimately ignored. I would say, if a
>> language is well designed and the interpreter / compiler and things
>> in between that run it are proven to be correct, I do not see, why we
>> should not ignore the stepwise nature of code execution. Why would it
>> be useful to look at that, provided our program is sufficiently
>> performant and does what we want? Furthermore they often do give me
>> an answer.
>> It may not always be the case for numbers, because of precision and
>> such things, but think of things like joining together some strings,
>> some constructs well defined in the language we use and the result of
>> such procedure call would be an answer and not only an approximation.
>>
>> I don't think it's a good idea to conflate the meaning of "function",
>> from my current level of experience at least ;)
>
> The problem is that there are far more situations where the
> abstraction leaks (or fails utterly) than there are situations where
> the abstraction holds. That's why programming should be considered
> engineering rather than science. [In truth, I think programming is
> more an art than an engineering discipline, but that's a different
> discussion.]

OK, granted, it is easier to write something that has side effects or
otherwise leaks the "mathematical function" abstraction (why call it a
function then?! :D). I also think computer programming is all of the
things you mentioned: science, engineering and art : )

> As you noted, a major problem area is numerics. But it's more of a
> problem than you realize. The average programmer today has had no
> math beyond high school, and consequently most uses of floating point
> are unstable and buggy. There is far too much code that works by
> happenstance rather than by careful design.

I will definitely believe that! Although I have to admit: those
floating-point numbers _are_ tricky to handle. Maybe this awareness in
itself is already worth something, though.

> In the past, it was true that more programs worked with strings (or
> symbols) than with numbers, but that no longer is the case. Today,
> numerically dominated programs vastly outnumber symbolically dominated
> ones.
>
> There are quite a few CS scholars who think the default for safe
> programming languages should be arbitrary precision, decimal
> arithmetic. Rational arithmetic in Racket (and Scheme and Lisp)
> similarly is safe, but the fractional representation is off-putting to
> many people. Something more like BigFloat would be a more palatable
> choice for the masses.

I actually like the way Racket and some other languages keep numbers as
exact rationals until imprecise numbers are needed or explicitly asked
for. Why lose precision when not asked to? They are great! I am a fan of
simply having rationals in a language. When I have to use other
languages, I often wish I had rational numbers like I do in Racket. One
source of hard-to-track-down errors less, if you ask me.

Thanks for the interesting points mentioned.
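PS: A small example of the kind of thing I mean, at the Racket REPL.
Just a sketch; the printed results are how I understand Racket's exact
and inexact numbers to behave:

  ;; exact rationals: no precision is lost until we ask for it
  (+ 1/10 2/10)            ; => 3/10
  (= (+ 1/10 2/10) 3/10)   ; => #t

  ;; the same computation with floats leaks the abstraction
  (+ 0.1 0.2)              ; => 0.30000000000000004
  (= (+ 0.1 0.2) 0.3)      ; => #f

  ;; we only become inexact when explicitly asking for it
  (exact->inexact 3/10)    ; => 0.3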