On Fri, Mar 31, 2017 at 6:19 PM Michael Jones <michael.jo...@gmail.com> wrote:
> There is part of the topic that has always been slightly beyond my grasp.
> (Maybe I do understand... but just lack absolute certainty.) Maybe others
> know the answer...
>
> In a template system (which is what I prefer, but that's not the point of
> this email) we have the notion of the TYPE(s) being a formal argument. We
> presume that the code will compile or fail based on the suitability of the
> instantiated type. That is, a templated Min would fail on the comparison
> "<" if the TYPE was "Map[something]something." Call that a lexical fail.

You can implement a parameterized type as a universal. This means you can take a type, T, as a parameter, but you are specifically not allowed to do anything with T other than pass it around. In particular, you are not allowed to call any method on T. The notion is useful in a lot of situations. A container of type Map<K, V> can parameterize over V. That is, you can have the type 'forall V . Map<int, V>' for a concrete type K (which is 'int' in this instance). Another example is a function like 'length : forall A . []A -> int', which can count the "size" of a slice for any slice element type.

Once you require additional functions, you must supply what amounts to a signature. That is, you must make sure that whenever you introduce your type parameter T, you also introduce a function 'less' over that parameter T. Different languages use different notions, but OCaml uses this one:

    module type ORD = sig
      type t
      val less : t -> t -> bool
    end

We can then implement different modules which "ascribe" to the above module type and "plug them in", specializing our implementation. C++ uses a slightly different notation, but you can achieve the same thing. Haskell uses a concept called Type Classes, Scala uses Traits, and Swift probably uses Protocols. Different notions which let you achieve what you want. But none of these notions verifies any property of your 'less' function.
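For the Go readers here, the idea of "introduce T together with a 'less' over T" can be sketched in today's Go by packaging the comparison as an interface value and writing Min purely against it. This is just an illustrative sketch, not a proposal, and all the names (Ord, IntAsc, Min) are mine; without type parameters the element type is fixed to int:

```go
package main

import "fmt"

// Ord plays the role of the ORD module type above: it packages the
// 'less' operation. (Element type fixed to int for illustration,
// since Go currently has no type parameters.)
type Ord interface {
	Less(a, b int) bool
}

// IntAsc is one "module" ascribing to the signature.
type IntAsc struct{}

func (IntAsc) Less(a, b int) bool { return a < b }

// Min is written purely against the signature; it never looks
// inside the Less implementation it is handed.
func Min(ord Ord, a, b int) int {
	if ord.Less(a, b) {
		return a
	}
	return b
}

func main() {
	fmt.Println(Min(IntAsc{}, 3, 2)) // prints 2
}
```

Plugging in a different Ord value specializes Min, just as ascribing a different module to ORD would.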
We should obviously require that our 'less' function is total, which amounts to checking the following 4 properties:

    less_refl  : forall X . less X X
    less_anti  : forall X Y . less X Y && less Y X => X = Y
    less_trans : forall X Y Z . less X Y && less Y Z => less X Z
    less_comp  : forall X Y . less X Y || less Y X

Some programming languages and proof assistants allow you to require that such rules be true for a parameterized type. You are simply, as a programmer, tasked with providing evidence for the truth of the above 4 rules. It does slow down the programmer, however, often by a factor of 30 or more.

A slightly simpler solution, which I often use, is to write down the above rules as QuickCheck properties and then verify them by trying out random inputs from a generator. For instance by writing:

    -module(less_total).

    less(X, Y) -> X < Y.

    prop_less_trans() ->
        ?FORALL({X, Y, Z}, {real(), real(), real()},
            ?IMPLIES(less(X, Y) andalso less(Y, Z),
                less(X, Z))).

And then running 'eqc:module({testing_time, 30}, less_total)', which generates as many random test cases as possible in a 30-second time frame (usually in the millions). If the 'real()' generator periodically generates infs and nans (it should!), then we can eventually find the problem you mention.

In my experience, it is currently feasible to require randomized test cases for your production code, if the code requires high amounts of correctness. Some algorithms, PAXOS-variants I'm looking at you, are almost impossible to get right unless you approach them semi-formally. But requiring such rigor of every piece of code is futile. Not all code requires production quality, and much code is experimental for other reasons. Also, it would severely limit the number of programmers who could write code---and I think we need more programmers, not fewer.

The key insight is that you are plugging in your T and your Less under the assumption that they work. That is, you are looking at your T and Less as being part of the axioms of your system.
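The same kind of randomized check can be sketched in plain Go with a hand-rolled generator instead of eqc (this is my own throwaway loop, not a real QuickCheck; all names are hypothetical). With '<' on float64, a generator that emits NaNs finds exactly the float problem: NaN is unordered with everything, so less_comp fails, while less_trans happens to survive:

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
)

func less(x, y float64) bool { return x < y }

// randFloat occasionally yields NaN and infinities, mimicking a
// 'real()' generator that covers the nasty corners of float64.
func randFloat(r *rand.Rand) float64 {
	switch r.Intn(10) {
	case 0:
		return math.NaN()
	case 1:
		return math.Inf(1)
	case 2:
		return math.Inf(-1)
	default:
		return r.NormFloat64()
	}
}

func main() {
	r := rand.New(rand.NewSource(1))
	transFails, compFails := 0, 0
	for i := 0; i < 100000; i++ {
		x, y, z := randFloat(r), randFloat(r), randFloat(r)
		// less_trans: less(x,y) && less(y,z) => less(x,z)
		if less(x, y) && less(y, z) && !less(x, z) {
			transFails++
		}
		// less_comp (adapted to strict '<'): less(x,y) || less(y,x) || x == y
		// NaN compares false to everything, so NaN inputs violate this.
		if !less(x, y) && !less(y, x) && x != y {
			compFails++
		}
	}
	fmt.Println("trans failures:", transFails)
	fmt.Println("comp failures found:", compFails > 0)
}
```

Running it reports zero transitivity failures but flags completeness failures as soon as a NaN is drawn, i.e. '<' on float64 is not a total order.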
You can then write your sorter under those axioms. The reasons you want to decouple the notion of T and Less from the rest of the code are manifold:

* efficiency: you can plug in a better implementation later
* flexibility: you can plug in another implementation later
* correctness: your proof is not required to "dig into" the details of the T and Less implementation.

So in practice, we are just programming under assumptions about the underlying correctness properties. And because we are rather bad as computer scientists, we don't require the necessary rigor of the parts we are using. Mathematicians are in the same boat, but at least they tend to build upon things which have been proven correct, so they are vastly better off.
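To make the decoupling concrete, here is a small Go sketch (names mine) of a sorter written only against a plugged-in Less; swapping the comparison swaps the behavior without the sorter ever digging into its details:

```go
package main

import (
	"fmt"
	"sort"
)

// sortWith sorts xs under whatever 'less' is plugged in. The sorter
// assumes only the total-order axioms; it never inspects the
// implementation of less.
func sortWith(xs []int, less func(a, b int) bool) {
	sort.Slice(xs, func(i, j int) bool { return less(xs[i], xs[j]) })
}

func main() {
	asc := func(a, b int) bool { return a < b }
	desc := func(a, b int) bool { return a > b }

	xs := []int{3, 1, 2}
	sortWith(xs, asc)
	fmt.Println(xs) // [1 2 3]

	// Plug in another implementation later, unchanged sorter:
	sortWith(xs, desc)
	fmt.Println(xs) // [3 2 1]
}
```

If desc violated the axioms (say, a non-transitive comparison), sortWith would still run, but its output would be meaningless; that is exactly the sense in which T and Less are axioms of the program.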