Chris Smith wrote:

> > Easy, any statically typed language is not latently typed.
>
> I'm actually not sure I agree with this at all.  I believe that
> reference values in Java may be said to be latently typed.
> Practically all class-based OO languages are subject to similar
> consideration, as it turns out.
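(To make that concrete -- my illustration, not Chris's: every Java
reference value carries a class tag which the runtime consults quite
independently of anything the static checker established.

    Object o = "hello";        // static type Object, latent tag String
    Integer n = (Integer) o;   // accepted by the static checker, but
                               // the runtime inspects the latent tag
                               // and throws ClassCastException

The "latent" type information is real and operational, whatever the
static types say.)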
Quite probably true of GC-ed statically typed languages in general, at
least up to a point (and provided you are not using something like a
tagless ML implementation).  I think Rob is assuming a rather too
specific implementation of statically typed languages.

> I'm unsure whether to consider explicitly stored array lengths, which
> are present in most statically typed languages, to be part of a "type"
> in this sense or not.

If I understand your position correctly, wouldn't you be pretty much
forced to reject the idea of the length of a Java array being part of
its type?  If you want to keep the word "type" bound to the idea of
static analysis, then -- since Java doesn't perform any size-related
static analysis -- the size of a Java array cannot be part of its type.

That's assuming that you would want to keep the notion of "type"
connected to the actual type analysis performed by the language in
question.  Perhaps you would prefer to loosen that and consider a
different (hypothetical) language (perhaps producing identical
bytecode) which does do compile-time size analysis.

But then you get into an area where you cannot talk of the type of a
value (or variable) without relating it to the specific type system
under discussion.  Personally, I would be quite happy to go there -- I
dislike the idea that a value has a specific inherent type.

It would be interesting to see what a language designed specifically
to support user-defined, pluggable, and perhaps composable, type
systems would look like.  Presumably the syntax and "base" semantics
would be very simple, clean, and unrestricted (like Lisp, Smalltalk,
or Forth -- not that I'm convinced that any of those would be ideal
for this), with a defined result for any possible sequence of
operations.  The type-system(s) used for a particular run of the
interpreter (or compiler) would effectively reduce the space of
possible sequences.  For instance, one could have a type system which
/only/ forbade dereferencing null, or another with the job of ensuring
that mutability restrictions were respected, or a third which
implemented access control...

But then, I don't see a semantically critical distinction between such
space reduction being done at compile time vs. at runtime.  Doing it
at compile time could be seen as an optimisation of sorts (with
downsides to do with early binding, etc.).  That's particularly clear
if the static analysis is /almost/ able to prove that <some sequence>
is legal (by its own rules) but has to make certain assumptions in
order to construct the proof.  In such a case the compiler might
insert a few runtime checks to ensure that its assumptions were valid,
but do most of its checking statically.

There would /be/ a distinction between static and dynamic checks in
such a system, and it would be an important distinction, but not
nearly as important as the distinctions between the different type
systems.  Indeed I can imagine categorising type systems by /whether/
(or to what extent) a tractable static implementation exists.

    -- chris

P.S. Apologies, Chris, btw, for dropping out of a conversation we were
having on this subject a little while ago -- I've now said everything
that I /would/ have said in reply to your last post if I'd got around
to it in time...
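P.P.S. For anyone who wants the "pluggable checkers" idea in code: a
deliberately toy sketch, in Java for want of anything better.  All the
names and the operation encoding are mine and hypothetical -- this is
an illustration of the idea, not a real design.

    import java.util.List;

    // A toy "program" is just a sequence of operation names; a
    // pluggable type system is a veto over such sequences.
    interface Checker {
        boolean permits(List<String> ops);
    }

    // One such system: it does nothing /except/ forbid dereferencing
    // a value known to be null.
    class NoNullDeref implements Checker {
        public boolean permits(List<String> ops) {
            boolean nullOnTop = false;
            for (String op : ops) {
                if (op.equals("loadNull")) nullOnTop = true;
                else if (op.equals("deref") && nullOnTop) return false;
                else nullOnTop = false;
            }
            return true;
        }
    }

    class Demo {
        public static void main(String[] args) {
            List<String> program = List.of("loadNull", "deref");
            Checker checker = new NoNullDeref();
            // "Compile time": vet the whole sequence before running it.
            if (!checker.permits(program)) {
                System.out.println("rejected statically");
                return;
            }
            // ... otherwise execute the operations here.
        }
    }

The same checker could equally well be consulted on each prefix of the
sequence as it executes, rejecting just before the offending step --
which is exactly the static/dynamic distinction reduced to a choice
about /when/ the space of sequences gets cut down.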