On Thu, 05 Jul 2018 16:09:52 +0200, Antoon Pardon wrote:

>> This is not an innovation of Mypy. It's how type inference is
>> supposed to work. If a particular type checker doesn't do that, it is
>> doing it wrong.
>
> That is how type interference works in languages that have some kind
> of static typing like Haskell.
No, that's how type inference works in dynamically typed languages like
Python as well. See Mypy for an example. Do you have a counter-example?
I'm happy to be corrected by facts, but not by suppositions about
hypothetical type-checkers which might someday exist but don't yet.

> Sure if mypy want to impose something like that it can, but you can't
> interfere something like that for python programs in general.

Naturally -- that's why type-checking remains opt-in. Nobody,
especially not me, says that if you see x = 3 in a random Python
program, that means x must be an int everywhere in that program. That
would be a foolish thing to say.

But if you opt in to using a type-checker, the assumption is that you
are writing in a less dynamic, more static style that will actually get
some advantage from static type checking. If you're not, then you're
just annoying yourself, wasting your time, and you won't enjoy the
experience one bit.

As many people -- including you, if I remember correctly -- have often
pointed out, the amount of dynamism Python allows is a lot more than
the amount most programs typically use. Most of us, most of the time,
treat variables as mostly statically typed. Reusing a variable for
something completely unrelated is mildly frowned upon, and in practice
most variables are only used for *one* thing during their lifetime.

Python remains a fundamentally dynamically typed language, which a
purely static type checker (whether it has type inference or not)
cannot cope with. What it needs is gradual typing: a type system
capable of dealing with mixed code, where some parts are written in a
static-like style and other parts remain fully dynamic.

Mypy makes pretty conservative decisions about which parts of your code
to check. (E.g. it treats functions as duck-typed unless you explicitly
annotate them.) But when it does infer types, it cannot always infer
the type you, the programmer, intended, often because there is
insufficient information available:

x = []                   # we can infer that x is a list, but a list of what?
x.append("hello world")  # allowed, or a type error?

And if the type checker turns out to make an overly strict inference
and you want to overrule it, you can always annotate the variable as
Any.

Type checking algorithms, whether static or gradual, are predicated on
the assumption that the type of a variable is consistent over the
lifetime of that variable. (By "type" I include unions, optional types,
etc. "A float or a string" is considered a type for these purposes. So
is "anything at all".) That's the only reasonable assumption to make
for static analysis.

I'll grant you that a sufficiently clever type checking algorithm could
track changes through time. Proof of concept is that we humans can do
so, albeit imperfectly and often poorly, so it's certainly possible.
But the way we do it is by more or less running a simulated Python
interpreter in our brain, tracking the types of variables as they
change.

So *in principle* there could be a type-checker which does that, but as
far as I know there isn't such a beast, and nobody has a clue how to
build one with any sort of reliability. I doubt that it is possible,
since you would surely need runtime information not available to a
source-code static analyser. We humans manage it by focusing on only a
small part of the program at once, and using heuristics to guess what
the likely runtime information could be. That we're not very good at it
is proved by the number of bugs we let slip through.
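To make those points concrete, here is a small sketch of how they
typically play out under Mypy. The function and variable names are made
up for illustration, the lines marked "flagged" are deliberate type
errors included to show what the checker reports, and the exact
messages and defaults vary between Mypy versions, so treat the comments
as indicative rather than definitive:

from typing import Any, List, Union

def double(x):                 # unannotated: Mypy leaves the body unchecked
    return x * 2               # and treats calls to it as returning Any

def shout(msg: str) -> str:    # annotated: Mypy checks the body and all calls
    return msg.upper() + "!"

shout("hello")                 # fine: matches the annotation
shout(42)                      # flagged: int passed where str is expected

names: List[str] = []          # annotating up front answers "a list of what?"
names.append("hello world")    # fine: str matches the declared element type

y = 3                          # Mypy infers int for the rest of y's lifetime
y = "three"                    # flagged: str assigned where int was inferred

z: Union[int, str] = 3         # "an int or a str" is itself a single type
z = "three"                    # fine: str is part of the declared union

w: Any = 3                     # opting this variable out of checking entirely
w = "three"                    # fine: Any accepts anything

Run Mypy over that and only the call shout(42) and the reassignment of
y should be reported; the annotated variables behave as declared.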
So the bottom line is: any *practical* static type checker has to
assume that a single variable is always the same type during its
lifetime, and treat any assignment to another type as an error unless
told differently. (If you have counter-examples, I'm happy to learn
better.)

By the way, you keep writing "interfere"; the word is "infer":

  infer
       v 1: reason by deduction; establish by deduction [syn: deduce,
            infer, deduct, derive]
       2: draw from specific cases for more general cases [syn:
            generalize, generalise, extrapolate, infer]
       3: conclude by reasoning; in logic [syn: deduce, infer]

> Unless you mean "A probable guess" by interfere.

No.


-- 
Steven D'Aprano
"Ever since I learned about confirmation bias, I've been seeing it
everywhere." -- Jon Ronson
-- 
https://mail.python.org/mailman/listinfo/python-list