On Fri, Oct 30, 2020 at 5:16 PM Julio Di Egidio <ju...@diegidio.name> wrote:
> Not to mention, from the point of view of formal verification,
> this is the corresponding annotated version, and it is in fact
> worse than useless:
>
>     def abs(x: Any) -> Any:
>         ...some code here...
>
Useless because, in the absence of annotations or other information saying otherwise, "Any" is exactly what you'd get. There's no point whatsoever in annotating that it ... is exactly what it would be without annotations. Unless you're trying for some sort of rule that "all functions must be annotated", it's best to just not bother.

> Useless as in plain incorrect: functions written in a totally
> unconstrained style are rather pretty much guaranteed not to
> accept arbitrary input... and, what's really worse, to be
> unsound on part of the input that they do accept: read
> undefined behaviour.

Not quite true. You're assuming that the annotation is the only thing which determines what a function will accept. Consider that the function had been written thus:

    def abs(x):
        """Calculate the magnitude of a (possibly complex) number"""
        return (x.real ** 2 + x.imag ** 2) ** 0.5

The docstring clearly states that it expects a number. But ANY number will do, so long as it has .real and .imag attributes. Python's core numeric types all have those; the numpy int, float, and complex types all have those; any well-behaved numerical type in Python will have these attributes. In fact, if you were to annotate this as "x: numbers.Complex", it would give exactly zero more information. This function requires a number with real and imag attributes - that's all. If anything, such an annotation would confuse people into thinking that it *has* to receive a complex number, which isn't true.

There is nothing undefined here. If you give it (say) a string, you get a nice easy AttributeError saying that it needs to have a .real attribute.

> > Python doesn't push for heavy type checking because it's almost
> > certainly not necessary in the majority of cases. Just use the thing
> > as you expect it to be, and if there's a problem, your caller can
> > handle it. That's why we have call stacks.
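To make the duck-typing point concrete, here's a runnable sketch. The Fixed class below is hypothetical, purely for illustration - any object exposing .real and .imag will do, no annotation required:

```python
def abs(x):
    """Calculate the magnitude of a (possibly complex) number"""
    return (x.real ** 2 + x.imag ** 2) ** 0.5

class Fixed:
    """A hypothetical custom numeric type: not complex, but duck-compatible."""
    def __init__(self, value):
        self.real = value
        self.imag = 0

print(abs(3 + 4j))     # complex works: 5.0
print(abs(5))          # plain int has .real and .imag too: 5.0
print(abs(Fixed(12)))  # custom type works without any annotation: 12.0

try:
    abs("hello")
except AttributeError as e:
    # A clear, well-defined failure - not undefined behaviour
    print(e)
```

Note that a "x: numbers.Complex" annotation would have rejected none of the working cases above and not improved the error for the failing one.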
> I am learning Pandas and I can rather assure you that it is an
> absolute pain as well as loss of productivity that, whenever I
> misuse a function (in spite of reading the docs), I indeed get
> a massive stack trace down to the private core, and have to
> figure out, sometimes by looking at the source code, what I
> actually did wrong and what I should do instead. To the point
> that, since I want to become a proficient user, I am ending up
> reverse engineering the whole thing... Now, as I said up-thread,
> I am not complaining as the whole ecosystem is pretty young,
> but here the point is: "by my book" that code is simply called
> not production level.

Part of the art of reading exception tracebacks rapidly is locating which part is in your code and which part is library code. Most of the time, the bug is at the boundary between those two. There are tools that can help you with this, or you can develop the skill of quickly eyeballing a raw traceback and finding the part that's relevant to you.

I don't think this is a consequence of the ecosystem being young (Pandas has been around for twelve years now, according to Wikipedia), but more that the Python community doesn't like to destroy good debugging information and the ability to fully duck-type. Solve the correct problem: if tracebacks are too hard to read, solve the traceback problem; don't try to force everything through type checks.

ChrisA
--
https://mail.python.org/mailman/listinfo/python-list