[EMAIL PROTECTED] wrote:
> but in any case, I believe there are several reasons why type
> inference scalability is actually not _that_ important (as long as it
> works and doesn't take infinite time):
>
> -I don't think we want to do type inference on large Python programs.
> this is indeed asking for problems, and it is not such a bad approach
> to only compile critical parts of programs (why would we want to
> compile PyQt code, for example.) I do think type inference scales well
> enough to analyze arbitrary programs of up to, say, 5,000 lines. I'm
> not there yet with Shed Skin, but I don't think it's that far away (of
> course I'll need to prove this now :-))
>
> -type inference can be assisted by profiling
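The profiling idea quoted above could look something like the following. This is only an illustrative sketch, not Shed Skin's actual mechanism; the decorator and the `observed_types` table are hypothetical names. The point is that running a program's test suite under a recorder yields concrete type observations a compiler could use as inference hints.

```python
# Hypothetical sketch of profiling-assisted type inference: run the
# code under a recorder and collect the concrete types each parameter
# actually receives. A compiler could seed its inference with these.
import functools
from collections import defaultdict

# (function name, parameter name) -> set of observed type names
observed_types = defaultdict(set)

def record_types(func):
    """Wrap func so every call records its argument types."""
    names = func.__code__.co_varnames[:func.__code__.co_argcount]

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for name, value in list(zip(names, args)) + list(kwargs.items()):
            observed_types[(func.__name__, name)].add(type(value).__name__)
        return func(*args, **kwargs)
    return wrapper

@record_types
def scale(x, factor):
    return x * factor

scale(10, 2)
scale(3.5, 2)

# observed_types now shows x was seen as int and float, while factor
# was only ever an int, so factor is a good specialization candidate.
```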
Something else worth trying: type inference for separately compiled
modules, driven by the modules' test cases. One big problem with
compile-time type inference is separate compilation: you have to make
decisions without seeing the whole program. An answer is to optimize
for the module's test cases. If the test cases always pass an integer
value for a parameter, generate hard code for the case where that
variable is an integer. As long as there's some way to back down, at
link time, to a more general but slower version, programs will still
run. If the test cases reflect normal use of the module, this should
lead to reasonable generated code for library modules.

> besides, (as john points out I think), there is a difference between
> analyzing an actual dynamic language and an essentially static language
> (such as the Python subset that Shed Skin accepts). it allows one to
> make certain assumptions that make type inference easier.

Yes. And, more than that, most programs have relatively simple type
behavior for most variables. The exotic stuff just doesn't happen
that often.

				John Nagle
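The specialize-then-fall-back scheme described above can be sketched in pure Python. This is a toy model under stated assumptions: `make_specialized`, `double_generic`, and `double_int` are invented names, and real compilers would do the dispatch at link time rather than per call. It shows the essential shape: a hard-coded fast path for the type the test cases exercised, with a general version behind it so other inputs still work.

```python
# Toy model of test-case-driven specialization with a generic fallback.
# The "specialized" function is the hard code generated for the type
# the module's tests always used; the "generic" one is the slower
# always-correct version we back down to when the guess is wrong.
def make_specialized(generic, specialized, expected_type):
    def dispatch(x):
        if type(x) is expected_type:   # the type the test cases exercised
            return specialized(x)      # hard-coded fast path
        return generic(x)              # general fallback, never wrong
    return dispatch

def double_generic(x):
    return x + x        # works for ints, floats, strings, lists, ...

def double_int(x):
    return x << 1       # integer-only fast path

double = make_specialized(double_generic, double_int, int)

double(21)      # fast path  -> 42
double("ab")    # falls back -> "abab"
```

The key property is the one Nagle names: even if the test cases mispredict normal use, the program still runs, only slower.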