Paul Rubin wrote:
> "Diez B. Roggisch" <[EMAIL PROTECTED]> writes:
>> For example, SPARK does not support dynamic allocation of memory, so
>> things such as pointers and heap variables are not supported.
>
> Right, SPARK uses a profile intended for embedded systems, so
> no unpredictable gc delays etc.
>
>> Which is not to say that trivial code couldn't have errors, and if
>> it's extremely crucial code, it's important that it has no errors. But
>> all I can see is that you can create trustable, simple sub-systems in
>> such a language. And by building upon them, you can ensure that at
>> least these won't fail.
>
> Yes, that's the usual approach.
>
>> But to stick with the example: if the path-planning of the UAV that
>> involves tracking a not-before-known number of enemy aircraft steers
>> the UAV into the ground, no proven-not-to-fail radius calculation will
>> help you.
>
> Whether the program uses dynamic memory allocation or not, there
> either has to be some maximum number of targets that it can be
> required to track, or else it's subject to out-of-memory errors.
> If a maximum number is specified, then memory requirements can
> be computed from that.
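[The bounded-allocation scheme Paul describes could be sketched like this - hypothetically in Python rather than SPARK, with an assumed MAX_TARGETS limit: all tracking slots exist up front, so memory use is fixed and computable, and a contact beyond the limit is refused instead of triggering an out-of-memory failure at an unpredictable moment.]

```python
MAX_TARGETS = 6  # assumed design limit; memory needs follow from it

class Tracker:
    def __init__(self):
        # "Pre-allocate" every slot at startup; no growth at runtime.
        self.slots = [None] * MAX_TARGETS

    def add_target(self, contact):
        # Fill the first free slot; refuse the contact if none is free.
        for i, slot in enumerate(self.slots):
            if slot is None:
                self.slots[i] = contact
                return True
        return False  # capacity exhausted: degrade, don't crash

tracker = Tracker()
results = [tracker.add_target(f"contact-{n}") for n in range(7)]
# the seventh contact is rejected rather than exhausting memory
```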
In other words: you pre-allocate everything and can then prove what? That the program is going to stop working? What does that buy you? How is "I crashed because I ran out of memory trying to evade the seventh MiG" better than "sorry, you will be shot down because I'm not capable of processing more enemy fighters - but hey, at least I'm still here to tell you!"

Sorry, but if that's supposed to make a point for static type checking, I don't buy it. Crippling the execution model so that it fits a proving scheme which essentially says "don't do anything complex with me" is no good.

Diez
--
http://mail.python.org/mailman/listinfo/python-list