Hi!

> At the function border, yes. But not beyond that.
Not sure what you mean here. If the value is identified as int, it remains int. How would strict typing change that?

> Sort-of correct. We don't know if A is true, but we know what
> conditions need to exist for A->B to be true. We know that A can be
> any value. It could be a float. But that would lead to an error down
> the road when the sum is passed to the function typing for an int.

Yes, and nothing guarantees that the error does not actually occur in this code, so you can't derive any additional information here.

> Therefore, by knowing $c is passed to a function expecting an int, we
> have information about stable states for $a and $b which would result
> in a stable state for $c.

But that is not related to strict typing - this is a property of +, which does not change with strict typing.

> With strict mode, then we can work backwards and see that if $a is set
> to 3.5, that we know that it's not a stable state (because at best an
> error will be created).

I don't see how you can "work backwards". You can, again, reason that if the function call succeeded, then $c was an int. But you cannot derive from that that $c was indeed an int, since you don't know that the function call actually worked. In fact, the whole purpose of strict typing is catering for the case where it didn't, so you cannot assume such cases do not exist and also propose strict typing - that is logically inconsistent.

> With coercive typing (weak), you can't work backwards there because
> 3.5 + 1.5 is 5.0 which would be acceptable for an int hint. So you
> have a LOT more possibilities to contend with.

You have the same possibilities in either case. Nothing guarantees that, in real code, $a and $b are not exactly 3.5 and 1.5. Nothing prevents them from being so.

> This isn't so much about proving an application correct as it is about
> proving information about the stable (non-error) states of the
> program.
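To make the 3.5 + 1.5 point above concrete, here is a minimal sketch (the function name `takesInt` is invented for illustration); it assumes stock PHP 7+ coercion rules:

```php
<?php
// Coercive (weak) mode: no declare(strict_types=1) at the top of the file.
// Hypothetical function with an int parameter hint.
function takesInt(int $c): int
{
    return $c;
}

$a = 3.5;
$b = 1.5;

// 3.5 + 1.5 is float(5.0); an integral float satisfies an int hint
// under coercive typing, so the call succeeds with $c coerced to int(5).
$c = takesInt($a + $b);
var_dump($c); // int(5)

// Under declare(strict_types=1) the same call would throw a TypeError,
// since a float is not accepted for an int parameter in strict mode.
// But in neither mode does the call's presence in the source prove that
// $a and $b were ints - here they are 3.5 and 1.5, and the weak-mode
// call still succeeds.
```

So the call site constrains what can flow through *without erroring*, but it cannot retroactively establish what $a and $b actually were.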
If you just ignore any cases where the program may malfunction, of what use would such an analyzer be for any real program - which, with probability close to 1, has states in which it malfunctions? I still don't see how strict typing adds any information here.

> So with coercive typing (weak), the ability to detect errors and prove
> stable states of the application is reduced and requires significantly
> more computation.

I still don't see how. You only say that if we assume the program never fails, then we can derive more information from strict typing - but if we could write programs that never fail, we wouldn't have needed strict typing in the first place! In compiled languages, this problem is solved by pre-resolving the types of *all* variables prior to execution, but that is not possible in PHP, so I don't see where the reduction comes from.

> I'm talking about using the static analyzer to prove properties of the
> stable (non-error) state. Yes, at runtime you only know things about
> $a after the expression. But at static time, we can prove based on
> what we know of $c, the acceptable input states that could possibly
> lead to a stable state. Yes, you can generate errors. But the entire
> point of static analysis is to detect the possibility of these errors
> ahead of time rather than at runtime.

I still don't see how that is useful. You are saying that by inserting code somewhere in your program that says "if (!is_int($a)) { exit(0); }", you magically improve your knowledge about your program *before* that line. This seems to be a baseless claim, and quite a strange one - whatever post-conditions you declare, they cannot go back in time and change the code before the post-condition, or make your knowledge about that code somehow better, just because you added some code at the end.

> But we're talking about much more than the behavior of one operation.
> We're talking about the behavior of the entire set of function calls
> around it (and the flow of type information based on these predictable
> operations). So while + itself has no advantage from strict types,
> passing the result to a function expecting an integer does give you a
> lot of information about the non-error possibilities that the +
> operation can have.

The whole point is that these operations are not predictable. What you're doing here is saying "I don't care about hard cases where the program malfunctions" and then defining any unexpected input as a hard case. With that definition, of course, it is very easy to reason about the remaining cases - only this reasoning is useless, since it does not cover what happens in a real program run.

That's like saying that if you have code that says $a = $b/$c, then you can derive from the mere existence of such code that $c is non-zero, since otherwise the program fails. But of course you can't derive that $c is non-zero only because you wrote code that relies on it! Otherwise no debugging would ever be necessary :)

> It's not about how data gets in, it's about how data moves once it's
> in. It's about knowing how types change and flow into other functions

I'm sorry, I don't see how the data moves differently in the strict model than in the weak model. The only difference happens when passing the function call; afterwards everything is the same. So if you say "it's not about how data gets in", then there's no "it", since the only difference between the models is exactly when the data gets in!

> that's important. Because that lets you determine more data about the
> stable (non-error) state of the application.

That claim is repeated again but not proved so far - unless we accept the (non-error) qualifier, which makes the data useless: if we assume our program never fails, we don't need any analysis at all - we can just read the spec! The whole point of analysis is (usually) finding where the program may fail!
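The $b/$c analogy can be shown directly. This is a sketch assuming PHP 8 semantics, where the / operator throws DivisionByZeroError (PHP 7 emitted a warning instead); the function name `ratio` is invented for illustration:

```php
<?php
// The mere existence of $b / $c in the source proves nothing about $c.
// If $c happens to be zero, the program simply fails at runtime.
function ratio(float $b, float $c): float
{
    return $b / $c; // relies on $c != 0, but cannot guarantee it
}

try {
    echo ratio(10.0, 0.0), "\n"; // throws before anything is printed
} catch (DivisionByZeroError $e) {
    // This is exactly the "non-stable" case the proposed reasoning
    // discards - and exactly the case debugging exists to find.
    echo "caught: ", $e->getMessage(), "\n";
}
```

An analyzer that reasons only about the non-throwing path would happily "prove" $c non-zero here, which is precisely the backwards inference being objected to.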
--
Stas Malyshev
smalys...@gmail.com

--
PHP Internals - PHP Runtime Development Mailing List
To unsubscribe, visit: http://www.php.net/unsub.php