On 07/24/2018 05:18 PM, Bernd Edlinger wrote:
> On 07/24/18 23:46, Jeff Law wrote:
>> On 07/24/2018 01:59 AM, Bernd Edlinger wrote:
>>> Hi!
>>>
>>> This patch makes strlen range computations more conservative.
>>>
>>> Firstly if there is a visible type cast from type A to B before
>>> passing then value to strlen, don't expect the type layout of B to
>>> restrict the possible return value range of strlen.
>> Why do you think this is the right thing to do?  ie, is there
>> language in the standards that makes you think the code as it stands
>> today is incorrect from a conformance standpoint?  Is there a
>> significant body of code that is affected in an adverse way by the
>> current code?  If so, what code?
>>
> I think if you have an object, of an effective type A say char[100],
> then you can cast the address of A to B, say typedef char (*B)[2] for
> instance and then to const char *, say for use in strlen.  I may be
> wrong, but I think that we should at least try not to pick up char[2]
> from B, but instead use A for strlen ranges, or leave this range open.
> Currently the range info for strlen is [0..1] in this case, even if we
> see the type cast in the generic tree.

ISTM that you're essentially saying that the cast to const char *
destroys any type information we can exploit here.  But if that's the
case, then I don't think we can even derive a range of [0,99].  What's
to say that "A" didn't result from a similar cast of some object that
was char[200] that happened out of the scope of what we could see
during the strlen range computation?
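
For concreteness, a minimal sketch of the situation as I understand it
(the variable names, sizes, and function are made up for illustration,
not taken from an actual test case):

  #include <string.h>

  typedef char (*B)[2];        /* pointer to char[2] */

  char a[100];                 /* effective type A: char[100] */

  size_t f (void)
  {
    B b = (B) &a;              /* cast the address of A to B */
    /* Per the discussion above, the strlen pass currently picks up
       char[2] from B and computes a return-value range of [0..1]
       here, even though the underlying object is char[100].  */
    return strlen ((const char *) b);
  }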
If that is what you're arguing, then I think there's a re-evaluation
that needs to happen WRT strlen range computation.

And just to be clear, I do see this as a significant correctness
question.

Martin, thoughts?

Jeff