Paul Scott wrote:
> Well that may date me a little even though I am actively programming at
> this moment. I will research this a little more. My logic would be it
> would break the rules of the language to assume that conversion.
I don't see how. I see it as a legitimate compiler optimization. If you have "double f = 4;" and the compiler emits 4 as a double-precision constant, rather than as an int that would then require an immediate conversion, how could that possibly change a program's behavior?

Craig