Timur Tabi wrote:
> On Fri, Apr 30, 2010 at 11:22 AM, Scott Wood <scottw...@freescale.com> wrote:

>>> That's what I meant.  Actually, I think it's ULL.  Regardless, I think
>>> the compiler will see the "1000000000 ... * 1000" and just combine
>>> them together.  You're not actually outsmarting the compiler.
>>
>> The compiler will do no such thing.  That's a valid transformation when
>> doing pure math, but not when working with integers.

> I ran some tests, and it appears you're right.  It doesn't make a lot
> of sense to me, but whatever.
>
> However, "(1000000000 / pixclock) * 1000" produces a result that's
> less accurate than "1000000000000ULL / pixclock".

Precisely -- that's what makes it a distinct computation; as far as the
compiler knows, the truncation could be intentional.  Plus, turning it
into 64-bit math would mean a library call for 64-bit division, which
wouldn't be much of an optimization anyway.

The question is whether the loss of accuracy matters in this case.
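
For what it's worth, here's a standalone sketch (not the driver code; the
pixclock value below is just an example) showing both points -- the two
expressions really are different computations, and how much accuracy the
32-bit form gives up:

/* Minimal sketch, not the driver code; pixclock value is invented. */
#include <stdio.h>

int main(void)
{
	unsigned int pixclock = 39722;	/* example pixel period, in ps */

	/* 32-bit form: the division truncates before the multiply */
	unsigned long narrow = (1000000000 / pixclock) * 1000;

	/* 64-bit form: one division with the full dividend */
	unsigned long long wide = 1000000000000ULL / pixclock;

	printf("narrow = %lu\n", narrow);	/* prints 25174000 */
	printf("wide   = %llu\n", wide);	/* prints 25174966 */

	return 0;
}

For that example value the 32-bit form comes out about 0.004% low;
whether an error of that size matters depends on how tightly the pixel
clock needs to match.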

>>>     err = -1;
>>>
>>> because he wanted it to be the largest possible integer.
>>
>> -1 is not the largest possible integer.  LONG_MAX, perhaps?

> What, you don't like implicit casting of -1 to an unsigned? :-)

I like it even less when the variable is signed and it's still supposed to be larger than positive numbers. :-)
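
To spell that out with a minimal, hypothetical example (not the driver
code):

/* Sketch of the -1 sentinel pitfall; names and values are made up. */
#include <stdio.h>

int main(void)
{
	unsigned int uerr = -1;	/* wraps to UINT_MAX: bigger than any candidate */
	int serr = -1;		/* stays -1: smaller than every positive candidate */
	int diff = 12345;	/* some candidate "error" being minimized */

	printf("uerr = %u\n", uerr);	/* 4294967295 with a 32-bit unsigned int */

	/* With the unsigned sentinel, the first candidate is accepted. */
	if ((unsigned int)diff < uerr)
		printf("unsigned: first candidate accepted\n");

	/* With the signed sentinel, -1 is below every positive candidate,
	 * so a "keep the smallest" search never updates it. */
	if (diff < serr)
		printf("signed: candidate accepted\n");
	else
		printf("signed: no candidate ever accepted\n");

	return 0;
}

With the unsigned variable, the implicit cast of -1 really does give the
largest possible value; with a signed variable the sentinel sits below
every real candidate, which is why something like LONG_MAX -- or just
accepting the first candidate unconditionally -- is the safer way to seed
a minimum search.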

-Scott

