On Wednesday, 9 July 2014 at 17:13:21 UTC, H. S. Teoh via Digitalmars-d-learn wrote:
> On Wed, Jul 09, 2014 at 04:24:38PM +0000, Dominikus Dittes Scherkl via Digitalmars-d-learn wrote:
>> /// Returns -1 if a < b, 0 if they are equal or 1 if a > b.
>> /// This will always yield a correct result, no matter which numeric
>> /// types are compared.
>> /// It uses one extra comparison operation if and only if one type is
>> /// signed and the other unsigned but the signed value is >= 0
>> /// (that is what you need to pay for a stupid choice of type).
> [...]
> Yeah, I don't see what's the problem with comparing signed and
> unsigned values, as long as the result is as expected. Currently,
> however, this code asserts, which is wrong:
>
>     uint x = uint.max;
>     int y = -1;
>     assert(x > y);
Yes, this is really bad. But the last time I asked, the response was
that it is this way to be compatible with C. That is exactly why I
thought D threw away ballast from C: to fix bugs like this.
>> static if(Unqual!T == Unqual!U)
>
> Nitpick: should be:
>
>     static if(is(Unqual!T == Unqual!U))
Of course.
> [...]
>> else static if(isSigned!T && isUnsigned!U)
>> {
>>     alias CommonType!(Unsigned!T, U) C;
>>     return (a < 0) ? -1 : opCmp(cast(C)a, cast(C)b);
>> }
>> else static if(isUnsigned!T && isSigned!U)
>> {
>>     alias CommonType!(T, Unsigned!U) C;
>>     return (b < 0) ? 1 : opCmp(cast(C)a, cast(C)b);
>> }
> [...]
> Hmm. I wonder if there's a more efficient way to do this.
I'm sure there is. But I think it should be done by the compiler, not
in a library.
> [...]
> opCmp is just a single sub instruction (this is why opCmp is defined
> the way it is, BTW), whereas the "smart" signed/unsigned comparison is
> 4 instructions long.
> [...]
> As you can see, the branched version is 5 instructions long, and
> always causes a CPU pipeline hazard.
>
> So I submit that the unbranched version is better. ;-)
I don't think so, because the branch will only be taken if the signed
value is >= 0 (in fact unsigned). So if the signed/unsigned comparison
happens by accident, you pay the extra runtime. But if it is
intentional, the signed value is likely to be negative, so you get a
correct result with no extra cost.
Even better for constants: the compiler could not only evaluate
expressions like (uint.max > -1) correctly, it should optimize them
away completely!
> (So much for premature optimization... now lemme go and actually
> benchmark this stuff and see how well it actually performs in
> practice.
Yes, we should do this.
> Such kinds of hacks often perform more poorly than expected due to
> unforeseen complications with today's complex CPUs. So for all I
> know, I could've just been spouting nonsense above. :P)
I don't see such a compiler change as a hack. It is a strong
improvement, IMHO.