The basic operator functions also do not check for integer overflows.
This is a feature: I think they should not check for overflow; as in C,
this is just int64_t arithmetic "as is".
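For the record, here is a minimal C sketch (with a hypothetical add64_wrap
helper) of what "as is" int64 arithmetic means in practice. Note that signed
overflow is formally undefined behavior in C, so the defined way to get
two's-complement wraparound is to route the operation through uint64_t:

  #include <stdint.h>
  #include <inttypes.h>
  #include <stdio.h>

  /* Wrapping 64-bit addition: signed overflow is undefined behavior in
   * C, so do the arithmetic on uint64_t, where wraparound modulo 2**64
   * is well defined, and convert back (the conversion is
   * implementation-defined before C23, but wraps on every common
   * platform). */
  static int64_t add64_wrap(int64_t a, int64_t b)
  {
      return (int64_t) ((uint64_t) a + (uint64_t) b);
  }

  int main(void)
  {
      /* INT64_MAX + 1 wraps around to INT64_MIN */
      printf("%" PRId64 "\n", add64_wrap(INT64_MAX, 1));
      return 0;
  }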
(int64_t is not an available type on Windows btw.)
Possibly. I really meant "64-bit signed integers", whatever the name.
"int64_t" is the standard C99 name.
Finally, I can think of a good reason to use overflows deliberately, which
would argue against such a change.
Could you show an example? I am curious about that.
The one I can think of is using "SUM" to aggregate per-row hashes in
order to compute a hash of a whole table. If SUM overflowed and stopped,
this usage would break. Now there could be a case for a SUM with
overflow detection, but that would be another SUM, preferably with a
distinct name.
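A hypothetical C sketch of the principle (table_checksum and row_hash are
made-up names, not pgbench code): only equality of the final sums matters,
so wraparound loses nothing:

  #include <stdint.h>
  #include <stddef.h>

  /* Aggregate per-row hashes into a single table-level checksum.
   * Unsigned arithmetic wraps modulo 2**64 by definition, and that is
   * exactly what we want: two identical tables still produce identical
   * checksums, overflow or not. */
  static uint64_t table_checksum(const uint64_t *row_hash, size_t nrows)
  {
      uint64_t sum = 0;
      for (size_t i = 0; i < nrows; i++)
          sum += row_hash[i];
      return sum;
  }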
\set cid debug(9223372036854775807 * 9223372036854775807)
debug(script=0,command=3): int 1
All these results are fine from my point of view.
On HEAD we are getting similar strange results,
Yep, this is not new.
so I am fine with withdrawing, but it still seems very strange to me.
Arithmetic modulo 2**64 is pretty strange, I can agree with that :-)
The first case generates -9223372036854775808, the second one
computes 9223372036854775807, and the third one generates 1.
This should be the "real" result modulo 2**64, if I'm not mistaken.
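For the third case the arithmetic checks out: (2**63 - 1) * (2**63 - 1)
= 2**126 - 2**64 + 1, and since both 2**126 and 2**64 are congruent to 0
modulo 2**64, the wrapped result is exactly 1, which is what debug()
printed above.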
Or we could make the error checks even more consistent in the
back-branches, though perhaps that's indeed not worth it for a client.
Yep, that would be another patch.
And this one generates a core dump:
\set cid debug(-9223372036854775808 / -1)
Floating point exception: 8 (core dumped)
This one is funny, but it is a fact of int64_t life: you cannot divide
INT64_MIN by -1 because the result cannot be represented as an int64_t.
The trap is probably hardwired in the processor: on x86, idiv raises the
same fault for this case as for a division by zero, which the kernel
reports as SIGFPE, hence the "floating point exception" above. I do not
think it is worth doing anything about it for pgbench.
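For completeness, the guard would be a one-line test before the division;
a minimal sketch, with a hypothetical div64_checked wrapper:

  #include <stdint.h>
  #include <stdbool.h>

  /* Checked division: rejects the two operand pairs on which the x86
   * idiv instruction traps: division by zero, and INT64_MIN / -1,
   * whose true result 2**63 is not representable as an int64_t. */
  static bool div64_checked(int64_t a, int64_t b, int64_t *result)
  {
      if (b == 0 || (a == INT64_MIN && b == -1))
          return false;       /* caller decides how to report it */
      *result = a / b;
      return true;
  }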
This one, on the contrary, generates an error on HEAD, while your patch
causes a crash:
value "9223372036854775808" is out of range for type bigint
That's a regression, no?
Hmmm, yes, somehow, but only for this one value; if you try the next one:
pgbench 9.4.5: value "-9223372036854775809" is out of range for type bigint
I guess that the implementation before 9.5 parsed
"-9223372036854775808" as a single integer literal, which is INT64_MIN,
so it was fine. Now it is parsed as operator uminus applied to
"9223372036854775808", which fails because that would be INT64_MAX + 1,
which is not representable.
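The underlying mechanism is easy to reproduce in plain C (illustrative
only, this is not pgbench's actual scanner):

  #include <errno.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      /* the lexer sees the bare literal, without the leading '-' */
      errno = 0;
      long long v = strtoll("9223372036854775808", NULL, 10);
      if (errno == ERANGE)
          printf("out of range, clamped to %lld\n", v);  /* LLONG_MAX */
      return 0;
  }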
I would prefer just to dismiss that as a very small (1 in 2**64)
behavior change rather than a meaningful regression, especially as the
coding effort to fix this is significant and the value of handling it
differently is nought.
I am uncomfortable with leaving such holes in the code, even if it is a
client application.
This is a 1-in-2**128 probability case which stops pgbench. It is
obviously possible to add a check to catch it and generate an error
message, but I would rather just ignore it and let pgbench stop on that.
--
Fabien.