CASSANDRA-11935 introduced arithmetic operators, and alongside these came 
implicit casts for their operands.  There is a semantic decision to be made, 
and I think the project would do well to explicitly raise this kind of question 
for wider input before release, since the project will be bound by such 
decisions forevermore.

In this case, the choice is between lossy and lossless casts for operations 
involving integers and floating point numbers.  In essence, should:

(1) float + int = float, double + bigint = double; or
(2) float + int = double, double + bigint = decimal; or
(3) float + int = decimal, double + bigint = decimal

Option 1 performs a lossy implicit cast from int to float, or bigint to double; 
simply casting between these types can change the value.  This is what MS SQL 
Server does.
Options 2 and 3 cast without loss of precision, and 3 (or thereabouts) is what 
PostgreSQL does.
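To make the loss in option 1 concrete, here is a small standalone Java sketch 
(the class and variable names are mine, purely illustrative).  A float has a 
24-bit significand, so 2^24 + 1 is the smallest int it cannot represent; 
similarly, 2^53 + 1 is the smallest long a double cannot represent:

```java
public class ImplicitCastDemo {
    public static void main(String[] args) {
        int i = 16_777_217;                 // 2^24 + 1
        float f = i;                        // option 1: int -> float, lossy
        System.out.println((int) f);        // prints 16777216 -- value silently changed

        double d = i;                       // option 2: int -> double is lossless
        System.out.println((long) d);       // prints 16777217 -- value preserved

        long big = 9_007_199_254_740_993L;  // 2^53 + 1
        double bd = big;                    // bigint -> double, lossy again
        System.out.println((long) bd);      // prints 9007199254740992
    }
}
```

Note that the cast itself, before any arithmetic happens, is where the value 
changes under option 1.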

The question I’m interested in is not just which is the right decision, but how 
the right decision should be arrived at.  My view is that we should primarily 
aim for least surprise to the user, but I’m keen to hear from others.