I think that we should definitely improve the documentation of
valueOf(double) to clarify that passing a float might not be the best
use of the method, and suggest using the idiom
new BigDecimal(Float.toString(f))
as already noted in this thread.
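For illustration, a minimal sketch of the difference (the printed
values follow from the Double.toString and Float.toString
specifications):

    import java.math.BigDecimal;

    public class ValueOfFloatIdiom {
        public static void main(String[] args) {
            float f = 0.1f;
            // The float is silently widened to double; valueOf(double)
            // then goes through Double.toString, exposing the artifact.
            System.out.println(BigDecimal.valueOf(f));
            // prints 0.10000000149011612

            // Float.toString instead yields the float's own shortest
            // round-tripping decimal representation.
            System.out.println(new BigDecimal(Float.toString(f)));
            // prints 0.1
        }
    }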
We can hardly add an overload valueOf(float).
Adding something like valueOfFloat(float) could be more viable, but
personally I doubt that there's much "market demand" for it. Moreover,
it would somewhat break decades-long naming conventions.
But more importantly, the models of binary floating-point values, like
float and double, and of decimal floating-point values, like BD, are
different.
Wanting to emulate decimal arithmetic with float/double, or wishing to
emulate binary arithmetic with BD, is asking for trouble in most cases.
This is to say that one needs to be very careful when mixing
float/double and BD and when converting a value of one model to the
other.
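As a concrete sketch of the mismatch, consider the double literal 0.1
(the long expansion below is the exact binary value, as also noted in
the BigDecimal(double) javadoc):

    import java.math.BigDecimal;

    public class ModelMismatch {
        public static void main(String[] args) {
            // The BigDecimal(double) constructor converts the binary
            // value exactly, artifact included.
            System.out.println(new BigDecimal(0.1));
            // prints
            // 0.1000000000000000055511151231257827021181583404541015625

            // valueOf(double) goes through Double.toString and yields
            // the shortest decimal that round-trips to the same double.
            System.out.println(BigDecimal.valueOf(0.1));
            // prints 0.1
        }
    }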
Greetings
Raffaello
On 2025-02-13 20:30, Kevin Bourrillion wrote:
My latest thoughts; please advise if I have misunderstood anything.
On Jan 24, 2025, at 3:11 AM, Jan Kowalski <jan7...@gmail.com> wrote:
I'd say that, if it's possible, we should reduce the arithmetic
artifacts rather than introduce them through type conversions that are
not really needed and not visible at first sight.
… Do you think introducing such a change would be beneficial to
simplify the code, or would it rather introduce a minor precision
improvement while we still don't have 100% decimal precision?
Okay, so what we're looking for is a way to convert floats to
BigDecimals in such a way that `0.1f` comes out the same as `new
BigDecimal("0.1")`.
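If it existed, such a conversion would presumably amount to something
like the sketch below (`valueOfFloat` is a made-up name here, not an
existing or proposed JDK method):

    import java.math.BigDecimal;

    public class HypotheticalFactory {
        // Hypothetical sketch only, not an actual API.
        static BigDecimal valueOfFloat(float f) {
            // Use the float's own shortest round-tripping decimal
            // instead of widening to double first.
            return new BigDecimal(Float.toString(f));
        }

        public static void main(String[] args) {
            System.out.println(valueOfFloat(0.1f)); // prints 0.1
        }
    }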
This thread is characterizing that outcome as "reducing artifacts" and
"improving precision", which seems fair on the surface, but I believe
this is more like an illusion. I think the reason this looks like an
obvious "improvement" to us is only because we happen to be using
/literals/ in our examples. But for a float value that isn't a literal,
like our friend `0.1f + 0.2f`, the illusion is shattered. I think this
"exposes" that the scale chosen by BD.valueOf(double) is based on an
"artifact" of that value that isn't really meant to be "information
carried by" the value. (Were we to think the latter way, we would have
to regard a float-to-double cast as /losing information/, which feels
like nonsense.)
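For concreteness, a sketch of that shattering (assuming I have the
roundings right; note that in float arithmetic `0.1f + 0.2f` happens to
round to the same value as the literal `0.3f`):

    import java.math.BigDecimal;

    public class NonLiteralFloat {
        public static void main(String[] args) {
            float sum = 0.1f + 0.2f; // same float as 0.3f
            // Widening to double exposes the float's artifact:
            System.out.println(BigDecimal.valueOf(sum));
            // prints 0.30000001192092896

            // A Float.toString-based conversion would print 0.3, even
            // though the stored value is not exactly 0.3 either:
            System.out.println(new BigDecimal(Float.toString(sum)));
            // prints 0.3
        }
    }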
I think the fact that a new overload would affect current behavior means
we need to rule that option out; I don't think this case can justify
that cost. At best it would have to be a new, separately-named method
like `valueOfFloat`. But even then this issue will /still/ bite users of
`valueOf`, and we would still want the documentation of that method to
advise users on what to do instead.
My feeling is that all we need the documentation to do is advise the
user to call `new BigDecimal(Float.toString(val))`. This is very
transparent about what's really happening. Here the user is
intentionally /choosing/ the representation/scale.
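To spell out that choice, a small sketch of the two representations a
user might deliberately pick for the same float:

    import java.math.BigDecimal;

    public class ChoosingTheScale {
        public static void main(String[] args) {
            float f = 0.1f;
            // The shortest decimal that round-trips to f:
            System.out.println(new BigDecimal(Float.toString(f)));
            // prints 0.1

            // The exact value of the binary float (the widening to
            // double is exact, so nothing is lost here):
            System.out.println(new BigDecimal(f));
            // prints 0.100000001490116119384765625
        }
    }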
I personally don't see this as a case where a fast-enough benchmark
result would justify adding a new method.
Your thoughts?