Hi Jan,
just to note that the benchmark results only measure the 0.1f case. For
more general floats, the performance gains might look less impressive
(but I didn't check).
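For illustration (e.g. in jshell; untested beyond eyeballing): the length
of the decimal string each path has to parse differs a lot for 0.1f, but
other floats can have much longer shortest representations, which would
narrow the parsing gap:

float f = 0.1f;
// Float.toString path parses the shortest float representation:
System.out.println(Float.toString(f));            // "0.1" (3 chars)
// valueOf(double) path parses the widened double's representation:
System.out.println(Double.toString(f));           // "0.10000000149011612" (19 chars)
// a float whose shortest representation is already long:
System.out.println(Float.toString((float) Math.PI)); // "3.1415927"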
Anyway, a fresh name seems to be the least risky solution, if we agree
that the problem is widespread enough to justify adding the new
fromFloat() factory method.
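For reference, a minimal sketch of what I mean (the class name is a
placeholder; the method name follows Jan's fromFloat idea, with the same
implementation as the known workaround):

import java.math.BigDecimal;

public final class BigDecimals {
    // hypothetical factory method under a fresh name
    public static BigDecimal fromFloat(float val) {
        // Float.toString(val) is the shortest decimal string that
        // round-trips to val ("0.1" for 0.1f), so no double-widening
        // artifacts leak into the result
        return new BigDecimal(Float.toString(val));
    }
}

Because the name is new, existing call sites such as
BigDecimal.valueOf(0.1f) would keep resolving to valueOf(double) and
behave exactly as before.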
R
On 2025-01-24 12:11, Jan Kowalski wrote:
Thank you all for your replies!
I'm aware of the workaround (we are using this approach in our project)
and of the problematic issues with decimal conversions. However, I also
think we should make the behaviour of the code more predictable. For me
and other developers, it can be confusing that
/new BigDecimal(Float.toString(val))/ and /BigDecimal.valueOf(double val)/
produce different values for the same float. I'd say that, where
possible, we should reduce arithmetic artifacts rather than introduce
them through type conversions that are not really needed and not visible
at first sight.
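To make the difference concrete, a minimal demo (the class name is just
for illustration):

import java.math.BigDecimal;

public class FloatToBigDecimalDemo {
    public static void main(String[] args) {
        float val = 0.1f;
        // valueOf takes a double, so val is silently widened first
        System.out.println(BigDecimal.valueOf(val));
        // -> 0.10000000149011612
        // the workaround parses the float's own shortest representation
        System.out.println(new BigDecimal(Float.toString(val)));
        // -> 0.1
    }
}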
Unfortunately, I was already aware of the potential backwards
compatibility issues, and I was curious what your opinion on this is (I
also thought about introducing a factory method like fromFloat to
sidestep them, but I'm not sure it's a good idea). Do you think such a
change would be beneficial enough to simplify the code, or would it only
bring a minor precision improvement while we still don't have 100%
decimal precision?
Also, out of curiosity, I ran a benchmark on how the lack of this
constructor impacts performance, and it seems the type conversion makes
it around 8 times slower than a direct float-based valueOf:
import java.math.BigDecimal;
import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Benchmark)
public class BigDecimalBenchmark {

    // @State/@Param wrapper reconstructed from the results table below
    @Param({"1000", "2000", "5000", "10000", "100000"})
    int iterations;

    @Benchmark
    public void oldApiFloat(Blackhole blackhole) {
        // 0.1f is silently widened to double, so valueOf(double) runs
        for (int i = 0; i < iterations; i++) {
            blackhole.consume(BigDecimal.valueOf(0.1f));
        }
    }

    @Benchmark
    public void newApiFloat(Blackhole blackhole) {
        for (int i = 0; i < iterations; i++) {
            blackhole.consume(valueOf(0.1f));
        }
    }

    // the proposed float overload
    public static BigDecimal valueOf(float val) {
        return new BigDecimal(Float.toString(val));
    }
}
Benchmark                        (iterations)   Mode  Cnt      Score     Error  Units
BigDecimalBenchmark.newApiFloat          1000  thrpt   25  28355,359 ± 502,195  ops/s
BigDecimalBenchmark.newApiFloat          2000  thrpt   25  14132,275 ± 206,593  ops/s
BigDecimalBenchmark.newApiFloat          5000  thrpt   25   5667,007 ±  71,941  ops/s
BigDecimalBenchmark.newApiFloat         10000  thrpt   25   2808,114 ±  32,403  ops/s
BigDecimalBenchmark.newApiFloat        100000  thrpt   25    278,405 ±   4,642  ops/s
BigDecimalBenchmark.oldApiFloat          1000  thrpt   25   3559,235 ±  40,931  ops/s
BigDecimalBenchmark.oldApiFloat          2000  thrpt   25   1782,190 ±  21,805  ops/s
BigDecimalBenchmark.oldApiFloat          5000  thrpt   25    712,045 ±   6,495  ops/s
BigDecimalBenchmark.oldApiFloat         10000  thrpt   25    355,959 ±   6,006  ops/s
BigDecimalBenchmark.oldApiFloat        100000  thrpt   25     36,239 ±   0,423  ops/s
On Fri, 24 Jan 2025 at 00:59, Joseph D. Darcy <joe.da...@oracle.com> wrote:
On 1/23/2025 2:35 PM, Remi Forax wrote:
Hello Jan,
what you are suggesting is not a backward compatible change.
There is a source compatibility impact, meaning that for some call
sites, the mapping of existing code using BigDecimal before and
after the addition of the overloaded method would change. That
wouldn't necessarily preclude us from making such a change (and such
changes have been made in the past), but extra caution and analysis
would be called for.
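For example (a sketch only; the valueOf(float) overload is hypothetical):

float f = 0.1f;
// Today no valueOf(float) exists, so f widens to double and
// valueOf(double) is selected:
BigDecimal d = BigDecimal.valueOf(f);   // 0.10000000149011612
// If a hypothetical valueOf(float) overload were added, recompiling the
// same source would select the exact-match float overload instead and
// yield 0.1, while previously compiled class files would keep invoking
// valueOf(double). Binary compatible, but the behavior of recompiled
// code silently changes.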
Cheers,
-Joe
If we add BigDecimal.valueOf(float), then a program recompiled
with the new JDK may change its behavior. You can think that the new
behavior is more "correct" than the current one, but changing the
behavior of existing programs is usually a big NO! in Java.
Also, I believe that the reason there is no such factory method
that takes a float is that doing computations on floats is not
recommended; it rapidly becomes a mess because of the imprecision
of the float32 representation.
For the same reason, in Java, 2.0 is a double and there is no
FloatStream while there is a DoubleStream.
regards,
Rémi