I ran the benchmark once more with a smaller float and a bigger float. The
suggested implementation still outperforms the current one, although the
gap narrows as the value grows.


Benchmark                        (iterations)        (value)   Mode  Cnt      Score    Error  Units
BigDecimalBenchmark.newApiFloat          1000       111.111f  thrpt   25  11609,412 ± 51,333  ops/s
BigDecimalBenchmark.newApiFloat          1000  1343453.2344f  thrpt   25  10081,560 ± 74,885  ops/s
BigDecimalBenchmark.oldApiFloat          1000       111.111f  thrpt   25   4325,049 ± 40,065  ops/s
BigDecimalBenchmark.oldApiFloat          1000  1343453.2344f  thrpt   25   7228,148 ± 78,640  ops/s

For 111.111f the suggested valueOf(float) outperformed the current
implementation by a factor of 2.68; for 1343453.2344f, by a factor of 1.4.
I still think this change might be worthwhile (without breaking backwards
compatibility, of course), especially considering the original issue:
precision loss when converting a float to a BigDecimal through valueOf.
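
For completeness, a minimal sketch of that precision loss (the printed
values are what I observe locally; the class name is only for
illustration):

    import java.math.BigDecimal;

    public class FloatToBigDecimalDemo {
        public static void main(String[] args) {
            float val = 0.1f;
            // Current API: the float is widened to double, so
            // valueOf(double) sees the binary artifacts of 0.1f.
            System.out.println(BigDecimal.valueOf(val));
            // prints 0.10000000149011612

            // Going through Float.toString keeps the short decimal form.
            System.out.println(new BigDecimal(Float.toString(val)));
            // prints 0.1
        }
    }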


On Fri, 24 Jan 2025 at 12:11, Jan Kowalski <jan7...@gmail.com> wrote:

> Thank you all for your replies!
>
> I'm aware of the workaround (we are using this approach in our project)
> and of the problematic issues with decimal conversions. However, I also
> think we should make the behaviour of the code more predictable. For me
> and other developers, it can be confusing that *new
> BigDecimal(Float.toString(val))* and *BigDecimal.valueOf(double val)*
> produce different values. I'd say that, where possible, we should reduce
> arithmetic artifacts rather than introduce them through type conversions
> that are unnecessary and not visible at first sight.
>
> I was indeed aware of the potential backwards compatibility issues, and I
> was curious what your opinion on this is (I also thought about introducing
> a factory method like fromFloat to sidestep them, but I'm not sure whether
> that is a good idea). Do you think such a change would be beneficial in
> simplifying the code, or would it merely be a minor precision improvement
> while we still don't have 100% decimal precision?
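>
> A minimal sketch of that hypothetical fromFloat factory (the name and
> placement are illustrative only, not an actual proposal):
>
>     // Hypothetical addition to java.math.BigDecimal. A distinct method
>     // name avoids changing overload resolution at existing valueOf call
>     // sites.
>     public static BigDecimal fromFloat(float val) {
>         return new BigDecimal(Float.toString(val));
>     }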
>
> Also, out of curiosity, I ran a benchmark on how the lack of this overload
> impacts performance, and it seems that the type conversion makes it around
> 7 times slower than using the float directly (presumably because
> Double.toString on the widened value produces a much longer string for
> BigDecimal(String) to parse):
>
>     // JMH scaffolding (imports, @State class) elided; iterations is the
>     // benchmark parameter shown in the (iterations) column below.
>     @Param({"1000", "2000", "5000", "10000", "100000"})
>     private int iterations;
>
>     @Benchmark
>     public void oldApiFloat(Blackhole blackhole) {
>         for (int i = 0; i < iterations; i++) {
>             // the float argument is widened to double before valueOf runs
>             blackhole.consume(BigDecimal.valueOf(0.1f));
>         }
>     }
>
>     @Benchmark
>     public void newApiFloat(Blackhole blackhole) {
>         for (int i = 0; i < iterations; i++) {
>             // the suggested float path, via the helper below
>             blackhole.consume(valueOf(0.1f));
>         }
>     }
>
>     public static BigDecimal valueOf(float val) {
>         return new BigDecimal(Float.toString(val));
>     }
>
>
> Benchmark                        (iterations)   Mode  Cnt      Score     Error  Units
> BigDecimalBenchmark.newApiFloat          1000  thrpt   25  28355,359 ± 502,195  ops/s
> BigDecimalBenchmark.newApiFloat          2000  thrpt   25  14132,275 ± 206,593  ops/s
> BigDecimalBenchmark.newApiFloat          5000  thrpt   25   5667,007 ±  71,941  ops/s
> BigDecimalBenchmark.newApiFloat         10000  thrpt   25   2808,114 ±  32,403  ops/s
> BigDecimalBenchmark.newApiFloat        100000  thrpt   25    278,405 ±   4,642  ops/s
> BigDecimalBenchmark.oldApiFloat          1000  thrpt   25   3559,235 ±  40,931  ops/s
> BigDecimalBenchmark.oldApiFloat          2000  thrpt   25   1782,190 ±  21,805  ops/s
> BigDecimalBenchmark.oldApiFloat          5000  thrpt   25    712,045 ±   6,495  ops/s
> BigDecimalBenchmark.oldApiFloat         10000  thrpt   25    355,959 ±   6,006  ops/s
> BigDecimalBenchmark.oldApiFloat        100000  thrpt   25     36,239 ±   0,423  ops/s
>
> On Fri, 24 Jan 2025 at 00:59, Joseph D. Darcy <joe.da...@oracle.com>
> wrote:
>
>> On 1/23/2025 2:35 PM, Remi Forax wrote:
>>
>> Hello Jan,
>> what you are suggesting is not a backward compatible change.
>>
>>
>> There is a source compatibility impact, meaning that for some call sites,
>> the mapping of existing code using BigDecimal before and after the addition
>> of the overloaded method would change. That wouldn't necessarily preclude
>> us from making such a change (and such changes have been made in the past),
>> but extra caution and analysis would be called for.
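>>
>> To illustrate with a concrete call site (hypothetical; the values assume
>> the new overload would go through Float.toString):
>>
>>     float f = 0.1f;
>>     BigDecimal bd = BigDecimal.valueOf(f); // today: binds to valueOf(double)
>>
>> Recompiled against a BigDecimal that also declares valueOf(float), the
>> call would bind to the new, more specific overload, and bd would silently
>> change from 0.10000000149011612 to 0.1.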
>>
>> Cheers,
>>
>> -Joe
>>
>>
>>
>> If we add BigDecimal.valueOf(float), then a program recompiled with the
>> new JDK may change its behavior. You may consider the new behavior more
>> "correct" than the current one, but changing the behavior of existing
>> programs is usually a big NO in Java.
>>
>> Also, I believe the reason there is no such factory method taking a
>> float is that doing computations on floats is not recommended; it
>> rapidly becomes a mess because of the imprecision of the float32
>> representation.
>> For the same reason, in Java, 2.0 is a double and there is no FloatStream
>> while there is a DoubleStream.
>>
>> regards,
>> Rémi
>>
