I'm generally -0.01 on narrow decimals. My experience in practice has
been that widening happens so quickly that they see little use and add
unnecessary complexity. For reference, the original Arrow code actually
implemented Decimal9 [1] and Decimal18 [2], but we removed both because of
this complexity. (Worth noting: we worked with these types for several
years, before that model even came into the Arrow project, before reaching
this conclusion.)

One of the other commenters here spoke of the benefit to things like TPC-H.
I doubt this would be meaningful, as I believe most (if not all) decimal
operations in TPC-H would immediately widen to DECIMAL38; the sketch below
shows why.
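
For intuition, here is a minimal sketch of that widening, using the
Hive/Spark-style result-type rule for multiplication (precision p1 + p2 + 1,
scale s1 + s2) rather than anything Arrow-specific. The column precisions
are illustrative assumptions, since TPC-H only mandates a generic decimal:

    // Sketch of common Hive/Spark-style decimal result rules (an assumption,
    // not Arrow's spec): multiply yields precision p1 + p2 + 1, scale s1 + s2.
    public final class DecimalWidening {
        record Dec(int precision, int scale) {
            Dec multiply(Dec other) {
                return new Dec(precision + other.precision + 1,
                               scale + other.scale);
            }
            @Override public String toString() {
                return "DECIMAL(" + precision + "," + scale + ")";
            }
        }

        public static void main(String[] args) {
            // TPC-H Q1: sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)).
            // Precisions below are illustrative, not from the TPC-H spec.
            Dec extendedPrice   = new Dec(12, 2);
            Dec oneMinusDiscount = new Dec(13, 2); // 1 - l_discount
            Dec onePlusTax       = new Dec(13, 2); // 1 + l_tax

            Dec step1 = extendedPrice.multiply(oneMinusDiscount);
            Dec step2 = step1.multiply(onePlusTax);

            System.out.println(step1); // DECIMAL(26,4): past 18 digits already
            System.out.println(step2); // DECIMAL(40,6): over the usual 38 cap,
                                       // so engines clamp/rescale to DECIMAL38
        }
    }

A single multiply on ~12-digit inputs already exceeds 18 digits, so under
these rules a Decimal18 intermediate would rarely survive past the first
operator.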

Another possible approach would be to add DECIMAL18 to the spec and see how
much it actually gets used (and how much value it really adds) before
adding DECIMAL9.

It's easy to add types to the spec, hard to remove them.

[1] https://github.com/apache/arrow/blob/fa5f0299f046c46e1b2f671e5e3b4f1956522711/java/vector/src/main/codegen/data/ValueVectorTypes.tdd#L66
[2] https://github.com/apache/arrow/blob/fa5f0299f046c46e1b2f671e5e3b4f1956522711/java/vector/src/main/codegen/data/ValueVectorTypes.tdd#L81


