Hi Timo,

Though it's an extreme case, I still think this is a hard blocker when
ingesting data from an RDBMS (or from other systems that support
large-precision numbers).

The tricky part is that users can declare numeric types without any
precision or scale restrictions in an RDBMS (e.g., NUMBER in Oracle [1]),
whereas in Flink we must explicitly specify both the precision and the scale.
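
To make the mismatch concrete, here is a rough sketch (the 50-digit value
and the class name are just illustrative): Java's BigDecimal, which backs
Flink's decimal values, can hold such a number, but the declared
DecimalType caps the precision at 38.

import java.math.BigDecimal;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;
import org.apache.flink.table.types.logical.DecimalType;

public class DecimalPrecisionSketch {
    public static void main(String[] args) {
        // A value read from an unconstrained Oracle NUMBER column can be
        // arbitrarily wide; BigDecimal represents it without trouble.
        BigDecimal huge = new BigDecimal("1".repeat(50)); // 50 digits
        System.out.println(huge.precision());             // 50

        // The declared Flink type, however, must fix precision and scale,
        // and precision is capped at DecimalType.MAX_PRECISION (38).
        DataType widest = DataTypes.DECIMAL(DecimalType.MAX_PRECISION, 18);
        System.out.println(widest);                        // DECIMAL(38, 18)

        // Anything wider is rejected when the type is declared, e.g.
        // new DecimalType(50, 18) fails validation because 50 > 38.
    }
}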

Cc Jark, do you think this is a problem for flink-cdc-connectors?

Best,
Xingcan

[1]
https://docs.oracle.com/cd/B28359_01/server.111/b28318/datatype.htm#CNCPT313

On Mon, Aug 30, 2021 at 4:12 AM Timo Walther <twal...@apache.org> wrote:

> Hi Xingcan,
>
> in theory there should be no hard blocker for supporting this. The
> implementation should be flexible enough at most locations. We just
> adopted 38 from the Blink code base which adopted it from Hive.
>
> However, this could be a breaking change for existing pipelines, and we
> would need to offer a flag to restore the old behavior. It would
> definitely require a lot of testing work to avoid causing inconsistencies.
>
> Do you think this is a hard blocker for users?
>
> Regards,
> Timo
>
>
> On 28.08.21 00:21, Xingcan Cui wrote:
> > Hi all,
> >
> > Recently, I was trying to load some CDC data from Oracle/Postgres
> databases
> > and found that the current precision range [1, 38] for DecimalType may
> not
> > meet the requirement for some source types. For instance, in Oracle, if a
> > column is declared as `NUMBER` without precision and scale, the values in
> > it could potentially be very large. As DecimalType is backed by Java
> > BigDecimal, I wonder if we should extend the precision range.
> >
> > Best,
> > Xingcan
> >
>
>
