Hi Paul,

Are you using the old planner? Did you try the blink planner? I guess it may be
a bug in the old planner, which doesn't work well with the new types.
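
In case it helps, here is a minimal sketch of switching to the blink planner in
1.10. It only illustrates the EnvironmentSettings switch; the rest of the job
(table registration, the INSERT INTO statement) stays the same, and it assumes
the flink-table-planner-blink dependency is on the classpath:

```
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class BlinkPlannerSwitch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Pick the blink planner instead of the legacy (old) planner.
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inStreamingMode()
                .build();

        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);

        // ... register the Kafka table and run the INSERT INTO statement as before.
    }
}
```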

Best,
Jark

On Thu, 19 Mar 2020 at 16:27, Paul Lam <paullin3...@gmail.com> wrote:

> Hi,
>
> Recently I upgraded a simple application that inserts static data into a
> table from 1.9.0 to 1.10.0, and encountered a timestamp type incompatibility
> problem during table sink validation.
>
> The SQL is like:
> ```
> -- schema: (user_name STRING, user_id INT, login_time TIMESTAMP)
> insert into kafka.test.tbl_a
> select 'ann', 1000, TIMESTAMP '2019-12-30 00:00:00'
> ```
>
> And the error thrown:
> ```
> Field types of query result and registered TableSink
> `kafka`.`test`.`tbl_a` do not match.
> Query result schema: [EXPR$0: String, EXPR$1: Integer, EXPR$2: Timestamp]
> TableSink schema: [user_name: String, user_id: Integer, login_time:
> LocalDateTime]
> ```
>
> After some digging, I found the root cause might be that since FLINK-14645,
> timestamp fields defined via a TableFactory have been bridged to
> LocalDateTime, while timestamp literals are still backed by
> java.sql.Timestamp.
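>
> To illustrate my understanding (just a rough sketch against the 1.10
> DataTypes API, not the actual code paths involved):
>
> ```
> import java.sql.Timestamp;
>
> import org.apache.flink.table.api.DataTypes;
> import org.apache.flink.table.types.DataType;
>
> public class TimestampBridgingSketch {
>     public static void main(String[] args) {
>         // TIMESTAMP(3) as defined via DDL / TableFactory properties in 1.10:
>         // its default conversion class is java.time.LocalDateTime.
>         DataType sinkTimestamp = DataTypes.TIMESTAMP(3);
>         System.out.println(sinkTimestamp.getConversionClass());
>
>         // The legacy behaviour, explicitly bridged to java.sql.Timestamp,
>         // which seems to be what the timestamp literal in the query still uses.
>         DataType legacyTimestamp = DataTypes.TIMESTAMP(3).bridgedTo(Timestamp.class);
>         System.out.println(legacyTimestamp.getConversionClass());
>     }
> }
> ```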
>
> Is my reasoning correct? And is there any workaround? Thanks a lot!
>
> Best,
> Paul Lam
>
>
