>> *From:* JING ZHANG
>> *Date:* 2021-06-04 18:32
>> *To:* Yun Gao
>> *CC:* 1095193...@qq.com; user
>> *Subject:* Re: Flink sql regular join not working as expect.
>> Hi,
>> JDBC source only does a snapshot and sends all data in the snapshot to
>> downstream when it works as the right stream of a regular join.
>
> Will JDBC source be able to work as the right
> stream of a regular join in future?
>
> --
> 1095193...@qq.com

*From:* JING ZHANG
*Date:* 2021-06-04 18:32
*To:* Yun Gao
*CC:* 1095193...@qq.com; user
*Subject:* Re: Flink sql regular join not working as expect.
Hi,
JDBC source only does a snapshot and sends all data in the snapshot to
downstream when it works as the right stream of a regular join; it could not
produce a changelog stream.
After you update the field 'target' from '56.32.15.55:8080' to
'56.32.15.54:8080', JDBC source would not send the new data downstream, so
the join result would not reflect the update.
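
A minimal sketch of the setup being described (table names, schemas, and connector options below are assumptions for illustration, not taken from the original thread):

```sql
-- Kafka stream, left side of the join (options elided).
CREATE TABLE events (
  id STRING,
  proc_time AS PROCTIME()
) WITH ('connector' = 'kafka' /* topic, format, ... */);

-- Postgres table, right side of the join (options elided).
CREATE TABLE dim (
  id STRING,
  target STRING
) WITH ('connector' = 'jdbc' /* url, table-name, ... */);

-- Regular join: the JDBC side is read once as a bounded snapshot,
-- so later UPDATEs to dim.target are never seen by this query.
SELECT e.id, d.target
FROM events e
JOIN dim d ON e.id = d.id;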
Hi,
I'm not the expert for the table/sql, but it seems to me that for regular
joins, Flink would not re-read the dimension
table after it has read it fully for the first time. If you want to always join
the records with the latest version of the
dimension table, you may need to use the temporal join.
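
A sketch of the suggested processing-time temporal (lookup) join, assuming a Kafka table `events` with a processing-time attribute `proc_time` and a JDBC table `dim` (these names are illustrative): with the JDBC connector as a lookup source, each incoming record queries the database for the current row instead of joining against a one-time snapshot.

```sql
-- Lookup join: dim is queried at e.proc_time for every incoming record,
-- so updates to dim.target are visible to new events.
SELECT e.id, d.target
FROM events e
JOIN dim FOR SYSTEM_TIME AS OF e.proc_time AS d
  ON e.id = d.id;
```

Note that already-emitted join results are not retracted; only records arriving after the update see the new value, and the JDBC lookup cache options control how fresh the looked-up rows are.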
Hi
I am working on joining a Kafka stream with a Postgres dimension table.
According to
https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/sql/queries/joins/:
"Regular joins are the most generic type of join in which any new record, or
changes to either side of the join, are visible and affect the entirety of the
join result."