Hi Ali,
Thank you so much! That is very helpful.
Thanks,
Qihua
On Wed, Nov 3, 2021 at 2:46 PM Ali Bahadir Zeybek wrote:
Hello Qihua,
This will require you to implement and maintain your own database insertion
logic using any of the clients that your database and programming language
support. Bear in mind that you will be losing all the optimizations Flink's
connector provides for you, and this will add complexity to your application.
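As a rough illustration of what such hand-rolled insertion logic could look like with plain JDBC (the orders table, its columns, and the DataSource are made-up assumptions for the example, not anything from the thread):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class OrderInserter {

    private final DataSource dataSource;

    public OrderInserter(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    /** Returns true only if the database actually accepted the row. */
    public boolean insert(long id, String payload) {
        try (Connection conn = dataSource.getConnection();
             PreparedStatement stmt = conn.prepareStatement(
                     "INSERT INTO orders (id, payload) VALUES (?, ?)")) {
            stmt.setLong(1, id);
            stmt.setString(2, payload);
            return stmt.executeUpdate() == 1;
        } catch (SQLException e) {
            // No batching, flush intervals, or retries here -- exactly the
            // conveniences the Flink JDBC connector would otherwise handle.
            return false;
        }
    }
}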
Many thanks guys!
Hi Ali, for approach 2, what is the better way to do the database inserts
in this case? Currently we simply use the JDBC SQL connector to sink to the
database.
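For reference, a minimal sketch of what sinking through Flink's JDBC SQL connector (Table API) can look like; the table name, columns, connection settings, and the orders_source table are illustrative assumptions, not anyone's actual setup:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class JdbcSqlConnectorSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // A dummy source so the sketch is self-contained.
        tEnv.executeSql(
                "CREATE TABLE orders_source (id BIGINT, payload STRING) " +
                "WITH ('connector' = 'datagen')");

        // The JDBC-backed sink table; the connector batches and flushes writes for you.
        tEnv.executeSql(
                "CREATE TABLE orders_sink (" +
                "  id BIGINT," +
                "  payload STRING" +
                ") WITH (" +
                "  'connector' = 'jdbc'," +
                "  'url' = 'jdbc:postgresql://localhost:5432/mydb'," +
                "  'table-name' = 'orders'," +
                "  'username' = 'flink'," +
                "  'password' = 'secret'" +
                ")");

        tEnv.executeSql("INSERT INTO orders_sink SELECT id, payload FROM orders_source");
    }
}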
Thanks,
Qihua
On Wed, Nov 3, 2021 at 8:13 AM Ali Bahadir Zeybek wrote:
Hello Qihua,
If you do not care about the events that are not committed to the DB,
you can use Async I/O [1] and implement logic that
- does the database inserts
- completes only the original events that are accepted by the DB
You can then sink this new datastream to Kafka.
If you are also interested in the events that are not committed to the DB, ...
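A minimal sketch of that Async I/O approach, assuming a hypothetical blocking insertIntoDb(...) helper that returns true only when the database accepts the row (class and helper names are made up for illustration):

import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

// Emits an event downstream only after the database insert has succeeded.
public class DbInsertAsyncFunction extends RichAsyncFunction<String, String> {

    private transient ExecutorService executor;

    @Override
    public void open(Configuration parameters) {
        executor = Executors.newFixedThreadPool(4);
    }

    @Override
    public void asyncInvoke(String event, ResultFuture<String> resultFuture) {
        CompletableFuture
                .supplyAsync(() -> insertIntoDb(event), executor) // hypothetical blocking insert
                .thenAccept(ok -> resultFuture.complete(
                        ok ? Collections.singleton(event)          // DB accepted the row: forward it
                           : Collections.<String>emptyList()));    // DB rejected it: drop (or route elsewhere)
    }

    private boolean insertIntoDb(String event) {
        // Placeholder for the hand-written insert logic (plain JDBC, a reactive client, etc.).
        return true;
    }

    @Override
    public void close() {
        if (executor != null) executor.shutdown();
    }
}

// Wiring it up:
// DataStream<String> persisted = AsyncDataStream.unorderedWait(
//         events, new DbInsertAsyncFunction(), 5, TimeUnit.SECONDS, 100);
// persisted.sinkTo(kafkaSink); // only successfully persisted events reach Kafka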
An alternative is to use a CDC tool like Debezium to stream your table
changes, and then ingest that stream with Flink to push the data on to
Kafka.
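A rough sketch of that CDC route, assuming the flink-cdc-connectors MySqlSource (which embeds Debezium) is available; hostnames, credentials, and table names are illustrative assumptions:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import com.ververica.cdc.connectors.mysql.source.MySqlSource;
import com.ververica.cdc.debezium.JsonDebeziumDeserializationSchema;

public class CdcToKafkaSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Debezium (embedded in the Flink CDC source) reads the MySQL binlog and
        // turns every committed table change into a JSON change event.
        MySqlSource<String> source = MySqlSource.<String>builder()
                .hostname("localhost")
                .port(3306)
                .databaseList("mydb")
                .tableList("mydb.orders")
                .username("flink")
                .password("secret")
                .deserializer(new JsonDebeziumDeserializationSchema())
                .build();

        DataStream<String> changes =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "MySQL CDC Source");

        // Forward the change stream; replace print() with a Kafka sink in practice,
        // so the topic only ever sees rows that were actually committed to the DB.
        changes.print();

        env.execute("cdc-to-kafka-sketch");
    }
}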
On Wed, Nov 3, 2021 at 6:17 AM Guowei Ma wrote:
Hi, Qihua
AFAIK there is no way to do it. Maybe you need to implement a "new" sink to
achieve this goal.
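One rough illustration of such a custom sink: a RichSinkFunction that first writes each record to the database and only then produces it to Kafka. Connection details, the events table, and the topic name are made-up assumptions; error handling and exactly-once concerns are omitted.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Properties;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class DbThenKafkaSink extends RichSinkFunction<String> {

    private transient Connection connection;
    private transient KafkaProducer<String, String> producer;

    @Override
    public void open(Configuration parameters) throws Exception {
        connection = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "flink", "secret");
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        producer = new KafkaProducer<>(props);
    }

    @Override
    public void invoke(String value, Context context) throws Exception {
        // 1. Write to the database first; an exception here stops the Kafka write.
        try (PreparedStatement stmt = connection.prepareStatement(
                "INSERT INTO events (payload) VALUES (?)")) {
            stmt.setString(1, value);
            stmt.executeUpdate();
        }
        // 2. Only records that reached the DB are forwarded to Kafka.
        producer.send(new ProducerRecord<>("events-topic", value));
    }

    @Override
    public void close() throws Exception {
        if (producer != null) producer.close();
        if (connection != null) connection.close();
    }
}

// Usage: stream.addSink(new DbThenKafkaSink());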
Best,
Guowei
On Wed, Nov 3, 2021 at 12:40 PM Qihua Yang wrote:
> Hi,
>
> Our flink application has two sinks (DB and kafka topic). We want to push
> the same data to both sinks. Is it possible to push the data to the kafka
> topic only after it has been successfully written to the DB?