Hi Dinesh,

   As far as I know, to implement the two-phase commit protocol for an 
external system, the external system is required to provide some kind of 
transaction that can stay open across sessions. With such a transaction 
mechanism, we could first start a new transaction and write the data, then 
pre-commit the transaction when checkpointing, and commit the transaction on 
the checkpoint-complete notification. After failover, we should be able to 
recover the transactions and abort them (if not pre-committed) or commit them 
(if pre-committed) again. As an example, for JDBC we may have to use XA 
transactions instead of normal JDBC transactions, since a normal JDBC 
transaction is always aborted on failover, even if we have pre-committed it.
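
To make the lifecycle concrete, here is a minimal sketch (not Flink's actual 
API) of the pattern above. The class names "XaLikeBackend" and "TwoPhaseCommitSketch" 
are made up for illustration; the backend stands in for any external system whose 
transactions are durable and recoverable by id across sessions, as XA provides 
for JDBC:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TwoPhaseCommitSketch {

    enum TxState { OPEN, PRECOMMITTED, COMMITTED, ABORTED }

    // Hypothetical backend: transactions are addressable by id and survive
    // failover, so recovery can decide their fate afterwards.
    static class XaLikeBackend {
        final Map<String, TxState> transactions = new HashMap<>();
        final Map<String, List<String>> buffered = new HashMap<>();
        final List<String> committedRecords = new ArrayList<>();

        void begin(String txId) {
            transactions.put(txId, TxState.OPEN);
            buffered.put(txId, new ArrayList<>());
        }
        void write(String txId, String record) { buffered.get(txId).add(record); }
        // Called when checkpointing: make the transaction durable.
        void precommit(String txId) { transactions.put(txId, TxState.PRECOMMITTED); }
        // Called on the checkpoint-complete notification.
        void commit(String txId) {
            transactions.put(txId, TxState.COMMITTED);
            committedRecords.addAll(buffered.get(txId));
        }
        void abort(String txId) { transactions.put(txId, TxState.ABORTED); }

        // Recovery after failover: commit pre-committed transactions again,
        // abort the ones that never reached pre-commit.
        void recover() {
            for (Map.Entry<String, TxState> e : transactions.entrySet()) {
                if (e.getValue() == TxState.PRECOMMITTED) commit(e.getKey());
                else if (e.getValue() == TxState.OPEN) abort(e.getKey());
            }
        }
    }

    public static void main(String[] args) {
        XaLikeBackend backend = new XaLikeBackend();
        backend.begin("tx-1");
        backend.write("tx-1", "a");
        backend.precommit("tx-1");  // checkpoint taken

        backend.begin("tx-2");
        backend.write("tx-2", "b"); // not yet pre-committed

        // Simulated failover before the checkpoint-complete notification:
        backend.recover();

        System.out.println(backend.committedRecords);         // [a]
        System.out.println(backend.transactions.get("tx-2")); // ABORTED
    }
}
```

The key point is that `recover()` only works because the backend still knows 
about the transactions after the failure; that is exactly what a plain JDBC 
connection cannot give us.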

  If such a transaction mechanism is not provided by the external system, we 
may have to use a secondary system (like WAL logs or a JDBC table) to first 
cache the data, and only write the data to the final system on commit. Note 
that since a transaction might be committed multiple times, the final system 
would still need to deduplicate the records, or have some kind of transaction 
mechanism that is always aborted on failover.
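
The deduplication part of that fallback can be sketched as follows. The names 
here ("FinalSystem", "WalSinkSketch") are again made up for illustration: the 
final system remembers which transaction ids it has already applied, so a 
commit replayed after failover becomes a no-op:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class WalSinkSketch {

    // Hypothetical final system that deduplicates by transaction id,
    // making commit safe to retry after a failover.
    static class FinalSystem {
        final Set<String> committedTxIds = new HashSet<>();
        final List<String> records = new ArrayList<>();

        void commit(String txId, List<String> batch) {
            if (!committedTxIds.add(txId)) return; // already applied, skip
            records.addAll(batch);
        }
    }

    public static void main(String[] args) {
        // The batch would come from the WAL / cache table in practice.
        List<String> batch = List.of("a", "b");

        FinalSystem sink = new FinalSystem();
        sink.commit("tx-1", batch);
        sink.commit("tx-1", batch); // commit retried after failover: no duplicates

        System.out.println(sink.records); // [a, b]
    }
}
```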

Best,
Yun

------------------------------------------------------------------
Sender: C DINESH <dinesh.kitt...@gmail.com>
Date: 2020/07/16 11:01:02
Recipient: user <user@flink.apache.org>
Theme: ElasticSearch_Sink

Hello All,

Can we implement the two-phase commit protocol for the Elasticsearch sink? 
Will there be any limitations?

Thanks in advance.

Warm regards,
Dinesh. 
