Hi Amit.

Hmm, that's... challenging. I ran into issues after "the fix" because I had a 
couple of transactions with 25k updates each, and I had to split them to be 
able to push them to our event messaging system, as our max message size is 
10MB. Relying on commit time would mean that all operations of a transaction 
carry the same timestamp. If something goes wrong while my worker is pushing 
those transaction data chunks, I will duplicate some data on the next run, so 
commit time alone wouldn't let me deal with data duplication.
Do you see any other way to deal with this?
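For concreteness, here is a rough sketch of the kind of chunking I mean 
(Python, with made-up names and structures, not my actual worker code): every 
chunk of the same transaction ends up with the same commit LSN and commit 
timestamp, and only a chunk index distinguishes them.

# Illustrative sketch only: splitting one decoded transaction into <=10MB
# messages. Change, chunk_transaction and MAX_MESSAGE_BYTES are made-up names,
# not from pgoutput or any real client library. All chunks share the same
# commit LSN/timestamp, so the commit timestamp alone cannot tell a consumer
# which chunks it has already seen after a partial, retried publish.

import json
from dataclasses import dataclass
from typing import Iterator, List

MAX_MESSAGE_BYTES = 10 * 1024 * 1024  # broker's max message size

@dataclass
class Change:
    lsn: int       # per-change LSN from the WAL record
    payload: dict  # decoded row data

def chunk_transaction(commit_lsn: int, commit_ts: str,
                      changes: List[Change]) -> Iterator[dict]:
    """Yield messages no larger than MAX_MESSAGE_BYTES, each tagged with
    (commit_lsn, chunk_index) so the consumer can deduplicate chunks."""
    chunk: List[dict] = []
    size = 0
    index = 0
    for change in changes:
        encoded = {"lsn": change.lsn, "data": change.payload}
        encoded_size = len(json.dumps(encoded).encode())
        # start a new message when the next change would exceed the limit
        if chunk and size + encoded_size > MAX_MESSAGE_BYTES:
            yield {"commit_lsn": commit_lsn, "commit_ts": commit_ts,
                   "chunk_index": index, "changes": chunk}
            chunk, size, index = [], 0, index + 1
        chunk.append(encoded)
        size += encoded_size
    if chunk:
        yield {"commit_lsn": commit_lsn, "commit_ts": commit_ts,
               "chunk_index": index, "changes": chunk}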

Right now I see only one option: store all processed LSNs on the other side 
of the ETL. I'm trying to avoid that overhead.
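For illustration, that option would look roughly like this on the consuming 
side (SQLite and all table/function names are just placeholders for whatever 
store the ETL target offers; this is a sketch, not my implementation):

# Illustrative sketch: keep the highest (commit_lsn, chunk_index) that has
# been fully applied and drop anything at or below that watermark.

import sqlite3

def init_store(conn: sqlite3.Connection) -> None:
    conn.execute("""CREATE TABLE IF NOT EXISTS progress (
                        id INTEGER PRIMARY KEY CHECK (id = 1),
                        commit_lsn INTEGER NOT NULL,
                        chunk_index INTEGER NOT NULL)""")
    conn.commit()

def already_processed(conn: sqlite3.Connection,
                      commit_lsn: int, chunk_index: int) -> bool:
    row = conn.execute("SELECT commit_lsn, chunk_index FROM progress "
                       "WHERE id = 1").fetchone()
    return row is not None and (commit_lsn, chunk_index) <= tuple(row)

def mark_processed(conn: sqlite3.Connection,
                   commit_lsn: int, chunk_index: int) -> None:
    conn.execute("""INSERT INTO progress (id, commit_lsn, chunk_index)
                    VALUES (1, ?, ?)
                    ON CONFLICT(id) DO UPDATE SET
                        commit_lsn = excluded.commit_lsn,
                        chunk_index = excluded.chunk_index""",
                 (commit_lsn, chunk_index))
    conn.commit()

The watermark comparison assumes chunks arrive in commit-LSN / chunk-index 
order; out-of-order delivery would need a set of processed keys instead, which 
is exactly the bookkeeping overhead I'd like to avoid.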

Thanks.
Regards,
José Neves
________________________________
From: Amit Kapila <amit.kapil...@gmail.com>
Sent: 7 August 2023 05:59
To: José Neves <rafanev...@msn.com>
Cc: Andres Freund <and...@anarazel.de>; pgsql-hack...@postgresql.org 
<pgsql-hack...@postgresql.org>
Subject: Re: CDC/ETL system on top of logical replication with pgoutput, custom 
client

On Sun, Aug 6, 2023 at 7:54 PM José Neves <rafanev...@msn.com> wrote:
>
> A follow-up on this. Indeed, a new commit-based approach solved my missing 
> data issues.
> But, getting back to the previous examples, how are server times expected to 
> be logged for the xlogs containing these records?
>

I think it should be based on commit_time because as far as I see we
can only get that on the client.

--
With Regards,
Amit Kapila.