On 2017-02-03 17:47:50 -0500, Robert Haas wrote:
> On Thu, Feb 2, 2017 at 7:14 PM, Craig Ringer <cr...@2ndquadrant.com> wrote:
> > We could make reorder buffers persistent and shared between decoding
> > sessions but it'd totally change the logical decoding model and create
> > some other problems. It's certainly not a topic for this patch. So we
> > can take it as given that we'll always restart decoding from BEGIN
> > again at a crash.
Sharing them seems unlikely (filtering and such would become a lot more
complicated), and it's a separate question from persistency. I'm not sure,
however, how it would "totally change the logical decoding model"? Even if
we didn't always restart decoding, we'd still have the option of adding the
necessary information to the spill files, so I'm unclear on how persistency
plays a role here.

> OK, thanks for the explanation.  I have never liked this design very
> much, and told Andres so: big transactions are bound to cause
> noticeable replication lag.  But you're certainly right that it's not
> a topic for this patch.

Streaming and persistency of spill files are different topics, no? Either
would initially have complicated things beyond the point of getting this
into core - I'm all for adding them at some point.

Persistent spill files (which would also require spilling small
transactions at regular intervals) have the additional issues that the
spill format becomes something we can't adapt in bugfixes etc., and that
we'd need to fsync it.

I still haven't seen a credible model for applying a stream of interleaved
transactions that can roll back individually; I think we really need the
ability to have multiple transactions alive in one backend for that.
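To make that concrete, here's a minimal sketch (table name and xids are
made up) of why subtransactions in a single apply backend aren't enough:
an upstream transaction that commits while another, interleaved one is
still open and later aborts can't be represented:

    CREATE TABLE tbl (val int);

    BEGIN;                            -- stands in for upstream xid 701
    INSERT INTO tbl VALUES (1);       -- change from 701
    SAVEPOINT upstream_702;           -- try to model upstream xid 702
    INSERT INTO tbl VALUES (2);       -- change from 702
    RELEASE SAVEPOINT upstream_702;   -- 702 commits upstream, but its row
                                      -- still isn't visible to anyone else
    ROLLBACK;                         -- 701 aborts upstream - and 702's
                                      -- already-committed change is gone too

    SELECT * FROM tbl;                -- returns no rows

Andres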