Thanks to all for your help.  I've adopted the scheme involving a
"staging" table -- the writer processes insert into that, then a single
"publisher" process pulls from that and writes to the log, giving a
clean serial order for any reader of the log.
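
For the archives, here is roughly what that looks like. This is only a
minimal sketch with illustrative names (log_staging, log, payload), not
my actual schema:

    -- Writers INSERT into this from their own transactions; the serial
    -- id just gives the publisher a stable pull order within a batch.
    CREATE TABLE log_staging (
        id      serial  PRIMARY KEY,
        payload text    NOT NULL
    );

    -- Only the publisher ever writes this table, so log_id is assigned
    -- by a single session and readers of the log see a clean serial
    -- order.
    CREATE TABLE log (
        log_id  serial  PRIMARY KEY,
        payload text    NOT NULL
    );

    -- Run periodically by the single publisher process.  Under snapshot
    -- isolation both statements see the same set of staging rows, so a
    -- row committed mid-batch is neither copied nor deleted; it simply
    -- waits for the next run.
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    INSERT INTO log (payload)
        SELECT payload FROM log_staging ORDER BY id;
    DELETE FROM log_staging;
    COMMIT;

Since the writers only ever insert into the staging table, the publisher
never contends with them for row locks, and the batch should not run
into serialization failures.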

    Vance 

On Mon, 2008-04-21 at 23:59 +0200, Joris Dobbelsteen wrote:
> Craig Ringer wrote:
> [snip]
> > If you really want to make somebody cry, I guess you could do it with 
> > dblink - connect back to your own database from dblink and use a short 
> > transaction to commit a log record, using table-based (rather than 
> > sequence) ID generation to ensure that records were inserted in ID 
> > order. That'd restrict the "critical section" in which your various 
> > transactions were unable to run concurrently to a much shorter period, 
> > but would result in a log message being saved even if the transaction 
> > later aborted. It'd also be eye-bleedingly horrible, to the point where 
> > even the "send a message from a C function" approach would be nicer.
> 
> This will not work for the problem the OP has. Suppose a single 
> transaction hangs long enough before committing while others succeed: 
> the ordering of the changes is preserved, but the commits can still 
> become visible out of order.
> 
> The issue is that you don't really have the critical section you 
> describe; there is no single lock that everyone is contending for.
> 
> It would work with an added table-level write lock (or stronger), 
> which would then serve as the lock for your critical section.
> 
> In my opinion, I would just forget about this one rather quickly, as 
> you more or less proposed yourself...
> 
> - Joris
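
For completeness, the table-lock variant Joris describes would look
something like the sketch below (again, names are illustrative; I went
with the staging table instead, since this serializes every writer for
the duration of its transaction):

    BEGIN;
    -- EXCLUSIVE mode blocks other writers but not plain SELECTs, so
    -- this is the single lock the critical section contends for.
    LOCK TABLE log IN EXCLUSIVE MODE;
    INSERT INTO log (log_id, payload)
        SELECT coalesce(max(log_id), 0) + 1, 'message' FROM log;
    COMMIT;  -- lock is held until here, so commits come out in id order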

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
