Robert Haas <robertmh...@gmail.com> writes:
> <dons asbestos underpants>
> 4. There would presumably be some finite limit on the size of the
> shared memory structure for aborted transactions. I don't think
> there'd be any reason to make it particularly small, but if you sat
> there and aborted transactions at top speed you might eventually run
> out of room, at which point any transactions you started wouldn't be
> able to abort until vacuum made enough progress to free up an entry.
Um, that bit is a *complete* nonstarter. The possibility of a failed
transaction always has to be allowed. What if vacuum itself gets an
error, for example? Or what if the system crashes?

I thought for a bit about inverting the idea, such that there were a
limit on the number of unvacuumed *successful* transactions rather than
the number of failed ones. But that seems just as unforgiving: what if
you really need to commit a transaction to effect some system state
change? An example might be dropping some enormous table that you no
longer need, but vacuum is going to insist on plowing through it before
it'll let you have any more transactions.

I'm of the opinion that any design that presumes it can always fit all
the required transaction-status data in memory is probably not even
worth discussing. There always has to be a way for status data to spill
to disk. What's interesting is how you can achieve enough locality of
access so that most of what you need to look at is usually in memory.

			regards, tom lane
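
[For reference, the existing answer for commit status already has this
shape: pg_clog keeps two status bits per XID in fixed-size pages on
disk, and a small set of SLRU buffers holds the recently touched pages
in shared memory, so lookups for recent transactions rarely do I/O.
What follows is a standalone, greatly simplified sketch of that
paged-cache idea, not backend code; the file name, page size, cache
size, and LRU policy are illustrative assumptions only.]

/*
 * Sketch of a paged transaction-status store with a small in-memory
 * page cache, in the spirit of the SLRU buffers over pg_clog.  Two
 * status bits per XID live in fixed-size pages in a disk file; only a
 * handful of pages are cached, so the structure never assumes that all
 * status data fits in memory.  PAGE_SIZE, NCACHED, and the file name
 * are arbitrary illustrative choices, not anything from this thread.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

#define PAGE_SIZE		8192				/* bytes per status page */
#define XACTS_PER_PAGE	(PAGE_SIZE * 4)		/* 2 status bits per xact */
#define NCACHED			8					/* in-memory page slots */

typedef enum
{
	XSTAT_IN_PROGRESS = 0,
	XSTAT_COMMITTED = 1,
	XSTAT_ABORTED = 2
} XactStatus;

typedef struct
{
	long		pageno;			/* page held in this slot, -1 if none */
	int			lru;			/* higher = more recently used */
	unsigned char data[PAGE_SIZE];
} StatusSlot;

static StatusSlot cache[NCACHED];
static int	lru_counter = 0;
static FILE *statusfile;

/*
 * Return the cache slot holding pageno, evicting the least recently
 * used page and reading the wanted one from disk if it isn't resident.
 */
static StatusSlot *
get_page(long pageno)
{
	int			victim = 0;
	StatusSlot *slot;

	for (int i = 0; i < NCACHED; i++)
	{
		if (cache[i].pageno == pageno)
		{
			cache[i].lru = ++lru_counter;
			return &cache[i];
		}
		if (cache[i].lru < cache[victim].lru)
			victim = i;
	}

	slot = &cache[victim];
	if (slot->pageno >= 0)		/* write back the evicted page */
	{
		fseek(statusfile, slot->pageno * PAGE_SIZE, SEEK_SET);
		fwrite(slot->data, 1, PAGE_SIZE, statusfile);
	}
	memset(slot->data, 0, PAGE_SIZE);
	fseek(statusfile, pageno * PAGE_SIZE, SEEK_SET);
	if (fread(slot->data, 1, PAGE_SIZE, statusfile) < PAGE_SIZE)
		clearerr(statusfile);	/* short read: page never written, stays zeroed */
	slot->pageno = pageno;
	slot->lru = ++lru_counter;
	return slot;
}

static void
set_status(uint64_t xid, XactStatus st)
{
	StatusSlot *slot = get_page((long) (xid / XACTS_PER_PAGE));
	uint64_t	off = xid % XACTS_PER_PAGE;
	unsigned char *byte = &slot->data[off / 4];
	int			shift = (int) (off % 4) * 2;

	*byte = (unsigned char) ((*byte & ~(3 << shift)) | (st << shift));
}

static XactStatus
get_status(uint64_t xid)
{
	StatusSlot *slot = get_page((long) (xid / XACTS_PER_PAGE));
	uint64_t	off = xid % XACTS_PER_PAGE;
	int			shift = (int) (off % 4) * 2;

	return (XactStatus) ((slot->data[off / 4] >> shift) & 3);
}

int
main(void)
{
	statusfile = fopen("xact_status.dat", "w+b");
	if (statusfile == NULL)
		return 1;
	for (int i = 0; i < NCACHED; i++)
		cache[i].pageno = -1;

	set_status(42, XSTAT_COMMITTED);
	set_status(100000, XSTAT_ABORTED);	/* lands on a different page */
	printf("xid 42 -> %d, xid 100000 -> %d\n",
		   get_status(42), get_status(100000));

	fclose(statusfile);
	return 0;
}

[The real SLRU machinery additionally has to deal with locking, fsync
ordering, and truncation of old segments by vacuum, none of which
appears in this sketch; the point is only that hot pages stay resident
while everything else lives on disk.]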