Hi
I'd like to know if there is a way to specify different transaction isolation levels for different tables in the db. The reason I'm asking this (rather bizarre-sounding, I know ;-) ) question is the following:
I'm importing about 2 million records into my application each day (the data is more or less fully replaced each day). My importer updates only a few tables (about 5 - 10), but reads a lot of other tables (10 or so) while importing. Those (read-only, meta-information) tables contain information on how to
import the data, and what reports to calculate from the imported data.
My import sometimes crashes, because the meta-information tables are changed while importing (e.g., I pass an id to a function, the function does some calculations, then tries to select the row with the given id, but fails, because the row was deleted in the meantime). I understand that the standard approach to this problem is to set the transaction isolation level to "serializable", thus avoiding non-repeatable reads.
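
To illustrate (the table import_rules and the id are of course made up): under the default READ COMMITTED level, every statement takes a fresh snapshot, so a row can vanish between two reads in the same transaction:

-- session 1 (the importer), default READ COMMITTED:
BEGIN;
SELECT rule FROM import_rules WHERE id = 42;  -- row is found

-- session 2 (meanwhile, in autocommit mode):
DELETE FROM import_rules WHERE id = 42;

-- session 1 again:
SELECT rule FROM import_rules WHERE id = 42;  -- no rows: non-repeatable read
COMMIT;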
But since the import is a lengthy operation (a few hours), I don't want to run it in a serializable transaction, since that would force me to import "in a loop" until no serialization error occurs.
But it's only the meta-information tables for which I want to avoid non-repeatable reads, and since those are read-only anyway (for my importer), I wouldn't have to fear "serialization errors" when accessing only those tables in serializable mode (read-only transactions never trigger serialization errors).
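
In other words, something like this (untested, same made-up table) should always see one consistent snapshot, and, being read-only, can never be the victim of a write conflict:

BEGIN ISOLATION LEVEL SERIALIZABLE, READ ONLY;
SELECT rule FROM import_rules WHERE id = 42;  -- row is found
-- even if a concurrent DELETE of that row commits right now ...
SELECT rule FROM import_rules WHERE id = 42;  -- ... it is still found here
COMMIT;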
I know I could simulate something like that using dblink, but if possible I'd prefer a simpler approach (using dblink would mean rewriting large parts of the import, since it's mostly stored procedures).
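
For completeness, the dblink workaround I have in mind looks roughly like this (untested sketch; the connection string and table are made up): the meta-information reads go through a second connection that holds its own serializable, read-only transaction:

SELECT dblink_connect('meta', 'dbname=mydb');
SELECT dblink_exec('meta', 'BEGIN ISOLATION LEVEL SERIALIZABLE, READ ONLY');

-- all meta-information lookups go through the second connection:
SELECT rule
FROM dblink('meta', 'SELECT rule FROM import_rules WHERE id = 42')
     AS t(rule text);

SELECT dblink_exec('meta', 'COMMIT');
SELECT dblink_disconnect('meta');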
greetings, Florian Pflug