I have a job that summarizes some data from a table, truncates the table, drops 
the indexes, writes the summarized data back into the table, then recreates the 
indexes.  The operations are not wrapped in a single transaction; they are 
separate statements executed by a script.  It's simple and has run great for years.

Recently the job has been taking a little longer to run and is still going when 
the database dump starts.  That's when I started getting errors like this:

ERROR:  cache lookup failed for index 70424
STATEMENT:  SELECT t.tableoid, t.oid, t.relname AS indexname,
                pg_catalog.pg_get_indexdef(i.indexrelid) AS indexdef,
                t.relnatts AS indnkeys, i.indkey, i.indisclustered,
                c.contype, c.conname, c.tableoid AS contableoid, c.oid AS conoid,
                (SELECT spcname FROM pg_catalog.pg_tablespace s
                 WHERE s.oid = t.reltablespace) AS tablespace,
                array_to_string(t.reloptions, ', ') AS options
            FROM pg_catalog.pg_index i
            JOIN pg_catalog.pg_class t ON (t.oid = i.indexrelid)
            LEFT JOIN pg_catalog.pg_depend d
                ON (d.classid = t.tableoid AND d.objid = t.oid AND d.deptype = 'i')
            LEFT JOIN pg_catalog.pg_constraint c
                ON (d.refclassid = c.tableoid AND d.refobjid = c.oid)
            WHERE i.indrelid = '56448'::pg_catalog.oid
            ORDER BY indexname

The OID value changes daily, of course.  pg_dump then disconnects and stops.  I 
checked the source code, and the query is definitely coming from pg_dump.  When 
I run the dump again a few minutes later it works beautifully, so I don't think 
the system catalogs are corrupt.

My questions are: can making DDL changes during a dump cause this error?  Are 
the queries used by pg_dump transactionally consistent, i.e., do they run in a 
single transaction and get a consistent view of the system catalogs?  And other 
than finer coordination of the jobs, how can this situation be avoided?
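
In case it matters for the answer, one thing I've considered is wrapping the 
whole refresh in a single transaction (same placeholder names as the sketch 
above), so the dropped index never commits separately from its replacement:

    BEGIN;

    CREATE TEMP TABLE summarized AS
        SELECT key_col, sum(value_col) AS value_col
        FROM the_table
        GROUP BY key_col;

    TRUNCATE the_table;
    DROP INDEX the_table_key_idx;

    INSERT INTO the_table (key_col, value_col)
        SELECT key_col, value_col FROM summarized;
    CREATE INDEX the_table_key_idx ON the_table (key_col);

    COMMIT;

I don't know whether that actually closes the window for pg_dump's catalog 
queries or just trades the error for lock waits, so any guidance there is 
appreciated too.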

I'm running PG 8.4.1 on Linux.

Thanks in advance for your responses.

Bob Lunney