We ran into an open-file-limit problem on the DB host (Mac OS X 10.9.0, Postgres 9.3.2) 
and started getting the familiar "ERROR:  unexpected chunk number 0 (expected 1) for 
toast value 155900302 in pg_toast_16822" when selecting data.

Previously, when we've run into this kind of corruption, we could find the 
specific corrupted rows in the table and delete them by ctid.  This time, 
however, deleting by ctid fails with a persistent "ERROR:  tuple concurrently 
updated".
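For reference, the way we locate the corrupted rows is roughly the following 
scan (a sketch, not our exact script; the length(t::text) call is just one way 
to force de-TOASTing of every column so the chunk error surfaces, and the 
table/column names are the ones from this report):

    DO $$
    DECLARE
        r record;
    BEGIN
        FOR r IN SELECT id, ctid FROM testruns LOOP
            BEGIN
                -- casting the whole row to text forces all TOASTed
                -- columns to be fetched, triggering the chunk error
                PERFORM length(t::text) FROM testruns t WHERE t.id = r.id;
            EXCEPTION WHEN OTHERS THEN
                RAISE NOTICE 'corrupt row: id = %, ctid = %', r.id, r.ctid;
            END;
        END LOOP;
    END $$;

That scan is how we identified id 141889653 / ctid (37069816,3) below.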

munin2=# select ctid from testruns where id = 141889653;
     ctid     
--------------
 (37069816,3)
(1 row)

munin2=# delete from testruns where ctid = '(37069816,3)';
ERROR:  tuple concurrently updated

The error occurs every time, and it is preventing us from cleaning up the 
database by removing the corrupted rows.

Before attempting anything more drastic, such as restarting the Postgres 
instance: is there a known way to get around this error and clean up the 
corruption, other than the full replicate / reindex suggestions from around 
the web, which are more involved than deleting corrupted rows by ctid?

thanks,

        ~ john


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
