APseudoUtopia <apseudouto...@gmail.com> writes:
>> Here's what happened:
>> 
>> $ vacuumdb --all --full --analyze --no-password
>> vacuumdb: vacuuming database "postgres"
>> vacuumdb: vacuuming database "web_main"
>> vacuumdb: vacuuming of database "web_main" failed: ERROR:  huge tuple

> PostgreSQL 8.4.0 on i386-portbld-freebsd7.2, compiled by GCC cc (GCC)
> 4.2.1 20070719  [FreeBSD], 32-bit

This is evidently coming out of ginHeapTupleFastCollect because it's
formed a GIN tuple that is too large (either too long a word, or too
many postings, or both).  I'd say that this represents a serious
degradation in usability from pre-8.4 releases: before, you would have
gotten the error upon attempting to insert the table row that triggers
the problem.  Now, with the "fast insert" stuff, you don't find out
until VACUUM fails, and you have no idea where the bad data is.  Not cool.
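
In the meantime, about the only way for the OP to find the bad data is
to hunt for it by hand.  Something like the untested sketch below might
help narrow it down, assuming the failing GIN index is on a tsvector
column (the table and column names here are made up; substitute your
own).  Note that length(tsvector) only counts lexemes, so this catches
the "too many postings" case but not a single over-long word:

    SELECT ctid, length(body_tsv) AS lexeme_count
    FROM web_main_table
    ORDER BY length(body_tsv) DESC
    LIMIT 10;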

Oleg, Teodor, what can we do about this?  Can we split an oversize
tuple into multiple entries?  Can we apply suitable size checks
before instead of after the fast-insert queue?
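
By "size checks before the fast-insert queue" I mean something like the
untested sketch below (the helper name and the limit are placeholders,
not actual 8.4 symbols): reject an oversized entry while the heap row is
still being inserted, so the error implicates that row rather than
surfacing later from VACUUM.

    #include "postgres.h"
    #include "access/itup.h"

    /* placeholder limit, not the real GIN maximum */
    #define PENDING_ENTRY_LIMIT  (BLCKSZ / 2)

    /* hypothetical helper, called for each entry bound for the pending list */
    static void
    check_pending_entry_size(IndexTuple itup)
    {
        if (IndexTupleSize(itup) > PENDING_ENTRY_LIMIT)
            ereport(ERROR,
                    (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                     errmsg("GIN index entry is too large (%lu bytes)",
                            (unsigned long) IndexTupleSize(itup))));
    }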

                        regards, tom lane
