George Robinson II <[EMAIL PROTECTED]> writes:
> Last night, while my perl script was doing a huge insert operation, I
> got this error...
> DBD::Pg::st execute failed: ERROR: copy: line 4857, pg_atoi: error
> reading "2244904358": Result too large
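
For what it's worth, that pg_atoi failure is an integer overflow, not a
COPY bug: 2244904358 is bigger than 2147483647, the largest value an int4
column can hold, so the column receiving it would need to be int8 (or
numeric) instead.  A minimal sketch, with hypothetical table and column
names since I don't know your schema:

    CREATE TABLE flow_counts (
        flow_id  int4,
        octets   int8   -- int4 tops out at 2147483647; 2244904358 fits in int8
    );
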
> Now, I'm not sure if this is related, but while trying to do vacuumdb
> <dbname>, I got...
> NOTICE: FlushRelationBuffers(all_flows, 500237): block 171439 is
> referenced (private 0, global 1)
> FATAL 1: VACUUM (vc_repair_frag): FlushRelationBuffers returned -2

Probably not related. We've seen sporadic reports of this error in 7.0,
but it's been tough to get enough info to figure out the cause. If you
can find a reproducible way to create the block-is-referenced condition
we'd sure like to know about it!

As a quick-hack recovery, you should find that stopping and restarting the
postmaster will eliminate the VACUUM failure. The block-is-referenced
condition is not all that dangerous in itself; VACUUM is just being
paranoid about the possibility that someone is using the table that it
thinks it has an exclusive lock on.
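
Concretely, that's just something like the following (the data directory
path here is hypothetical; substitute your own, or use your init script
if that's how the postmaster normally gets started):

    $ pg_ctl -D /usr/local/pgsql/data stop
    $ pg_ctl -D /usr/local/pgsql/data start
    $ vacuumdb <dbname>
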
> Any ideas? I'm trying a couple other things right now. By the way,
> this database has one table that is HUGE. What is the limit on table
> size in PostgreSQL 7? The FAQ says unlimited. If that's true, how do
> you get around the 2Gb file size limit that (at least) I have in Solaris
> 2.6?

We break tables into multiple physical files of 1Gb apiece.
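
The extra segments just get .1, .2, ... appended to the base file name,
each capped at 1Gb, so the OS never sees a file larger than that.  If you
want to see how big all_flows has gotten from inside the database,
pg_class keeps a page count that VACUUM maintains; a minimal sketch:

    SELECT relname, relpages
    FROM pg_class
    WHERE relname = 'all_flows';
    -- pages are 8192 bytes by default, so 131072 pages = one 1Gb segment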

regards, tom lane