> Interesting, when I went to copy my data directory out of the way, I
> received this from cp:
>
> cp: data/base/16976/17840: Result too large
>
> might be a clue
I don't think it's PostgreSQL. I would suggest unmounting the volume and
running fsck (or the equivalent for your environment).
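Something along these lines, for example (the volume and device names are
just guesses for an OS X box, so adjust them to your setup):

    pg_ctl stop -D /usr/local/pgsql/data   # stop the postmaster first
    umount /Volumes/Data                   # whatever volume holds the data directory (a guess)
    fsck -y /dev/rdisk0s10                 # device name is a guess; check the output of mount

then remount or just reboot. If the data directory lives on the boot
volume you'd have to boot into single user mode to fsck it.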
Starting in single user mode and reindexing the database didn't fix the
error, although it seemed to run just fine.
Vacuum verbose ran until it hit the tfxtrade_details table and then it
died with that same error. It didn't whine about any other problems
prior to dying.
INFO: --Relation public.tfxtrade_details--
Hello,
There are a couple of things it could be. I would suggest that you take
down the database, start it up with -P (I think it is -o '-P', it might
be -p '-O', I don't recall) and try to reindex the database itself.
You can also do a vacuum verbose and see if you get some more errors you
might have missed.
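For the record, the standalone incantation is roughly this (from memory,
and the database name below is just a placeholder, so double-check it
against the 7.3 REINDEX docs first):

    pg_ctl stop -D /usr/local/pgsql/data
    postgres -D /usr/local/pgsql/data -O -P yourdb   # standalone backend, system indexes ignored
    backend> REINDEX DATABASE yourdb;
    backend> REINDEX TABLE tfxtrade_details;
    backend> VACUUM VERBOSE tfxtrade_details;

then exit with ctrl-D and restart the postmaster as usual.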
Both vacuum [full] and reindex fail with that same error.
Vacuum is run regularly via a cron job.
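The cron entry is along these lines (the path and schedule here are just
an example):

    # nightly database-wide vacuum + analyze
    15 3 * * * /usr/local/pgsql/bin/vacuumdb --all --analyze --quiet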
-jason
On Feb 14, 2004, at 2:29 PM, Joshua D. Drake wrote:
Hello,
When was the last time you ran a reindex? Or a vacuum / vacuum full?
Sincerely,
Joshua D. Drake
On Sat, 14 Feb 2004, Jason Essington wrote:
I am running PostgreSQL 7.3.3 on OS X Server 10.2
The database has been running just fine for quite some time now, but
this morning it began pitching the error:
ERROR: cannot read block 176 of tfxtrade_details: Numerical result
out of range
any time the table tfxtrade_details is accessed.
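For reference, block 176 at the default 8k block size starts at byte
176 * 8192 = 1441792, so one quick sanity check is whether the table's
file on disk is even that long (the database name and paths below are
just placeholders):

    # find the file behind the table; this query doesn't touch the damaged table itself
    psql -d yourdb -c "SELECT relfilenode FROM pg_class WHERE relname = 'tfxtrade_details';"
    # the file lives under $PGDATA/base/<database oid>/<relfilenode>
    ls -l /usr/local/pgsql/data/base/<database-oid>/<relfilenode>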