Re: [GENERAL] postmaster.pid file auto-clean up?

2012-08-20 Thread Sebastien Boisvert
ing on the data directory (as per `pg_ctl status`). On Aug 20 2012, at 1:31 PM, Tom Lane wrote: > Sebastien Boisvert writes: >> I vaguely remember reading in the release notes (around the time 9.x was >> released) something about it automatically clearing out the postmaster.pid >
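The staleness check discussed in this thread can be sketched in a few lines: read the PID from the first line of postmaster.pid and probe whether that process still exists. This is only an illustrative sketch, not what PostgreSQL itself does; `pg_ctl` additionally verifies that the data directory recorded in the file matches, so prefer `pg_ctl status` where available.

```python
import os

def pid_is_stale(pidfile_path):
    """Return True if the PID recorded in a postmaster.pid-style file
    no longer corresponds to a running process.

    Assumes only the documented postmaster.pid layout: the first line
    holds the postmaster's PID. Everything else here is illustrative.
    """
    with open(pidfile_path) as f:
        pid = int(f.readline().strip())
    try:
        os.kill(pid, 0)  # signal 0 sends nothing; it only checks existence
    except ProcessLookupError:
        return True      # no such process: the pid file is stale
    except PermissionError:
        return False     # process exists but is owned by another user
    return False
```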

[GENERAL] postmaster.pid file auto-clean up?

2012-08-20 Thread Sebastien Boisvert
I vaguely remember reading in the release notes (around the time 9.x was released) something about it automatically clearing out the postmaster.pid file if it was found to be stale/invalid when starting the database server, however I cannot find any reference to this anymore. Was this somet

[GENERAL] Significance of numbers in server errors?

2011-03-04 Thread Sebastien Boisvert
I'm wondering if there's a description anywhere of the significance of the numbers reported in errors; for example I've recently run into this error: ERROR: could not read block 132 of relation 1663/16430/1249: read only 0 of 8192 bytes From some documentation I've read (http://etutorials.org/SQL
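For context on the question above: in PostgreSQL the three numbers in "relation 1663/16430/1249" are the tablespace OID, database OID, and relfilenode (matching the on-disk path), and "block 132 ... 8192 bytes" refers to a standard 8 kB page. A small sketch that decomposes such a message (the regex and field names are mine, not anything PostgreSQL ships):

```python
import re

# Hypothetical helper: split a "could not read block" error into parts.
# relation N1/N2/N3 = tablespace OID / database OID / relfilenode;
# blocks are, by default, 8192-byte pages on disk.
ERR_RE = re.compile(
    r"could not read block (\d+) of relation (\d+)/(\d+)/(\d+): "
    r"read only (\d+) of (\d+) bytes"
)

def parse_read_error(message):
    m = ERR_RE.search(message)
    if not m:
        return None
    block, ts_oid, db_oid, relfilenode, got, want = map(int, m.groups())
    return {
        "block": block,
        "tablespace_oid": ts_oid,     # 1663 is pg_default
        "database_oid": db_oid,
        "relfilenode": relfilenode,
        "byte_offset": block * want,  # where the page starts in the file
        "bytes_read": got,
        "page_size": want,
    }
```

The relfilenode can be mapped back to a table name from inside the database, e.g. `SELECT relname FROM pg_class WHERE relfilenode = 1249;`.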

Re: [GENERAL] Problems backing up

2010-04-07 Thread Sebastien Boisvert
- Original Message > From: Tom Lane >> [ COPY fails to dump a 138MB bytea column ] > I wonder whether you are doing anything that exacerbates > the memory requirement, for instance by forcing an encoding conversion to > something other than the database's server_encoding. Our backups
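The memory point in this reply can be made concrete with a back-of-envelope estimate: in COPY text output a bytea value can expand to several characters per byte, and an encoding conversion to something other than server_encoding materializes a second full copy of the converted string. The expansion factors below are worst-case assumptions for illustration, not measured PostgreSQL internals:

```python
def worst_case_copy_bytes(raw_bytes, escape_factor=5, converting_encoding=True):
    """Rough upper bound on transient memory for COPYing one bytea value.

    escape_factor=5 assumes the old 'escape' bytea output format, where
    a non-printable byte becomes \\ooo (the backslash is doubled again
    in COPY text, giving up to 5 output characters per input byte).
    A client_encoding differing from server_encoding forces a second
    copy of the escaped text. Both factors are illustrative assumptions.
    """
    escaped = raw_bytes * escape_factor
    return escaped * 2 if converting_encoding else escaped

mb = 1024 * 1024
# A single 138 MB bytea value could transiently need on the order of:
print(worst_case_copy_bytes(138 * mb) // mb, "MB")  # 1380 MB
```

This is why keeping client_encoding equal to server_encoding for dumps, as the reply suggests, can make the difference between a backup succeeding and failing.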

[GENERAL] Determine if postmaster can accept new connections

2010-01-31 Thread Sebastien Boisvert
I'm not sure if this is the best list to ask... I have a need to know if the server is able to accept connections - is there a way to call canAcceptConnections() from the front end somehow? Thanks.
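One best-effort answer from the client side is simply probing whether the postmaster is listening. A minimal sketch (host and port are assumptions; this does not reproduce the backend's internal canAcceptConnections() logic, which also considers connection limits and recovery state):

```python
import socket

def postmaster_listening(host="127.0.0.1", port=5432, timeout=1.0):
    """Best-effort front-end check: can we open a TCP connection to the
    server's port? A True result only shows something is accepting TCP
    connections there; the server may still refuse sessions during
    startup or shutdown.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A fuller check is to attempt an actual database connection (or, on releases that ship it, run the `pg_isready` utility, which speaks the startup protocol).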

[GENERAL] Problems backing up

2010-01-31 Thread Sebastien Boisvert
Hi all, We have an OS X app which integrates postgres as its database backend, and recently we've had a couple of cases where users haven't been able to perform a backup of their database. The failure gets reported as a problem in a table ("largedata") where we store large binary objects, wi