There seem to be two orthogonal issues here - in effect, how to log and where to log. I had a brief look, and providing an option to log the dbname where appropriate seems quite easy - unless someone else is already doing it, I will look at it over the weekend. Assuming that were done, you could split the log based on dbname.
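To illustrate the splitting step, here is a minimal sketch in Python (a perl one-liner would do just as well). It assumes a hypothetical log format in which, once the proposed option exists, each line carries a "dbname: " prefix - the prefix format is an assumption, not an existing PostgreSQL feature:

```python
import re
from collections import defaultdict

# Hypothetical format: lines prefixed with the database name, e.g.
# "sales: ERROR: relation foo does not exist". Lines without a prefix
# (startup messages, postmaster chatter) go into a catch-all bucket.
LINE_RE = re.compile(r"^(\w+): (.*)$")

def split_by_dbname(lines):
    """Group log lines into per-database buckets."""
    buckets = defaultdict(list)
    for line in lines:
        m = LINE_RE.match(line)
        if m:
            buckets[m.group(1)].append(m.group(2))
        else:
            buckets["_unattributed"].append(line)
    return buckets
```

From there, writing each bucket to its own file (or handing it to a per-client viewer) is trivial.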

For the reasons Tom gives, logging to a table looks much harder and possibly undesirable - I would normally want my log table(s) in a different database, possibly even on a different machine, from my production transactional database. However, an ISP might want to provide the logs for each client in their designated db. It therefore seems far more sensible to load logs into tables out of band, as Tom suggests, possibly with some helper tools in contrib to parse the logs, or even to load them in more or less real time (many tools exist to do this sort of thing for web logs, so it is hardly rocket science - a classic case for a perl script ;-).
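The out-of-band loader could be as simple as turning dbname-prefixed lines into parameterized INSERTs to run against each client's designated database. A sketch, where the line format and the client_log table name are both illustrative assumptions:

```python
def lines_to_inserts(lines, table="client_log"):
    """Turn hypothetical "dbname: message" log lines into
    (dbname, sql, params) tuples for an out-of-band load.
    Table name and log format are illustrative assumptions."""
    stmts = []
    for line in lines:
        db, _, msg = line.partition(": ")
        if msg:
            stmts.append((db, "INSERT INTO %s (message) VALUES (%%s)" % table, (msg,)))
    return stmts
```

A driver script would then open a connection per target database and execute each statement - the parsing, not the loading, is the only new piece.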

cheers

andrew


[EMAIL PROTECTED] wrote:


On Mon, 28 Jul 2003, Tom Lane wrote:



Date: Mon, 28 Jul 2003 21:39:23 -0400
From: Tom Lane <[EMAIL PROTECTED]>
To: Robert Treat <[EMAIL PROTECTED]>
Cc: [EMAIL PROTECTED], Larry Rosenman <[EMAIL PROTECTED]>,
    Josh Berkus <[EMAIL PROTECTED]>,
    pgsql-hackers list <[EMAIL PROTECTED]>
Subject: Re: [HACKERS] Feature request -- Log Database Name

Robert Treat <[EMAIL PROTECTED]> writes:


I think better would be a GUC "log_to_table" which wrote all standard
out/err to a pg_log table. Of course, I doubt you could make this
foolproof (how to log startup errors in this table?) but it could be a
start.


How would a failed transaction make any entries in such a table?  How
would you handle maintenance operations on the table that require
exclusive lock?  (vacuum full, reindex, etc)

It seems possible that you could make this work if you piped stderr to a
buffering process that was itself a database client, and issued INSERTs
to put the rows into the table, and could buffer pending data whenever
someone else had the table locked (eg for vacuum).  I'd not care to try
to get backends to do it locally.

regards, tom lane
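The buffering process Tom describes can be sketched in a few lines - read the postmaster's stderr, try to INSERT each row, and queue rows whenever the table is locked. In this sketch the flush callback stands in for a real database client issuing INSERTs; it is an assumption for illustration, not existing code:

```python
from collections import deque

class BufferingLogClient:
    """Separate process that consumes piped stderr and INSERTs rows,
    buffering pending lines whenever the log table is locked
    (e.g. by vacuum full or reindex)."""

    def __init__(self, flush):
        self.pending = deque()
        self.flush = flush  # returns True on success, False if table locked

    def feed(self, line):
        self.pending.append(line)
        self.drain()

    def drain(self):
        # Flush oldest-first; stop (and keep buffering) on the first failure.
        while self.pending:
            if not self.flush(self.pending[0]):
                break
            self.pending.popleft()
```

The key property is that a locked table never loses lines - they simply accumulate until the next successful flush.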


Not quite: my goal is to have a log per database, and the stderr doesn't
contain enough information to split it.

As an ISP, I would like each customer having one or more databases
to be able to see any errors on their database.
I imagine having a log file per database would be too complicated...
