* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Michael Paquier <michael.paqu...@gmail.com> writes:
> > Now what about a json format logging with one json object per log entry?
>
> > A single json entry would need more space than a csv one as we need to
> > track the field names with their values. Also, there is always the
> > argument that if an application needs json-format logs, it could use
> > csvlog on Postgres-side and do the transformation itself. But wouldn't
> > it be a win for application or tools if such an option is available
> > in-core?
>
> I think the extra representational overhead is already a good reason to
> say "no". There is not any production scenario I've ever heard of where
> log output volume isn't a consideration.
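The overhead is certainly real- a JSON object has to repeat every field
name on every entry, while csvlog emits only the values. A quick
back-of-the-envelope sketch (Python, with a made-up entry and a field
list abbreviated well below what csvlog actually carries, purely for
illustration):

# Compare the size of one hypothetical log entry rendered as a
# csvlog-style row vs. a JSON object carrying the same values.
import csv, io, json

fields = ["log_time", "user_name", "database_name", "error_severity",
          "message"]
values = ["2014-08-27 10:15:00.000 UTC", "app_user", "proddb", "LOG",
          "duration: 12.345 ms"]

buf = io.StringIO()
csv.writer(buf).writerow(values)   # values only, no field names
csv_line = buf.getvalue()

# JSON pays for the field names on every single entry.
json_line = json.dumps(dict(zip(fields, values))) + "\n"

print(len(csv_line), len(json_line))

For an entry like this the JSON form comes out at over twice the size of
the CSV row, and the ratio only gets worse as messages get shorter.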
The flip side is that there are absolutely production cases where what we
output is either too little or too much- being able to control that and
then have the (filtered) result in JSON would be more-or-less exactly
what a client of ours is looking for.

To try to clarify that a bit, as it comes across as rather opaque even on
my re-reading: consider a case where you can't have the
"credit_card_number" field ever exported to an audit or log file, but
you're required to log all other changes to a table. Then consider that
such a situation extends to individual INSERT or UPDATE commands- you
need the command logged, but you can't have the contents of that column
in the log file (see the P.S. below for a sketch of what doing that
client-side looks like today).

Our current capabilities around logging and auditing are dismal and
extremely frustrating when faced with these kinds of quite real
requirements.

I'll be in an internal meeting more-or-less all day tomorrow discussing
auditing and how we might make things easier for organizations which have
these requirements- I'd certainly welcome any thoughts in that direction.

Thanks,

Stephen
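P.S. On Michael's point about doing the transformation client-side,
below is a minimal sketch of what that looks like- read csvlog rows,
strip the sensitive column's value out of logged statements, and emit
one JSON object per entry. The column list is abbreviated and the
redaction pattern is hypothetical; this is an illustration, not a real
tool.

# Read csvlog rows on stdin, redact values assigned to a hypothetical
# credit_card_number column in logged statements, emit JSON per entry.
import csv, json, re, sys

# Abbreviated column list, just enough for the sketch; the real csvlog
# format carries quite a few more columns.
COLUMNS = ["log_time", "user_name", "database_name", "error_severity",
           "message"]

# Hypothetical pattern: catches credit_card_number = '...' in a logged
# UPDATE; a real tool would also need to handle INSERT column lists,
# bind parameters, and so on.
REDACT = re.compile(r"(credit_card_number\s*=\s*)'[^']*'", re.IGNORECASE)

for row in csv.reader(sys.stdin):
    entry = dict(zip(COLUMNS, row))
    if "message" in entry:
        entry["message"] = REDACT.sub(r"\1'<redacted>'", entry["message"])
    print(json.dumps(entry))

Which is, of course, exactly the kind of fragile after-the-fact
filtering that makes me want this controlled in-core in the first place.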