* Alvaro Herrera (alvhe...@2ndquadrant.com) wrote:
> Stephen Frost wrote:
>
> > The flip side is that there are absolutely production cases where what
> > we output is either too little or too much- being able to control that
> > and then have the (filtered) result in JSON would be more-or-less
> > exactly what a client of ours is looking for.
>
> My impression is that the JSON fields are going to be more or less
> equivalent to the current csvlog columns (what else could it be?).  So
> if you can control what you give your auditors by filtering by
> individual JSON attributes, surely you could count columns in the
> hardcoded CSV definition we use for csvlog just as well.
I don't want to invent a CSV and SQL parser to address this
requirement.  That'd be pretty horrible.

> > To try to clarify that a bit, as it comes across as rather opaque even
> > on my re-reading, consider a case where you can't have the
> > "credit_card_number" field ever exported to an audit or log file, but
> > you're required to log all other changes to a table.  Then consider that
> > such a situation extends to individual INSERT or UPDATE commands- you
> > need the command logged, but you can't have the contents of that column
> > in the log file.
>
> It seems a bit far-fetched to think that you will be able to rip out
> parts of queries by applying JSON operators to the query text.  Perhaps
> your intention is to log queries using something similar to the JSON
> blobs I'm using in the DDL deparse patch?

Right- we need to pass the queries through a normalization structure
which can then consider what's supposed to be sent on to the log file-
ideally that would happen on a per-backend basis, allowing the
filtering to be parallelized.  It's quite a bit more than what we've
currently got going on, which is more-or-less "dump the string we were
sent", certainly.  (A rough sketch of the kind of per-attribute
filtering I have in mind is below.)

> My own thought is: JSON is good, but sadly it doesn't cure cancer.

Unfortunately, straw-man arguments don't either. ;)

Thanks!

Stephen
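To make that a bit more concrete, here's a rough sketch (illustration
only, not a proposal for the actual log format or code; the field
names, the record layout, and the exclusion rule are all hypothetical).
The assumption is that the logged statement has been broken out into
structured, per-column JSON attributes rather than kept only as the raw
query string:

import json

# Hypothetical JSON-format log record with the statement deparsed into
# per-column attributes (assumption: the logger exposes column values
# this way; today all we really have is the raw query text).
record = {
    "timestamp": "2015-01-01 00:00:00 UTC",
    "user": "app",
    "database": "prod",
    "command": "INSERT",
    "relation": "payments",
    "columns": {
        "customer": "alice",
        "credit_card_number": "4111111111111111",
        "amount": "10.00",
    },
}

# Attributes the auditors are never allowed to see.
AUDIT_EXCLUDE = {"credit_card_number"}

def filter_record(rec, exclude=AUDIT_EXCLUDE):
    # Drop the excluded attributes before the record is written out;
    # everything else passes through unchanged.
    out = dict(rec)
    out["columns"] = {k: v for k, v in rec["columns"].items()
                      if k not in exclude}
    return out

print(json.dumps(filter_record(record), indent=2))

The point is only that once the record is structured, dropping or
keeping an attribute is a one-line dictionary operation, whereas
getting the same effect out of the csvlog format means re-parsing the
query text- which is exactly the parser I don't want to write.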