It's a real problem. I have seen this pattern more than once already.
People keep several schemas with identical data structures, in preparation for eventually migrating each schema to its own server in the cloud. So you have ten schemas, one generic user from the connection pool, and pg_stat_statements is useless for identifying a performance problem in a particular schema. You see ten records with different queryids and identical query text; one of them indicates a performance issue for this query in one particular schema, but you can't tell which schema it is. A report that groups queries by text may hide the problem altogether,
because on average the statistics look fine.
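To make the averaging effect concrete, here is a toy sketch with made-up numbers (not real pg_stat_statements output): nine schemas run the query fast, one has a regression, and a report that groups the ten rows by query text averages the outlier away.

```python
# Toy illustration: mean execution time (ms) of the same query text in
# each of ten identically-structured schemas. Schema "s7" has a regression.
per_schema_mean_ms = {f"s{i}": 2.0 for i in range(10)}
per_schema_mean_ms["s7"] = 50.0  # the problem schema

# pg_stat_statements keeps ten rows (one queryid per schema), but a
# report that groups them by query text averages the rows together:
grouped_mean = sum(per_schema_mean_ms.values()) / len(per_schema_mean_ms)
print(f"grouped mean:  {grouped_mean:.1f} ms")                        # 6.8 ms
print(f"worst schema: {max(per_schema_mean_ms.values()):.1f} ms")     # 50.0 ms
```

The grouped mean of 6.8 ms looks harmless, while one schema is 25x slower than the rest; and since the schema name is not in the query text, the slow row can't be attributed to its schema.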
I had hoped it would be possible to reassemble the query text after parsing, once the names have already been uniquely resolved.

Thank you,

Sergei

On 11/28/2018 2:59 PM, Tom Lane wrote:
Alvaro Herrera <alvhe...@2ndquadrant.com> writes:
> On 2018-Nov-28, Tom Lane wrote:
>> This would also entail rather significant overhead to find out schema
>> names and interpolate them into the text.

> True.  I was thinking that the qualified-names version of the query
> would be obtained via ruleutils or some similar mechanism to deparse
> from the parsed query tree (not from the original query text), where
> only pg_catalog is considered visible.  This would be enabled using a
> GUC that defaults to off.
Color me skeptical --- ruleutils has never especially been designed
to be fast, and I can't see that the overhead of this is going to be
acceptable to anybody who needs pg_stat_statements in production.
(Some admittedly rough experiments suggest that we might be
talking about an order-of-magnitude slowdown for simple queries.)

                        regards, tom lane