Hello Mike,

You could also try exploring “Performance Insights” for the RDS instances.
Personally, I found it helpful when debugging some issues.
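
If you end up going the logging route instead, log_min_duration_statement gets you close to what you describe (statement text, duration, timestamp, and bind parameters all in one log line). A minimal sketch of the relevant settings, applied through a DB parameter group since RDS does not let you edit postgresql.conf directly; the 1500 ms threshold is just your example figure:

```
# RDS DB parameter group (example values, not recommendations)
log_min_duration_statement = 1500  # log any statement running longer than 1.5 s
# Each matching log line includes the duration and a timestamp (per
# log_line_prefix), and for prepared statements the bind parameter
# values are logged as well, so you see exactly what was executed.
```

You could then put a CloudWatch Logs metric filter or alarm on those "duration:" lines to get the alerting side.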
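On the pg_stat_statements point below: it is indeed aggregate-only, but it can still narrow down which statements are worth chasing. A rough example query (column names as in PostgreSQL 11 and earlier; mean_time/total_time were renamed mean_exec_time/total_exec_time in PostgreSQL 13):

```sql
-- Top statements by average runtime; aggregates only, so no
-- per-execution timestamps and no bind parameter values.
SELECT query,
       calls,
       mean_time,    -- milliseconds
       total_time    -- milliseconds
FROM   pg_stat_statements
ORDER  BY mean_time DESC
LIMIT  10;
```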

Regards,
Praveen

On Fri, Apr 5, 2019 at 6:54 AM Mamet, Eric (GfK) <eric.ma...@gfk.com> wrote:

> It looks like I missed some functionality related to LOG_STATEMENT, such as
> filtering on duration (log_min_duration_statement).
>
>
>
> So maybe log_statement is what I am looking for, combined with some
> CloudWatch monitoring on the log?
>
> *From:* Mamet, Eric (GfK)
> *Sent:* 04 April 2019 17:28
> *To:* 'pgsql-performa...@postgresql.org' <pgsql-performa...@postgresql.org>
> *Subject:* monitoring options for postgresql under AWS/RDS?
>
>
>
> Hi there,
>
>
>
> I would like to monitor our PostgreSQL instance under AWS RDS to get an
> alert (or a log entry) if any query runs over a certain amount of time, like
> 1.5 seconds.
>
> I would like to know which query took over that time (and how long), when
> and which parameters it used.
>
> The exact parameters are important because the amount of data retrieved
> varies a lot depending on parameters.
>
> I would like to know when it happened to be able to correlate it with the
> overall system activity.
>
>
>
> I came across:
>
> ·         pg_stat_statements is very useful BUT it gives me stats rather
> than specific executions.
> In particular, I don’t know the exact time it happened or the parameters
> used.
>
> ·         log_statement, but this time I don’t see how I would filter on
> “slow” queries, and it seems to be dumped into the RDS log… not very easy to
> use and maybe too heavy for a production system.
>
> ·         pgHero is great, but it looks like an interactive tool (unless I
> missed something), and I don’t think it gives me the exact parameters and
> times (not sure…).
>
>
>
> Is there a tool I could use to achieve that?
>
>
>
> Thanks
>
>
>
> Eric
>
