Hi Luis,
Thanks for your reply.
It’s logging the queries that take longer than the time specified in
log_min_duration_statement.
Thanks so much.
Regards,
Dinesh Chandra
|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.
With the parameter "log_directory" in postgresql.conf you can define where it
will be created.
The name of the log file is defined by "log_filename".
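For example, the relevant lines look like this with the stock values (these
are the usual defaults, so double-check your own postgresql.conf):

    logging_collector = on      # must be on for the server to write its own log files
    log_directory = 'pg_log'    # relative to the data directory
    log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'

With those values the files end up under <data_directory>/pg_log.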
- Original Message -
From: "Dinesh Chandra 12108"
To: "ProPAAS DBA" , pgsql-performance@postgresql.org
Cc: "Dinesh Chandra 12108"
Sent
Not sure what other options we have other than an EAV approach since we
allow users to define their own attribute types (attribute type is in
contacts_contactfield, attribute value is in values_value). Would you
expect modelling that with a JSON column to perform better?
Thanks for the tips!
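For anyone following along, here is a rough sketch of the two shapes being
compared; apart from the two table names mentioned above, all names and
columns are made up for illustration:

    -- EAV as it stands: one row per user-defined attribute value
    CREATE TABLE contacts_contactfield (id serial PRIMARY KEY, label text);
    CREATE TABLE values_value (
        contact_id      int,
        contactfield_id int REFERENCES contacts_contactfield (id),
        string_value    text
    );

    -- JSON alternative: fold all attributes of a contact into one column
    -- (note: 9.3 only has the json type; jsonb and its GIN indexes arrive in 9.4)
    CREATE TABLE contact_attributes (
        contact_id int PRIMARY KEY,
        attrs      json
    );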
Thanks for the reply.
May I know where it will create the log?
In the pg_log directory or somewhere else?
Regards,
Dinesh Chandra
|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.
--
Mobile: +91-9953975849 | Ext 1078
|dinesh.chan
set log_min_duration_statement = 300000
(300000 ms = 5 min)
From the docs:
log_min_duration_statement (integer)
Causes the duration of each completed statement to be logged if the
statement ran for at least the specified number of milliseconds. Setting
this to zero prints all statement durations.
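Concretely, since this is 9.1 (ALTER SYSTEM only appears in 9.4), you can try
it per session and then make it permanent in postgresql.conf; the
data-directory path below is only an example:

    -- session level, handy for testing (requires superuser):
    SET log_min_duration_statement = 300000;   -- 300000 ms = 5 min

    # in postgresql.conf, then reload (no restart needed):
    log_min_duration_statement = 300000

    $ pg_ctl reload -D /path/to/data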
Dear expert,
I want to log only those queries which take around 5 minutes to execute.
The database size is >1.5 TB, and I am using PostgreSQL 9.1 on Linux.
Thanks in advance.
Regards,
Dinesh Chandra
|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.
Hi Pavel. That suggestion gets me as far as LIMIT 694 with the fast plan
then things get slow again. This is now what happens at LIMIT 695:
Limit (cost=35945.78..50034.52 rows=695 width=88) (actual time=12852.580..12854.382 rows=695 loops=1)
Buffers: shared hit=6 read=66689
-> Merge Join (c
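(For anyone trying to reproduce this: it is simply the same statement run with
EXPLAIN (ANALYZE, BUFFERS) on either side of the tipping point; the query body
itself is the one in the pastebin, elided here:)

    EXPLAIN (ANALYZE, BUFFERS) SELECT ... LIMIT 694;  -- fast plan
    EXPLAIN (ANALYZE, BUFFERS) SELECT ... LIMIT 695;  -- planner flips to the slow plan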
Hi guys
I'm a bit stuck on a query that performs fantastically up to a certain
limit value, after which the planner goes off in a completely different
direction and performance gets dramatically worse. I'm using PostgreSQL 9.3.
You can see all the relevant schemas at http://pastebin.com/PNEqw2id an
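One workaround that is sometimes suggested for this kind of plan flip (a
sketch only; the inner SELECT stands in for the real query from the pastebin)
is to fence the planner with OFFSET 0, so the subquery is planned without the
outer LIMIT being pushed into it:

    SELECT *
    FROM (
        SELECT ...   -- the original query, minus its LIMIT
        OFFSET 0     -- optimization fence: keeps the subquery from being flattened
    ) AS sub
    LIMIT 695;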
Yes, of course; that’s all verified and taken into account during code
initialization.
From: Vitalii Tymchyshyn [mailto:v...@tym.im]
Sent: Wednesday, February 22, 2017 8:14 PM
To: John Gorman; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Correct use of cursors for very large result sets
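For reference, the server-side cursor pattern under discussion looks roughly
like this in plain SQL; the cursor name, fetch size and query are arbitrary:

    BEGIN;
    DECLARE big_cur NO SCROLL CURSOR FOR
        SELECT ...;                    -- the very large result set
    FETCH FORWARD 1000 FROM big_cur;   -- repeat until no rows come back
    CLOSE big_cur;
    COMMIT;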