Hi,
We recently upgraded our database from 9.1 to 9.6. We are seeing some
unusually slow queries after the upgrade.
Sometimes the queries are faster after a vacuum analyze, but not consistently.
We tried different settings of random_page_cost, work_mem, and
effective_cache_size but the query results
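When experimenting with those settings it is usually enough to change them for one session and compare plans, rather than editing postgresql.conf each time. A minimal sketch (the table and query below are hypothetical, not from the report above):

    -- session-local overrides; nothing is persisted
    SET random_page_cost = 1.1;
    SET work_mem = '64MB';
    SET effective_cache_size = '12GB';
    EXPLAIN (ANALYZE, BUFFERS)
        SELECT count(*) FROM orders WHERE created_at > now() - interval '1 day';
    RESET ALL;  -- back to the server defaults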
On 2014/04/28 07:52 PM, Jeff Janes wrote:
On Mon, Apr 28, 2014 at 10:12 AM, Michael van Rooyen
<mich...@loot.co.za> wrote:
It looks like something is causing your IO to seize up briefly. It is
common for the sync phase of the checkpoint to do that, but that would
only explain 3 of th
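One way to confirm the checkpoint theory is to turn on checkpoint logging and line the reported sync times up with the stalls. A sketch (needs superuser; on pre-9.4 servers set the parameter in postgresql.conf instead of ALTER SYSTEM):

    ALTER SYSTEM SET log_checkpoints = on;   -- 9.4+; otherwise edit postgresql.conf
    SELECT pg_reload_conf();                 -- takes effect on reload, no restart
    -- subsequent checkpoints are then logged roughly as:
    -- LOG:  checkpoint complete: wrote N buffers; ... write=... s, sync=... s, total=... s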
Michael van Rooyen writes:
> On 2014/04/28 07:50 PM, Tom Lane wrote:
>> Hm ... it seems pretty suspicious that all of these examples take just
>> about exactly 1 second longer than you might expect. I'm wondering
>> if there is something sitting on an exclusive table lock somewhere,
>> and releas
On 2014/04/28 07:50 PM, Tom Lane wrote:
Michael van Rooyen writes:
I'm trying to get to the bottom of a performance issue on a server
running PostgreSQL 9.3.1 on Centos 5.
Hm ... it seems pretty suspicious that all of these examples take just
about exactly 1 second longer than you might expect.
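If the cause really is a briefly-held exclusive lock, the waiters are usually visible at the moment a stall happens. A sketch that works on 9.6 and later (this thread predates pg_blocking_pids, so treat it as illustrative):

    -- sessions currently waiting on a lock, and who is blocking them
    SELECT pid,
           pg_blocking_pids(pid) AS blocked_by,
           wait_event_type,
           query
    FROM pg_stat_activity
    WHERE wait_event_type = 'Lock';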
On Mon, Apr 28, 2014 at 10:12 AM, Michael van Rooyen wrote:
> I'm trying to get to the bottom of a performance issue on a server running
> PostgreSQL 9.3.1 on Centos 5. The machine is a dual quad-core Xeon E5620
> with 24GB ECC RAM and four enterprise SATA Seagate Constellation ES drives
> config
Michael van Rooyen writes:
> I'm trying to get to the bottom of a performance issue on a server
> running PostgreSQL 9.3.1 on Centos 5.
Hm ... it seems pretty suspicious that all of these examples take just
about exactly 1 second longer than you might expect. I'm wondering
if there is something
I'm trying to get to the bottom of a performance issue on a server
running PostgreSQL 9.3.1 on Centos 5. The machine is a dual quad-core
Xeon E5620 with 24GB ECC RAM and four enterprise SATA Seagate
Constellation ES drives configured as 2 software RAID1 volumes. The
main DB is on one volume a
Hello Kevin,
I solved the issue.
I reproduced it immediately after installing PostgreSQL 8.4.1.
I thought they were using PostgreSQL 8.4.8 but was never able to reproduce
it with that version.
So something related to my problem was changed, but I didn't see explicitly
what in the change notes.
Nev
Ghislain ROUVIGNAC wrote:
>> I would leave default_statistics_target alone unless you see a lot of
>> estimates which are off by more than an order of magnitude. Even then, it
>> is often better to set a higher value for a few individual columns than for
>> everything.
>
>
> We had an issue with
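For reference, raising the target on a single skewed column looks like this (table and column names are hypothetical):

    ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;
    ANALYZE orders;   -- the larger sample only takes effect after the next analyze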
Ghislain ROUVIGNAC wrote:
> Memory: 4 GB in use, 15 GB free, 5 GB cache.
If the active portion of your database is actually small enough
that it fits in the OS cache, I recommend:
seq_page_cost = 0.1
random_page_cost = 0.1
cpu_tuple_cost = 0.05
> I plan to increase various parameters as follow:
Ghislain ROUVIGNAC wrote:
> There is a vacuum analyze scheduled during the night.
> In the morning, one day out of two, there are some extremely slow
> queries. Those queries last more than 5 minutes (we never waited
> longer and cancelled them) whereas when everything is OK they last
> less than 300 ms.
>
>
Hello,
I have a customer that experiences a strange behaviour related to statistics.
There is a vacuum analyze scheduled during the night.
In the morning, one day out of two, there are some extremely slow queries.
Those queries last more than 5 minutes (we never waited longer and cancelled
them) whereas when
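When the slowdowns only appear on some mornings, a first check is whether the nightly analyze actually ran and how much the tables changed since; something along these lines:

    -- last (auto)vacuum / (auto)analyze per table, plus live/dead row counters
    SELECT relname,
           last_analyze, last_autoanalyze,
           last_vacuum,  last_autovacuum,
           n_live_tup,   n_dead_tup
    FROM pg_stat_user_tables
    ORDER BY relname;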
On 16 November 2011, 18:31, Cody Caughlan wrote:
>
> On Nov 16, 2011, at 8:52 AM, Tomas Vondra wrote:
>
>> On 16 November 2011, 2:21, Cody Caughlan wrote:
>>> How did you build your RAID array? Maybe I have a fundamental flaw /
>>> misconfiguration. I am doing it via:
>>>
>>> $ yes | mdadm --create
On Nov 16, 2011, at 8:52 AM, Tomas Vondra wrote:
> On 16 November 2011, 2:21, Cody Caughlan wrote:
>> How did you build your RAID array? Maybe I have a fundamental flaw /
>> misconfiguration. I am doing it via:
>>
>> $ yes | mdadm --create /dev/md0 --level=10 -c256 --raid-devices=4
>> /dev/xvdb
On 16 November 2011, 2:21, Cody Caughlan wrote:
> How did you build your RAID array? Maybe I have a fundamental flaw /
> misconfiguration. I am doing it via:
>
> $ yes | mdadm --create /dev/md0 --level=10 -c256 --raid-devices=4
> /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde
> $ pvcreate /dev/md0
> $ vgc
On 16 November 2011, 5:27, Greg Smith wrote:
> On 11/14/2011 01:16 PM, Cody Caughlan wrote:
>> We're starting to see some slow queries, especially COMMITs that are
>> happening more frequently. The slow queries are against seemingly
>> well-indexed tables.
>> Slow commits like:
>>
>> 2011-11-14 17:
On 11/14/2011 01:16 PM, Cody Caughlan wrote:
We're starting to see some slow queries, especially COMMITs that are
happening more frequently. The slow queries are against seemingly
well-indexed tables.
Slow commits like:
2011-11-14 17:47:11 UTC pid:14366 (44/0-0) LOG: duration: 3062.784 ms
sta
On Tue, Nov 15, 2011 at 5:16 PM, Tomas Vondra wrote:
> On 14.11.2011 22:58, Cody Caughlan wrote:
>> I ran bonnie++ on a slave node, doing active streaming replication but
>> otherwise idle:
>> http://batch-files-test.s3.amazonaws.com/sql03.prod.html
>>
>> bonnie++ on the master node:
>> http:
On 14.11.2011 22:58, Cody Caughlan wrote:
> I ran bonnie++ on a slave node, doing active streaming replication but
> otherwise idle:
> http://batch-files-test.s3.amazonaws.com/sql03.prod.html
>
> bonnie++ on the master node:
> http://batch-files-test.s3.amazonaws.com/sql01.prod.html
>
> If I
On 15.11.2011 01:13, Cody Caughlan wrote:
> The first two are what I would think would be largely read operations
> (certainly the SELECT) so it's not clear why a SELECT consumes write
> time.
>
> Here is the output of some pg_stat_bgwriter stats from the last couple of
> hours:
>
> https://
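For reference, those numbers come from a query like the one below; a checkpoints_req count that grows much faster than checkpoints_timed generally means checkpoints are being forced by WAL volume (checkpoint_segments too small on this era of releases):

    SELECT checkpoints_timed,
           checkpoints_req,
           buffers_checkpoint,
           buffers_clean,
           buffers_backend
    FROM pg_stat_bgwriter;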
On Mon, Nov 14, 2011 at 2:57 PM, Tomas Vondra wrote:
> On 14 November 2011, 22:58, Cody Caughlan wrote:
>>> Seems reasonable, although I'd bump up the checkpoint_timeout (the 5m is
>>> usually too low).
>>
>> Ok, will do.
>
> Yes, but find out what that means and think about the possible impact
>
On 14 November 2011, 22:58, Cody Caughlan wrote:
>> Seems reasonable, although I'd bump up the checkpoint_timeout (the 5m is
>> usually too low).
>
> Ok, will do.
Yes, but find out what that means and think about the possible impact
first. It usually improves the checkpoint behaviour but increases
Thanks for your response. Please see below for answers to your questions.
On Mon, Nov 14, 2011 at 11:22 AM, Tomas Vondra wrote:
> On 14 November 2011, 19:16, Cody Caughlan wrote:
>> shared_buffers = 3584MB
>> wal_buffers = 16MB
>> checkpoint_segments = 32
>> max_wal_senders = 10
>> checkpoint_com
On 14 November 2011, 19:16, Cody Caughlan wrote:
> shared_buffers = 3584MB
> wal_buffers = 16MB
> checkpoint_segments = 32
> max_wal_senders = 10
> checkpoint_completion_target = 0.9
> wal_keep_segments = 1024
> maintenance_work_mem = 256MB
> work_mem = 88MB
> shared_buffers = 3584MB
> effective_ca
Hi, running Postgres 9.1.1 on an EC2 m1.xlarge instance. Machine is a
dedicated master with 2 streaming replication nodes.
The machine has 16GB of RAM and 4 cores.
We're starting to see some slow queries, especially COMMITs that are
happening more frequently. The slow queries are against seemingly well-indexed tables.
On Mon, Mar 2, 2009 at 2:24 PM, Tim Bunce wrote:
> On Mon, Mar 02, 2009 at 02:29:31PM -0500, Tom Lane wrote:
>> Brian Cox writes:
>> > select locktype,database,relation,virtualxid,virtualtransaction,pid,mode
>> > from pg_locks order by mode;
>>
>> If you hadn't left out the "granted" column we co
On Mon, Mar 02, 2009 at 02:29:31PM -0500, Tom Lane wrote:
> Brian Cox writes:
> > select locktype,database,relation,virtualxid,virtualtransaction,pid,mode
> > from pg_locks order by mode;
>
> If you hadn't left out the "granted" column we could be more sure,
> but what it looks like to me is the
Tom Lane [...@sss.pgh.pa.us] wrote:
Well, that's certainly a sufficient reason, if perhaps not the only
reason. Dropping ts_defects_20090227 will require removal of FK triggers
on ts_transets, and we can't do that concurrently with transactions that
might be trying to fire those triggers.
Now a
Brian Cox writes:
> So, the idle transaction is the problem. Thanks to you, Scott Carey and
> Robert Haas for pointing this out. However, why does the drop of
> ts_defects_20090227 need exclusive access to ts_transets? I assume it
> must be due to this FK?
> alter table ts_defects_20090227 add
Tom Lane [...@sss.pgh.pa.us] wrote:
If you hadn't left out the "granted" column we could be more sure,
but what it looks like to me is the DROP (pid 13842) is stuck behind
the transaction (pid 13833). In particular these two rows of
pg_locks look like a possible conflict:
> relation |
Brian Cox writes:
> select locktype,database,relation,virtualxid,virtualtransaction,pid,mode
> from pg_locks order by mode;
If you hadn't left out the "granted" column we could be more sure,
but what it looks like to me is the DROP (pid 13842) is stuck behind
the transaction (pid 13833). In pa
On Mon, Mar 2, 2009 at 1:22 PM, Brian Cox wrote:
> As you can see there are only 3 transactions and 1 starts 1 hour after
> the drop begins. I'm still trying to figure out how to interpret the
> pg_locks output, but (presumably) you/others on this forum have more
> experience at this than I.
I'm
In my experience, 13833, "<IDLE> in transaction", is your culprit. It is a
transaction that has been there for 10 hours longer than all others, and is
doing nothing at all. It has locks on a lot of objects in there. You'll have
to take the oids in the lock table and look them up in the pg_class table.
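That lookup can be done in one step by joining pg_locks to pg_class, e.g.:

    SELECT l.pid, l.mode, l.granted, c.relname
    FROM pg_locks l
    JOIN pg_class c ON c.oid = l.relation
    WHERE l.relation IS NOT NULL
    ORDER BY c.relname, l.granted;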
Tom Lane [...@sss.pgh.pa.us] wrote:
[ shrug... ] You tell us. To me it sounds a whole lot like some client
program sitting on an open transaction that has a nonexclusive lock on
the table to be dropped. That transaction wasn't necessarily doing any
useful work; it might have just been waiting
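Sessions like that are easy to spot from pg_stat_activity. On 9.2 and later the sketch below applies; on the 8.x releases in this thread the equivalent test is current_query = '<IDLE> in transaction' and the pid column is procpid:

    SELECT pid, usename, xact_start, state, query
    FROM pg_stat_activity
    WHERE state = 'idle in transaction'
    ORDER BY xact_start;   -- oldest open transaction first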
"Cox, Brian" writes:
>> Probably because the DROP is trying to acquire exclusive lock on its
>> target table, and some other transaction already has a read or write
>> lock on that table, and everything else is queuing up behind the DROP.
>> It's not a true deadlock that is visible to the databas
>Probably because the DROP is trying to acquire exclusive lock on its
>target table, and some other transaction already has a read or write
>lock on that table, and everything else is queuing up behind the DROP.
>It's not a true deadlock that is visible to the database, or else
>Postgres would ha
Brian Cox writes:
> Actually, they're all deadlocked. The question is why?
Probably because the DROP is trying to acquire exclusive lock on its
target table, and some other transaction already has a read or write
lock on that table, and everything else is queuing up behind the DROP.
It's not a t
On Sat, Feb 28, 2009 at 9:51 PM, Brian Cox wrote:
> Actually, they're all deadlocked. The question is why?
>
> Here's a brief background. The ts_defects table is partitioned by occurrence
> date; each partition contains the rows for 1 day. When the data gets old
> enough, the partition is dropped.
Actually, they're all deadlocked. The question is why?
Here's a brief background. The ts_defects table is partitioned by
occurrence date; each partition contains the rows for 1 day. When the
data gets old enough, the partition is dropped. Since the correct
partition can be determined from the
On Sat, 2009-02-14 at 15:02 -0500, Tom Lane wrote:
> Octavio Alvarez writes:
> > The result, on the above view: ~80ms. Fair enough. But if I apply a
> > condition:
> > SELECT * FROM ___pgnui_relation_tree.raw_relation_tree WHERE
> > parent_schema <> child_schema;
> > it takes ~2 seconds (!) to com
Octavio Alvarez writes:
> The result, on the above view: ~80ms. Fair enough. But if I apply a
> condition:
> SELECT * FROM ___pgnui_relation_tree.raw_relation_tree WHERE
> parent_schema <> child_schema;
> it takes ~2 seconds (!) to complete.
I'm not sure I'm seeing the exact same case as you, but
I'm aware you already know that information_schema is slow [1] [2], so I
just want to expose/document another case and tests I did.
I'm using the following view to check what tables depend on what other
tables.
CREATE VIEW raw_relation_tree AS
SELECT
tc_p.table_catalog AS parent_catalog,
tc
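A rough catalog-level equivalent of that view, which tends to be far cheaper than going through information_schema (the layout and aliases here are my own sketch, not the original view):

    SELECT np.nspname AS parent_schema, cp.relname AS parent_table,
           nc.nspname AS child_schema,  cc.relname AS child_table
    FROM pg_constraint con
    JOIN pg_class     cc ON cc.oid = con.conrelid    -- referencing (child) table
    JOIN pg_namespace nc ON nc.oid = cc.relnamespace
    JOIN pg_class     cp ON cp.oid = con.confrelid   -- referenced (parent) table
    JOIN pg_namespace np ON np.oid = cp.relnamespace
    WHERE con.contype = 'f';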
Tom Lane <[EMAIL PROTECTED]> writes:
>> Thanks for the help guys! That was my problem. I actually need the
>> backup_location_rid index for a different query so I am going to keep
>> it.
>
> Well, you don't really *need* it; the two-column index on (record_id,
> backup_id) will serve perfectly
"Tyrrill, Ed" <[EMAIL PROTECTED]> writes:
> Tom Lane <[EMAIL PROTECTED]> writes:
>> This combination of indexes:
>>
>>> Indexes:
>>> "backup_location_pkey" PRIMARY KEY, btree (record_id, backup_id)
>>> "backup_location_rid" btree (record_id)
>>
>> is really just silly. You should have the pkey and
On Fri, May 18, 2007 at 02:22:52PM -0700, Tyrrill, Ed wrote:
> Total runtime: 4.951 ms
Going from 1197 seconds to 5 milliseconds. That's some sort of record in a
while, I think :-)
/* Steinar */
--
Homepage: http://www.sesse.net/
Tom Lane <[EMAIL PROTECTED]> writes:
>
> Scott Marlowe <[EMAIL PROTECTED]> writes:
> > Secondly, it might be more efficient for the planner to choose the
> > backup_location_rid index than the combination primary key index.
>
> Oh, I'm an idiot; I didn't notice the way the index was set up.
> Ye
Scott Marlowe <[EMAIL PROTECTED]> writes:
> Secondly, it might be more efficient for the planner to choose the
> backup_location_rid index than the combination primary key index.
Oh, I'm an idiot; I didn't notice the way the index was set up. Yeah,
that index pretty well sucks for a query on backup_id.
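An index led by backup_id would serve that lookup directly; whether that is exactly what was done in this thread isn't shown here, but the sketch is simply:

    CREATE INDEX backup_location_bid ON backup_location (backup_id);  -- index name is made up
    ANALYZE backup_location;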
Tyrrill, Ed wrote:
> mdsdb=# \d backup_location
>     Table "public.backup_location"
>   Column   |  Type   | Modifiers
> -----------+---------+-----------
>  record_id | bigint  | not null
>  backup_id | integer | not null
> Indexes:
>     "backup_location_pkey" PRIMARY KEY, btree (record_id, backup_
"Tyrrill, Ed" <[EMAIL PROTECTED]> writes:
> Index Scan using backup_location_pkey on backup_location
> (cost=0.00..1475268.53 rows=412394 width=8) (actual
> time=3318.057..1196723.915 rows=2752 loops=1)
>Index Cond: (backup_id = 1070)
> Total runtime: 1196725.617 ms
If we take that at face v
Tyrrill, Ed wrote:
I have a two column table with over 160 million rows in it. As the size
of the table grows queries on this table get exponentially slower. I am
using version 8.1.5 32-bit on Red Hat Enterprise Linux 3. The hardware
is an Intel 3 Ghz Xeon with 4GB RAM, and 6 disks in a RAID 5
I have a two column table with over 160 million rows in it. As the size
of the table grows queries on this table get exponentially slower. I am
using version 8.1.5 32-bit on Red Hat Enterprise Linux 3. The hardware
is an Intel 3 Ghz Xeon with 4GB RAM, and 6 disks in a RAID 5
configuration. For
On Tue, Apr 25, 2006 at 07:53:15PM +0200, PFC wrote:
What version is this??
> annonces=> EXPLAIN ANALYZE SELECT * FROM test.current WHERE id IN (SELECT
> annonce_id FROM bookmarks WHERE list_id IN ('4'));
>QUERY PLAN
> -
PFC <[EMAIL PROTECTED]> writes:
> The IN() is quite small (150 values), but the two large tables are
> seq-scanned... is there a way to avoid this ?
Not in 8.1. HEAD is a bit smarter about joins to Append relations.
regards, tom lane
Here is a simple test case for this strange behaviour:
annonces=> CREATE TABLE test.current (id INTEGER PRIMARY KEY, description
TEXT);
INFO: CREATE TABLE / PRIMARY KEY will create an implicit index
<> for table <>
CREATE TABLE
annonces=> CREATE TABLE test.archive (id INTEGER PRI
Ümit Öztosun <[EMAIL PROTECTED]> writes:
> Our application uses typical queries similar to the following (very simplified):
> SELECT
> part_id,
> part_name,
> (SELECT
> SUM(amount) FROM part_movements M
> WHERE P.part_id = M.part_id
> ) as part_amount
> FROM parts P
> OR
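A common rewrite for that pattern is to aggregate the movements once and join, so the sum is not re-executed per output row; a sketch using the simplified names from the message:

    SELECT p.part_id,
           p.part_name,
           COALESCE(m.part_amount, 0) AS part_amount
    FROM parts p
    LEFT JOIN (SELECT part_id, SUM(amount) AS part_amount
               FROM part_movements
               GROUP BY part_id) m ON m.part_id = p.part_id;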
DW wrote:
Hello,
I'm perplexed. I'm trying to find out why some queries are taking a long
time, and have found that after running analyze, one particular query
becomes slow.
This query is based on a view that is based on multiple left outer joins
to merge data from lots of tables.
If I d
On 11/11/05, DW <[EMAIL PROTECTED]> wrote:
I'm perplexed. I'm trying to find out why some queries are taking a long time, and have found that after running analyze, one particular query becomes slow.
I have had exactly the same problem very recently.
What helped? Increasing statistics on some columns
DW <[EMAIL PROTECTED]> writes:
> In the meantime, again I'm new to this -- I got pg_stats; which rows are
> the relevant ones?
The ones for columns that are mentioned in the problem query.
I don't think you need to worry about columns used only in the SELECT
output list, but anything used in WHERE
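Those rows can be pulled straight from the pg_stats view, for example (table and column names hypothetical):

    SELECT tablename, attname,
           null_frac, n_distinct, correlation,
           most_common_vals, most_common_freqs
    FROM pg_stats
    WHERE tablename = 'mytable'
      AND attname IN ('col_in_where', 'col_in_join');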
Tom Lane wrote:
It would be interesting to see EXPLAIN ANALYZE results in both cases,
plus the contents of the relevant pg_stats rows. (BTW, you need not
dump and reload to get back to the virgin state --- just delete the
relevant rows from pg_statistic.) Also we'd want to know exactly what
PG
DW <[EMAIL PROTECTED]> writes:
> I'm perplexed. I'm trying to find out why some queries are taking a long
> time, and have found that after running analyze, one particular query
> becomes slow.
This implies that the planner's default choice of plan (without any
statistics) is better than its ch
Hello,
I'm perplexed. I'm trying to find out why some queries are taking a long
time, and have found that after running analyze, one particular query
becomes slow.
This query is based on a view that is based on multiple left outer joins
to merge data from lots of tables.
If I drop the dat
On Fri, 2005-05-27 at 07:52 -0500, Josh Close wrote:
> > Setting shared buffers above something like 10-30% of memory is
> > counterproductive.
>
> What is the reason behind it being counterproductive? If shared
> buffers are at 30%, should effective cache size be at 70%? How do
> those two relate?
>On 5/31/05, Martin Fandel <[EMAIL PROTECTED]> wrote:
>> In the documentation of
>> http://www.powerpostgresql.com/Downloads/annotated_conf_80.html
>> is the shared_buffers set to 1/3 of the availble RAM.
Well, it says "you should never use more than 1/3 of your available RAM"
which is not quite t
On 5/31/05, Martin Fandel <[EMAIL PROTECTED]> wrote:
> In the documentation of
> http://www.powerpostgresql.com/Downloads/annotated_conf_80.html
> is the shared_buffers set to 1/3 of the available RAM. You've set
> 50000*8/1024=391 MB SHMEM. The effective_cache_size in your
> configuration is 45
Josh Close <[EMAIL PROTECTED]> writes:
> There is 2 gigs of mem in this server. Here are my current settings.
> max_connections = 100
> shared_buffers = 50000
> sort_mem = 4096
> vacuum_mem = 32768
> effective_cache_size = 45
> Shared buffers is set to 10% of total mem. Effective cache size i
I didn't see iostat as available to install, but I'm using dstat to see this.
The server has constant disk reads averaging around 50M and quite a
few in the 60M range. This is when selects are being done, which is
almost always. I would think if postgres is grabbing everything from
memory that thi
Hi,
I had some disk I/O issues recently with NFS; I found the command 'iostat
-x 5' to be a great help when using Linux.
For example here is the output when I do a 10GB file transfer onto hdc
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s
avgrq-sz avgqu-sz await svctm
Doing the query
explain
SELECT COALESCE( SUM( iNumSent ), 0 ) AS iNumSent
FROM adaption.tblBatchHistory_1
WHERE tStamp > ( now() - interval '5 mins' )::text
gives me this:
Aggregate (cost=32138.33..32138.33 rows=1 width=4)
-> Seq Scan on tblbatchhistory_1 (cost=0.00..31996.10 rows=56891 width
> I think you really want that seqscan to be an indexscan, instead.
> I'm betting this is PG 7.4.something? If so, probably the only
> way to make it happen is to simplify the now() expression to a constant:
>
> SELECT COALESCE( SUM( iNumSent ), 0 ) AS iNumSent
> FROM adap
Josh Close <[EMAIL PROTECTED]> writes:
> this_sQuery := \'
> SELECT COALESCE( SUM( iNumSent ), 0 ) AS iNumSent
> FROM adaption.tblBatchHistory_\' || this_rServerIds.iId || \'
> WHERE tStamp > now() - interval \'\'5 mins\'\';
> \';
> Here is the e
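On 7.4 the usual workaround was to compute the cutoff once and splice it into the dynamic query as a constant, so the planner sees a literal it can use with the index on tStamp. A hedged sketch of how the plpgsql above could do that (the this_cutoff variable is mine, not from the original function):

    -- declare: this_cutoff timestamp;
    this_cutoff := now() - interval '5 mins';
    this_sQuery := 'SELECT COALESCE(SUM(iNumSent), 0) AS iNumSent'
                || ' FROM adaption.tblBatchHistory_' || this_rServerIds.iId
                || ' WHERE tStamp > ' || quote_literal(this_cutoff::text);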
> Few "mandatory" questions:
>
> 1. Do you vacuum your db on regular basis? :)
It's vacuumed once every hour. The table sizes and data are constantly changing.
>
> 2. Perhaps statistics for tables in question are out of date, did you
> try alter table set statistics?
No I haven't. What would
On 5/26/05, Christopher Kings-Lynne <[EMAIL PROTECTED]> wrote:
> > I have some queries that have significantly slowed down in the last
> > couple of days. They've gone from 10 seconds to over 2 mins.
> >
> > The cpu has never gone over 35% in the server's lifetime, but the load
> > average is over 8.0 righ
> Setting shared buffers above something like 10-30% of memory is
> counterproductive.
What is the reason behind it being counterproductive? If shared
buffers are at 30%, should effective cache size be at 70%? How do
those two relate?
>
> Increasing sort_mem can help with various activities, b
I have some queries that have significantly slowed down in the last
couple of days. They've gone from 10 seconds to over 2 mins.
The cpu has never gone over 35% in the server's lifetime, but the load
average is over 8.0 right now. I'm assuming this is probably due to
disk io.
You sure it's not a severe
On 5/26/05, Josh Close <[EMAIL PROTECTED]> wrote:
> I have some queries that have significantly slowed down in the last
> couple of days. They've gone from 10 seconds to over 2 mins.
>
> The cpu has never gone over 35% in the server's lifetime, but the load
> average is over 8.0 right now. I'm assuming th
Josh Close wrote:
>I have some queries that have significantly slowed down in the last
>couple of days. They've gone from 10 seconds to over 2 mins.
>
>The cpu has never gone over 35% in the server's lifetime, but the load
>average is over 8.0 right now. I'm assuming this is probably due to
>disk io.
>
>I
I have some queries that have significantly slowed down in the last
couple of days. They've gone from 10 seconds to over 2 mins.
The cpu has never gone over 35% in the server's lifetime, but the load
average is over 8.0 right now. I'm assuming this is probably due to
disk io.
I need some help setting up