On Thursday, December 20, 2012, Jeff Janes wrote:
> On Thursday, December 20, 2012, Tom Lane wrote:
>
>>
>> What I did to try to duplicate Richard's situation was to create a test
>> table in which all the exit_state values were NULL, then build the
>> index, then UPDATE all but a small random fra
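A rough sketch in SQL of the test described above, with the table and column
names taken from the thread and everything else assumed rather than copied
from Tom's actual script:

    -- all exit_state values start out NULL
    CREATE TABLE tbl_tracker (parcel_id_code integer, exit_state integer);
    INSERT INTO tbl_tracker (parcel_id_code, exit_state)
        SELECT g, NULL FROM generate_series(1, 5000000) AS g;
    -- build the partial index before updating
    CREATE INDEX tbl_tracker_performance_idx
        ON tbl_tracker (parcel_id_code) WHERE exit_state IS NULL;
    -- then UPDATE all but a small random fraction of the rows
    UPDATE tbl_tracker SET exit_state = 1 WHERE random() > 0.001;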
On Thursday, December 27, 2012, Andrew Dunstan wrote:
> On 12/26/2012 11:03 PM, Jeff Janes wrote:
>
>>
>> Do you have a huge number of tables? Maybe over the course of a
>> long-lived connection, it touches enough tables to bloat the relcache /
>> syscache. I don't know how the autovac would be
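One quick way to check whether the "huge number of tables" theory is even
plausible is to count relations directly; a generic catalog query, not
something quoted from the thread:

    -- ordinary tables only
    SELECT count(*) FROM pg_class WHERE relkind = 'r';
    -- everything the relcache might ever load (tables, indexes, sequences, ...)
    SELECT count(*) FROM pg_class;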
It looks like it was a problem with NFS. We're not really sure what was
wrong with it but once we failed over to an iSCSI mount for the data
everything is running just fine.
On Thu, Dec 27, 2012 at 6:12 PM, Tom Lane wrote:
> Nikolas Everett writes:
> > On Thu, Dec 27, 2012 at 4:33 PM, Tom Lan
Nikolas Everett writes:
> On Thu, Dec 27, 2012 at 4:33 PM, Tom Lane wrote:
>> Nikolas Everett writes:
>>> We straced the backend during the explain and it looked like the open
>>> commands were taking several seconds each.
>> Kind of makes me wonder if you have a whole lot of tables ("whole lot
On Thu, Dec 27, 2012 at 4:33 PM, Tom Lane wrote:
> Nikolas Everett writes:
> > We straced the backend during the explain and it looked like the open
> > commands were taking several seconds each.
>
> Kind of makes me wonder if you have a whole lot of tables ("whole lot"
> in this context probabl
Nikolas Everett writes:
> We straced the backend during the explain and it looked like the open
> commands were taking several seconds each.
Kind of makes me wonder if you have a whole lot of tables ("whole lot"
in this context probably means tens of thousands) and are storing the
database on a f
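For reference, the strace run described in this thread can be reproduced with
something along these lines (standard strace options; 12345 stands in for the
backend PID, which pg_backend_pid() or pg_stat_activity will give you):

    # -T prints the time spent in each syscall; -e limits output to open()
    strace -T -e trace=open -p 12345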
Hey guys,
I recently stumbled over a Linux scheduler setting that has outright
shocked me. So, after reading through this:
http://blog.tsunanet.net/2010/11/how-long-does-it-take-to-make-context.html
it became readily apparent we were hitting the same wall. I could do a
pgbench and increase t
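The post linked above is about context-switch latency. If the setting in
question is one of the kernel scheduler sysctls of that era (an assumption on
my part; e.g. kernel.sched_migration_cost, renamed sched_migration_cost_ns in
newer kernels), a rough before/after comparison looks like this:

    # inspect the current value
    cat /proc/sys/kernel/sched_migration_cost
    # a simple read-only pgbench run to compare throughput before and after a change
    pgbench -S -c 32 -j 8 -T 60 pgbench    # database name assumed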
Hello Kevin,
I solved the issue.
I reproduced it immediately after installing PostgreSQL 8.4.1.
I thought they were using PostgreSQL 8.4.8 but was never able to reproduce
it with that version.
So something related to my problem was changed, but I didn't see exactly
what in the release notes.
Nev
> The partial index is highly leveraged. If every tuple in the
> table is updated once, that amounts to every tuple in the index
> being updated 25,000 times.
How so? That sounds like O(n^2) behaviour.
If the table has 5 million rows while the index has 200 (active)
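For concreteness, the arithmetic behind the quoted figure seems to be (using
the numbers in this same message): the table holds about 5,000,000 rows while
the partial index only ever holds about 200 live entries, so one full pass of
updates pushes roughly 5,000,000 entry insertions and removals through an
index of size 200, i.e. 5,000,000 / 200 = 25,000 times the index's own size.
That is linear in the table, not O(n^2), but the churn is concentrated on a
very small index.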
Hi Charles,
I am not working on Postgres anymore and none of our patches were ever
accepted by the community.
The list of development work I did can still be found at
http://wiki.postgresql.org/wiki/Aster%27s_Development_Projects
All the code related to these improvements must still be accessible
Nikolas Everett writes:
> After more digging it looks like this table has an inordinate number
> of indices (10 ish).
10 doesn't sound like a lot.
> There are a whole bunch of conditional indices for other
> columns that we're not checking. The particular column that is causing us
> trouble exists
One other thing - the query seems to get faster after the first time we
plan it. I'm not sure that this is the case but I think it might be.
On Thu, Dec 27, 2012 at 2:28 PM, Nikolas Everett wrote:
> Sorry for the confusion around the queries. Both queries are causing
> trouble. We've not
Sorry for the confusion around the queries. Both queries are causing
trouble. We've noticed that just EXPLAINING the very simple queries takes
forever.
After more digging it looks like this table has an inordinate number
of indices (10 ish). There are a whole bunch of conditional indices for other
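A generic catalog query to see exactly which indexes, conditional or not, are
defined on the table (the table name here is a placeholder):

    SELECT indexname, indexdef
    FROM pg_indexes
    WHERE tablename = 'my_table'    -- substitute the real table name
    ORDER BY indexname;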
2012/12/27 Stephen Frost :
> * Jeff Janes (jeff.ja...@gmail.com) wrote:
>> If the main goal is to make it faster, I'd rather see all of plpgsql get
>> faster, rather than just a special case of partitioning triggers. For
>> example, right now a CASE statement with 100 branches is about
>> the sam
Nikolas Everett writes:
> We just upgraded from 8.3 to 9.1 and we're seeing some performance
> problems. When we EXPLAIN ANALYZE our queries the explain result claims
> that the queries are reasonably fast but the wall clock time is way way
> longer. Does anyone know why this might happen?
> Lik
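One cheap way to see the gap being described is to compare EXPLAIN ANALYZE's
own timing against psql's wall-clock timing; generic psql usage, not taken
from the thread:

    \timing on
    EXPLAIN ANALYZE SELECT ...;   -- placeholder query
    -- the plan's "Total runtime" covers execution only, while psql's "Time:"
    -- line also includes parsing, planning and the network round trip, which
    -- is where catalog lookups and slow open() calls show up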
* Jeff Janes (jeff.ja...@gmail.com) wrote:
> If the main goal is to make it faster, I'd rather see all of plpgsql get
> faster, rather than just a special case of partitioning triggers. For
> example, right now a CASE statement with 100 branches is about
> the same speed as an equivalent list of
Pavel,
I've been trying to port the work of Emmanuel
http://archives.postgresql.org/pgsql-hackers/2008-12/msg01221.php
His implementation is pretty straightforward: a simple trigger doing constraint
checks with caching for bulk inserts.
So far that's what I got http://www.widesol.com/~charles/p
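This is not Emmanuel's code; just for orientation, a plain plpgsql
partition-routing trigger of the general kind being discussed looks roughly
like the sketch below, with all table and column names made up:

    CREATE OR REPLACE FUNCTION measurements_insert_trigger()
    RETURNS trigger AS $$
    BEGIN
        -- route each row to the child table matching its date
        IF NEW.logdate >= DATE '2012-12-01' AND NEW.logdate < DATE '2013-01-01' THEN
            INSERT INTO measurements_2012_12 VALUES (NEW.*);
        ELSIF NEW.logdate >= DATE '2013-01-01' AND NEW.logdate < DATE '2013-02-01' THEN
            INSERT INTO measurements_2013_01 VALUES (NEW.*);
        ELSE
            RAISE EXCEPTION 'no partition for date %', NEW.logdate;
        END IF;
        RETURN NULL;   -- the row is stored only in the child, not the parent
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER measurements_insert
        BEFORE INSERT ON measurements
        FOR EACH ROW EXECUTE PROCEDURE measurements_insert_trigger();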
2012/12/27 Jeff Janes :
> On Wednesday, December 26, 2012, Pavel Stehule wrote:
>>
>> 2012/12/27 Jeff Janes :
>> >
>> > More automated would be nice (i.e. one operation to make both the check
>> > constraints and the trigger, so they can't get out of sync), but would
>> > not
>> > necessarily mean
On Monday, December 24, 2012, Charles Gomes wrote:
> By the way, I've just re-written the code to target the partitions
> individually and I've got almost a 4x improvement.
> Shouldn't it be faster to process the trigger? I would understand if there
> were no CPU left, but there is lots of CPU to
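Schematically, and reusing the made-up names from the trigger sketch earlier
in this digest (staging_batch is likewise a placeholder), the difference being
described is:

    -- via the parent table: every row pays for the trigger's routing logic
    INSERT INTO measurements SELECT * FROM staging_batch;
    -- targeting the partition directly: no per-row trigger work at all
    INSERT INTO measurements_2012_12
        SELECT * FROM staging_batch
        WHERE logdate >= '2012-12-01' AND logdate < '2013-01-01';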
On Wednesday, December 26, 2012, Pavel Stehule wrote:
> 2012/12/27 Jeff Janes :
> >
> > More automated would be nice (i.e. one operation to make both the check
> > constraints and the trigger, so they can't get out of sync), but would not
> > necessarily mean faster.
>
>
Native implementation
On Thursday, December 27, 2012, Richard Neill wrote:
>
>
> On 27/12/12 16:17, Jeff Janes wrote:
>
>>
>> I still think your best bet is to get rid of the partial index and trade
>> the full one on (parcel_id_code) for one on (parcel_id_code,exit_state).
>> I think that will be much less fragile t
Actually that last paragraph doesn't make much sense. Please ignore it.
On Thu, Dec 27, 2012 at 12:58 PM, Nikolas Everett wrote:
> New news - the hot slave seems to be performing as expected with no long
> pauses.
>
> It looks like we're using an archive_timeout of 60 seconds and default
> che
New news - the hot slave seems to be performing as expected with no long
pauses.
It looks like we're using an archive_timeout of 60 seconds and the default
checkpoint_timeout and checkpoint_completion_target. I didn't do any of the
research on this. It seems like we're asking postgres to clear all of
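For reference, the settings being described correspond to a postgresql.conf
along these lines; the values shown for the checkpoint parameters are the 9.1
defaults, not anything confirmed from this system:

    archive_timeout = 60                 # force a WAL segment switch every 60 seconds
    checkpoint_timeout = 5min            # default
    checkpoint_completion_target = 0.5   # default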
Thanks!
http://explain.depesz.com/s/yfs
Looks like we're running a load of about 6. The machines have two physical
CPUs hyperthreaded to 32 cores.
Interesting - the data is stored on nfs on a netapp. We don't seem to have
a ton of nfs traffic.
Also we've got shared memory set to 48 gigs whic
On 12/26/2012 11:03 PM, Jeff Janes wrote:
On Fri, Dec 14, 2012 at 10:40 AM, Andrew Dunstan wrote:
> One of my clients has an odd problem. Every so often a backend will suddenly
> become very slow. The odd thing is that once this has happened it remains
> slowed down, for all subsequent queries
On 27/12/12 17:21, François Beausoleil wrote:
On 2012-12-27 at 12:10, Nikolas Everett wrote:
We just upgraded from 8.3 to 9.1 and we're seeing some performance problems.
When we EXPLAIN ANALYZE our queries the explain result claims that the queries
are reasonably fast but the wall clock ti
On 2012-12-27 at 12:10, Nikolas Everett wrote:
> We just upgraded from 8.3 to 9.1 and we're seeing some performance problems.
> When we EXPLAIN ANALYZE our queries the explain result claims that the queries
> are reasonably fast but the wall clock time is way way longer. Does anyone
> know
Richard Neill writes:
> So, at the moment, I have 3 indexes:
>    full: parcel_id_code
>    full: exit_state
>    partial: parcel_id_code where exit_state is null
> Am I right that when you suggest just a single, joint index
> (parcel_id_code,exit_state)
> instead of all 3 of the ot
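Spelled out in SQL, the swap Jeff suggested (one composite index in place of
the single-column parcel_id_code index and the partial one) would look
something like this; the index names are invented, only the column names come
from the thread:

    DROP INDEX tbl_tracker_parcel_id_code_idx;   -- full index on (parcel_id_code)
    DROP INDEX tbl_tracker_performance_idx;      -- partial index WHERE exit_state IS NULL
    CREATE INDEX tbl_tracker_parcel_exit_idx
        ON tbl_tracker (parcel_id_code, exit_state);

CREATE INDEX CONCURRENTLY would avoid blocking writers while the new index
builds, at the cost of a slower build.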
On 27/12/12 16:17, Jeff Janes wrote:
I still think your best bet is to get rid of the partial index and trade
the full one on (parcel_id_code) for one on (parcel_id_code,exit_state).
I think that will be much less fragile than reindexing in a cron job.
So, at the moment, I have 3 indexes:
On Thursday, December 20, 2012, Jeff Janes wrote:
> On Thursday, December 20, 2012, Richard Neill wrote:
>
>>
>>
>> -> Bitmap Index Scan on tbl_tracker_exit_state_idx
>> (cost=0.00..8.36 rows=151 width=0) (actual time=7.946..7.946 rows=20277
>> loops=1)
>>
>
> This is finding 100 times m
> Date: Wed, 26 Dec 2012 23:03:33 -0500
> Subject: Re: [PERFORM] Performance on Bulk Insert to Partitioned Table
> From: jeff.ja...@gmail.com
> To: charle...@outlook.com
> CC: ondrej.iva...@gmail.com; pgsql-performance@postgresql.org
>
> On Monday, December 2
Markus,
Have you looked over here:
http://www.postgresql.org/docs/9.2/static/populate.html
> From: markus.innereb...@inf.unibz.it
> Subject: [PERFORM] Improve performance for writing
> Date: Thu, 27 Dec 2012 14:10:40 +0100
> To: pgsql-performance@postgres
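That page boils down to a handful of techniques; a condensed, generic sketch
(table name, file path and index are placeholders):

    -- one transaction, COPY instead of row-at-a-time INSERTs
    BEGIN;
    COPY measurements FROM '/tmp/measurements.csv' WITH (FORMAT csv);
    COMMIT;
    -- create indexes after the data is in, then refresh planner statistics
    CREATE INDEX measurements_logdate_idx ON measurements (logdate);
    ANALYZE measurements;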
Hello
please do not consider this email as yet another question about how to speed up
writing.
The situation is different:
After the computation, my algorithm stores the result as tuples in a DB.
The tuples, in addition to normal values (e.g. a, b), contain SQL statements
that fetch values (for in