Re: [PERFORM] Performance issues

2015-03-16 Thread Jim Nasby
On 3/16/15 3:59 PM, Tomas Vondra wrote: On 16.3.2015 20:43, Jim Nasby wrote: On 3/13/15 7:12 PM, Tomas Vondra wrote: (4) I suspect many of the relations referenced in the views are not actually needed in the query, i.e. the join is performed but then it's just discarded because those …

Re: [PERFORM] EXPLAIN (no ANALYZE) taking an hour for INSERT FROM SELECT

2015-03-16 Thread Gunnlaugur Thor Briem
On Mon, Mar 16, 2015 at 7:24 PM, Jim Nasby wrote: > The other thing you should consider is using TRUNCATE instead of an un-filtered DELETE. It will both be much faster to perform and won't leave any dead rows behind. Yep, but it does take an ACCESS EXCLUSIVE lock. We want the old table …
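
A minimal sketch of the trade-off above, assuming a hypothetical staging table named queue_items (the name is illustrative, not from the thread):

    -- Unfiltered DELETE: leaves dead rows behind for VACUUM to clean up,
    -- but only takes a ROW EXCLUSIVE lock, so concurrent readers are not blocked.
    DELETE FROM queue_items;

    -- TRUNCATE: near-instant and leaves no dead rows, but takes an
    -- ACCESS EXCLUSIVE lock, blocking all other access until commit.
    TRUNCATE TABLE queue_items;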

Re: [PERFORM] Performance issues

2015-03-16 Thread Tomas Vondra
On 16.3.2015 20:43, Jim Nasby wrote: > On 3/13/15 7:12 PM, Tomas Vondra wrote: >> (4) I suspect many of the relations referenced in the views are not actually needed in the query, i.e. the join is performed but then it's just discarded because those columns are not used. Try …

Re: [PERFORM] MusicBrainz postgres performance issues

2015-03-16 Thread Scott Marlowe
On Mon, Mar 16, 2015 at 6:59 AM, Robert Kaye wrote: > 4. Linux 3.2 apparently has some less than desirable swap behaviours. Once we started swapping, everything went nuts. On older machines I used to just turn off swap altogether. Especially if I wasn't running out of memory but swap was engaging …

Re: [PERFORM] MusicBrainz postgres performance issues

2015-03-16 Thread Jim Nasby
On 3/15/15 7:17 PM, mich...@sqlexec.com wrote: Please avoid top-posting. I agree with your counter argument about how high max_connections "can" cause problems, but max_connections may not be part of the problem here. There's a bunch of "depends stuff" in there based on workload details, # cpus, RAM, …
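
As a hedged aside, checking and lowering the setting looks roughly like this; 100 is only a placeholder, since the right value depends on the workload details mentioned above:

    -- Inspect the current setting.
    SHOW max_connections;

    -- ALTER SYSTEM is available on 9.4 or later; on older releases edit
    -- postgresql.conf instead. Either way, max_connections only changes
    -- after a server restart (a reload is not enough).
    ALTER SYSTEM SET max_connections = 100;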

Re: [PERFORM] Performance issues

2015-03-16 Thread Jim Nasby
On 3/13/15 7:12 PM, Tomas Vondra wrote: (4) I suspect many of the relations referenced in the views are not actually needed in the query, i.e. the join is performed but then it's just discarded because those columns are not used. Try to simplify the views as much as possible …
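
A contrived sketch of the pattern being described, with hypothetical table and view names: an inner join whose columns never appear in the output still has to be executed, because PostgreSQL cannot prove an inner join away.

    -- Original: campaign_details is joined but none of its columns are used.
    CREATE VIEW v_campaign_names AS
    SELECT c.campaign_id, c.campaign_name
      FROM campaigns c
      JOIN campaign_details d ON d.campaign_id = c.campaign_id;

    -- Simplified: drop the unused join. This is only equivalent if every
    -- campaign has exactly one details row, which is why the advice is to
    -- simplify carefully rather than mechanically.
    CREATE OR REPLACE VIEW v_campaign_names AS
    SELECT c.campaign_id, c.campaign_name
      FROM campaigns c;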

Re: [PERFORM] Best VPS provider for running performant PostgreSQL database server

2015-03-16 Thread Jim Nasby
On 3/16/15 12:08 AM, Some Developer wrote: I wasn't sure whether to post this in general, admin or performance but since it is basically a performance question I went with performance. I'm about to launch a new website that is written using the Django web framework and has PostgreSQL as the database …

Re: [PERFORM] MusicBrainz postgres performance issues

2015-03-16 Thread Joao Junior
Robert, many thanks for the feedback! Could you post your new pgbouncer config file? How many PostgreSQL processes do you have now at the OS level with this new config? How many clients from the app server hit your pgbouncer? Regards, 2015-03-16 11:32 GMT-03:00 Robert Kaye: > On Mar 16, 2015, at …
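
For reference, PgBouncer's admin console can answer those questions directly; a sketch, assuming the usual setup where the console is reached by connecting to the special "pgbouncer" database (port 6432 is the default):

    -- psql -p 6432 -U pgbouncer pgbouncer
    SHOW POOLS;    -- per database/user: active and waiting clients, server connections
    SHOW CLIENTS;  -- every client connection currently attached to PgBouncer
    SHOW SERVERS;  -- the PostgreSQL backend connections PgBouncer is holding open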

Re: [PERFORM] EXPLAIN (no ANALYZE) taking an hour for INSERT FROM SELECT

2015-03-16 Thread Jim Nasby
On 3/11/15 10:54 AM, Gunnlaugur Thor Briem wrote: (Even better, just make the new table not temporary, and have it replace the former table altogether. But that's for later; requires some broader changes in our application.) The other thing you should consider is using TRUNCATE instead of an un-filtered DELETE …
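
A hedged sketch of the "replace the former table altogether" idea, with hypothetical names; the rename still needs a short ACCESS EXCLUSIVE lock, but only for the swap itself:

    -- Build the replacement off to the side.
    CREATE TABLE queue_items_new (LIKE queue_items INCLUDING ALL);
    INSERT INTO queue_items_new SELECT * FROM source_feed;  -- hypothetical source

    -- Swap it in inside one short transaction, then drop the old contents.
    BEGIN;
    ALTER TABLE queue_items RENAME TO queue_items_old;
    ALTER TABLE queue_items_new RENAME TO queue_items;
    COMMIT;
    DROP TABLE queue_items_old;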

Re: [PERFORM] EXPLAIN (no ANALYZE) taking an hour for INSERT FROM SELECT

2015-03-16 Thread Jim Nasby
On 3/11/15 11:15 AM, Tom Lane wrote: Gunnlaugur Thor Briem writes: Yes, I think that's it: I've just realized that immediately prior to the INSERT, in the same transaction, an unfiltered DELETE has been issued; i.e. the whole table is being rewritten. Then the INSERT is issued ... with a WHERE …

Re: [PERFORM] Bad cost estimate with FALSE filter condition

2015-03-16 Thread Josh Berkus
On 03/16/2015 11:26 AM, Tom Lane wrote: > Josh Berkus writes: >> So ... should I assume my diagnosis is correct? Haven't heard any other suggestions. > I don't see any reason to think this is worth worrying about, or worth spending planner cycles on to produce a cosmetically nicer cost estimate …

Re: [PERFORM] Performance issues

2015-03-16 Thread Vivekanand Joshi
Hey guys, thanks a lot. This is really helping. I am learning a lot. BTW, I changed the CTE into a subquery and it improved the performance by miles. I am getting the result in less than 3 seconds, though I am using a 24 GB RAM server. It is still a great turnaround time as compared to the previous execution …
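
For readers wondering what such a rewrite looks like: on the PostgreSQL releases in use here (anything before 12), a CTE is an optimization fence and is materialized in full, while a plain subquery lets the planner push outer predicates down. A contrived sketch with hypothetical names:

    -- CTE form: the whole aggregate is computed before the outer filter applies.
    WITH spend_per_campaign AS (
        SELECT campaign_id, sum(spend) AS total_spend
          FROM campaign_spend
         GROUP BY campaign_id
    )
    SELECT * FROM spend_per_campaign WHERE campaign_id = 42;

    -- Subquery form: the campaign_id filter can be pushed into the scan,
    -- because it only references a grouping column.
    SELECT *
      FROM (SELECT campaign_id, sum(spend) AS total_spend
              FROM campaign_spend
             GROUP BY campaign_id) s
     WHERE campaign_id = 42;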

Re: [PERFORM] Bad cost estimate with FALSE filter condition

2015-03-16 Thread Tom Lane
Josh Berkus writes: > So ... should I assume my diagnosis is correct? Haven't heard any other suggestions. I don't see any reason to think this is worth worrying about, or worth spending planner cycles on to produce a cosmetically nicer cost estimate. One-time filters always apply at the top …
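
The behaviour is easy to see with the pgbench tables from the original test case; a sketch, assuming a pgbench-initialized database. The gating condition is evaluated once as a one-time filter, but the scan underneath keeps its full cost estimate, which is the cosmetic issue being discussed.

    -- The subselect cannot be folded at plan time, so it becomes an InitPlan
    -- and a "One-Time Filter" above the scan; the scan's estimated cost is
    -- not discounted even though it may never run.
    EXPLAIN SELECT * FROM pgbench_accounts WHERE (SELECT 1) = 0;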

Re: [PERFORM] Performance issues

2015-03-16 Thread Tomas Vondra
On 16.3.2015 18:49, Marc Mamin wrote: >> Hi Team, this is the EXPLAIN ANALYZE for one of the views: S_V_D_CAMPAIGN_HIERARCHY: FWIW, this is a somewhat more readable version of the plan: http://explain.depesz.com/s/nbB In the future, please do two things: (1) Attach the plan as a text …
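
A small psql sketch of how to capture a plan as a text file suitable for attaching (the file name is arbitrary):

    -- Redirect psql output to a file, run the plan, then reset the output.
    \o campaign_hierarchy_plan.txt
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM S_V_D_CAMPAIGN_HIERARCHY;
    \o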

Re: [PERFORM] Performance issues

2015-03-16 Thread Marc Mamin
> Hi Team, this is the EXPLAIN ANALYZE for one of the views: S_V_D_CAMPAIGN_HIERARCHY: > Rows Removed by Join Filter: 3577676116 That's quite a lot. You're possibly missing a clause in a join, resulting in a cross join. It is also helpful to put your result here …
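
For readers unfamiliar with that counter, a contrived sketch of the kind of join that produces it (the tables are hypothetical): when the join condition cannot be used as a hash or merge key, a nested loop forms every candidate pair and applies the condition afterwards as a Join Filter, and every rejected pair is counted under "Rows Removed by Join Filter". A missing or misplaced equality clause pushes a query in the same direction by turning part of the join into a cross join.

    -- Only range conditions tie the two tables together, so the planner has
    -- nothing to hash or merge on; the conditions end up as a Join Filter
    -- inside a nested loop and most generated pairs are discarded.
    EXPLAIN (ANALYZE)
    SELECT f.fact_id, c.day
      FROM facts    f
      JOIN calendar c
        ON f.event_ts >= c.day_start
       AND f.event_ts <  c.day_end;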

Re: [PERFORM] MusicBrainz postgres performance issues

2015-03-16 Thread Josh Berkus
On 03/16/2015 05:59 AM, Robert Kaye wrote: > 4. Linux 3.2 apparently has some less than desirable swap behaviours. Once we started swapping, everything went nuts. Relevant to this: http://www.databasesoup.com/2014/09/why-you-need-to-avoid-linux-kernel-32.html Anybody who is on Linux Kernels 3…

Re: [PERFORM] Bad cost estimate with FALSE filter condition

2015-03-16 Thread Josh Berkus
So ... should I assume my diagnosis is correct? Haven't heard any other suggestions. On 02/27/2015 05:28 PM, Josh Berkus wrote: > All: > This got posted to pgsql-bugs, but got no attention there[1], so I'm sending it to this list. > Test case: > createdb bench > pgbench -i -s bench > …

Re: [PERFORM] MusicBrainz postgres performance issues

2015-03-16 Thread Robert Kaye
> On Mar 16, 2015, at 2:22 PM, Thomas Kellerer wrote: > I think it would be nice if you can amend your blog posting to include the solution that you found. > Otherwise this will simply stick around as yet another unsolved performance problem. Good thinking: http://blog.musicbrain …

Re: [PERFORM] MusicBrainz postgres performance issues

2015-03-16 Thread Andreas Kretschmer
Robert Kaye wrote: > However, I am glad to report that our problems are fixed and that our server is back to humming along nicely. > What we changed: > 1. As it was pointed out here, max_connections of 500 was in fact insanely high, especially in light of using PGbouncer. Before we …

Re: [PERFORM] MusicBrainz postgres performance issues

2015-03-16 Thread Thomas Kellerer
Robert Kaye wrote on 16.03.2015 at 13:59: > However, I am glad to report that our problems are fixed and that our server is back to humming along nicely. > And as I said to Josh earlier: "Postgres rocks our world. I’m immensely pleased that once again the problems were our own stupidity …

Re: [PERFORM] MusicBrainz postgres performance issues

2015-03-16 Thread Robert Kaye
On March 16, 2015 at 3:24:34 AM, Roxanne Reid-Bennett (r...@tara-lu.com) wrote: Robert, Wow - You've engaged the wizards indeed. I haven't heard or seen anything that would answer my *second* question if faced with this (my first would have been "what changed"). Yes, indeed — I feel honored …

Re: [PERFORM] Performance issues

2015-03-16 Thread Vivekanand Joshi
Hi Team, This is the EXPLAIN ANALYZE for one of the view : S_V_D_CAMPAIGN_HIERARCHY: === Nested Loop (cost=33666.96..37971.39 rows=1 width=894) (actual time=443.556..966558.767 rows=45360 loops=1) Join Filter: (tp_exec.touchpoint_execution_id = valid_