Re: [PERFORM] planner index choice

2010-07-28 Thread Tom Lane
Chris writes: > The query: > SELECT > assetid, custom_val > FROM > sq_ast_attr_val > WHERE > attrid IN (SELECT attrid FROM sq_ast_attr WHERE name = > 'is_contextable' AND (type_code = 'metadata_field_select' OR > owning_type_code = 'metadata_field')) > AND contextid = 0 > INTERSECT
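
The quoted query, laid out for readability; only the first branch is shown because the archive preview cuts off at the INTERSECT:

  -- First arm of the INTERSECT as quoted above; the second arm is truncated
  -- in the preview, so it is omitted here.
  SELECT assetid, custom_val
  FROM sq_ast_attr_val
  WHERE attrid IN (SELECT attrid
                   FROM sq_ast_attr
                   WHERE name = 'is_contextable'
                     AND (type_code = 'metadata_field_select'
                          OR owning_type_code = 'metadata_field'))
    AND contextid = 0;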

Re: [PERFORM] Pooling in Core WAS: Need help in performance tuning.

2010-07-28 Thread Robert Haas
On Wed, Jul 28, 2010 at 4:10 PM, Tom Lane wrote: > Robert Haas writes: >> However, if we don't support that, we can't do any sort of pooling-ish >> thing without the ability to pass file descriptors between processes; >> and Tom seems fairly convinced there's no portable way to do that. > > Well,

Re: [PERFORM] Testing Sandforce SSD

2010-07-28 Thread Greg Spiegelberg
On Wed, Jul 28, 2010 at 9:18 AM, Yeb Havinga wrote: > Yeb Havinga wrote: > >> Due to the LBA remapping of the SSD, I'm not sure if putting files that >> are sequentially written in a different partition (together with e.g. >> tables) would make a difference: in the end the SSD will have a set new

[PERFORM] planner index choice

2010-07-28 Thread Chris
Hi there, I have a simple query where I don't understand the planner's choice to use a particular index. The main table looks like this: # \d sq_ast_attr_val Table "public.sq_ast_attr_val" Column | Type | Modifiers -+-
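
A common way to dig into a planner index choice like this is to compare the plan it picks against the alternative by disabling plan node types for the session. A minimal sketch against the table named above, kept to the contextid condition only since the full query is truncated here; enable_bitmapscan, enable_indexscan and enable_seqscan are the standard planner settings, and which one to flip depends on what the original plan actually uses:

  -- See which index the planner picks and how its row estimates compare
  -- to the actual counts.
  EXPLAIN ANALYZE
  SELECT assetid, custom_val
  FROM sq_ast_attr_val
  WHERE contextid = 0;

  -- Temporarily rule out the chosen scan type to see what the planner thinks
  -- the alternative would cost (session-local; RESET restores the default).
  SET enable_bitmapscan = off;
  EXPLAIN ANALYZE
  SELECT assetid, custom_val
  FROM sq_ast_attr_val
  WHERE contextid = 0;
  RESET enable_bitmapscan;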

Re: [PERFORM] Pooling in Core WAS: Need help in performance tuning.

2010-07-28 Thread Andres Freund
On Wed, Jul 28, 2010 at 04:10:08PM -0400, Tom Lane wrote: > Robert Haas writes: > > However, if we don't support that, we can't do any sort of pooling-ish > > thing without the ability to pass file descriptors between processes; > > and Tom seems fairly convinced there's no portable way to do that

Re: [PERFORM] Pooling in Core WAS: Need help in performance tuning.

2010-07-28 Thread Tom Lane
Robert Haas writes: > However, if we don't support that, we can't do any sort of pooling-ish > thing without the ability to pass file descriptors between processes; > and Tom seems fairly convinced there's no portable way to do that. Well, what it would come down to is: are we prepared to not sup

Re: [PERFORM] Pooling in Core WAS: Need help in performance tuning.

2010-07-28 Thread Robert Haas
On Wed, Jul 28, 2010 at 3:44 PM, Josh Berkus wrote: > On 7/27/10 6:56 PM, Tom Lane wrote: >> Yeah, if it weren't for that I'd say "sure let's try it".  But I'm >> afraid we'd be introducing significant headaches in return for a gain >> that's quite speculative. > > Well, the *gain* isn't speculati

Re: [PERFORM] Pooling in Core WAS: Need help in performance tuning.

2010-07-28 Thread Josh Berkus
On 7/27/10 6:56 PM, Tom Lane wrote: > Yeah, if it weren't for that I'd say "sure let's try it". But I'm > afraid we'd be introducing significant headaches in return for a gain > that's quite speculative. Well, the *gain* isn't speculative. For example, I am once again dealing with the issue that

Re: [PERFORM] Testing Sandforce SSD

2010-07-28 Thread Yeb Havinga
Yeb Havinga wrote: Michael Stone wrote: On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote: I know I'm talking development now but is there a case for a pg_xlog block device to remove the file system overhead and guaranteeing your data is written sequentially every time? If you

Re: [PERFORM] Testing Sandforce SSD

2010-07-28 Thread Yeb Havinga
Michael Stone wrote: On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote: I know I'm talking development now but is there a case for a pg_xlog block device to remove the file system overhead and guaranteeing your data is written sequentially every time? If you dedicate a partitio

Re: [PERFORM] Testing Sandforce SSD

2010-07-28 Thread Michael Stone
On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote: I know I'm talking development now but is there a case for a pg_xlog block device to remove the file system overhead and guaranteeing your data is written sequentially every time? If you dedicate a partition to xlog, you already

Re: [PERFORM] Testing Sandforce SSD

2010-07-28 Thread Michael Stone
On Mon, Jul 26, 2010 at 01:47:14PM -0600, Scott Marlowe wrote: Note that SSDs aren't usually real fast at large sequential writes though, so it might be worth putting pg_xlog on a spinning pair in a mirror and seeing how much, if any, the SSD drive speeds up when not having to do pg_xlog. xlog

Re: [PERFORM] Big difference in time returned by EXPLAIN ANALYZE SELECT ... AND SELECT ...

2010-07-28 Thread Vitalii Tymchyshyn
27.07.10 02:03, Lew wrote: Piotr Gasidło wrote: EXPLAIN ANALYZE SELECT ... Total runtime: 4.782 ms Time: 25,970 ms Strangely, the runtime is shown with a period for the separator, though. One value is calculated on the server by the EXPLAIN ANALYZE command, the other is calculated by psql itself
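
To see both figures side by side as described above: "Total runtime" is measured on the server while the instrumented plan executes, while the "Time" line comes from psql's \timing and covers the whole round trip (sending the statement, server-side parsing and planning, executing, and transferring the result back). A minimal sketch with placeholder output rather than real numbers:

  -- Toggle client-side timing in psql; it reports the full round-trip time.
  \timing

  -- The server-side figure covers only execution of the instrumented plan.
  EXPLAIN ANALYZE SELECT count(*) FROM generate_series(1, 100000);
  --  ...
  --  Total runtime: <measured on the server>
  -- Time: <measured by psql; also includes planning and transfer>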

Re: [PERFORM] Pooling in Core WAS: Need help in performance tuning.

2010-07-28 Thread Vitalii Tymchyshyn
28.07.10 04:56, Tom Lane wrote: I'm not asserting it's true, just suggesting it's entirely possible. Other than the fork() cost itself and whatever authentication activity there might be, practically all the startup cost of a new backend can be seen as cache-populating activities. You'd h

Re: [PERFORM] 158x query improvement when removing 2 (noop) WHERE conditions

2010-07-28 Thread Andres Freund
On Wednesday 28 July 2010 12:27:44 Louis-David Mitterrand wrote: > The EXPLAIN ANALYSE output is attached with, first the fast version and > then the slow one. I think you forgot to attach it. Andres

[PERFORM] 158x query improvement when removing 2 (noop) WHERE conditions

2010-07-28 Thread Louis-David Mitterrand
Hi, On a hunch I removed two (legacy) WHERE conditions from the following query and obtained a 158x speed improvement. Yet these conditions do not filter anything. Does that make any sense? The EXPLAIN ANALYSE output is attached with, first the fast version and then the slow one. I'd like to und
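
One pattern that can produce this, sketched here with hypothetical tables and conditions (the real query and plans are in the EXPLAIN ANALYSE attachment, not reproduced in the archive): a predicate that happens to filter nothing still lowers the planner's row estimates, which can push it into a different and slower plan. This is only an illustration of the general mechanism, not the thread's actual case:

  -- Hypothetical schema: orders joined to customers.
  -- "Fast" form: estimates are close to reality.
  EXPLAIN ANALYZE
  SELECT o.id, c.name
  FROM orders o
  JOIN customers c ON c.id = o.customer_id
  WHERE o.created_at >= '2010-01-01';

  -- "Slow" form: the two extra conditions are always true for this data,
  -- but the planner cannot know that, discounts the estimated row count,
  -- and may switch to a plan that performs worse at the real row counts.
  EXPLAIN ANALYZE
  SELECT o.id, c.name
  FROM orders o
  JOIN customers c ON c.id = o.customer_id
  WHERE o.created_at >= '2010-01-01'
    AND o.status <> 'void'    -- no rows actually have status 'void'
    AND o.amount >= 0;        -- amounts are never negative here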