Hi,
I have a query that's pulling data for another system using COPY (query) TO
STDOUT CSV on a 9.2.4 db (we're in the process of upgrading to 9.3). The
final CSV file is large (~75 GB, 86 million rows). The query is also large,
consisting of one table (86 million rows) left joined to a total of
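The export described above can be sketched roughly as follows; the table and column names here are invented for illustration, not taken from the actual query:

```sql
-- Hypothetical sketch of a COPY (query) TO STDOUT CSV export.
-- Run via psql and redirect stdout to a file, e.g.:
--   psql -f export.sql mydb > out.csv
COPY (
    SELECT r.id, r.month, d.description
    FROM records r
    LEFT JOIN descriptions d ON d.record_id = r.id
) TO STDOUT WITH CSV HEADER;
```

Streaming through STDOUT this way avoids needing server-side file permissions, which `COPY ... TO '/path/file'` would require.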
alternating with a heap scan when a tight LIMIT is present, so that not so
much work has to be done, but I could see that being really problematic to
implement and use.
Tim
On Tue, Sep 27, 2011 at 1:06 AM, Samuel Gendler wrote:
>
> On Mon, Sep 26, 2011 at 2:11 PM, Timothy Garnett wrote:
>
.089 ms
(28 rows)
This is sub-millisecond for all combinations of ids, present or not, that
we've tried, so we'll definitely go with this. Thanks for the help and the
explanations!
Tim
On Tue, Sep 27, 2011 at 8:40 AM, Timothy Garnett wrote:
> Hi Sam,
>
> The purpose of this (frame
> Well, the reason it likes the first plan is that it has a smaller
> estimated cost ;-). Basically this is a startup-time-versus-total-time
> issue: the cost of the seqscan+limit is estimated to be about 1/8447th
> of the time to read the whole table, since it's estimating 8447
> candidate matches
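The startup-cost-versus-total-cost trade-off described above can be seen by comparing plans with and without a tight LIMIT; the table and values below are hypothetical, chosen only to mirror the shape of the problem:

```sql
-- With a LIMIT, the planner may prefer seqscan+limit: if it estimates
-- 8447 matching rows, it expects to stop after reading roughly
-- 1/8447th of the table, so the estimated cost looks tiny.
EXPLAIN SELECT * FROM records WHERE status_id IN (101, 102) LIMIT 1;

-- Without the LIMIT, total cost dominates and the index scan wins.
EXPLAIN SELECT * FROM records WHERE status_id IN (101, 102);
```

The estimate only pays off if matches really are spread evenly through the table; when no rows match at all, the "cheap" plan degrades to a full scan.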
Hi All,
We are currently using PostgreSQL 9.0.3 and we noticed a performance anomaly
in a framework-generated (ActiveRecord) query against one of our tables. The
query uses an IN clause to check an indexed column for the presence of
either of two values. In this particular case neither of them is present
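The problematic pattern looks roughly like the following; names are hypothetical (ActiveRecord emits an IN list like this when given an array of values):

```sql
-- Framework-generated query, hypothetical names. With ORDER BY + LIMIT 1
-- the planner may walk the ordering index expecting an early match;
-- when neither value exists, it ends up traversing the entire table.
SELECT * FROM records
WHERE status_id IN (101, 102)
ORDER BY id
LIMIT 1;
```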
Thanks, we'll give these a try.
Tim
On Thu, Mar 17, 2011 at 2:13 PM, Kevin Grittner wrote:
> Timothy Garnett wrote:
>
> > We'd still be interested in other suggestions for convincing the
> > query planner not to pick the bad plan in this case
>
> You could
since we cluster on supplier_id, buyer_id, and a given buyer_id, while
having up to 100k rows, will generally only have a few supplier ids
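The clustering arrangement mentioned can be expressed as follows; the index and table names are hypothetical:

```sql
-- Physically order the table by (supplier_id, buyer_id) so rows for a
-- given supplier/buyer pair sit together on disk, making range reads cheap.
CREATE INDEX idx_records_supplier_buyer ON records (supplier_id, buyer_id);
CLUSTER records USING idx_records_supplier_buyer;
```

Note that CLUSTER is a one-time physical reorder; later inserts are not kept in clustered order, so it may need to be rerun periodically.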
Tim
On Wed, Mar 16, 2011 at 1:05 PM, Shaun Thomas wrote:
> On 03/15/2011 01:23 PM, Timothy Garnett wrote:
>
> Column |
Hi all,
We added an index to a table (to support some different functionality), then
ran into cases where the new index (on month, bl_number in the schema below)
made performance of some existing queries ~20,000 times worse. While we do
have a workaround (using a CTE to force the proper index to be used)
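The CTE workaround mentioned relies on PostgreSQL (before version 12) treating CTEs as optimization fences: the planner cannot flatten the subquery, so it cannot switch to the unwanted index for the outer filter. A sketch with hypothetical names:

```sql
-- Isolate the selective lookup in a CTE so the planner uses the
-- bl_number index there, then apply the month filter outside the fence
-- instead of letting the planner pick the new (month, bl_number) index.
WITH candidates AS (
    SELECT * FROM records WHERE bl_number = 'ABC123'
)
SELECT * FROM candidates WHERE month = '2011-01-01';
```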
Sorry, meant with 32 GB of memory.
Tim
On Tue, Mar 15, 2011 at 2:39 PM, Timothy Garnett wrote:
> Forgot to include our non-default config settings and server info, not that
> it probably makes a difference for this.
>
> from pg_settings:
> name |
On Tue, Mar 15, 2011 at 2:23 PM, Timothy Garnett wrote:
> Hi all,
>
> We added an index to a table (to support some different functionality) then
> ran into cases where the new index (on month, bl_number in the schema below)
> made performance of some existing queries ~20,000 times worse