What would help is increasing work_mem --- it looks like you
are using the default 1MB. Cranking that up to a few MB would reduce
the number of hash batches needed.
regards, tom lane
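For illustration, work_mem can be raised per session or per query; the 8MB value below is only an assumed example, not a figure taken from this thread:

-- session-wide setting (assumed example value, tune to your workload)
SET work_mem = '8MB';

-- or scoped to a single transaction, so only the problematic query is affected
BEGIN;
SET LOCAL work_mem = '8MB';
-- EXPLAIN ANALYZE <the problematic query>;
COMMIT;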
Happy Skiing!
Dieter Rehbein
Software Architect | dieter.rehb...@skiline.cc
Skiline Media GmbH
Lakeside B03
9020 Klagenfurt
Total runtime: 148.068 ms
regards
Dieter
On 02.04.2013 at 16:55, Igor Neyman wrote:
From: Dieter Rehbein [mailto:dieter.rehb...@skiline.cc]
Sent: Tuesday, April 02, 2013 4:52 AM
To: pgsql-performance@postgresql.org
Subject: Join between 2 tables always executes a sequential scan on
Hi everybody,
in a project I have a performance problem, which I (and my colleagues) don't
understand. It's a simple join between 2 of 3 tables:
table-1: user (id, user_name, ...). This table has about 1 million rows
(999673 rows)
table-2: competition (57 rows)
table-3: user_2_competi
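A minimal sketch of the kind of join being described (the message is cut off here, so the mapping table's full name and the column/join-key names below are assumptions, not taken from the post):

-- hypothetical reconstruction; all names beyond "user" are assumed
SELECT u.id, u.user_name
FROM "user" u                                    -- "user" is a reserved word and needs quoting
JOIN user_2_competition uc ON uc.user_id = u.id
WHERE uc.competition_id = 42;                    -- placeholder competition id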
Thanks a lot guys, I will try that out.
regards
Dieter
On 12.04.2011 at 11:07, Claudio Freire wrote:
On Tue, Apr 12, 2011 at 10:59 AM, Dieter Rehbein wrote:
> I just executed a VACUUM ANALYZE and now everything performs well. hm,
> strange.
That probably means you need more
What I did was an ANALYZE, which did not change anything.
I just executed a VACUUM ANALYZE and now everything performs well. Hm, strange.
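For anyone hitting the same thing, a sketch of what is involved; 'newsfeed' is only a placeholder table name here:

-- refresh planner statistics (and clear dead tuples) for one table
VACUUM ANALYZE newsfeed;

-- check when (auto)vacuum / (auto)analyze last ran against it
SELECT relname, last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'newsfeed';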
thanks
Dieter
On 12.04.2011 at 09:42, Claudio Freire wrote:
On Tue, Apr 12, 2011 at 7:20 AM, Dieter Rehbein wrote:
> Hi everybody,
>
>
Hi everybody,
I have a performance problem with a query using a LIMIT. There are other
threads regarding performance issues with LIMIT, but I didn't find useful hints
for our problem, and it might be interesting for other Postgres users.
There are only 2 simple tables:
CREATE TABLE newsfeed
(