On May 28, 2008, at 9:27 PM, edfialk wrote:

Example rows from the huge table (almost 900,000 rows):
fips, pollutant, value

That's not really huge in comparison with other pgsql databases. There are terabyte installations out there ;)

SELECT small.fips, small.geom, small.name, SUM(huge.value) from small
JOIN huge on huge.fips = small.fips WHERE (SUM(huge.value)) > 500 AND
huge.pollutant='co';

I wonder if that makes sense.  Obviously, you can't have an aggregate in
the WHERE clause, so I've tried a couple of WHERE (SELECT) kinds of things,
but nothing has worked out too well.

So first, if anyone has any idea of the best way I can do a WHERE
(SUM(huge.value) > 500)
or...
any ideas on how I could speed up the query, I would be extremely
grateful.

That's what GROUP BY and HAVING are for:

SELECT fips, small.geom, small.name, SUM(huge.value)
FROM small
JOIN huge USING (fips)
WHERE huge.pollutant='co'
GROUP BY fips, small.geom, small.name
HAVING SUM(huge.value) > 500;
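
As an aside, if you want a filter shaped like your WHERE attempt, the same result can be had with a derived table. This is just a sketch (the aliases s and t are only for illustration); the HAVING form above is the usual way to write it:

SELECT s.fips, s.geom, s.name, t.total
FROM small s
JOIN (
    -- aggregate per fips first, then filter the totals in the outer query
    SELECT fips, SUM(value) AS total
    FROM huge
    WHERE pollutant = 'co'
    GROUP BY fips
) t USING (fips)
WHERE t.total > 500;

Depending on the plan, aggregating huge before the join like this can also reduce the number of rows that get joined, but check the EXPLAIN output before assuming anything.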

Guessing from your performance problem, you may not have an index on huge.fips? And have you vacuumed/analysed those tables anytime recently?
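
If the index is indeed missing, something along these lines is worth trying. A sketch only: the index names are made up, and whether the two-column variant pays off depends on your data:

-- index to support the join on fips
CREATE INDEX huge_fips_idx ON huge (fips);
-- or, since the query also filters on pollutant:
-- CREATE INDEX huge_pollutant_fips_idx ON huge (pollutant, fips);

-- refresh the planner statistics
VACUUM ANALYZE huge;
VACUUM ANALYZE small;

After that, run the query under EXPLAIN ANALYZE to see whether the planner actually uses the index.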

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.

