It seems to me one solution is to alter your table topology:
partition the table on the keys you need to query on, and then
use simple aggregates against the relevant partitions.

You'd have to set up ON INSERT DO INSTEAD rules, and you might take a
performance hit on writes.
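
For the partitioning approach, here's a minimal sketch using
inheritance-based partitioning with constraint exclusion. The table and
column names are hypothetical; you'd substitute your own schema and add
one child table (plus one rule) per key value or range:

```sql
-- Hypothetical parent table; real columns depend on your schema.
CREATE TABLE events (id int, evt_year int, payload text);

-- One child per partition key, with a CHECK constraint so the
-- planner's constraint_exclusion can skip irrelevant partitions.
CREATE TABLE events_2007 (CHECK (evt_year = 2007)) INHERITS (events);
CREATE TABLE events_2008 (CHECK (evt_year = 2008)) INHERITS (events);

-- The ON INSERT DO INSTEAD rules that redirect inserts on the
-- parent into the matching child.
CREATE RULE events_ins_2007 AS ON INSERT TO events
  WHERE (NEW.evt_year = 2007)
  DO INSTEAD INSERT INTO events_2007 VALUES (NEW.*);
CREATE RULE events_ins_2008 AS ON INSERT TO events
  WHERE (NEW.evt_year = 2008)
  DO INSTEAD INSERT INTO events_2008 VALUES (NEW.*);

-- With constraint_exclusion enabled, an aggregate filtered on the
-- partition key only scans the one matching child:
SET constraint_exclusion = on;
SELECT count(*) FROM events WHERE evt_year = 2007;
```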


Another solution might be to break the query up into several pieces,
running smaller queries each aimed at retrieving individual rows.
This can be done inside a stored procedure; it's how we handled
some queries in LedgerSMB.

Here's a stored procedure we used in LedgerSMB to pull distinct years
from a table with maybe 10M rows in a timely fashion.  Something
similar might be doable for you, with modifications of course:


CREATE OR REPLACE FUNCTION date_get_all_years() RETURNS SETOF INT AS
$$
DECLARE
    next_record int;
BEGIN
    -- Seed with the earliest year in the table.
    SELECT MIN(EXTRACT(YEAR FROM transdate))::INT
      INTO next_record
      FROM acc_trans;

    LOOP
        EXIT WHEN next_record IS NULL;
        RETURN NEXT next_record;

        -- Skip ahead to the smallest year strictly greater than the
        -- one just returned.
        SELECT MIN(EXTRACT(YEAR FROM transdate))::INT
          INTO next_record
          FROM acc_trans
         WHERE EXTRACT(YEAR FROM transdate) > next_record;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
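Each MIN() probe above only stays fast if it can use an index. A sketch
of the supporting index and the call, assuming transdate is a plain
date or timestamp column (for timestamptz the EXTRACT expression isn't
immutable, so it couldn't be indexed directly like this):

```sql
-- Hypothetical functional index so each MIN()/WHERE probe is an
-- index lookup instead of a sequential scan.
CREATE INDEX acc_trans_year_idx
    ON acc_trans ((EXTRACT(YEAR FROM transdate)));

-- Returns one row per distinct year, in ascending order.
SELECT * FROM date_get_all_years();
```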

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)