> > You want to process all invoices to count them and to sum up the
> > amounts on a per month/area/type basis. The initial data size is in
> > GB, but the size of the expected result is in KB (namely two values
> > for each of the 100 areas * 12 months * 4 types).
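
For orientation, here is a minimal sketch of the schema the queries in this
thread seem to assume. Which table carries which column is not spelled out;
placing area/type on client and month/amount on invoice, and the column
types, are guesses. Note that 100 areas * 12 months * 4 types is only 4800
groups, two values each, hence the KB-sized result:

    CREATE TABLE client (
        id     INTEGER PRIMARY KEY,
        area   INTEGER,   -- roughly 100 distinct areas
        type   INTEGER    -- 4 distinct types
    );

    CREATE TABLE invoice (
        client INTEGER REFERENCES client(id),
        month  INTEGER,   -- 12 distinct months
        amount NUMERIC
    );
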
Hello,
> > FOR tuple IN
> >     SELECT area, type, month, amount FROM client, invoice WHERE id=client
> > LOOP
> >     UPDATE tmp SET amount=amount+tuple.amount, count=count+1
> >     WHERE area=tuple.area AND type=tuple.type AND month=tuple.month;
> > END LOOP;
> > ...
> >
> > It is very SLOW.
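
The rest of the function is elided above. Purely as a reconstruction sketch
(the function name, the IF NOT FOUND branch and the return value are
assumptions, not taken from the original post), the loop might be wrapped
like this:

    CREATE OR REPLACE FUNCTION summarize_invoices() RETURNS integer AS '
    DECLARE
        tuple RECORD;
    BEGIN
        FOR tuple IN
            SELECT area, type, month, amount
            FROM client, invoice WHERE id = client
        LOOP
            UPDATE tmp SET amount = amount + tuple.amount, count = count + 1
            WHERE area = tuple.area AND type = tuple.type
              AND month = tuple.month;
            -- no summary row yet for this (area, type, month): create one
            IF NOT FOUND THEN
                INSERT INTO tmp (area, type, month, amount, count)
                VALUES (tuple.area, tuple.type, tuple.month, tuple.amount, 1);
            END IF;
        END LOOP;
        RETURN 1;
    END;
    ' LANGUAGE plpgsql;

Even in this form it still processes the invoices one row at a time, which
is the main reason it crawls on GB-sized input compared to a single
set-oriented query.
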
> II) SQL
> ---
>
> The first idea is to ask SQL to do the job with a 'group by' clause:
>
> SELECT area, type, month, SUM(amount), COUNT(*)
> FROM client AS c, invoice AS i
> WHERE c.id=i.client
> GROUP BY area, type, month;
>
> As I am just interested in reading the data, without any transaction ...
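
If the goal is only to read the small per-(area, type, month) summary
afterwards, one option (a sketch; the table name summary is made up here) is
to materialize the result of the GROUP BY once and then query the tiny table
as often as needed:

    -- At most 100 * 12 * 4 = 4800 rows, i.e. a few KB.
    CREATE TEMPORARY TABLE summary AS
        SELECT area, type, month, SUM(amount) AS amount, COUNT(*) AS count
        FROM client AS c, invoice AS i
        WHERE c.id = i.client
        GROUP BY area, type, month;
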
On Tue, 22 Jul 2003, Fabien COELHO wrote:
> ...
>
> III) PL/pgSQL
> ---
>
> Ok, if postgresql does not want to do it my way, let us make it do it.
> Thus I wrote some PL/pgSQL function for my purpose, something like:
>
> CREATE TEMPORARY TABLE tmp (
>     area INTEGER,
>     type INTEGER,
>     month INTEGER,
>     amount NUMERIC,
>     count INTEGER
> );
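
Since the loop quoted earlier only UPDATEs tmp, either the missing-row case
is handled inside the function (as in the sketch above) or tmp has to be
pre-populated with zeroed rows first. A sketch of the latter, which is an
assumption and not something shown in the post:

    INSERT INTO tmp (area, type, month, amount, count)
        SELECT DISTINCT c.area, c.type, i.month, 0, 0
        FROM client AS c, invoice AS i
        WHERE c.id = i.client;
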
The key to handling large datasets for data mining ...

On Tue, Jul 22, 2003 at 18:39:33 +0200,
Fabien COELHO <[EMAIL PROTECTED]> wrote:
>
> As the resulting table is very small, I wish the optimizer would have
> skipped the sort phase, so as to aggregate the data as they come after the
> join. All may be done on the fly without much additional storage.
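
For what it is worth, this is exactly what a hash-based aggregation does:
keep one in-memory entry per (area, type, month) group and update it as the
rows stream out of the join, with no sort at all. PostgreSQL 7.4, under
development at the time of this thread, added a HashAggregate plan node for
this. A rough way to check which strategy the planner picks (illustrative
only, not output from the poster's database):

    -- If the plan shows a Sort feeding a GroupAggregate, the whole join
    -- result is sorted before aggregation; if it shows a HashAggregate
    -- sitting directly on the join, the data are aggregated on the fly,
    -- which is the behaviour wished for above.
    EXPLAIN
    SELECT area, type, month, SUM(amount), COUNT(*)
    FROM client AS c, invoice AS i
    WHERE c.id = i.client
    GROUP BY area, type, month;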