1st suggestion - If there is no reason to have the temp tables, rewrite
your SQL to eliminate them (this reduces the job to just 24 queries):
 INSERT INTO week (address, origbytes, destbytes, when)
 SELECT o.origine AS address, sum(o.bytes), sum(d.bytes), '09/02/1999 9:00:01'
   FROM day o, day d 
  WHERE o.origine=d.destinazione AND
    o.origine << '194.74.133.0/24' AND 
    o.when BETWEEN '09/02/1999 9:00:01' AND '09/02/1999 9:59:59' AND
    d.when BETWEEN '09/02/1999 9:00:01' AND '09/02/1999 9:59:59'
  GROUP BY address, 4;
2nd suggestion - Don't drop the temp tables, since they will just be
recreated anyway. Instead do:
 DELETE FROM tmporig;
 DELETE FROM tmpdest;
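For example (just a rough sketch -- the column types are my guess from your
"day" and "week" definitions), create the two tables once before the loop
and only refill and clear them inside it:
 CREATE TABLE tmporig (address inet, origbytes int4);
 CREATE TABLE tmpdest (address inet, destbytes int4);
 -- inside the loop, instead of SELECT ... INTO:
 INSERT INTO tmporig
 SELECT origine, sum(bytes)
   FROM day
  WHERE origine << '194.74.133.0/24' AND
        when BETWEEN '09/02/1999 9:00:01' AND '09/02/1999 9:59:59'
  GROUP BY origine;
 -- (same idea for tmpdest, then the insert into week)
 DELETE FROM tmporig;
 DELETE FROM tmpdest;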
3rd suggestion - If you are just stubborn, or you actually have a reason
for creating and dropping those tables all the time, then you might want
to vacuum after you drop the tables (I don't even think this will help
much).
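If you go that route, the extra step is just a VACUUM after the drops,
roughly like this (the constant create/drop churns the system catalogs,
and VACUUM reclaims the dead rows):
 DROP TABLE tmporig;
 DROP TABLE tmpdest;
 VACUUM;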

You might be able to rewrite your loop into a single SQL statement.
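Something along these lines might even cover the whole day at once
(completely untested, and I'm not sure 6.4 will take date_trunc() or a
GROUP BY on an expression, so treat it strictly as a sketch):
 INSERT INTO week (address, origbytes, destbytes, when)
 SELECT o.origine, sum(o.bytes), sum(d.bytes), date_trunc('hour', o.when)
   FROM day o, day d
  WHERE o.origine = d.destinazione AND
        o.origine << '194.74.133.0/24' AND
        date_trunc('hour', o.when) = date_trunc('hour', d.when) AND
        o.when BETWEEN '09/02/1999 00:00:00' AND '09/02/1999 23:59:59'
  GROUP BY o.origine, date_trunc('hour', o.when);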
Hope this helps,
        -DEJ

> -----Original Message-----
> I have recently installed postgresql 6.4.2 on a Linux 2.2.0 box, and I keep
> getting this error message when I queue a few queries in a Perl script:
> 
> pqReadData() -- backend closed the channel unexpectedly.
> 
> My script has a loop that performs three queries for every hour of a single
> day: the first two queries select records from a master table ("day", see
> below) and insert them into two temporary tables ("tmporig" and "tmpdest"),
> and the third query selects records out of those two temp tables and
> inserts them into a fourth table ("week"). For every step of my loop, I
> create and drop the two temp tables. So that means 24 (hours = steps in my
> loop) x 3 queries = 72 queries, plus 24 x 1 drop.
> What happens is, the loop never gets past step 11 or 12, and then I get the
> error above.
> 
> Should I insert code in my script to wait for some event from postgresql,
> or what?
> 
> Here are the details on my tables, queries, etc.
> 
> Table    = day
> +----------------------------------+----------------------------------+-------+
> |              Field               |              Type                | Length|
> +----------------------------------+----------------------------------+-------+
> | origine                          | inet                             |   var |
> | destinazione                     | inet                             |   var |
> | packets                          | int4                             |     4 |
> | bytes                            | int4                             |     4 |
> | when                             | datetime                         |     8 |
> +----------------------------------+----------------------------------+-------+
> 
> Table    = week
> +----------------------------------+----------------------------------+-------+
> |              Field               |              Type                | Length|
> +----------------------------------+----------------------------------+-------+
> | address                          | inet                             |   var |
> | origbytes                        | int4                             |     4 |
> | destbytes                        | int4                             |     4 |
> | when                             | datetime                         |     8 |
> +----------------------------------+----------------------------------+-------+
> 
> - Loop starts here
> 
> - 1st query (the datetime values get changed in the loop, from
> 00:00:00/00:59:59 to 23:00:00/23:59:59):
> select origine as address, sum(bytes) as origbytes into tmporig from day
> where origine << '194.74.133.0/24' and when between '09/02/1999 9:00:01'
> and '09/02/1999 9:59:59' group by address;
> (table tmporig gets created)
> 
> - 2nd query:
> select destinazione as address, sum(bytes) as destbytes into tmpdest from day
> where destinazione << '194.74.133.0/24' and when between '09/02/1999 9:00:01'
> and '09/02/1999 9:59:59' group by address;
> (table tmpdest gets created)
> 
> - 3rd query:
> insert into week select o.address, o.origbytes, d.destbytes,
> '09/02/1999 9:00:01' as when from tmporig o, tmpdest d where o.address=d.address;
> 
> - Temp tables get dropped here and the loop continues
> 
> - output from postmaster does not say anything; here are the last lines from
> one of the crashes (notice that postgresql hangs halfway through the loop,
> at 12:00:00):
> 
> StartTransactionCommand
> query:  select destinazione as address, sum(bytes) as destbytes into tmpdest
> from day where destinazione << '194.74.133.0/24' and when between
> '09/02/1999 12:00:01' and '09/02/1999 12:59:59' group by address
> ProcessQuery
> CommitTransactionCommand
> LockReleaseAll: lockmethod=1, pid=17290
> LockReleaseAll: reinitializing lockQueue
> LockReleaseAll: done
> StartTransactionCommand
> query:  insert into week select o.address,o.origbytes, d.destbytes,
> '09/02/1999 12:00:01' as when from tmporig o, tmpdest d where o.address=d.address
> ProcessQuery
> 
> 
> - output from the script (perl + dbi) from the same crash:
> 
> dbd_st_execute
> dbd_st_execute: statement = > select destinazione as address, sum(bytes) as
> destbytes into tmpdest from day where destinazione << '194.74.133.0/24' and
> when between '09/02/1999 12:00:01' and '09/02/1999 12:59:59' group by address<
>     <- execute= -1 at ipacct2.pl line 27.
>     -> execute for DBD::Pg::st (DBI::st=HASH(0x8171e48)~0x8171e90
> '09/02/1999 12:00:01')
> dbd_bind_ph
>          bind :p1 <== '09/02/1999 12:00:01' (type 0)
> dbd_st_rebind
> bind :p1 <== '09/02/1999 12:00:01' (size 19/20/19, ptype 4, otype 1043)
> dbd_st_execute
> dbd_st_execute: statement = > insert into week select o.address,o.origbytes,
> d.destbytes, '09/02/1999 12:00:01' as when from tmporig o, tmpdest d where
> o.address=d.address<
>     ERROR EVENT 7 'pqReadData() -- backend closed the channel unexpectedly.
>         This probably means the backend terminated abnormally before or
> while processing the request.
> 
> 
> Thanks for any help,
> 
> Ludovico
> 
> 
