Re: [PERFORM] pgmemcache

2006-04-17 Thread Christian Storm
Not sure if I follow why this is a problem.  Seems like it would be
beneficial to have both BEFORE and AFTER COMMIT triggers.  With a BEFORE
COMMIT trigger you would have the ability to 'un-commit' (roll back) the
transaction.  With an AFTER COMMIT trigger you wouldn't have that
option, because the commit has already succeeded.  However, with an
AFTER COMMIT trigger you would be able to fire other downstream events
that rely on a transaction successfully committing.

If the trigger fails, it is the trigger's problem, not the commit's
problem, i.e., you wouldn't want to 'un-commit'.  If the trigger gets an
error, it has to deal with that error gracefully and programmatically.

Where have I gone astray with this logic?
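[As an aside for readers of the archive: the closest supported approximation of after-commit behavior is LISTEN/NOTIFY, since a NOTIFY queued inside a transaction reaches listeners only if that transaction commits.  A minimal sketch; the table and condition names are made up for illustration:]

```sql
-- Session A: queue a notification inside the transaction.  Listeners
-- see it only if the COMMIT succeeds; on ROLLBACK it is discarded.
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 42;  -- hypothetical table
NOTIFY cache_refresh;  -- hypothetical condition name
COMMIT;

-- Session B: a worker process that reacts only after a successful commit.
LISTEN cache_refresh;
```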

On Apr 12, 2006, at 5:35 PM, Tom Lane wrote:


"Jim C. Nasby" <[EMAIL PROTECTED]> writes:

Why are AFTER COMMIT triggers impossible?


What happens if such a trigger gets an error?  You can't un-commit.

regards, tom lane



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[PERFORM] index is not used if I include a function that returns current time in my query

2006-04-17 Thread Cris Carampa

Hello, postgresql 7.4.8 on SuSE Linux here.

I have a table called DMO with a column called ORA_RIF defined as
"timestamp without time zone".

I created an index on this table based on this column only.
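(For anyone reconstructing this, a minimal schema matching the description might look like the following sketch; the index name is taken from the plan output in this message, and all other columns are omitted.)

```sql
-- Minimal sketch of the reported setup:
create table dmo (
    ora_rif timestamp without time zone
    -- ... other columns omitted ...
);
create index dmo_ndx02 on dmo (ora_rif);
```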

If I run a query against a text literal, the index is used:

> explain select * from dmo where ora_rif>'2006-01-01';
                                 QUERY PLAN
-----------------------------------------------------------------------------
 Index Scan using dmo_ndx02 on dmo  (cost=0.00..1183.23 rows=736 width=156)
   Index Cond: (ora_rif > '2006-01-01 00:00:00'::timestamp without time zone)


If I try to use a function that returns the current time instead, a
sequential scan is always performed:

> explain select * from dmo where ora_rif>localtimestamp;
                           QUERY PLAN
-----------------------------------------------------------------
 Seq Scan on dmo  (cost=0.00..1008253.22 rows=2703928 width=156)
   Filter: (ora_rif > ('now'::text)::timestamp(6) without time zone)

> explain select * from dmo where ora_rif>localtimestamp::timestamp without time zone;
                           QUERY PLAN
-----------------------------------------------------------------
 Seq Scan on dmo  (cost=0.00..1008253.22 rows=2703928 width=156)
   Filter: (ora_rif > ('now'::text)::timestamp(6) without time zone)

... etc. ...

(tried with all datetime functions with and without cast)

I even tried to write a function that explicitly returns a "timestamp
without time zone" value:

create or replace function f () returns timestamp without time zone
as '
declare
  x timestamp without time zone ;
begin
  x := ''2006-01-01 00:00:00'' ;
  return x ;
end ;
' language plpgsql ;

But the result is the same:

> explain select * from dmo where ora_rif>f();
                          QUERY PLAN
----------------------------------------------------------------
 Seq Scan on dmo  (cost=0.00..987973.76 rows=2703928 width=156)
   Filter: (ora_rif > f())
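[A likely explanation, added for future readers: a function created without a volatility category defaults to VOLATILE, and planners before 8.0 also could not use an index against now()/localtimestamp because the value is unknown at plan time.  The historical FAQ workaround was a wrapper function labeled IMMUTABLE.  This is a sketch of that trick (function name invented), with the usual caveat that labeling a clock function IMMUTABLE is a deliberate lie and can yield stale values from cached plans:]

```sql
-- Historical workaround sketch for pre-8.0 planners.  IMMUTABLE lets
-- the planner fold the call to a constant at plan time, so the index
-- on ora_rif becomes usable:
create or replace function assume_now() returns timestamp without time zone
as 'select localtimestamp'
language sql immutable;

-- explain select * from dmo where ora_rif > assume_now();
```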

Any suggestion?

Kind regards,

--
Cris Carampa (spamto:[EMAIL PROTECTED])

I could have asked what your dog is called;
mine, for a while now, has been called Libero



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] pgmemcache

2006-04-17 Thread Christian Storm

> I'll let you in on a secret: NOTIFY is actually a before-commit
> operation.  This is good enough because it never, or hardly ever,
> fails.  I would argue that anything you want to do in an AFTER COMMIT
> trigger could just as well be done in a BEFORE COMMIT trigger; if
> that's not reliable enough then you need to fix your trigger.

So, looping back to pgmemcache: it doesn't seem like it's about the
trigger failing; it's about the transaction failing.  How could I 'fix
my trigger' so that the cache wouldn't be updated if the transaction
fails?

> An AFTER COMMIT trigger would have to be in a separate transaction.

Agreed.  Each trigger would be in its own transaction.

> What happens if there's more than one, and one of them fails?

Each AFTER COMMIT trigger would be in its own transaction.  So if it
fails, let it fail.  It isn't the original transaction's fault.  If you
want it bundled with the original transaction, then use a BEFORE COMMIT
trigger.

If there is more than one, so be it.  One may fail, another may not.
If that is not okay or not the intended behavior, then create a new
trigger that combines them into one transaction so that they are
coupled.

> Even more to the point, if it's a separate transaction, don't you have
> to fire all these triggers again when you commit that transaction?
> The idea seems circular.

Why would it have to do this?  Not sure if I follow.  In my mind, the
only way this would happen is if the AFTER COMMIT trigger acted upon
the very same table it is triggered off of.  Then you could have a
circular reference.  However, this is true for the BEFORE COMMIT
situation too.  I don't see the difference.  What am I missing?


Cheers,

Christian




[PERFORM] slow cursor

2006-04-17 Thread Sriram Dandapani

Hi

I have a cursor that fetches 150K rows and updates or inserts a table
with 150K rows.

It takes several minutes for the process to complete (about 15
minutes).  The select by itself (without the cursor) gets all rows in
15 seconds.

Is there a way to optimize the cursor to fetch all records and speed up
the process?  I still need to do the record-by-record processing.

[PERFORM] creating of temporary table takes very long

2006-04-17 Thread Sriram Dandapani

create temporary table c_chkpfw_hr_tr_updates as
    select * from c_chkpfw_hr_tr a
    where exists(select 1 from chkpfw_tr_hr_dimension b
    WHERE a.firstoccurrence = b.firstoccurrence
     AND a.sentryid_id = b.sentryid_id
     AND a.node_id = b.node_id
     AND a.customerid_id = b.customerid_id
     AND coalesce(a.interface_id,0) = coalesce(b.interface_id,0)
     AND coalesce(a.source_id,0) = coalesce(b.source_id,0)
     AND coalesce(a.destination_id,0) = coalesce(b.destination_id,0)
     AND coalesce(a.sourceport_id,0) = coalesce(b.sourceport_id,0)
     AND coalesce(a.destinationport_id,0) = coalesce(b.destinationport_id,0)
     AND coalesce(a.inoutbound_id,0) = coalesce(b.inoutbound_id,0)
     AND coalesce(a.action_id,0) = coalesce(b.action_id,0)
     AND coalesce(a.protocol_id,0) = coalesce(b.protocol_id,0)
     AND coalesce(a.service_id,0) = coalesce(b.service_id,0)
     AND coalesce(a.sourcezone_id,0) = coalesce(b.sourcezone_id,0)
     AND coalesce(a.destinationzone_id,0) = coalesce(b.destinationzone_id,0));

This takes forever (I have to cancel the statement each time).

c_chkpfw_hr_tr has about 2 rows
chkpfw_tr_hr_dimension has 150K rows

c_chkpfw_hr_tr has the same indexes as chkpfw_tr_hr_dimension.

For such a small data set, this seems like a mystery.  The only other
alternative I have is to use cursors, which are also very slow for row
sets of 10-15K or more.
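[A note for readers hitting the same pattern: the coalesce(x,0) = coalesce(y,0) comparisons exist to treat NULLs as equal, and SQL has a dedicated construct for that.  Beware that the two spellings differ when one side is NULL and the other is actually 0 (coalesce treats them as equal; the form below does not), so this is a sketch to test against your data, not a drop-in replacement:]

```sql
-- Hypothetical rewrite of the NULL-tolerant equality tests:
-- NOT (x IS DISTINCT FROM y) treats two NULLs as equal without
-- mapping NULL onto the sentinel value 0.
select * from c_chkpfw_hr_tr a
where exists (select 1 from chkpfw_tr_hr_dimension b
  where a.firstoccurrence = b.firstoccurrence
    and a.sentryid_id = b.sentryid_id
    and a.node_id = b.node_id
    and a.customerid_id = b.customerid_id
    and not (a.interface_id is distinct from b.interface_id)
    -- ... repeat for the remaining nullable *_id columns ...
);
```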








Re: [PERFORM] creating of temporary table takes very long

2006-04-17 Thread Sriram Dandapani

Explain analyze on the select statement that is the basis for the temp
table data takes forever.  I turned off enable_seqscan, but it did not
have an effect.

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Sriram Dandapani
Sent: Monday, April 17, 2006 11:37 AM
To: Pgsql-Performance (E-mail)
Subject: [PERFORM] creating of temporary table takes very long

[original message snipped]

Re: [PERFORM] creating of temporary table takes very long

2006-04-17 Thread Tom Lane
"Sriram Dandapani" <[EMAIL PROTECTED]> writes:
> [ query snipped ]
> This takes forever (I have to cancel the statement each time)

How long did you wait?

> c_chkpfw_hr_tr has same indexes as chkpfw_tr_hr_dimension

Which would be what exactly?  What does EXPLAIN show for that SELECT?
(I won't make you post EXPLAIN ANALYZE, if you haven't got the patience
to let it finish, but you should at least provide EXPLAIN results.)

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] creating of temporary table takes very long

2006-04-17 Thread Sriram Dandapani
Explain output.  I tried explain analyze but pgAdmin froze after 10
minutes.

QUERY PLAN
Seq Scan on c_chkpfw_hr_tr a  (cost=0.00..225975659.89 rows=11000 width=136)
  Filter: (subplan)
  SubPlan
    ->  Bitmap Heap Scan on chkpfw_tr_hr_dimension b  (cost=1474.64..10271.13 rows=1 width=0)
          Recheck Cond: (($0 = firstoccurrence) AND ($1 = sentryid_id) AND ($2 = node_id))
          Filter: (($3 = customerid_id) AND (COALESCE($4, 0) = COALESCE(interface_id, 0)) AND (COALESCE($5, 0) = COALESCE(source_id, 0)) AND (COALESCE($6, 0) = COALESCE(destination_id, 0)) AND (COALESCE($7, 0) = COALESCE(sourceport_id, 0)) AND (COALESCE($8, 0) = COALESCE(destinationport_id, 0)) AND (COALESCE($9, 0) = COALESCE(inoutbound_id, 0)) AND (COALESCE($10, 0) = COALESCE(action_id, 0)) AND (COALESCE($11, 0) = COALESCE(protocol_id, 0)) AND (COALESCE($12, 0) = COALESCE(service_id, 0)) AND (COALESCE($13, 0) = COALESCE(sourcezone_id, 0)) AND (COALESCE($14, 0) = COALESCE(destinationzone_id, 0)))
          ->  Bitmap Index Scan on chkpfw_tr_hr_idx1  (cost=0.00..1474.64 rows=38663 width=0)
                Index Cond: (($0 = firstoccurrence) AND ($1 = sentryid_id) AND ($2 = node_id))

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 17, 2006 12:29 PM
To: Sriram Dandapani
Cc: Pgsql-Performance (E-mail)
Subject: Re: [PERFORM] creating of temporary table takes very long 

[quoted text snipped]

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PERFORM] creating of temporary table takes very long

2006-04-17 Thread Sriram Dandapani
Got an explain analyze output.  Here it is:

Seq Scan on c_chkpfw_hr_tr a  (cost=0.00..225975659.89 rows=11000 width=136) (actual time=2.345..648070.474 rows=22001 loops=1)
  Filter: (subplan)
  SubPlan
    ->  Bitmap Heap Scan on chkpfw_tr_hr_dimension b  (cost=1474.64..10271.13 rows=1 width=0) (actual time=29.439..29.439 rows=1 loops=22001)
          Recheck Cond: (($0 = firstoccurrence) AND ($1 = sentryid_id) AND ($2 = node_id))
          Filter: (($3 = customerid_id) AND (COALESCE($4, 0) = COALESCE(interface_id, 0)) AND (COALESCE($5, 0) = COALESCE(source_id, 0)) AND (COALESCE($6, 0) = COALESCE(destination_id, 0)) AND (COALESCE($7, 0) = COALESCE(sourceport_id, 0)) AND (COALESCE($8 (..))
          ->  Bitmap Index Scan on chkpfw_tr_hr_idx1  (cost=0.00..1474.64 rows=38663 width=0) (actual time=12.144..12.144 rows=33026 loops=22001)
                Index Cond: (($0 = firstoccurrence) AND ($1 = sentryid_id) AND ($2 = node_id))
Total runtime: 648097.800 ms

Regards

Sriram

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 17, 2006 12:29 PM
To: Sriram Dandapani
Cc: Pgsql-Performance (E-mail)
Subject: Re: [PERFORM] creating of temporary table takes very long 

[quoted text snipped]



Re: [PERFORM] slow cursor

2006-04-17 Thread Luckys
This is one thing that I always try to avoid: a single INSERT INTO ... SELECT ... FROM or a single UPDATE is almost always faster than looping over the same rows within a cursor, unless it's inevitable.
 
regards,
Luckys. 
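[A sketch of that set-based update-or-insert pattern, since releases of this era have no native upsert.  Table and column names are invented for illustration, and note that a concurrent writer can still cause a duplicate-key error between the two statements:]

```sql
BEGIN;

-- 1) Update target rows that already exist:
UPDATE target t
SET    val = s.val
FROM   staging s
WHERE  t.id = s.id;

-- 2) Insert the staging rows that matched nothing:
INSERT INTO target (id, val)
SELECT s.id, s.val
FROM   staging s
WHERE  NOT EXISTS (SELECT 1 FROM target t WHERE t.id = s.id);

COMMIT;
```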
On 4/17/06, Sriram Dandapani <[EMAIL PROTECTED]> wrote:



[quoted text snipped]