This surprises me. There's a moderate amount of overhead involved in
a plpgsql trigger, but I'd not have thought it would swamp the added
inefficiencies involved in a rule. Notice that you're getting a double
indexscan in the rule case --- that takes more time to plan, and more
time to execute (observe the nearly double actual time for the top level
plan node).
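(The actual DDL isn't shown in the thread; a minimal sketch of the kind of rule-versus-trigger setup being compared might look like the following. Only foo_view and foo2 are named in the messages; the base table foo, the rule name, and the trigger function are assumed for illustration.)

    -- Hypothetical setup, not taken from the original test.
    CREATE TABLE foo (id integer PRIMARY KEY, data text);

    -- Rule case: updates on the view get rewritten into updates on foo,
    -- which is why the planner ends up touching foo's index twice.
    CREATE VIEW foo_view AS SELECT id, data FROM foo;
    CREATE RULE foo_view_update AS
        ON UPDATE TO foo_view DO INSTEAD
        UPDATE foo SET id = NEW.id, data = NEW.data
        WHERE foo.id = OLD.id;

    -- Trigger case: a plain table with a trivial plpgsql trigger.
    CREATE TABLE foo2 (id integer PRIMARY KEY, data text);
    CREATE FUNCTION foo2_touch() RETURNS trigger AS $$
    BEGIN
        RETURN NEW;   -- real per-row logic would go here
    END;
    $$ LANGUAGE plpgsql;
    CREATE TRIGGER foo2_upd BEFORE UPDATE ON foo2
        FOR EACH ROW EXECUTE PROCEDURE foo2_touch();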
What were you averaging here --- just the "total runtime" reported by EXPLAIN ANALYZE?
Yes.
It would be interesting to factor in the planning time too. Could you retry this and measure the total elapsed time? (psql's \timing command will help.)
\timing is cool! Never knew about it until just now.
test=# \timing
Timing is on.
test=# update foo_view set id = 1 where id = 1;
For 10 executions, the average is about 1.487 ms.

test=# update foo2 set id = 1 where id = 1;
For 10 executions, the average is about 1.420 ms.
So yeah, you're right, the view/rule is a bit slower.
I'm going to start using \timing from here on out...
<short pause>
Okay, so now I created two prepared statements:
prepare foo_view_statement (int, int) as update foo_view set id = $1 where id = $2;
prepare foo2_statement (int, int) as update foo2 set id = $1 where id = $2;
execute foo_view_statement(1, 1); average timing: 1.137 ms
execute foo2_statement(1, 1); average timing: 1.359 ms
So it seems that once the plan is already made, the update against the rule is actually a tad faster. I don't know whether the difference in speed is enough to convince anyone (myself included) to start using prepared statements, but it's another data point.
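(One way to sanity-check that the gap really comes from skipping the planning step is to look at the plans behind the prepared statements; this isn't from the original test, and EXPLAIN EXECUTE needs a reasonably recent PostgreSQL.)

    -- Re-run the statements prepared above, with plans shown.
    EXPLAIN ANALYZE EXECUTE foo_view_statement(1, 1);
    EXPLAIN ANALYZE EXECUTE foo2_statement(1, 1);

    -- Prepared statements live for the rest of the session otherwise.
    DEALLOCATE foo_view_statement;
    DEALLOCATE foo2_statement;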
But still, a real-world example might prove all of this wrong.
eric