While I was fiddling with FK tuning, Noah suggested batching trigger executions together to avoid per-execution overhead.
It turns out there is no easy way to write triggers that can take advantage of the knowledge that they are being executed as part of a set of trigger executions. Some API is required to allow a trigger to understand that there may be other related trigger executions in the very near future, so that it can attempt to amortise call overhead across many invocations ("batching").

The attached patch adds two fields to the TriggerData struct that trigger functions are handed, allowing them to inspect (if they choose) the additional fields and thus potentially use some form of batching. This is backwards compatible with the earlier trigger API. The two fields are:

    int tg_tot_num_events;
    int tg_event_num;

So your trigger can work out that it is, for example, number 3 of 56 invocations in the current set of after triggers.

Going back to Noah's example, this would allow you to collect all 56 values and then execute a single statement with an array of 56 values in it. Knowing there are 56 means you can wait until the 56th invocation before executing the batched statement, without risking skipping some checks because you've only got half a batch left. If you don't do this, you'd need to introduce the concept of a "final function", similar to the way aggregates work, but that seems much too complex to be useful in the real world.

This seemed a generally useful approach for any after trigger author, not just for RI.
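To make the intended usage concrete, here is a skeleton of what an after-row trigger written in C might look like on top of the patch. The function name, the stashing step and the batched statement are placeholders, the two tg_* fields exist only with the patch applied, and tg_event_num is assumed to be 1-based so the final invocation satisfies tg_event_num == tg_tot_num_events (consistent with "number 3 of 56" above):

#include "postgres.h"

#include "commands/trigger.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(batched_check_trigger);

/*
 * AFTER ... FOR EACH ROW trigger that defers its real work to the
 * final invocation of the current event set, using the two fields
 * added by the patch.
 */
Datum
batched_check_trigger(PG_FUNCTION_ARGS)
{
    TriggerData *trigdata = (TriggerData *) fcinfo->context;

    if (!CALLED_AS_TRIGGER(fcinfo))
        elog(ERROR, "batched_check_trigger: not called by trigger manager");

    /*
     * Stash the value of interest from trigdata->tg_trigtuple in
     * statement-lifetime storage here, rather than acting on it now.
     */

    if (trigdata->tg_event_num == trigdata->tg_tot_num_events)
    {
        /*
         * Final invocation of the set: run one statement over all the
         * stashed values, e.g. SPI_execute_with_args() with
         * "... WHERE key = ANY ($1)".  56 rows, one executor call.
         */
    }

    /* Return value of an AFTER trigger is ignored */
    return PointerGetDatum(NULL);
}

Each per-row invocation then only does cheap bookkeeping, and the expensive statement runs once per event set.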
Comments please.

-- 
Simon Riggs                   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachment: batch_api_after_triggers.v1.patch