I wrote:
> * It's kind of sad that we have to add yet another field to
> AfterTriggerSharedData, especially since this one will cost 8 bytes
> thanks to alignment considerations.  I'm not sure if there's another
> way, but it seems like ats_relid, ats_rolid, and ats_modifiedcols
> are all going to be extremely redundant in typical scenarios.
> Maybe it's worth rethinking that data structure a bit?  Or maybe
> I'm worried over nothing; perhaps normal situations have only a few
> AfterTriggerSharedDatas anyway.  It might be worth trying to gather
> some statistics about that.
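(As a quick illustration of the 8-byte point in that quote: on a typical
64-bit build, a struct that contains pointers is padded out to a multiple
of 8 bytes, so adding one 4-byte Oid-sized field can grow sizeof() by 8.
The structs below are invented stand-ins, not the real
AfterTriggerSharedData.)

/*
 * Invented stand-ins, not the real AfterTriggerSharedData: they only show
 * why a 4-byte addition to a pointer-containing struct can cost 8 bytes.
 */
#include <stdio.h>
#include <stdint.h>

typedef struct
{
	uint32_t	a;
	uint32_t	b;
	void	   *p;				/* forces 8-byte alignment on 64-bit */
} Before;

typedef struct
{
	uint32_t	a;
	uint32_t	b;
	void	   *p;
	uint32_t	extra;			/* the new 4-byte field */
} After;

int
main(void)
{
	/* typically prints 16 and 24 on a 64-bit platform */
	printf("%zu %zu\n", sizeof(Before), sizeof(After));
	return 0;
}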
I poked at this a little bit by adding some debug printout and running
check-world.  Since most of our test queries are designed to not run
too long, that doesn't give a representative impression of how many
trigger events get queued per query, but I think it's probably fair as
a sample of the number of distinct triggers that might be involved in
one query.  Here's what I got for just the core regression tests (the
check-world numbers are bigger but similar):

 count  event
  1079  new entry added to 0 existing entries
   342  new entry added to 1 existing entries
    43  new entry added to 2 existing entries
    21  new entry added to 3 existing entries
    13  new entry added to 4 existing entries
    10  new entry added to 5 existing entries
     7  new entry added to 6 existing entries
     5  new entry added to 7 existing entries
     3  new entry added to 8 existing entries
     2  new entry added to 9 existing entries
   162  match after comparing 1 of 1 entries
    30  match after comparing 1 of 2 entries
     1  match after comparing 1 of 3 entries
    38  match after comparing 2 of 2 entries
     1  match after comparing 2 of 3 entries
     1  match after comparing 2 of 4 entries
     2  match after comparing 3 of 3 entries
     1  match after comparing 3 of 4 entries
     1  match after comparing 3 of 5 entries
     1  match after comparing 4 of 4 entries
     1  match after comparing 8 of 8 entries

So there are no cases with more than 10 AfterTriggerSharedDatas, and
my worry about how big those are is overblown.

However ... I noticed from the above that we seem to be hitting the
worst-case match result (matching the last entry to check) a lot, and
I realized that afterTriggerAddEvent is searching the array backward!
It goes from the oldest AfterTriggerSharedData to the newest.  But it
really ought to do the reverse, because the newest entry is probably
from the current query, while the oldest ones might well be from prior
queries in the same transaction, which can never match again.  After
applying the attached quickie patch, I get

 count  event
  1079  new entry added to 0 existing entries
   342  new entry added to 1 existing entries
    43  new entry added to 2 existing entries
    21  new entry added to 3 existing entries
    13  new entry added to 4 existing entries
    10  new entry added to 5 existing entries
     7  new entry added to 6 existing entries
     5  new entry added to 7 existing entries
     3  new entry added to 8 existing entries
     2  new entry added to 9 existing entries
   162  match after comparing 1 of 1 entries
    38  match after comparing 1 of 2 entries
     2  match after comparing 1 of 3 entries
     1  match after comparing 1 of 4 entries
     1  match after comparing 1 of 8 entries
    30  match after comparing 2 of 2 entries
     1  match after comparing 2 of 3 entries
     1  match after comparing 2 of 4 entries
     1  match after comparing 3 of 3 entries
     1  match after comparing 3 of 4 entries
     1  match after comparing 3 of 5 entries

which seems less inefficient.  As noted, you would never notice this
effect in short queries, but for queries that queue a lot of events it
could be important.

			regards, tom lane
diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c
index 1fa63ab7d0..711c09a103 100644
--- a/src/backend/commands/trigger.c
+++ b/src/backend/commands/trigger.c
@@ -4104,11 +4104,12 @@ afterTriggerAddEvent(AfterTriggerEventList *events,
 
 	/*
 	 * Try to locate a matching shared-data record already in the chunk. If
-	 * none, make a new one.
+	 * none, make a new one.  The search begins with the most recently added
+	 * record, since newer ones are most likely to match.
 	 */
-	for (newshared = ((AfterTriggerShared) chunk->endptr) - 1;
-		 (char *) newshared >= chunk->endfree;
-		 newshared--)
+	for (newshared = (AfterTriggerShared) chunk->endfree;
+		 (char *) newshared < chunk->endptr;
+		 newshared++)
 	{
 		/* compare fields roughly by probability of them being different */
 		if (newshared->ats_tgoid == evtshared->ats_tgoid &&
@@ -4120,8 +4121,9 @@ afterTriggerAddEvent(AfterTriggerEventList *events,
 					  evtshared->ats_modifiedcols))
 			break;
 	}
-	if ((char *) newshared < chunk->endfree)
+	if ((char *) newshared >= chunk->endptr)
 	{
+		newshared = ((AfterTriggerShared) chunk->endfree) - 1;
 		*newshared = *evtshared;
 		/* now we must make a suitably-long-lived copy of the bitmap */
 		newshared->ats_modifiedcols = afterTriggerCopyBitmap(evtshared->ats_modifiedcols);
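(For anyone puzzling over the pointer arithmetic in the patch: the
shared-data records are carved off downward from the end of the chunk, so
chunk->endfree points at the newest record and the slot just below
chunk->endptr holds the oldest.  Here is a standalone sketch of that
layout; the Chunk and Shared types are invented for illustration and only
mimic the allocation direction, not the real trigger.c structs.)

/*
 * Standalone sketch, not PostgreSQL code: records are allocated downward
 * from the end of a chunk, so scanning upward from endfree visits them
 * newest-first.
 */
#include <stdio.h>

typedef struct
{
	int			id;				/* stand-in for the real shared-data fields */
} Shared;

typedef struct
{
	char		buf[8 * sizeof(Shared)];
	char	   *endfree;		/* bottom of the shared-record area */
	char	   *endptr;			/* end of the chunk */
} Chunk;

static void
add_shared(Chunk *chunk, int id)
{
	/* same idea as afterTriggerAddEvent: carve a record off the bottom */
	Shared	   *newshared = ((Shared *) chunk->endfree) - 1;

	newshared->id = id;
	chunk->endfree = (char *) newshared;
}

int
main(void)
{
	Chunk		chunk;
	Shared	   *s;

	chunk.endptr = chunk.buf + sizeof(chunk.buf);
	chunk.endfree = chunk.endptr;

	add_shared(&chunk, 1);		/* oldest */
	add_shared(&chunk, 2);
	add_shared(&chunk, 3);		/* newest */

	/* scanning upward from endfree visits newest first: prints 3, 2, 1 */
	for (s = (Shared *) chunk.endfree; (char *) s < chunk.endptr; s++)
		printf("%d\n", s->id);
	return 0;
}

That newest-first ordering is exactly what the patched loop relies on when
it starts its search at chunk->endfree.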