On Mon, Jan 18, 2021 at 1:40 PM Amit Langote <amitlangot...@gmail.com> wrote:

> While discussing the topic of foreign key performance off-list with
> Robert and Corey (also came up briefly on the list recently [1], [2]),
> a few ideas were thrown around to simplify our current system of RI
> checks to enforce foreign keys with the aim of reducing some of its
> overheads.  The two main aspects of how we do these checks that
> seemingly cause the most overhead are:
>
> * Using row-level triggers that are fired during the modification of
> the referencing and the referenced relations to perform them
>
> * Using plain SQL queries issued over SPI
>
> There is a discussion nearby titled "More efficient RI checks - take
> 2" [2] to address this problem from the viewpoint that it is using
> row-level triggers that causes the most overhead, although there are
> some posts mentioning that SQL-over-SPI is not without blame here.  I
> decided to focus on the latter aspect and tried reimplementing some
> checks such that SPI can be skipped altogether.
>
> I started with the check that's performed when inserting into or
> updating the referencing table to confirm that the new row points to a
> valid row in the referenced relation.  The corresponding SQL is this:
>
> SELECT 1 FROM pk_rel x WHERE x.pkey = $1 FOR KEY SHARE OF x
>
> $1 is the value of the foreign key of the new row.  If the query
> returns a row, all good.  Thanks to SPI, or rather its use of plan caching,
> the query is re-planned only a handful of times before making a
> generic plan that is then saved and reused, which looks like this:
>
>               QUERY PLAN
> --------------------------------------
>  LockRows
>    ->  Index Scan using pk_pkey on pk x
>          Index Cond: (a = $1)
> (3 rows)
>
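For reference, here is a minimal sketch that reproduces the generic plan
quoted above (the table and column names pk and a, and the use of
plan_cache_mode, are my assumptions for illustration, not part of the
original setup):

    CREATE TABLE pk (a int PRIMARY KEY);
    CREATE TABLE fk (a int REFERENCES pk);
    INSERT INTO pk SELECT generate_series(1, 10000);

    -- The RI trigger effectively runs this query over SPI for every
    -- inserted/updated fk row; forcing the generic plan mimics what the
    -- SPI plan cache settles on after a few executions.
    SET plan_cache_mode = force_generic_plan;
    PREPARE ri_check (int) AS
      SELECT 1 FROM pk x WHERE x.a = $1 FOR KEY SHARE OF x;
    EXPLAIN EXECUTE ri_check (1);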

What is the performance when the referenced table is small? A lot of
codebook (lookup) tables are small, between 1,000 and 10,000 rows.
