Greetings,

  Subject pretty much says it all.  I've put up with this error in the
  past when it has caused me trouble, but it's now starting to hit our
  clients on occasion, which is just unacceptable.

  The way I've seen it happen (and this is just empirical, so I'm not
  sure it's exactly right) is something like this:

  - Run with pg_autovacuum on the system.
  - Run a long-running PL/PgSQL function which creates tables.
  - Wait for some sort of overlap, and the PL/PgSQL function dies with
    the 'tuple concurrently updated' error.

  I've also seen it happen when I've got a long-running PL/PgSQL
  function going and I'm creating tables in another back-end.
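
  For what it's worth, the functions involved look roughly like the
  sketch below.  The names here are made up for illustration (this is
  not our actual code), but the shape - a long loop that EXECUTEs a
  CREATE TABLE each time around - is the same:

    -- Rough sketch only; function and table names are invented.
    CREATE OR REPLACE FUNCTION build_partitions(n integer)
        RETURNS void AS $$
    DECLARE
        i integer;
    BEGIN
        FOR i IN 1 .. n LOOP
            -- Each pass creates a new table; with enough iterations
            -- the call runs long enough to overlap pg_autovacuum or
            -- table creation in another backend.
            EXECUTE 'CREATE TABLE part_' || i::text
                 || ' (id integer, val text)';
            -- ... load data, build indexes, etc ...
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;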

  From a prior discussion I *think* the issue is the lack of
  versioning/visibility information in the SysCache.  That means that
  if the long-running function attempts to look up data about a table
  which was created *after* the long-running function started, but
  which was put into the common SysCache by another backend, the
  long-running function gets screwed by the 'tuple concurrently
  updated' error and ends up failing and being rolled back.
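
  To make that concrete, the overlap I have in mind looks roughly like
  the timeline below (again, the object names are invented and this is
  just my reading of what happens, not a verified trace):

    -- Backend A: the long-running function, still executing
    BEGIN;
    SELECT build_partitions(10000);

    -- Meanwhile, backend B (or pg_autovacuum) updates catalog rows
    -- for a table created after A's function started:
    CREATE TABLE some_new_table (id integer);
    ANALYZE some_new_table;

    -- Back in backend A, the next catalog lookup/update touching
    -- those rows fails with
    --     ERROR:  tuple concurrently updated
    -- and the whole function call is rolled back.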

  If this is correct then the solution seems to be either to add
  versioning to the SysCache data, or to have an overall 'this SysCache
  is only good for data past transaction X' marker, so that a backend
  whose transaction started before that point could just accept that it
  can't use the SysCache and fall back to accessing the data directly
  (if that's possible).  I'm not very familiar with the way the
  SysCache system is envisioned, but I'm not a terrible programmer (imv
  anyway), and given some direction on the correct approach to solving
  this problem I'd be happy to spend some time working on it.  I'd
  *really* like to see this error just go away completely for all
  non-broken use-cases.

        Thanks,

                Stephen
