On 1/31/19 10:31 PM, Robert Haas wrote:
> On Sun, Jan 27, 2019 at 8:26 PM David Rowley
> <david.row...@2ndquadrant.com> wrote:
>> One way around this would be to always perform an invalidation on the
>> partition's parent when performing a relcache invalidation on the
>> partition. We could perhaps invalidate all the way up to the top
>> level partitioned table, that way we could just obtain a lock on the
>> target partitioned table during AcquireExecutorLocks(). I'm currently
>> only setting the delaylock flag to false for leaf partitions only.
>
> Would this problem go away if we adopted the proposal discussed in
> http://postgr.es/m/24823.1544220...@sss.pgh.pa.us and, if so, is that
> a good fix?
>
> I don't quite understand why this is happening. It seems like as long
> as you take at least one new lock, you'll process *every* pending
> invalidation message, and that should trigger replanning as long as
> the dependencies are correct. But maybe the issue is that you hold
> all the other locks you need already, and the lock on the partition at
> issue is only acquired during execution, at which point it's too late
> to replan. If so, then I think something along the lines of the above
> might make a lot of sense.
>
It happens because ConditionalLockRelation does this:
    if (res != LOCKACQUIRE_ALREADY_CLEAR)
    {
        AcceptInvalidationMessages();
        MarkLockClear(locallock);
    }
and with the prepared statement already planned, we end up skipping
AcceptInvalidationMessages() because res == LOCKACQUIRE_ALREADY_CLEAR.
That happens because GrantLockLocal (called from LockAcquireExtended)
finds the relation already locked.
I don't know if this is correct or not, though.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services