On Tue, Mar 6, 2018 at 4:53 AM, Andres Freund <and...@anarazel.de> wrote:
> Hi,
>
>> diff --git a/src/backend/executor/nodeLockRows.c b/src/backend/executor/nodeLockRows.c
>> index 7961b4be6a..b07b7092de 100644
>> --- a/src/backend/executor/nodeLockRows.c
>> +++ b/src/backend/executor/nodeLockRows.c
>> @@ -218,6 +218,11 @@ lnext:
>>  					ereport(ERROR,
>>  							(errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),
>>  							 errmsg("could not serialize access due to concurrent update")));
>> +				if (!BlockNumberIsValid(BlockIdGetBlockNumber(&((hufd.ctid).ip_blkid))))
>> +					ereport(ERROR,
>> +							(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
>> +							 errmsg("tuple to be locked was already moved to another partition due to concurrent update")));
>> +
>
> Why are we using ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE rather than
> ERRCODE_T_R_SERIALIZATION_FAILURE?  A lot of frameworks have built-in
> logic to retry serialization failures, and this kind of thing is going
> to be resolved by retrying, no?
>

I think it depends. In some cases a retry can help, because the statement
will find and delete the required tuple in its new partition. But in other
cases, such as when the user runs the DELETE directly on a particular
partition, a retry won't succeed, because the tuple has already been moved
to a different partition and is no longer visible there.
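
To illustrate the retry angle: the kind of framework logic Andres mentions
usually keys on SQLSTATE 40001 (serialization_failure), so with 55000
(object_not_in_prerequisite_state) the error falls through to the caller
instead of being retried. A rough, untested libpq sketch (table name, key
value, and retry limit are made up for illustration):

/* Retry a single-statement DELETE on serialization failure only. */
#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

static int
delete_with_retry(PGconn *conn)
{
	int			attempt;

	for (attempt = 0; attempt < 3; attempt++)
	{
		PGresult   *res = PQexec(conn,
								 "DELETE FROM parted_tab WHERE key = 42");
		const char *sqlstate;

		if (PQresultStatus(res) == PGRES_COMMAND_OK)
		{
			PQclear(res);
			return 0;			/* succeeded */
		}

		sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);

		/* Only 40001 is treated as retryable; anything else,
		 * including 55000, is reported back to the user. */
		if (sqlstate == NULL || strcmp(sqlstate, "40001") != 0)
		{
			fprintf(stderr, "%s", PQerrorMessage(conn));
			PQclear(res);
			return -1;
		}
		PQclear(res);
	}
	return -1;					/* retries exhausted */
}

Whether such a retry actually helps then depends on the target: on retry a
DELETE on the parent table will find the row in its new partition, while a
DELETE on the old partition alone will not.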

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
